Andy Oram

Andy Oram is an editor at O'Reilly Media. An employee of the company since 1992, Andy currently specializes in open source technologies and software engineering. His work for O'Reilly includes the first books ever released by a U.S. publisher on Linux, the 2001 title Peer-to-Peer, and the 2007 best-seller Beautiful Code.

Survey on the Future of Open Source, and Lessons from the Past

Quality and security drive adoption, but community is rising fast

I recently talked to two executives at Black Duck, the first company formed to help organizations deal with the licensing issues involved in adopting open source software. With Tim Yeaton, President and CEO, and Peter Vescuso, Executive Vice President of Marketing and Business Development, I discussed the seventh Future of Open Source survey; I'll post a few interesting insights from it later. But you can look at the slides for yourself, so this article will focus instead on some of the topics we talked about in our interview. While I cite some ideas from Yeaton and Vescuso, many of the observations below are purely my own.

The spur to collaboration

One theme in the slides is the formation of consortia that develop software for entire industries. One recent example everybody knows about is OpenStack, but many industries have their own impressive collaboration projects, such as GENIVI in the auto industry.

What brings competitors together to collaborate? In the case of GENIVI, it’s the impossibility of any single company meeting consumer demand through its own efforts. Car companies typically take five years to bring a design to market, but customers are used to product releases more like those of cell phones, where something enticingly new appears every six months. In addition, the range of useful technologies (Bluetooth, etc.) is so big that no single company can become expert in everything at once. Meanwhile, according to Vescuso, the average high-end car contains more than 100 million lines of code. So the pace and complexity of progress are driving the auto industry to work together.

All too often, the main force uniting competitors is the fear of another vendor and the realization that they can never beat a dominant vendor on its own turf. Open source becomes a way of changing the rules out from under the dominant player. OpenStack, for instance, took on VMware in the virtualization space and Amazon.com in the IaaS space. Android attracted phone manufacturers and telephone companies as a reaction to the iPhone.

A valuable lesson can be learned from the history of the Open Software Foundation, which was formed in reaction to an agreement between Sun and AT&T. In the late 1980s, Sun had become the dominant vendor of Unix, which was still being maintained by AT&T. Their combination panicked vendors such as Digital Equipment Corporation and Apollo Computer (you can already get a sense of how much good OSF did them), who promised to create a single, unified standard that would give customers increased functionality and more competition.

The name Open Software Foundation was deceptive, because it was never open. Instead, it was a shared repository into which various companies dumped bad code so they could cynically claim to be interoperable while continuing to compete against each other in the usual way. It soon ceased to exist in its planned form, but did survive in a fashion by merging with X/Open to become the Open Group, an organization of some significance because it maintains the Single UNIX Specification and owns the UNIX trademark. Various flavors of BSD failed to dislodge the proprietary Unix vendors, probably because each BSD team did its work in a fairly traditional, closed fashion. It remained up to Linux, a truly open project, to unify the Unix community and ultimately replace the closed Sun/AT&T partnership.

Collaboration can be driven by many things, therefore, but it usually takes place in one of two fashions. In the first, somebody throws out into the field some open source code that everybody likes, as Rackspace and NASA did to launch OpenStack, or IBM did to launch Eclipse. Less common is the GENIVI model, in which companies realize they need to collaborate to compete and then start a project.

A bigger pie for all

The first thing on most companies’ minds when they adopt open source is to improve interoperability and defend themselves against vendor lock-in. The Future of Open Source survey indicates that the top reasons for choosing open source are its quality (slide 13) and security (slide 15). This is excellent news because it shows that misconceptions about open source are shattering, and the arguments by proprietary vendors that they can ensure better quality and security will increasingly be seen as hollow.
Read more…

A very serious game that can cure the orphan diseases

Fit2Cure taps the public's visual skills to match compounds to targets

In the inspiring tradition of Foldit, the game for determining protein shapes, Fit2Cure crowdsources the problem of finding drugs that can cure the many under-researched diseases of developing countries. Fit2Cure appeals to the player’s visual, even physical, sense of the world, and requires much less background knowledge than Foldit.

There are about 7,000 rare diseases, fewer than 5% of which have cures. The number of people currently engaged in drug discovery is by no means adequate to study all these diseases. A recent gift to Harvard shows the importance that medical researchers attach to filling the gap. As an alternative approach, abstracting the drug discovery process into a game could empower thousands, if not millions, of people to contribute to the process and make discoveries for diseases that get little attention from scientists or pharmaceutical companies.

The biological concept behind Fit2Cure is that medicines have specific shapes that fit into target proteins like jigsaw puzzle pieces (but more rounded). Many cures depend on finding a drug whose shape is complementary to the target protein molecule, so that it fits in and prevents the protein from functioning normally.
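The matching task can be pictured, very loosely, as rewarding shapes that fill a cavity without colliding with its walls. Here is a toy sketch in Python; it is my own illustration of the idea of shape-complementarity scoring, not Fit2Cure’s actual algorithm:

    # Toy shape-complementarity score over boolean voxel grids:
    # reward filling the cavity, penalize clashing with the protein wall.
    import numpy as np

    def fit_score(pocket, compound):
        """pocket: True where the protein's cavity is open space.
        compound: True where the candidate molecule occupies space."""
        filled = np.logical_and(pocket, compound).sum()    # cavity voxels filled
        clashes = np.logical_and(~pocket, compound).sum()  # overlap with the wall
        return int(filled - 3 * clashes)  # clashes hurt more than filling helps

    # Tiny 1x3x3 example: the cavity is the middle column.
    pocket = np.zeros((1, 3, 3), dtype=bool)
    pocket[0, :, 1] = True
    snug = pocket.copy()          # perfectly complementary candidate
    bulky = np.ones_like(pocket)  # candidate that clashes with the wall
    print(fit_score(pocket, snug), fit_score(pocket, bulky))  # 3 -15

The game’s insight is that humans are good at exactly this kind of judgment by eye, without any scoring function at all.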

Read more…

Data sharing drives diagnoses and cures, if we can get there (part 2)

How the field of genetics is using data within research and to evaluate researchers

Editor’s note: Earlier this week, Part 1 of this article described Sage Bionetworks, a recent Congress they held, and their way of promoting data sharing through a challenge.

Data sharing is not an unfamiliar practice in genetics. Plenty of cell lines and other data stores are publicly available from such sources as the TCGA data set from the National Cancer Institute, the Gene Expression Omnibus (GEO), and ArrayExpress (all of which can be accessed through Synapse). So to some extent the current revolution in sharing lies not in the data itself but in critical related areas.

First, many of the data sets are weakened by metadata problems. A Sage programmer told me that the famous TCGA set is enormous but poorly curated. For instance, different data sets in TCGA may refer to the same drug by different names, generic versus brand. Provenance, a clear description of how the data was collected and prepared for use, is also weak in TCGA.
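The cure for such naming drift is mundane but essential curation: mapping every alias onto one canonical name before aggregating records across data sets. A minimal sketch follows (the two brand/generic pairs are real oncology examples; the code itself is just an illustration):

    # Normalize drug names to their generic form before aggregation.
    BRAND_TO_GENERIC = {
        "taxol": "paclitaxel",
        "adriamycin": "doxorubicin",
    }

    def canonical_drug(name):
        key = name.strip().lower()
        return BRAND_TO_GENERIC.get(key, key)

    # Records using brand and generic names now match.
    assert canonical_drug("Taxol") == canonical_drug("paclitaxel")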

In contrast, GEO records tend to contain good provenance information (see an example), but only as free-form text, which presents the same barriers to searching and aggregation as free-form text in medical records. Synapse is developing a structured format for presenting provenance based on the W3C’s PROV standard. One researcher told me this was the most promising contribution of Synapse toward the shared use of genetic information.
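To give a sense of what structured provenance can look like, here is a minimal record in the spirit of the W3C PROV-JSON serialization, written as a Python dictionary. The identifiers and labels are hypothetical illustrations, not Synapse’s actual schema:

    # A sketch of a PROV-style provenance record: it states which activity
    # produced a derived data set and which raw data that activity used.
    provenance = {
        "entity": {
            "ex:normalized_matrix": {"prov:label": "Normalized expression matrix"},
            "ex:raw_geo_series":    {"prov:label": "Raw GEO series (hypothetical)"},
        },
        "activity": {
            "ex:normalization": {"prov:label": "Quantile normalization run"},
        },
        # The derived matrix was generated by the normalization activity...
        "wasGeneratedBy": {
            "_:g1": {"prov:entity": "ex:normalized_matrix",
                     "prov:activity": "ex:normalization"},
        },
        # ...which used the raw GEO series as input.
        "used": {
            "_:u1": {"prov:activity": "ex:normalization",
                     "prov:entity": "ex:raw_geo_series"},
        },
    }

Because the relationships are explicit fields rather than free-form prose, they can be searched and aggregated mechanically, which is exactly what free-text provenance prevents.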

Read more…

Data sharing drives diagnoses and cures, if we can get there (part 1)

Observations from Sage Congress and collaboration through its challenge

The glowing reports we read of biotech advances almost cause one’s brain to ache. They leave us thinking that medical researchers must command the latest in all technological tools. But the engines of genetic and pharmaceutical innovation are stuttering for lack of one key fuel: data. Without it, researchers are left with the equivalent of trying to build skyscrapers with lathes and screwdrivers.

Sage Congress, held this past week in San Francisco, investigated the multiple facets of data in these fields: gene sequences, models for finding pathways, patient behavior and symptoms (known as phenotypic data), and code to process all these inputs. Surveying the efforts of the organizers, Sage Bionetworks, along with other innovations in genetic data handling, shows how genetics resembles and differs from other disciplines.

An intense lesson in code sharing

At last year’s Congress, Sage announced a challenge, together with the DREAM project, intended to galvanize researchers in genetics while showing off the growing capabilities of Sage’s Synapse platform. Synapse ties together a number of genetic data sets and gives researchers tools to upload new data and to search other researchers’ data sets. The challenge highlighted the industry’s need for better data sharing, and some ways to get there.
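For a feel of that workflow, here is a brief sketch using the basic calls of Synapse’s Python client, synapseclient; the file name and Synapse project IDs below are made up for illustration:

    # A minimal sketch of uploading and retrieving data through Synapse,
    # assuming the synapseclient package; all IDs are hypothetical.
    import synapseclient

    syn = synapseclient.Synapse()
    syn.login()  # credentials from ~/.synapseConfig or arguments

    # Upload a new data file into a (hypothetical) project.
    entity = synapseclient.File("expression_matrix.csv", parent="syn0000000")
    entity = syn.store(entity)

    # Fetch another researcher's data set by its Synapse ID.
    downloaded = syn.get("syn1111111")
    print(downloaded.path)  # local path of the downloaded file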

Read more…

Building native apps from JavaScript using Appcelerator Titanium

An interview with John Anderson

In this interview, the author of Appcelerator Titanium: Up and Running describes how Titanium can be used to generate native mobile apps from JavaScript code. He distinguishes the Titanium platform from native API programming and from other popular JavaScript platforms for mobile devices. We look at the way Titanium exploits the expressiveness and flexibility of JavaScript, and some of the directions that the Appcelerator company is taking Titanium.
Read more…

Designing resilient communities

Establishing an effective organization for large-scale growth

In the open source and free software movement, we always exalt community, and say the people coding and supporting the software are more valuable than the software itself. Few communities have planned and philosophized as much about community-building as ZeroMQ. In the following posting, Pieter Hintjens quotes from his book ZeroMQ, talking about how he designed the community that works on this messaging library.

How to Make Really Large Architectures (excerpted from ZeroMQ by Pieter Hintjens)

There are, it has been said (at least by people reading this sentence out loud), two ways to make really large-scale software. Option One is to throw massive amounts of money and problems at empires of smart people, and hope that what emerges is not yet another career killer. If you’re very lucky and are building on lots of experience, have kept your teams solid, and are not aiming for technical brilliance, and are furthermore incredibly lucky, it works.

But gambling with hundreds of millions of others’ money isn’t for everyone. For the rest of us who want to build large-scale software, there’s Option Two, which is open source, and more specifically, free software. If you’re asking how the choice of software license is relevant to the scale of the software you build, that’s the right question.

The brilliant and visionary Eben Moglen once said, roughly, that a free software license is the contract on which a community builds. When I heard this, about ten years ago, the idea came to me: can we deliberately grow free software communities?

Read more…

Broadening consults and narrowing queries: HealthTap’s social network

Innovations keep a community growing

Noting the power of social media in situations ranging from the marketing of sneakers to the overthrow of autocratic regimes, many health care thinkers have suggested a greater use of social media by doctors and people seeking information on health care. One of the companies moving fastest in this area is HealthTap, which I reviewed shortly before their launch and most recently after an intriguing initiative in rating doctors.

Studies and casual observations show that all sorts of mobile and messaging services are on the increase among doctors, but this in itself doesn’t constitute the kind of diverse, group problem solving that social media implies. One of the new initiatives at HealthTap is called “Curbside Consult,” and represents in my mind a big step toward the supple information sharing suggested in the book #SOCIALQI, which I reviewed last month.
Read more…

Saint James Infirmary: checking the pulse of health IT at HIMSS

Signs of the field's potential along with self-imposed limits

I spent most of the past week on my annual assessment of the progress that the field of health information technology is making toward reaping the benefits offered by computers and Internet connectivity: instant access to data anywhere; a leveling of access for different patient populations and for health care providers big and small; the use of analytics to direct resources and attack problems better.

The big HIMSS conference in New Orleans, a crossroads for doctors, technologists, and policy-makers, provided a one-stop check-in. I already covered several aspects of the conference in two earlier postings, Singin’ the Blues: visions deferred at HIMSS health IT conference and Slow & Steady: looking toward a better health IT future at HIMSS. Here I’ll summarize a couple more trends in data exchange and basic functions of health IT systems.

Read more…

Slow & Steady: looking toward a better health IT future at HIMSS

Participatory medicine and hospital technologies take steps forward

After my funereal disparagement yesterday of the opening of the HIMSS health care conference in New Orleans, I decided to pick up the beat today and talk about some of the people and ideas I encountered with promise for the future.

Nobody Knows The Way I Feel This Morning: patient engagement and all that jazz

Yesterday I spoke of the gap between the reform-minded leaders of health care and the institutions that mostly take care of us. The latest battleground between these two poles of care is the movement variously called patient engagement, patient empowerment, and participatory medicine.

There’s nothing new about this concept. Desperate patients have been self-educating, negotiating with health care systems, and creating advocacy groups forever. On the self-help front, Prevention Magazine began (according to Wikipedia) in 1950. The Society for Participatory Medicine was founded in the mid-2000s, around the time e-Patient Dave made the concept into a meme through his brave online sharing of his care.

HIMSS has thrown its support behind the Society for Participatory Medicine, which had a lunchtime meeting at the conference yesterday to discuss increasing membership and grass-roots promotional activities. (Folks, consider yourselves promoted.) HIMSS also invited the popular author Eric Topol to deliver yesterday’s keynote. And the first statement offered by Topol was praise for Regina Holliday, a consummately self-educated patient advocate and creator of the famous artwork and painted jackets in the Walking Gallery.

Read more…

Singin’ the Blues: visions deferred at HIMSS health IT conference

The main concerns of health reformers don't rise to the top of health provider agendas

HIMSS, the leading health IT conference in the US, drew over 32,000 people to New Orleans this year (with another thousand or two expected to register by the end of the conference). High as this turnout sounds, it represents a drop from last year, which exceeded 37,000.

Maybe HIMSS could do even better by adding a “Clueless” or “I don’t believe in health IT” track. Talking to the people who promote health IT to doctors and their managers, I sense a gap, and to some extent a spectrum of belief, in the recognition of the value of gathering and analyzing data about health care.

I do believe that American health care providers have evolved to accept computerization, if only in response to the HITECH Act (passed with bipartisan Congressional support) and the law’s requirements for Meaningful Use of electronic records. Privately, many providers may still feel that electronic health records are a bad dream that will go away. This article presents a radically different view: I think electronic health records are a bad dream that will go on for many years to come. I’ll expand on this angle when blogging from HIMSS this year.

Read more…