Mon, Aug 7, 2006

Tim O'Reilly

Open Source: Architecture or Goodwill?

There are a lot of reasons why people make their code open source. I believe that one of the strongest original motivations has often been overlooked. Our hagiography tells the tale of how it all started with the quest for software freedom. But contemporaneous with Richard Stallman's story, other people were taking the same path (releasing source code) for a very different reason: the architecture of Unix.

That architecture made two enormous contributions to the founding of the open source movement. First, and perhaps most important, Unix was architected as a communications-oriented system, with the idea of modular, cooperating processes and programs. That architecture meant that it was very easy for an individual or a small group to add a piece to the overall system without needing much coordination with the people adding other pieces. And so we see a vibrant culture of cooperation and contribution in early Unix development, with many of the most important advances developed outside of AT&T Bell Labs, the owner of Unix, even though the license was not open source by any standard.

But second, because Unix was an open system in a world where vendors were competing on the basis of their hardware, with slightly modified versions of Unix as the OS, the Unix platform became incredibly fragmented. You had to distribute source code in order to give someone else the ability to run your program (unless they happened to be on the same hardware as you). By contrast, standardized platforms like the PC and Mac developed a binary freeware culture, but never an open source culture. While Linux now runs on the standard PC architecture, and thus the requirement for source has faded, it's important to remember this history as a backdrop to the social patterns of open source. As everyone knows, social patterns persist longer than the circumstances that give rise to them, but they do eventually fade as the world changes.

Understanding this history is critical to framing the recent dustup between Matt Asay and Jeremy Zawodny. Reporting on the session I held at OSCON with Chris DiBona of Google, Jeremy (of Yahoo!), and Jim Buckmaster of Craigslist, Matt wrote his analysis of Why Google and Yahoo! Can't Be Better Open Source Citizens. Jeremy was rather put off by the spin Matt put on the session, and wondered "how much open source code he's been publishing."

Jeremy had every right to be incensed by the title of Matt's piece, because both Yahoo! and Google have made enormous efforts to give back to the open source community that gave them such a good start in life and is so important a part of their foundation. I think it's fair to say that both companies have contributed far more back to open source than Matt's company, Alfresco, even though the latter flies the flag of being an "open source company." And Jeremy makes some very strong arguments as to why Yahoo! can't release more of its core code.

But the point I tried to bring out in the session, and that Matt picked up on in his blog, remains: in the PC era, you have to distribute software in order to get other people to use it. You can distribute it in binary form or you can distribute it in source form, but no one escapes the act of distribution. And when software is distributed, open source companies have proven that giving access to the source makes for good business strategy.

But in the world of Web 2.0, applications never need to be distributed. They are simply performed on the internet's global stage. What's more, they are global in scope, often running on hundreds or thousands or even hundreds of thousands of servers. They have vast databases, and complex business processes required to keep those databases up to date.

As a result, one of the motivations to share -- the necessity of giving a copy of the source in order to let someone run your program -- is truly gone. Not only is it no longer required, in the case of the largest applications, it's no longer possible.

That's why companies are having to think about new ways to "open source" their product. In the O'Reilly Radar Executive Briefing at OSCON, we looked at three of those ways:

  1. Keep the app proprietary but open source the framework used to build it. Some examples include Basecamp: Ruby on Rails; Ellington: Django; DabbleDB: Seaside. You can also tease apart some of the most important tools originally developed for your app. For example, LiveJournal: memcached (see the sketch after this list). Yahoo! and Google have done a lot of this. But as the argument between Matt and Jeremy illustrates, it's now an act of goodwill rather than an act of necessity. You can also argue that it's an act of forethought, because keeping the open source developer ecosystem healthy is good for the internet giants. It keeps the talent pool strong and outside-in innovation happening. But because both goodwill and forethought tend to be in shorter supply than necessity, the architectural change in how software is developed may eventually lead to a weakening of the open source culture. These are some of the points that Matt was quite correctly hammering on, based on the conversation at the session.

    To the extent possible, then, it's good practice for Web 2.0 companies to think about the internal modularity of their code, so that it's easy to extract the pieces that make sense to distribute. In this regard, I strongly recommend reading an ACM Queue interview with Werner Vogels, the CTO of Amazon, from earlier this year. Vogels argues that companies must have a "relentless commitment to a modular computer architecture that makes it possible for the people who build the applications to also be responsible for running and deploying those systems within a common IT framework." Modularity enables participation by outside developers; it also allows swift action by individuals and small internal teams. It's one of the keys to competitive success, especially in an era when traditional PC-based software is so complex that a new release is years in the making.

  2. Figure out what "open services" means. Both Yahoo! and Google are doing a lot of great work here, as the mashup phenomenon attests. As I wrote recently, though, we need an open services definition that codifies best practices in the same way that the original open source definition did.

  3. Adopt the "clonable apps" model pioneered by Ning. While Ning is set up as a consumer play, and its implementation of source visibility leaves much to be desired (code is fragmented and hard to get all at once), the idea of small, clonable, modifiable open source apps on a web platform seems to me a very attractive one. I'd love to see more people playing with this idea.
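
To make the first of these patterns concrete, here is a minimal sketch of the extract-the-tool idea: the application logic stays proprietary, while the generic caching layer it leans on (memcached, the piece LiveJournal released) is open source. It's written in Python against the open source python-memcached client; the database helper is a hypothetical stand-in, not real code from any of these companies:

    # A sketch only: assumes a memcached daemon running on localhost:11211
    # and the open source python-memcached client library.
    import memcache

    # memcached is the open source piece: a generic, reusable cache server.
    mc = memcache.Client(["127.0.0.1:11211"])

    def load_profile_from_database(user_id):
        # Hypothetical stand-in for the proprietary part of the application.
        return {"id": user_id, "name": "user%d" % user_id}

    def get_user_profile(user_id):
        # Check the shared cache first; fall back to the (closed) database
        # logic on a miss, then cache the result for five minutes.
        key = "profile:%d" % user_id
        profile = mc.get(key)
        if profile is None:
            profile = load_profile_from_database(user_id)
            mc.set(key, profile, time=300)
        return profile

Notice where the boundary falls: nothing application-specific leaks into memcached, which is exactly what makes a tool like this worth teasing apart and releasing.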

It's a very interesting time to be in open source. Open source zealots need to realize that open source needs to be reinvented for the new platform architecture, and Web 2.0 companies need to remember that open source isn't just goodwill, but an integral part of keeping the developer ecosystem healthy. And everyone needs to experiment with new models, and not believe that the story has already been written.


tags: open source, web 2.0

Comments: 39

  Matt Asay [08.07.06 09:04 AM]

Tim writes:

Jeremy had every right to be incensed by the title of Matt's piece, because both Yahoo! and Google have made enormous [strong word - I would be grateful for a list of "enormous" results to these "enormous" efforts - I think it's fair to say that I can name a few startups that have easily exceeded Yahoo!'s contributions, and that's without stretching] efforts to give back to the open source community that gave them such a good start in life and is so important a part of their foundation. I think it's fair to say that both [multi-billion dollar] companies have contributed far more back to open source than Matt's [startup] company, Alfresco, even though the latter flies the flag of being an "open source company." And Jeremy makes some very strong arguments as to why Yahoo! can't release more of its core code [e.g., "It's so hard!"].

Incensed, Tim? Give me a break. Here's the title of the piece: "Why Google and Yahoo! can't be better open source citizens." I suppose it does imply that they're not exceptional open source citizens, but that's because that is precisely what I believe. They're not. It doesn't say they're bad people or anything worth getting "incensed" about. It says they're not as active in the open source world as they might be. And they're not.

Despite your attempts to salvage that (by comparing a multi-billion dollar company to an open source startup - when did that start to seem like an apt comparison?) by denigrating Alfresco, I would argue that you're wrong. As I note above, it shouldn't be too hard to compare the contributions of Alfresco (a world-beating content repository that pushes the envelope on scalability, design elegance, and performance) and a few others (Digium, MySQL, Greenplum, etc.) and hold them up against Yahoo! and Google. Whatever your preference for Web 2.0, I think you'd be hard-pressed to showcase the "enormous" contributions these net freeriders give back when compared to the sometimes incremental, sometimes revolutionary innovations brought on by the comparatively microscopic open source startup community.

As for me, you conveniently forget that my companies' contributions extend from Lineo to Novell to Alfresco. Would I put the collective contributions of these companies up against a Google? Any day. And this despite being a few billion dollars short of your big Web 2.0 examples, and despite every incentive to keep things closed. Please don't mimic Jeremy's "but it's so hard to open up!" argument, because it's hard for everyone. Harder, in fact, for people who sell bits, not services.

As I like to think with my personal charity, it's not how much someone gives, it's how much they keep for themselves. I don't measure whether I'm a charitable person based on the $10 I give, but by the million I keep for myself. I think that's a far more accurate and rigorous metric. By it, my employers look pretty good. Yahoo! and Google? Not so good. Not to mention the general universe of Web 2.0 companies, who benefit much, and give little, from/to open source.

  Tim O'Reilly [08.07.06 09:31 AM]

Matt --

There's certainly some justice to what you say. The contributions of a small company can't be measured in absolute volume against the contributions of a giant. But I think you underestimate the amount by which Yahoo! and Google and Amazon do contribute to existing open source projects by funding contributors.

But even more importantly, they really are thinking (I think) about the future of open source, and new models for open.

That being said, looking at the percentage of their IP that is released by "Web 2.0" companies like SixApart/LiveJournal, or 37signals/RoR, or Ellington/Django, you're absolutely right that Y! and Google could do more. But comparing them against, say, IBM, which is a darling of the open source community for its support of Linux, I'd say that they are in at least the same ballpark.

And you're also absolutely right (and this *was* the point in the session and in this blog entry) that architecting Web 2.0 properties to be modular, so that big blocks of what they do can be released, is good practice -- and one that apparently is less followed by Yahoo!, if Jeremy's comments are to be believed, than by, say, Amazon (if Werner's comments are to be believed).

But my main point remains: the architecture has changed, and therefore the motivations have changed.

(Although it's worthy of note that even downloaded OSS projects can be monolithic. E.g., OpenOffice.org, which didn't take the time that Mozilla took to re-architect the monolithic closed code. Mozilla took the behemoth and redesigned it for participation, with four years where they got no love, and then a sudden rebirth.)

  Don Marti [08.07.06 10:40 AM]

Has the requirement for source really faded, or is it still there, just motivated by a different layer of the system? Today instead of multiple incompatible hardware architectures, we have generic hardware running Linux distributions with multiple incompatible library versions. (Yes, ISV, your application is running on the same x86 hardware that everyone has, but are you using the same GStreamer? The same ALSA libraries?)

Mark Shuttleworth writes, "We don't aim for 'binary compatibility' with any other distribution."

People who are used to the proprietary OS model think that binary incompatibility across distributions is a big problem. But, Tim, I think you're explaining the reason for the failure of efforts such as LSB. They go against the grain of the technical limitations that encourage the collaboration norms that people are used to. Not that anyone goes out and deliberately breaks binary compatibility, but if you buy into open source norms, breaking binary compatibility doesn't seem like a big problem worth compromising on other goals for.

So architecture drives norms and norms drive the next generation of architecture.

  Peter H. Salus [08.07.06 01:57 PM]

Tim, my problem with your simple history is that it's too simple. Up to and including v6, there was only one manufacturer whose hardware UNIX ran on -- DEC; then there were the two Interdata ports (to the 7 and the 8). But sharing of software, hacks, and tricks began far earlier, right after Ken and Dennis gave their SOSP paper in October 1973.

And the first UNIX Users meeting was in May 1974 -- before the first paper was published.

  David [08.07.06 04:22 PM]

> But in the world of Web 2.0, applications never need to be distributed.

That's not true. The back end is probably running an open source DBMS on an open source OS, and the developers are probably using open source tools.

The end user, Jo Average, running IE on Windows, probably doesn't know what open-source means (or even what source code is), and doesn't care.

  Yoz [08.07.06 05:01 PM]

Tim --

With regard to Ning's app-cloning model, I gave a short talk on this a couple of weeks back. Cloning not only revolutionises software distribution (making it much easier for both authors and users) but gives rise to whole new application design patterns for social software. The talk is online as a seven-minute Flash presentation here. Note that, while I mostly talk about Ning (I'm paid to), we're not the only consumer-oriented service to fully implement user-to-user software cloning (Second Life and LambdaMOO being other examples). As far as I know, we are the first to do it outside of virtual worlds, for web applications.

As regards the problems you identify, I'd like to hear more about this. It's true that Ning's code is separated between app and API code, but this is true of most frameworks. And besides, the point of app cloning is that everything you need to run the chosen app is cloned for you; you're not left scrabbling around for dependencies in order to get the app running. That said, there are always potential improvements that would make the system friendlier, and I'm very keen to hear any observations or suggestions.

  Don Dodge [08.07.06 08:01 PM]

Tim, Great post. You really got me thinking about software development, delivery, licensing, and business models. They are four different things.

Open Source can be used to describe all four models, but can be used selectively. For example, you could have software that was developed by an open source project team, delivered on an OEM server, licensed per server, and paid for on a term basis.

In a world where software can be delivered as a service, does it matter how it was developed?

For example software could be developed internally by a software company, delivered as a hosted service, licensed for consumer use only, and paid for by advertisements, or by subscription.

The way software is developed is completely independent of how it is delivered, licensed, or paid for.

After reading your post I wrote an in depth blog on this subject today. For more details see http://dondodge.typepad.com/the_next_big_thing/2006/08/open_source_vs_.html

  Anshu Sharma [08.07.06 10:15 PM]

See Through Kitchens and Cooking


Tim,

Good job bringing forth the economics of SaaS and Web 2.0 in the following sense: open source emerged as an effective distribution mechanism a decade ago due to the disconnected nature of our world. And now, with the pendulum swinging back to where the consumer can simply enjoy the service without bothering about how it was built or is maintained, open source may lose steam in certain areas. In some sense, one could argue that customers (individuals and companies) had to deal with software and therefore looked to open source. Given a choice, most customers would rather use a service than download code and modify it. When was the last time you tried to figure out how FedEx ships a package to Alaska, and then tried to optimize it for your package by re-orchestrating their business process so that your package could get to Alaska faster? Open source is like one of those see-through kitchens that we have fun looking at through the glass while we enjoy our air-conditioned seating area and white tablecloth. Waiter!!

This gives me a good topic to blog on.

  frameworker [08.07.06 10:53 PM]

It seems pertinent, here, that Unix source code was quite easy to obtain; and further, that the Lions' Commentary provided an impressive overview of that source.

  Scott Johnson of FuzzyBlog [08.07.06 11:53 PM]

Tim --

I think you're missing a point: a founder or engineer who's passionate about open source is vital to the process.

You reference SixApart/LiveJournal, but bear in mind that the credit for the open source strategy in that respect goes solely to Brad Fitzpatrick. SixApart is, of course, vastly more than just LiveJournal, but what has the company given back except in the case of LiveJournal (which led to BMP, LJ itself, memcached, Mogile, etc.)? Sure, there have been a few Perl tools, but that's basically it.

Without Brad Fitzpatrick, I'd argue that there would not really be an open source strategy at SixApart.

The role of a passionate engineer in getting an open source strategy going is not to be underestimated in any way.

Note: I'm not dissing SixApart at all here. I like those guys a lot. I just don't see it happening without Brad.

I'd also argue that companies establish their open source position early on -- or not at all. LiveJournal went open source from like day 1 as I understand it, and once it's out there it's hard to take back. My new company, Ookles, has already started to release bits of our stuff as open source and we plan to continue; we built our modularity approach specifically with that in mind. Kevin Burton's done the same with parts of TailRank, specifically his MySQL load balancer for Java.

  Yoz [08.08.06 05:27 AM]

Scott --

SixApart's main products aren't open source (other than LiveJournal) but they've released quite a bit of their architecture code, such as the recent Data::ObjectDriver modules which Ben Trott spoke about at OSCON and Tatsuhiko Miyagawa's Plagger (feed processor engine).

  Tim O'Reilly [08.08.06 07:48 AM]

Don, interesting point about the continued fragmentation, just at a higher level. In fact, that's Microsoft's latest marketing spin on Linux. I was just reading Is Open Source Too Complex? over on Slashdot.

I don't buy it. Microsoft used to have a rich developer ecosystem, but they really killed a lot of it off. I guess I'm in Classics mode, because I'm once again going to cite an ancient author. Tacitus, writing about the Roman army, said, "They built a wasteland, and called it peace."

Now that's probably a bit strong, since neither the Roman Empire nor the Microsoft empire was a wasteland, but enforced stasis is not a good thing.

I believe that what appears to Microsoft to be fragmentation is a vibrant marketplace -- an architecture with low barriers to entry is always more disorderly, but also more creative, than one with high barriers set up by entrenched players. It's also better for consumers.

  Tim O'Reilly [08.08.06 07:54 AM]

Peter -- your history with Unix certainly goes back further than mine. I first got involved around Version 7, when the situation I describe was in full swing.

And it's certainly true that there was a source culture around IBM mainframes and DEC minicomputer OSes as well, and they didn't have the architecture issue. It was the early invention period for software, before people realized it was going to be valuable.

But I still maintain my point. What happened to those other source-sharing cultures? They went away, as Richard Stallman famously discovered. But Unix's remained. Why? I think it was for the reason I outlined.

It's not the only reason, but no fact of history ever has just one.

I also like frameworker's point that source was easy to obtain (I paid $50 for my first BSD source distribution tapes, if I recall), and documentation was available. But even when it comes to documentation, I give far more credit to the humble man page than to the Lions' Commentary.

If Windows had a man page for every DLL and component, documenting its use and dependencies, it would be a heck of a lot easier to swap things out without breaking the whole system.

The man page is a reflection of the modularity ideals of Unix applied to documentation.

  Tim O'Reilly [08.08.06 07:56 AM]

David, you make my point while saying you contradict it. I never said that the infrastructure software used by web players didn't need to be distributed. I said their applications don't need to be distributed.

  Tim O'Reilly [08.08.06 07:58 AM]

Scott -- you'll get no argument from me. Open source always depends on passionate individuals, and Brad's work with LiveJournal is no exception. But my point is that changes in computer architecture mean that it depends more on the passion of individuals than it used to, because formerly, there was a structural market imperative that drove a fair amount of open source activity.

  Graham Stiles [08.08.06 09:45 AM]

Off topic, but I hope you are going to address the issue of the comments Caitlin Martin is getting over on her blog here. These people are a disgrace to the open source and general computing community. I know it was always thus, but still...

  Tim O'Reilly [08.08.06 09:55 AM]

Graham -- don't know what you're talking about. It would help if you left a link.

  Graham Stiles [08.08.06 10:04 AM]

See http://www.oreillynet.com/linux/blog/2006/08/why_is_firefox_for_linux_so_te.html

Caitlin is discussing problems she has with Firefox on Linux

The comments range from the inevitable 'it works for me' style, through the 'you must be an idiot then' style, to the actively offensive.

  casey [08.08.06 10:12 AM]

Another good reason for developing an open source app is kind of selfish but win-win. Open source is a great way to build a name for yourself. No one will know how good you are if all your work is kept proprietary.

  cloudy [08.08.06 12:51 PM]

The whole article is based on at least three false premises:

1) That the "open" source movement started in the 80s. There has been "open" software distributed since the fifties.

2) That Unix software developers didn't maintain products in binary form for the variant Unix platforms. There was plenty of commercial software that was only available in binary and sometimes even served as the differentiator for Unix vendors.

3) That there was no open-source community for PCs. What, then, was all that source in the Simtel archive?

As you say, understanding history is important. I'd suggest that a history of "open" source should start with SHARE, not with the 1980s.

  Rich Steiner [08.08.06 02:01 PM]

The culture of releasing program source along with the executables predates UNIX. It's an engineering thing. Folks in the past appreciated that their current successes were highly dependent on the past efforts of others, and in that spirit the release of source was done to further enable that process.

Both the UNIVAC and IBM mainframe worlds were doing this on a regular basis in the early and mid-1960s, and computer hobbyists of various flavors were also doing so in the early 1970s.

  Rich Steiner [08.08.06 02:05 PM]

FWIW, the source-sharing culture in the UNIVAC world still exists. We're just disguised as Unisys Clearpath Dorado users now and are only found in places like major airlines and government agencies, not universities. Less visible, but no less real.

  cloudy [08.08.06 03:08 PM]

Following up on the last two comments:

Stallman didn't find that the past open source community was gone. IBM SHARE is still around, and probably the oldest, certainly the oldest continuous source of source. ACM still publishes collected algorithms, the Simtel archive is still active, and so forth.

What Stallman found, and what you ironically missed, was this: the key thing that happened in the 80s, and even more so in the 90s, was that first Usenet, and then the Internet, made distribution of source easier -- and, more importantly, made collaborative software development easier even when the players were geographically dispersed.

It is this last, more than any of the others, that made the "explosion" of open source of the last couple of decades.

  Tim O'Reilly [08.08.06 05:27 PM]

cloudy --

A couple of points. 1. As I said in the response to Peter Salus, source sharing was common in the early days of the mainframe and minicomputer. But it did "dry up" in the 80s (your SHARE and Simtel examples notwithstanding) on mainstream market platforms (i.e., Mac, Windows). The only widely deployed platform from the 80s that maintained the source culture of the earlier era was Unix, and I believe it was for the reasons I outlined. Perhaps it's better to say "it never took hold in the PC or Mac worlds," and have less of an argument.

As to what "ironically I missed," I can only say that the fact that I didn't repeat it doesn't mean I've missed it. See for example my keynote to the Computers, Freedom and Privacy conference, in Toronto on April 6, 2000. This talk, entitled Open Source: The Model for Collaboration in the Age of the Internet, focused on the underappreciated role of usenet as the "mother" of open source. I wrote:

"I'd like to argue that open source is the "natural language" of a networked community, that the growth of the Internet and the growth of open source are interconnected by more than happenstance. As individuals found ways to communicate through highly leveraged network channels, they were able to share information at a new pace and a new level. Just as the spread of literacy in the late middle ages disenfranchised old power structures and led to the flowering of the renaissance, it's been the ability of individuals to share knowledge outside the normal channels that has led to our current explosion of innovation. Just as ease of travel helped new ideas to spread, wide area networking has allowed ideas to spread and take root in new ways. Open source is ultimately about communication."

I'm surprised you give Stallman credit for recognizing it. It's seemed to me that he always gave usenet, Berkeley Unix, and the whole non-FSF side of open source history short shrift.

I totally agree that it's the network that made open source explode. BUT it's also true that the network made open source explode on one platform and not on others. Why was that? One reason for that is the subject of this post.

Rich Steiner -- like cloudy, I hear you that there is still a vibrant source culture on some of these platforms that didn't have Unix's architectural imperative, but had cultural roots in the earlier era of software sharing, before the binary culture of the PC took over. However, I still think that architecture plays a major role in the success of open source.

The most successful open source projects have modular architectures. Those that are monolithic struggle to build a real community.

  cloudy [08.09.06 12:37 AM]

I'm sorry, but no matter how you spin it, the openness never dried up in any community, and only a very weird viewpoint would consider either Windows or Mac OS "mainstream" and not consider VMS or MVS mainstream for any part of the 80s. The decade was almost half over before the Mac was even introduced.

Simtel, comp.sources, and hundreds of bulletin boards all kept source alive for the PC fans. The Commodore 64 and then Amiga fan bases made a lot of source software available. Even the Atari, my favorite lost cause, had free compilers and applications available as source.

Sorry I wasn't clear about RMS. I was disagreeing with your characterization of what he "found", not describing what he did believe. I've known Richard since shortly after the first time I sent him a bug fix for Emacs in '83, and you are definitely right about his shortsightedness with respect to non-FSF open source.

You wrote "it's been the ability of individuals to share knowledge outside the normal channels that has led to our current explosion of innovation." which is amusing, given that the explosion of innovation was mostly over with by the time that the internet was available, and there's only about a decade in which it can be described as "outside the normal channels".

Why did the internet cause open source to explode on Unix and not other platforms? Sorry, it had nothing to do with the architecture of Unix, as much as I like that architecture. It had everything to do with the ubiquity of Unix. The internet and Unix grew up together, and while they were growing up, it wasn't all that easy to compile code from one Unix variant to run on another. (Does termcap versus terminfo ring a bell? Sockets versus streams? Berkeley line discipline?)

I had just as good luck getting Fortran code to run properly on Crays, Amdahls, and VMS-based Vaxes, portably, as most people had writing portable Unix code, especially if they had the misfortune of having to write for SunOS, HPUX *and* Irix. Do you think the abominable autotools and friends came about because it was easy to port source between Unix variants?

Anyway, we're all putting the cart before the horse here. Before we congratulate ourselves on how successful open source is now, shouldn't we calibrate it against how it was doing then?

Sure, the open source movement is very visible, but then the whole industry is very visible. Talk to Guy Steele about the Lisp collaborations leading up to the specification of portable Common Lisp, for example, and then ask yourself if the percentage of people developing or using open source is really that much higher than it ever was.

The only great success I've seen so far is companies like Red Hat and MontaVista figuring out how to make money from packaging and servicing the software. That's new and different from before.

  Jeff Kubina [08.09.06 07:26 AM]

Matt Asay wrote:

As I like to think with my personal charity, it's not how much someone gives, it's how much they keep for themselves. I don't measure whether I'm a charitable person based on the $10 I give, but by the million I keep for myself. I think that's a far more accurate and rigorous metric.

I don't think this is a good metric for how much a company should give back to open source; a company's monetary worth is based on too many factors that do not reflect its value to the community. Perhaps it would be better to base it on the man-hours the company contributes back into open source. I would not be surprised if Google, Yahoo!, and Craigslist have about 10% of their overall workforce contributing back into open source. Even with this metric I still agree with Jeremy: measuring how much someone or a group of people contribute to a project is really hard to do (one of the holy grails of managers). Further, I agree with Tim that "Open source is ultimately about communication." All these companies have contributed enormously to enabling people to communicate better with all their free (no fee) services.

That said, I have to disagree somewhat with all three about "the code would be useless without the underlying infrastructure". The open source community is quite innovative. I would not be too surprised if an open source community developed a distributed storage system based on file-sharing technology given the right seed, like gDisk.

  Tim O'Reilly [08.09.06 08:23 AM]

Jeff -- You make a really good point. A lot of the pieces of these big apps would make great open source projects. But not all of them. That's why my strategy 1, keep the app proprietary but open source the framework (or tools), is one of the viable ones.

I formulated this idea during my brief tenure on the board of the Nutch Foundation. Nutch was an attempt to make an open source search engine based on Lucene, the open source search tool. It quickly became clear that while Lucene the tool was powerful and useful for a whole host of applications, Nutch could never be more than a research tool for exploring new search algorithms, because it was too expensive to build more than a small index.

There are many, many pieces of great code that I'd love to see spun out of web companies. LiveJournal is a great example. They've put out a lot of pieces that are useful to everyone.

But my point remains, the applications themselves are likely never to be open sourced, and one of the motivations for open source that used to be a driving force no longer exists, so companies may have a different calculus than they once did.

  Rufus Polson [08.09.06 04:28 PM]

It seems to me that the cogency of the particular comment about the architecture shift requires that "Web 2.0" be a real and important shift, and that the PC era be over or reaching its sunset.

I'm not at all clear that that's true, frankly. The concept of programs that run out there somewhere on the web, and data that's stored out there somewhere on the web, has come up again and again, and again and again it has failed to make as much impact as expected. And while the technology is doubtless now largely there to make it work, political, economic, infrastructure, and psychological factors will, I think, continue to inhibit this sort of change. Infrastructurally, the internet doesn't actually bring the bandwidth for seamless webcentricness to everyone's doorstep. Having a program "out there" remains, for most people, clunkier than having it "right here", and I see no signs of that changing. Psychologically, people generally prefer to have their data and programs physically present--it gives them a feeling of control and ownership. Politically, there is the question of trust--and the more initiatives like "trusted computing" there are, the more serious the trust deficiency will be. Trust issues range from broad ones like the potential demise of network neutrality, to privacy issues in the face of intrusive governments, to more specific questions like "Can I really trust a commercial entity with my data, when it might decide to sell it or simply go belly-up and take my files with it?"

The point about the importance of architecture to design style may be real--but the conclusion may nonetheless be false if architecture is not changing very significantly.

And of course at the same time, the open source development style has in the past been consistently enabled and enhanced by improvements in communication and increased feasibility of distributed effort. To the extent that "Web 2.0" is real, it may only encourage the open source approach.

  anjan bacchu [08.09.06 04:50 PM]

Hi Tim,

Nice post.

I've been a regular Radar reader. I've often wanted to print your posts along with the comments for offline reading.

Can you make it easy for us to print Radar posts -- as easy as InfoQ does? It's really cool that InfoQ has managed what no other site has: just PRINT -- no need to do print preview and figure out what you need to trim out, etc.

Thanks much,

BR,
~A

  Stomfi [08.09.06 11:41 PM]

There are some issues here which need to be explored.
One is that the world needs a universal operating system, and what better way to get one than an open source solution.

Two is that applications only need to be open source when development of functions or correction of performance and security holes is done by third parties.

Three is that one of the forgotten reasons for publishing under the GPL is to stop proprietary people from using your ideas.

Four is that computing resources that can connect at the hardware level over a network to form a heterogeneous whole, i.e. one virtual computer, make the idea of Web 2.0 services a practical reality. The Cell architecture has this ability.

Five is that when all this comes about, "it is the medium that is the message", and user-driven content, under a Creative Commons license or not, will be the topic you will be writing about.

  Soyapi Mumba [08.10.06 12:44 AM]

It's true that Yahoo and Google don't need to open source their web-based applications since they don't have to be distributed to be used.

But what about their distributable desktop applications, like toolbars, browser extensions, IM clients, and the like? These are already free-as-in-beer tools, yet they still restrict their users! That's why we still have open source alternatives like Googlebar and Companion.

I think Google and Yahoo can easily be better citizens by open sourcing their distributable tools.

And btw, the JavaScript code in their web-based apps also gets distributed for the app to run. That means that although one can read the source, one is still not allowed to exercise all four software freedoms.

  Tim O'Reilly [08.10.06 08:12 AM]

Rufus, surely you jest. A great many of the killer apps of the last decade are on this model: Google, Amazon, eBay, MapQuest and its sequels right up to Google Maps, craigslist, Wikipedia, MySpace...

Stomfi, I don't disagree with all of the continuing motivations for open source that you outline. Many people work on free or open source software for those reasons. I'm just making the point that ONE of the imperatives for releasing source is less important than it once was.

Soyapi, I agree that toolbars etc., as well as the JavaScript in Ajax apps, are distributed code. But as you yourself note, in today's software ecosystem, where you need to produce only two client binaries (Windows and Mac, and Linux if you're feeling generous), free as in beer gets you the benefits of viral distribution (one of the strengths of open source) without having to release your source. P2P apps like Skype have this same advantage. All of this emphasizes my point that the source culture of Unix/Linux was in part driven by the NEED to give source in order to have someone else compile and run the program.

As to people not being able to exercise the four software freedoms, that is true (although look at Google's great documentation and advice on how to modify the downloadable bits for their mapping API, at google.com/apis/maps. They are trying to be good citizens, without necessarily embracing all of the ideals of free software). But in any event, I'm not talking about this as a moral prescription. My arguments are "reality based" (i.e., I'm just trying to describe what is) rather than prescriptive and moral.

Anjan -- I've got someone looking into it.

  Michael Bernstein [08.10.06 01:07 PM]

"A great many of the killer apps of the last decade are on this model: Google, Amazon, ebay, MapQuest and its sequels right up to Google Maps, craigslist, wikipedia, myspace..."

True, but consider that this may have more to do with the difficulties of producing a killer app for the desktop. There have been a few nevertheless: Doom, Netscape, Napster, and iTunes are examples.

It is very hard to produce a killer app these days that does not get you a) sued into oblivion, or b) its market obliterated by the platform provider. Deploying a Web 2.0 service only really takes care of the second problem (note what happened to MP3.com).

I have a strong intuition that killer desktop apps will make a resurgence if desktop Linux starts making significant inroads into the mainstream market.

  Rufus Polson [08.11.06 11:00 AM]

There's something else all those killer apps have in common: they all involve accessing outside information, generally information subject to ongoing change and update (which is thus hard to simply keep on one's hard drive). Tax software is an interesting edge case--it's split between web and desktop because tax rules tend to change a little every year, and that seems to be just infrequently enough that some people find it worthwhile to keep getting updates of desktop-oriented packages, while others find it less hassle to just access an up-to-date web package.
Things which *can* be self-contained on a desktop have remained that way. Attempts to do web-centrically things that could be done on the desktop have had little success--and the limited success they have had has generally been in hierarchical environments with LANs, where IT can tell people what to use and has a motive to centralize. But that's not really the web--that's keeping apps on the mainframe-equivalent, like in the really old days.

Most of the "killer apps" mentioned (Google, Map stuff, Amazon, for any given person even Wikipedia) are really just more sophisticated ways of presenting what comes down to web pages, with a bit of interactive/community content creation thrown in. Web 1.0 plus usenet plus a bit of user friendliness and pretty. Everything where the point is the user doing something rather than finding something or talking to people has stayed firmly on the desktop.

  Mark Pilgrim [08.11.06 01:05 PM]

Tim wrote: "You had to distribute source code in order to give someone else the ability to run your program (unless they happened to be on the same hardware as you.)"

OK, so companies targeting those architectures had to distribute source code. So what? It does not logically follow that they had to distribute source code under a license that allowed others to modify it and redistribute it. Surely companies could make such distinctions, even back then. If they chose to distribute it under an open source license, that was a choice that extended above and beyond architectural requirements.

A modern example: Six Apart distributes Movable Type with full source code, but that doesn't make it open source. Some people take advantage of the source code availability to patch it themselves; Jacques Distler maintains a 1000-line patch to enable XHTML+MathML compatibility. But Jacques can't legally redistribute his patched version of Movable Type. "Open source" > "comes with source"

  Tim O'Reilly [08.11.06 01:24 PM]

Mark -- You're right that the need to distribute source didn't provide the requirement for free and open source licenses, but it created more fertile ground for them than environments where distribution was not required. As numerous people have pointed out, there are open source cultures on other platforms, but nowhere but Unix (and the internet, which in many ways was the killer app for Unix) did it become the mainstream.

I've never said that architecture is the only determinant of the flowering of open source, just that it's one factor, and one that is often overlooked.

  Michael Bernstein [08.12.06 11:56 AM]

Hmm. It occurs to me that there is an analogy lurking in here to Jared Diamond's thesis in 'Guns, Germs, and Steel', in that the 'geography' of source distribution helped drive certain technical 'cultures' to success, and not others.

  James [11.23.06 09:45 AM]

So, when will folks start to hold Tim accountable for not just talking about open source but for getting his own company to contribute? There are a variety of open documentation licenses under which books could be published. How does O'Reilly compare in this regard relative to, say, the Pearson family...

  Tim O'Reilly [11.24.06 08:54 PM]

James -- go look at openbooks.oreilly.com.
