Open Source and Cloud Computing

I’ve been worried for some years that the open source movement might fall prey to the problem that Kim Stanley Robinson so incisively captured in Green Mars: “History is a wave that moves through time slightly faster than we do.” Innovators are left behind, as the world they’ve changed picks up on their ideas, runs with them, and takes them in unexpected directions.

In essays like “The Open Source Paradigm Shift” and “What Is Web 2.0?”, I argued that the success of the internet as a non-proprietary platform built largely on commodity open source software could lead to a new kind of proprietary lock-in in the cloud. What good are free and open source licenses, all based on the act of software distribution, when software is no longer distributed but merely performed on the global network stage? How can we preserve the freedom to innovate when the competitive advantage of online players comes from massive databases created via user contribution, databases that literally get better the more people use them, raising seemingly insuperable barriers to new competition?

I was heartened by the program at this year’s Open Source Convention. Over the past couple of years, open source programs aimed at the Web 2.0 and cloud computing problem space have been proliferating, and I’m seeing clear signs that the values of open source are being reframed for the network era. Sessions like “Beyond REST? Building Data Services with XMPP PubSub,” “Cloud Computing with BigData,” “Hypertable: An Open Source, High Performance, Scalable Database,” “Supporting the Open Web,” and “Processing Large Data with Hadoop and EC2” were all full. (Due to enforcement of fire regulations at the Portland Convention Center, many of them had people turned away, since standing-room-only attendance was not allowed. Brian Aker’s session on Drizzle was so popular that he gave it three times!)

But just “paying attention” to cloud computing isn’t the point. The point is to rediscover what makes open source tick, but in the new context. It’s important to recognize that open source has several key dimensions that contribute to its success:

  1. Licenses that permit and encourage redistribution, modification, and even forking;

  2. An architecture that enables programs to be used as components wherever possible, and extended rather than replaced to provide new functionality;

  3. Low barriers for new users to try the software;

  4. Low barriers for developers to build new applications and share them with the world.

This is far from a complete list, but it gives food for thought. As outlined above, I don’t believe we’ve figured out what kinds of licenses will allow forking of Web 2.0 and cloud applications, especially because the lock-in of many of these applications comes from their data rather than their code. However, there are hopeful signs, like Yahoo! Boss, that companies are beginning to understand that in the era of the cloud, open source without open data is only half the application.

But even open data is fundamentally challenged by the idea of utility computing in the cloud. Jesse Vincent, the guy who’s brought out some of the best hacker t-shirts ever (as well as RT), put it succinctly: “Web 2.0 is digital sharecropping.” (Googling, I discover that Nick Carr seems to have coined this meme back in 2006!) If this is true of many Web 2.0 success stories, it’s even more true of cloud computing as infrastructure. I’m ever mindful of Microsoft Windows Live VP Debra Chrapaty’s dictum that “in the future, being a developer on someone’s platform will mean being hosted on their infrastructure.” The New York Times dubbed bandwidth providers “OPEC 2.0.” How much more true will that become of cloud computing platforms?

That’s why I’m interested in peer-to-peer approaches to delivering internet applications. Jesse Vincent’s talk “Prophet: Your Path Out of the Cloud” describes a system for federated sync; Evan Prodromou’s “Open Source Microblogging” describes identi.ca, a federated open source approach to lifestreaming applications.
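
To make “federated sync” concrete, here’s a toy sketch in Python. It is not Prophet’s actual protocol, just an illustration of the property that matters: replicas accept writes independently and reconcile pairwise, with no central server in the loop.

    # Toy federated sync (NOT Prophet's protocol; just the shape of the idea):
    # every replica accepts writes locally, and any two replicas can
    # reconcile later by keeping the higher per-record version.

    class Replica:
        def __init__(self):
            self.records = {}  # record_id -> (version, value)

        def write(self, record_id, value):
            version, _ = self.records.get(record_id, (0, None))
            self.records[record_id] = (version + 1, value)

        def sync_with(self, peer):
            # Exchange records in both directions; the higher version wins.
            for src, dst in ((self.records, peer.records),
                             (peer.records, self.records)):
                for rid, (ver, val) in list(src.items()):
                    if ver > dst.get(rid, (0, None))[0]:
                        dst[rid] = (ver, val)

    laptop, server = Replica(), Replica()
    laptop.write("status", "hacking offline")  # written while disconnected
    server.write("bio", "federation fan")
    laptop.sync_with(server)                   # either peer can initiate
    assert laptop.records == server.records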

We can talk all we like about open data and open services, but frankly, it’s important to realize just how much of what is possible is dictated by the architecture of the systems we use. Ask yourself, for example, why the PC wound up with an ecosystem of binary freeware, while Unix wound up with an ecosystem of open source software? It wasn’t just ideology; it was that the fragmented hardware architecture of Unix required source so users could compile the applications for their machine. Why did the WWW end up with hundreds of millions of independent information providers while centralized sites like AOL and MSN faltered?

Take note: all of the platform-as-a-service plays, from Amazon’s S3 and EC2 and Google’s AppEngine to Salesforce’s force.com — not to mention Facebook’s social networking platform — have a lot more in common with AOL than they do with internet services as we’ve known them over the past decade and a half. Will we have to spend a decade backtracking from centralized approaches? The interoperable internet should be the platform, not any one vendor’s private preserve. (Neil McAllister provides a look at just how one-sided most platform-as-a-service contracts are.)

So here’s my first piece of advice: if you care about open source for the cloud, build on services that are designed to be federated rather than centralized. Architecture trumps licensing any time.

But peer-to-peer architectures aren’t as important as open standards and protocols. If services are required to interoperate, competition is preserved. Despite all of Microsoft’s and Netscape’s efforts to “own” the web during the browser wars, they failed because Apache held the line on open standards. This is why the Open Web Foundation, announced last week at OSCON, is putting an important stake in the ground. It’s not just open source software for the web that we need, but open standards that will ensure that dominant players still have to play nice.

The “internet operating system” that I’m hoping to see evolve over the next few years will require developers to move away from thinking of their applications as endpoints, and more as re-usable components. For example, why does every application have to try to recreate its own social network? Shouldn’t social networking be a system service?

This isn’t just a “moral” appeal, but strategic advice. The first provider to build a reasonably open, re-usable system service in any particular area is going to get the biggest uptake. Right now, there’s a lot of focus on low level platform subsystems like storage and computation, but I continue to believe that many of the key subsystems in this evolving OS will be data subsystems, like identity, location, payment, product catalogs, music, etc. And eventually, these subsystems will need to be reasonably open and interoperable, so that a developer can build a data-intensive application without having to own all the data his application requires. This is what John Musser calls the programmable web.
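
To make that concrete, here’s a purely hypothetical sketch: every URL and field in it is invented, but the shape is the point. The application assembles identity, location, and catalog data from subsystems it does not own.

    # Hypothetical endpoints, purely illustrative: an application composed
    # from data subsystems (identity, location, catalog) it does not own.
    import json
    import urllib.request

    def fetch_json(url):
        """GET a URL and parse the JSON response body."""
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    profile = fetch_json("https://identity.example.com/users/alice")
    nearby = fetch_json("https://location.example.com/near?user=alice")
    items = fetch_json("https://catalog.example.com/products?q=guitar")
    print(profile["name"], len(nearby), len(items))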

Note that I said “reasonably open.” Google Maps isn’t open source by any means, but it was open enough (considerably more so than any preceding web mapping service) that it became a key component of a whole generation of new applications that no longer needed to do their own mapping. A quick look at programmableweb.com shows Google Maps with about a 90% share of mapping mashups. Google Maps is proprietary, but it is reusable. A key test of whether an API is open is whether it is used to enable services that are not hosted by the API provider, and are distributed across the web. Facebook’s APIs enable applications on Facebook; Google Maps is a true programmable web subsystem.
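
To see the test in action, note that any page on any host can embed a Google map as a component. Here’s a sketch; the Static Maps URL pattern below is my recollection and should be checked against Google’s documentation, and the API key is a placeholder.

    # A page served from YOUR host that uses Google Maps as a component.
    # The endpoint and parameters follow the Static Maps URL pattern
    # (verify against current docs); the API key is a placeholder.
    from urllib.parse import urlencode

    def map_img(lat, lng, zoom=14, size="512x256", key="YOUR_KEY"):
        params = urlencode({"center": "%f,%f" % (lat, lng), "zoom": zoom,
                            "size": size, "key": key})
        return ('<img src="https://maps.googleapis.com/maps/api/staticmap?%s">'
                % params)

    # Any third-party application can serve this HTML; the map service is
    # a reusable subsystem, not a destination site.
    print("<html><body>%s</body></html>" % map_img(45.5236, -122.6750))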

That being said, even though the cloud platforms themselves are mostly proprietary, the software stacks running on them are not. Thorsten von Eicken of RightScale pointed out in his talk “Scale Into the Cloud” that almost all of the software stacks running on cloud computing platforms are open source, for the simple reason that proprietary software licenses have no provisions for cloud deployment. Even though open source licenses don’t prevent lock-in by cloud providers, they do at least allow developers to deploy their work on the cloud.

In that context, it’s important to recognize that even proprietary cloud computing provides one of the key benefits of open source: low barriers to entry. Derek Gottfried’s “Processing Large Data with Hadoop and EC2” talk was especially sweet in demonstrating this point. Derek described how, armed with a credit card, a sliver of permission, and his hacking skills, he was able to put the New York Times historical archive online for free access, ramping up from 4 instances to nearly 1,000. Open source is about enabling innovation and re-use, and at their best, Web 2.0 and cloud computing can be bent to serve those same aims.
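
The barrier really is that low. Here’s a hedged sketch using the boto Python library (my assumptions: boto is installed, AWS credentials are in the environment, and the machine image id is a placeholder); this is not Derek’s actual code, just the shape of it.

    # Launch a batch of EC2 instances with nothing but API credentials.
    # Assumes the boto library and AWS keys in the environment; the
    # machine image id below is a placeholder.
    import boto

    conn = boto.connect_ec2()
    reservation = conn.run_instances(
        "ami-12345678",              # placeholder image id
        min_count=4, max_count=100,  # start at 4, ramp up as needed
        instance_type="m1.small")
    for instance in reservation.instances:
        print(instance.id, instance.state)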

Yet another benefit of open source, try-before-you-buy viral marketing, is also possible for cloud application vendors. During one venture pitch, I asked the company how they’d avoid the high sales costs typically associated with enterprise software. Open source has solved this problem by letting companies build a huge pipeline of free users, whom they can then upsell with follow-on services. The cloud answer isn’t quite as good, but at least there’s an answer: some number of application instances are free, and you charge after that. While this business model loses some virality, and transfers some costs from the end user to the application provider, it has a benefit that open source now lacks: a much stronger upgrade path to paid services. Only time will tell whether open source or cloud deployment is a better distribution vector, but it’s clear that both are miles ahead of traditional proprietary software in this regard.
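
As a toy illustration of that model (every number below is invented): the first few application instances are free, and usage beyond the free tier is billed.

    # Toy freemium billing: the first FREE_INSTANCES cost nothing,
    # and instances beyond the free tier are billed by the hour.
    FREE_INSTANCES = 5
    PRICE_PER_INSTANCE_HOUR = 0.10  # hypothetical rate, in dollars

    def monthly_charge(instances, hours=720):
        billable = max(0, instances - FREE_INSTANCES)
        return billable * hours * PRICE_PER_INSTANCE_HOUR

    print(monthly_charge(3))   # 0.0: still in the free pipeline
    print(monthly_charge(20))  # 1080.0: the upsell kicks in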

In short, we’re a long way from having all the answers, but we’re getting there. Despite all the possibilities for lock-in that we see with Web 2.0 and cloud computing, I believe that the benefits of openness and interoperability will eventually prevail, and we’ll see a system made up of cooperating programs that aren’t all owned by the same company: an internet platform that, like Linux on the commodity PC architecture, is assembled from the work of thousands. Those who are skeptical of the idea of the internet operating system argue that we’re missing the kinds of control layers that characterize a true operating system. I like to remind them that much of the software that is today assembled into a Linux system already existed before Linus wrote the kernel. Like LA, famously “72 suburbs in search of a city,” today’s web is 72 subsystems in search of an operating system kernel. When we finally get that kernel, it had better be open source.

  • http://www.fridgebuzz.com Vanessa Williams

    >> That’s why I’m interested in peer-to-peer approaches to delivering internet applications.

    That’s the second time I’ve heard that (or something very much like it) this week. The first time was from some EFF folks. The irony is that my partners and I started a company (oponia networks) and developed a technology on precisely this premise: all devices should be first-class Web citizens and thus be able to be providers as well as consumers of web services. This allows the federated free world everyone suddenly seems to love. However, NOT ONE investor could we find (other than a wide circle of family and friends who thought it was brilliant). Everyone was so fixated on the cloud, they would not even consider anything else. Even when we actually built the thing and conducted a successful public alpha to prove that it worked, no one cared. Now we’ve had to close our doors, and the technology will be repurposed for remote device management/smart services.

    My only point is that if such things are to come to be, there needs to be much wider support for them. It’s an idea still ahead of its practical time.

    >>Google Maps isn’t open source by any means, but it was open enough

    I have a slight quibble with that. Although they have been more generous in providing free mapping data than just about anyone else, all of their APIs are JavaScript (that is, browser-based). This allows web mashups but prevents any kind of application which requires the data but not the immediate display of a map. To try to explain: think of the way that the Twitter API has enabled people to write both web and desktop applications, mashups and otherwise, using Twitter’s services and data. Google Maps doesn’t provide this flexibility, and neither do most (any?) of Google’s other services.

    Finally, as for the cloud platforms and their lock-in… there is great truth to this. We can only hope that some standards for services similar to S3 and EC2 will emerge. I think we’re already seeing that with the proposed open virtual appliance standards for virtualized servers (like EC2). Some things, like SimpleDB, might remain proprietary, however, and re-architecting an application would be a real nightmare, especially if you had to do it in a hurry.

    Cheers.

  • http://blog.rightscale.com Thorsten – CTO RightScale

    Tim, thanks for a very thoughtful article! I think the licensing model for for-pay software in the cloud will be a very interesting and controversial topic over the coming years. We’re already talking to quite a number of ISVs almost all of which are struggling with that, as we have ourselves.

    Amazon has set a very good and tough to follow standard by pricing their services just by the hour. It really enables all the benefits of the cloud where servers may run for a few hours at a time, whether it’s for scaling in response to load, for testing, or whatnot. But it’s a really difficult model to grok if you’re used to charging $5000/yr/server! And it’s clear that if you insist on the $/yr/server pricing model then you prevent your customers from taking advantage of many of the benefits of the cloud.

    We’ve settled on a model that charges by the hour but establishes a floor for the first N resources. Lots of reasons I should blog about at some point but too much to put into the comment field here…

    I like your reasoning around the programmable/interoperable web and in particular the ‘This isn’t just a “moral” appeal, but strategic advice’. We’re working hard on making RightScale more interoperable and open, but at the same time we also need to ensure we can continue feeding everyone… So observations about the different avenues for making revenue with an open system are always well received.

    - Thorsten

    NB: could you correct my first name in your article, no ‘i’?

  • http://epeus.blogspot.com Kevin Marks

    I think your API openness test is not strong enough. As I wrote in “An API is a bespoke suit, a standard is a t-shirt,” for me the key test is that implementations can interoperate without knowing of each other’s existence, let alone having to have a business relationship. That’s when you have an open spec.
    The other thing I resist in the idea of an internet operating system is that the net is composable, not monolithic. You can swap in and out implementations of different pieces, and combine different specs that solve one piece of the problem without having to be connected to everything else.
    The original point of the cloud was a solved piece of the problem, meaning you don’t have to worry about the internal implementation.
    Thus, the answer to “shouldn’t social networking be a system service?” is yes, it should be a Social Cloud. That’s exactly what we are working on in OpenSocial.

  • http://justingibbs.com Justin Gibbs

    I’m not a developer, but might Croquet be the operating system kernel you’re looking for? It’s open source and peer-to-peer.

    http://www.opencroquet.org

  • http://blog.jeffreymcmanus.com/ Jeffrey McManus

    “A key test of whether an API is open is whether it is used to enable services that are not hosted by the API provider”

    Incorrect. In saying this, you seem to assume that any API that could ever be dreamed up has already been dreamed up, and that we should only write to vendor-independent, open, horizontally-oriented standards. Down this path lies madness, I assure you. Some APIs need to be vertical. Some APIs are only meaningful to be hosted by one vendor. Would it be useful if eBay’s marketplace APIs were 100% call-compatible with Amazon’s? Maybe. Would it help either business? Probably not. Would it help users, by making their data automatically interoperable? Of course not. It would definitely impose a tax on innovation, though.

  • Phil Lighten

    Seriously, have you not read the GPL version 3 license, and all the commentary surrounding it? Many of the most vigorous supporters of the open source approach have decried its assault on web services.

  • http://tim.oreilly.com Tim O'Reilly

    @Jeffrey – I didn’t say that all APIs needed to be open, just that an API that only lets you build on top of someone else’s app, rather than using their functionality as a component in your completely separate app, is by definition not open. By all means create useful APIs that allow add ons to a particular site, but don’t confuse them with APIs that create real programmable web subsystems.

    There need to be lots more APIs created, but more of them should be thinking “I’m a component,” not “I’m a platform.” Look at AMEE as a great example.

    @Vanessa, Really good point about browser-based APIs using JavaScript vs richer APIs that support multiple clients and platforms.

    @Kevin Marks, you’re right, my test is not strong enough. What you said.

    @Phil Lighten, I have indeed followed the GPLv3 debate and am convinced that it’s completely the wrong approach. The internet could never have been built on the GPL — it took Berkeley/Apache-style licensing and open standards enforcement. GPL-type systems have their place, but I’m convinced that they are actually a dead end.

    I’ve never been a fan of licenses — proprietary or free — that try to enforce behavior. What drew me to open source was the idea that this is really a better way to develop and distribute software, an approach that could win in the marketplace rather than through a legal trick. Apache demonstrates that this approach can work.

    The GPL is the mirror image of Microsoft. It may be sadly necessary when faced with a ruthless monopolist, but not in an atmosphere that is based on interoperability. I’m hoping that if we can establish the right kind of architectures and business practices, we won’t need a heavy-handed GPL approach.

    Meanwhile, I want licenses that encourage innovation, and I don’t think the GPL does that. I don’t see how it solves any problem for cloud computing at all. It’s just an attempt to throw sand in the gears, and that’s not really good for anyone.

    There may yet be a GPL strength license that addresses cloud computing and Web 2.0 issues, but I’m fairly certain that neither GPL v3 nor AGPL is it.

  • http://www.openlinksw.com/blog/~kidehen Kingsley Idehen

    Tim,

    Very very insightful and accurate.

    1. Licenses that permit and encourage redistribution, modification, and even forking;
    2. An architecture that enables programs to be used as components where-ever possible, and extended rather than replaced to provide new functionality;
    3. Low barriers for new users to try the software;
    4. Low barriers for developers to build new applications and share them with the world.

    We need to produce the same thing for Linked Data en route to crystallizing Web 3.0, etc. The good news is that this is coming, and I will need to write a full post about it with examples.

  • http://taint.org Justin Mason

    It’s worth remembering that EC2 is basically just a veneer of automation over Xen, and there’s already an open-source clone in the form of Eucalyptus. That greatly improves the situation for that technology, and makes me feel more comfortable using EC2 without locking myself into Amazon’s trunk.
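
    (As a sketch of what that API compatibility means in practice: with the boto Python library, pointing at a Eucalyptus cloud instead of Amazon is just a change of endpoint. The host, port, path, and credentials below are placeholders.)

        # Same boto client API, aimed at a Eucalyptus endpoint instead of
        # Amazon. Host, port, path, and credentials are placeholders.
        import boto
        from boto.ec2.regioninfo import RegionInfo

        region = RegionInfo(name="eucalyptus", endpoint="euca.example.com")
        conn = boto.connect_ec2(
            aws_access_key_id="PLACEHOLDER",
            aws_secret_access_key="PLACEHOLDER",
            is_secure=False, region=region,
            port=8773, path="/services/Eucalyptus")
        print(conn.get_all_images())  # the same call works on either cloud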

  • http://tim.oreilly.com Tim O'Reilly

    @Justin,

    You’re right, but unless there are compatible providers, that doesn’t help developers to move. It’s not like the old days when you were just compiling and running the code locally.

    I’ll be much more comfortable when I see these services coming from a whole bunch of service providers, the way hosting providers offer a LAMP stack, or Joomla or Drupal.

  • http://www.blurringborders.com Kevin Donovan

    If this is something people truly care about, where’s the take-up of Identi.ca? Or am I missing it? Twitter has failed often enough that you’d think Identi.ca would have more traction and people would be supporting it.

  • http://tim.oreilly.com Tim O'Reilly

    @Kevin –

    First off, I don’t think people care about this issue as much as they should. That’s a big part of my point. identi.ca hasn’t taken off precisely because Web 2.0 applications have network effects that make the big get bigger, thus creating a kind of lock-in.

    But I would also say that while I do like the idea of federated services like identi.ca, Twitter also isn’t a roach motel. They’ve built an API that supports a lot of 3rd-party apps that they don’t control. And while the Summize API shutdown/acquisition might seem a bit squirrelly, I don’t really know the backstory there. It could end up being great for Summize.

    Twitter really does act like a system service, not just a destination web site. Just as an example, I hardly ever use the Twitter website, and access the service via a third-party application (twhirl).

  • http://eatsleeppublish.com Jason Preston

    Unfortunately most of the technical aspects of platforms and modules on the internet are lost on me, but I understand and completely agree with your assessment that there is a need for APIs and services on the web to be as interoperable as possible.

    The “platforms” that are springing up all over the place right now—Facebook, MySpace, even (I think) OpenSocial—are all exoskeletons on the internet; they sit on top of what’s already there and add functionality but do not actually strengthen the internet as an OS.

    What we need are more APIs like Twitter’s API (to continue my metaphor, this is more like working out and getting stronger muscles).

    And of course, to me this all begs the question of what needs to happen to the browser…

  • http://www.blurringborders.com Kevin Donovan

    @Tim -

    I agree on Twitter being better than they could be, but they still could do much better – as I move to Identi.ca, it’s a real hassle to have the people I follow be locked in.

    Regardless, thanks for elucidating this issue.

  • http://taint.org/ Justin Mason

    @Tim — agreed, I’d *love* to see ISPs installing Eucalyptus and offering their own EC2s. A little competition for Amazon’s grid, without breaking API compatibility, would be great!

  • http://blog.jamesurquhart.com James Urquhart

    @Tim and @Justin, regarding Eucalyptus and opening the “EC2 market”: I wrote a post a little while back pointing out that markets will form “solar system” like groupings of cloud providers, with large central players surrounded by others who graft onto their platform and/or API. This is clearly already happening with Amazon AWS (read the post), but Google AppEngine and others are also seeing some support out there.

    My point is that Eucalyptus’s success will drive people towards Amazon’s APIs, which will give Amazon *more* control over that section of the market unless they choose to either release control of the API to an independent body, or add extreme levels of extensibility to the API specs. How can the API evolve to meet the needs of the market over the needs of just Amazon otherwise?

    Having a Eucalyptus is no panacea when it comes to lock-in, it just shifts the problem from the vendor to the API–which in the end could be a no-op.

  • Kevin Hartig

    @Tim
    Keeping the software world an open and easily accessible environment is indeed important for both users and developers. Your post has good thoughts and I found the links to related information very useful! Although open source and standards may be part of a solution here, it is just a small part. I posted some additional ideas on this topic at http://khartig.wordpress.com/2008/08/01/open-source-and-cloud-computing

  • http://tim.oreilly.com Tim O'Reilly

    @James -

    Good point that Amazon still controls the API. But I believe that the real lock in for internet applications comes from data, and applications that have network effects in data. It’s not clear yet that anyone has understood that for cloud computing.

  • http://domino.research.ibm.com/comm/research_projects.nsf/pages/kittyhawk.index.html Chris Ward

    I’m sitting here with a ‘Project Kittyhawk’ box http://domino.research.ibm.com/comm/research_projects.nsf/pages/kittyhawk.index.html . It’s operational in the corporate research lab; all it needs is someone to buy it … ideally someone willing to back their research scientists and developers with ‘development dollars’ … and then you can start selling it to paying customers like ‘Amazon-web-services-in-a box’, or deploying it for your internal commercial or academic purposes.

    So far, it’s open source. No-one has invested the software development resource to develop anything proprietary. Linux (tens of thousands of images), Python, MPICH, TCP/IP.

    How are we going to get it deployed and in productive public use? What more do we need, and how is it going to come into existence?

  • http://www.groovie.org/ Ben Bangert

    Of course the lock-in is the data. If you were able to pack up and move on at will, or easily get your data out and access it via APIs, that’d severely affect most of these websites’ ability to make any money at all.

    Open source in general only makes sense for companies that then lock up something else… the part that actually makes money. Whether it’s the data, or the proprietary website code (built on an open-source stack), or access to special features.

    If Facebook were as open as you indicate, it’d be trivial to duplicate, propagate, distribute, etc., and you’d ruin the only profit they have: you going to their site (where your data is). In essence you’d end up turning all of the social network sites into commodities, and it really sucks to compete in a commodity market. The social network sites will only be as open as they can be while still retaining their lock-in; without it they’re toast.

    Consider one such interoperable internet based platform…. e-mail. It’s such a commodity that no business says, “We’ll get rich giving people email!”. There’s dozens of sites with free e-mail, no barrier to entry for a new one, and such a distributed user-base that none of them are really getting rich (Sure some of them struck it big back in the day). The social network sites like Facebook don’t want to become relegated to the same importance as GMail, Hotmail, Yahoo mail, etc. etc.

    I believe as you do, that someday social networking type functionality, and other services will be built in a distributed and federated system possibly similar to email… but none of the big players that are centralized right now are going to try and make that happen soon.

  • http://tim.oreilly.com Tim O'Reilly

    @Ben Bangert -

    Of course the big players won’t go for this, any more than Microsoft went for free and open source software (at first). My exhortation of this type has always been to open source developers, to make sure that they understand where the lock-in is now.

    That being said, I’m not convinced that open data is an open-and-shut case, so to speak, in terms of destroying value. Over the long term, yes. But consider this: let’s say I make it easy for any individual user to withdraw his or her own data (see the Wesabe Data Bill of Rights). The value is still in the collective of all the data, not any individual’s data. And unless a company goes so far as to say “anyone can take *all* of the data” (and I don’t think they can do that, given their own privacy contracts with their users), the biggest players still have the inherent data lock-in driven by network effects, what we could call “the biggest pile.”

    That’s why I say that building services that get better the more people use them is the key to success in the internet era. If you really do this right, people WANT to have their data with other people’s data on your platform. Ideally, there’s a kind of benevolent lock-in.

    Further, given that what matters is not just the data but the business processes that keep collecting, grooming, and creating interfaces to the changing data, there’s lots of opportunity for value-adds on top of commodity data.

    Just ask Bloomberg.

    But you’re right, there’s a real tension between openness (which creates value) and restriction (which captures value), and lots of experiments about how to get the balance right. Create too much value and capture too little, and you’re Linus Torvalds; create too little value and capture too much, and you’re some of the guys who timed the dotcom bust just right. In between are companies that, by definition, have found some version of right, because they are prospering.

    It’s rare for a company to succeed — even one like Microsoft, which has been the target of so many people — without creating value for its users.

    But you can see that, over time, Microsoft began to capture more value than it created, and that was the source of its decline as the hub of the industry. Google has long been creating more value than it captured; like Microsoft, it is now toying with approaches that may tip it in the other direction. (Knol is one that comes to mind.)

  • James Kilmartin

    Why do you charge for books on open source technology? Shouldn’t they be free to anyone who wants a copy?

    Why is software the only technology people give away for free? What does “OPEN” really mean? Can you name one SIGNIFICANT deployment of Open Source that everyone benefits from, both economically and non-economically?

  • http://eedious.blogspot.com friarminor

    I get this feeling that unless more competition to EC2 from a bunch of smaller players emerges sooner rather than later, the open standards everyone is wishing for will just be a matter of semantics.

    It isn’t too hard to guess that ideas are still fragmented and loose when it comes to innovating on a standard that developers and businesses would embrace, one that will support ‘fuzz-free’ migration in and out of a certain cloud.

    As EQ (as opposed to IQ) oftentimes works, cloud innovators should just do it and not spend days wondering how to tackle the ‘open standards’ issue. I don’t think anyone can get it right the first time, and most, even Amazon, will do lots of continuous reconfiguration to stay in front.

    Best.
    alain
    http://www.mor.ph

  • http://blog.gardeviance.org Simon Wardley

    The question of large-scale monopolies in what is today called “cloud computing” has been raised many times before by yourself and others. There does exist a real danger here of network effects, particularly through mechanisms such as aggregated services (i.e. market reports, preferential pricing or performance), along with the more obvious danger from the creation of a large proprietary-technology-based cloud.

    My reasoning, back in 2006, for the need to open source Zimki (a now defunct utility computing cloud) was based upon much earlier conversations that were raging about the dangers of a lack of portability & interoperability in a future utility computing world. None of this was new stuff then; it was just not widely known, or as you say, “the future is already here, just not evenly distributed.”

    The move towards the cloud is almost inevitable (the business equivalent of the Red Queen Effect) due to cost efficiencies and speed of new service release. The use of open sourced standards (operational open source reference models of services to be provided) such as Eucalyptus is a way to prevent the formation of artificial markets but it does require concerted community action. Open standards are not enough, as we have previously discussed you need working examples of code as the standard or reference model.

    Such action is in the interest of general business consumers for all the reasons of second sourcing. It was clear from CloudCamp London that potential consumers are concerned on the issues of interoperability and portability and none thought that proprietary technology would resolve this issue. However some would argue that those same users tend to vote for something somewhat proprietary.

    I feel compelled to defend those users, as the assertion is that the average CIO lacks the strategic wit of their counterparts in manufacturing. The move by some companies over the last few years to put former heads of manufacturing in charge of IT, and to bring in those skills honed through years of cut-throat commoditised activities and the strategic imperative of second sourcing, may well turn out to be a wise move indeed. However, the shift of business activities along the S-curve of transformation from innovation to commoditisation should lead to greater componentisation and therefore more innovation. It would be unwise to simply focus on the commoditisation of IT; both extremes need to be managed simultaneously.

    I remain hopeful, that those governing consumption of IT within business will realise the need to form consumer associations to push the industry towards open sourced standards for ubiquitous activities. What I cheekily called “Gang up now before the *aaS cloud gets you”

    Time will tell. Good post Tim.

  • http://www.dancingbison.com Vasudev Ram

    1. Very thought-provoking and far-seeing post – thanks!

    2. I agree – in a way – with Vanessa Williams on the point about Google APIs – though not about the Maps API, since I haven’t used it. I’m thinking more of their Web Search API, which they dropped some time ago in favor of the AJAX Search API. On a personal level, I was disappointed when they did that, because not all code is written for the browser / in JavaScript and AJAX. There are plenty of server-side apps or even desktop apps that could benefit from using the Web Search API. Also, having it available in multiple languages would be helpful. It’s a good thing that they have now sort of restored it via the REST Search API, which, though technically a “part” of the AJAX Search API (don’t know why they call it that when it doesn’t involve AJAX, as far as I could make out), can be used from server-side code or basically any code (since it’s just an HTTP GET call with parameters).

    - Vasudev

  • Brent

    There is an inherent bias built into technology and the Web as we know it: it is expensive, energy-intensive, and depreciates rapidly. Its sustainability is questionable. It is inaccessible to much of humanity. Although widely criticized, the OLPC is at least an attempt to address some of these issues. We need to think about new ways to network which will allow humanity to escape the chokehold that ISPs currently have on access and bandwidth. The greatest innovation is likely to come from ‘unlikely’ sources: those individuals and nations who are not given a place at the table.

  • http://news.elgg.org/ Ben Werdmuller

    I completely support the amended test of an API by you and Kevin above.

    For me, one of the interesting things to have come out of the success of the open source movement has been the adoption of its terms and ideas by more proprietary software vendors. On its face, that isn’t a bad thing – except when said terms are co-opted to add an air of openness to what is, essentially, a closed business model.

    Cloud computing as the term has been widely used is a misnomer, to take one example. Services like EC2 might be a cloud internally, but to all intents and purposes it’s a finite box. Because it does have some cloud-like properties, the appropriateness of the name can be argued, but what of the truly peer-to-peer cloud systems? How do they describe themselves?

    Because of the way this happens, with the simpler, less open technologies often winning more favour in the popular consciousness, and because of the wisdom-of-crowds effect, the definition of each term slowly changes towards the more closed, easier-to-understand version. This is a problem because, in turn, as has been described in the comments above, the business models for the more open approaches will dry up. If you’re cynically minded, you could even argue that the community can be gamed to deliberately achieve this.

    I’d also argue that this process shows very short term thinking. The web has succeeded precisely because it’s open. It needs to be. To be in it for the long game is to think about how you can embrace openness as part of a business model, rather than “embracing and extending” it.

  • http://www.gandalf-lab.com/ Niraj J

    Tim,

    Agreed that the high-margin business might not be traditional software, and that the RDBMS might end up being a commodity.
    refer: http://www.gandalf-lab.com/blog/2008/01/database-20.html

    But “elsewhere” is not necessarily going to be where the data resides (the network effect). It could be back in hardware innovation (what makes us so sure that x86 will continue to prevail?). It could be in an integrated but proprietary stack offering like AppEngine, or in energy consumption.

  • http://tim.oreilly.com Tim O'Reilly

    Oyun,

    I think the secret of making money with open is to be in a business where you benefit because more people use or contribute to your product.

    Think about the internet: open, yet lots of opportunities to monetize. (Verisign et al.: the DNS registry; Google: the “link registry.”)

    Unfortunately, many of these are not direct monetization models.

