The GPL and Software as a Service

Linux Magazine’s article The GPL Has No (Networked) Future recognizes a point that I’ve been making for years: that free software license requirements to release source code are all triggered by the act of distribution, and that web applications, which are not actually “distributed,” are therefore not bound by these licenses. (See, for example, my 1999 debate with Richard Stallman at the Wizards of OS conference in Berlin.)

The article describes how during the GPL v3 discussions, there was a move to close the “SaaS loophole” by including some of the provisions of the Affero General Public License or AGPL:

the FSF supported the creation of the Affero GPL and attempted to integrate it into the early drafts of the GPL3. However, that plan backfired and the FSF not only struck the text that would extend the GPL to software delivered as a service but clarified just what “to ‘convey’ a work” actually means.

Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.

In other words, software delivered as a service is now officially not covered by the GPL.

…the community forced the provision out as indicated in the FSF’s 61-page rationale document [pdf] that accompanies this latest draft.

We have made this decision in the face of irreconcilable views from different parts of our community. While we had known that many commercial users of free software were opposed to the inclusion of a mandatory Affero-like requirement in the body of GPLv3 itself, we were surprised at their opposition to its availability through section 7. Free software vendors allied to these users joined in their objections, as did a number of free software developers arguing on ethical as well as practical grounds.

The article concludes that while this is the right decision, it places real limits on the long-term significance of the GPL: “The future is networked. The GPL isn’t.”

At the O’Reilly Radar Executive Briefing on Open Source in Portland the week after next, we’ll be talking with Eben Moglen about GPLv3, with a specific focus on this decision, and the general issues of Web 2.0 applications and free software licenses.

On the one hand, I agree completely with Linux Magazine about the correctness of this decision. Web-delivered applications are just too important to too many people for the horse to be taken back to the barn. It would have been a death blow for GPLv3, making it impossible to adopt. At the same time, I believe that there are important free software issues to be addressed in the Web 2.0 space. On the radar backchannel, Nat wrote:

I keep coming back to Stallman’s printer, the origins of the GPL. He didn’t want to be left without the ability to improve and continue using software he paid for. An implementation of the equivalent ability for services is still not defined.

We talked about open services and open data at the last open source briefing. This is still an unsolved area for both open source and Web 2.0. I believe that there will come a time when we will need to rediscover for Web 2.0 the freedoms that led Richard Stallman to the GPL, but I don’t think it will grow out of the current crop of free software licenses. It will be closer, perhaps, to Wesabe’s open data bill of rights.

Your thoughts?

  • Tim, I’m not sure how you can argue that

    “Web-delivered applications are just too important to too many people for the horse to be taken back to the barn. It would have been a death blow for GPLv3, making it impossible to adopt.”

    How is this any different from Linux? Or MySQL? Or other GPL’d software that has had massive adoption, despite the fact that (in the 20th Century software world) it required reciprocity?

    If I parse your argument, you’re saying, “Fortunately for the web its denizens were able to pilfer free software because if web companies were made to be open source citizens there would be no web.” Or something like that. I just don’t get it. Open source (a la the GPL) has thrived in the non-web world, despite (or, as I’ll argue at the Executive Radar, because of) the GPL, but you’re saying that the laws of the web are somehow different.

    They are not.

    Sure, everyone loves to get something for nothing. But Google would still be Google if it had to (gasp!) pay for some of its infrastructure. It would not cripple Yahoo! to (gasp!) more fully support the LAMP stack upon which it’s built.

    And, taking this one step further, GPLv3 could adopt SaaS-style reciprocity without dismantling existing infrastructure. A new license is prospective in its view. I somehow doubt that the web companies would die tomorrow if they suddenly had to treat open source as a public good rather than as a private good to be used but not replenished.

    I think you need to reconsider the web. It really doesn’t deserve preferential treatment. Why should Google get a free ride but Citigroup should not?

  • Re-reading your post, Tim, and now I wonder if you were quite making the argument I initially inferred. Sorry if I misread you.

  • ndg

    Scenario A:

    You write an open-source text-editor. A company takes the source of your program, adds a few bells and whistles, then sells it on without giving the improvements back to the community.

    Scenario B:

    You write an open-source blogging engine. A company takes the source of your program, adds a few bells and whistles, then sells access to a hosted copy without giving the improvements back to the community.

    They feel much the same to me. The fact that SaaS developers got used to exploiting this loophole doesn’t make it any less reprehensible. That said, some kind of distinction between platforms and applications might be in order: improvements to the core of Ruby on Rails should be shared; applications built with Rails, not so much. (Like GPL vs LGPL.)

    The Data Bill of Rights is just as important, but not a replacement. I’d support an AGPL derivative requiring that data be kept “free” in this way.

  • Matt: from my conversations with Tim, I think he meant that people had already been building web applications with GPLed code under the expectation that they would not have to release their source. If GPLv3 changed that, users of toolkits would have resisted a GPLv3’d version and so developers would have avoided making a GPLv3 version. In that sense the horse was out of the barn.

  • Asa Hopkins

The Affero General Public License (the most recent version, currently in draft form, is otherwise based on GPLv3) does close this loophole, giving developers the choice to respect this idea and require folks using their code to make their modifications available. I am planning on releasing two Ruby on Rails apps under this license, so we’ll see how it goes…

  • The original Affero GPL has been out since 2002, and pre-dates much of the popular web-based software out there. Why haven’t more projects chosen to use it?

  • Matt — what Nat said. But further, who knows what intermixtures of free and proprietary software exist at web companies. Because there was a clear loophole, I’m sure that there is lots of GPL’d code that was re-used, secure in the knowledge that the software was not being distributed, and thus the license terms requiring source code would never be triggered.

    To the extent that a loophole exists for many years, and people come to depend on it, an about-face is difficult. If RMS had been paying attention back in the late 90s, before web applications became the dominant mode of application delivery, there would have been an opportunity to do something different, but as he told me in Berlin in 1999, he thought the fact that Google could mingle free software with proprietary without violating the GPL “didn’t matter, because the software is not on my computer.”

    Don, I know about the AGPL, but how widely used is it? Not very. It would need to be the dominant license for free software used in web construction for it to have an effect.

    But in any event, I’m not sure that even this is the right answer. After all, if “using the code” to deliver services is the trigger, how do you distinguish that from use within an organization?

    And given the compound, multi-tier nature of many web applications, what level of “use” would trigger the license? I mean, in a distributed binary world, you have a single executable. It either includes GPL’d code or it doesn’t. But web apps tend to be built by using GPL’d programs as components, with free and proprietary components working together to deliver results.

    What’s more (and this is why I point to the Wesabe data bill of rights), even if you had the source code for a web app, you wouldn’t have the app itself, with the right to “fix” it in the way that RMS wanted to fix his printer back in 1984. If you have a problem with Google, it’s likely to be a problem with the data. Having the complete source tree won’t fix that. Having the right to remove or modify incorrect data is probably more important than having the right to modify the code.

    Having the source to Google or Amazon or eBay or CraigsList also won’t let you replicate the service, unless you have millions of dollars to spend on infrastructure, employees to manage the ongoing services, etc. etc.

    It’s a profoundly different world. And how to preserve user rights and freedoms requires completely fresh thinking not just about the mechanisms but about the objectives of the effort.

  • Thomas Lord

    Tim, thank you for raising what is one of my favorite questions lately. I think the GPL remains relevant in the long run, and below is why. It’s a simple-minded take but I find it pretty convincing:

    The production of SaaS products has roughly three components:

    1. Provision of software. People have to write, maintain, and continuously improve the applications.

    2. Provision of hosting. SaaS products include rent on hosting. Customers rent the platforms on which SaaS software runs.

    3. Aggregation of sets of users into Web 2.0 crowds. While this is enabled by the technical features of the software, it is a separate step to manage who is and who isn’t participating in which Web 2.0 aggregation of data.

The models of MySpace, Google, Facebook, Yahoo, Microsoft, and others all assume that those three components are concentrated in vertically integrated firms. Google writes Google’s software. Google hosts Google’s apps. Google manages the borders between various crowds using these apps.

Vertical integration is hopelessly inefficient in the long run. The expenses of hosting are huge, but that investment, in the vertical model, pays off only as well as a single firm’s software permits.

Amazon’s S3 is perhaps the most widely known example of an alternative approach. It is an attempt to commoditize the provision of hosting. In this model, multiple firms concentrate on efficiencies in providing general-purpose hosting, while multiple firms compete to develop software for that platform. Customer-facing SaaS comes to be a competition between integrators who buy software, buy hosting, and sell SaaS instances.

    One of many available analogies could be drawn to Sun, ca. 1990, and the “unix market” today. What we learned there is that as platforms (hosts) become commodities, and free software becomes available, system integrators who buy those and sell customer-facing products tend to win against vertically integrated firms that want to produce and sell (as a monolithic unit) all three of integration, software, and hardware.

There is a chicken-and-egg problem in the room and it’s a big one: we have, as yet, no generally accepted standard for hosting-component commodities. For example, S3 clearly aspires to be a commodity but it won’t really be until lots of people are selling the same thing and customers can switch providers without interrupting service, about as easily as they can change long-distance carriers for their land-line telephones.

    How do we produce such a definition of a commodity? Proprietary software vendors can do it only one way and that is through formal specifications by widely respected bodies — a hopelessly slow, expensive, error-prone process.

    Another way to define the host platform commodity for SaaS is to grow it organically: to offer up free software components which, in effect, propose infrastructure software for hosting and application software that runs on that host. To the extent such efforts can be started small, linked from the start to a functioning demand for integration of those components, and then grown, this approach to commoditization can succeed quickly, paying its own way, and having the benefit of learning from experience as the commodities evolve.

    -t

  • Joshua Daniel Franklin

    A strange thing about web-based applications, though, is that you can modify them already, for example through Greasemonkey or standalone customizations like Better Gmail. It does bother me that small customizations are a lot of trouble, though.
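For readers who haven’t seen one, a Greasemonkey user script is just a bit of JavaScript the browser runs against pages you visit. A minimal sketch (the script name, the `@include` pattern, and the “flag external links” behavior are all hypothetical, purely for illustration):

```javascript
// ==UserScript==
// @name     Flag external links (hypothetical example)
// @include  http://example.com/*
// ==/UserScript==

// Pure helper, kept separate from the DOM so the logic is easy to test.
function flagExternal(linkText, linkHost, pageHost) {
  // Append a marker to the text of links that point off the current site.
  return linkHost === pageHost ? linkText : linkText + " [ext]";
}

// DOM wiring: only runs inside a browser/Greasemonkey context.
if (typeof document !== "undefined") {
  for (const a of document.querySelectorAll("a[href]")) {
    a.textContent = flagExternal(a.textContent, a.hostname, location.hostname);
  }
}
```

Even a one-off tweak like this has to chase the site’s markup as it changes, which is part of why small customizations remain a lot of trouble.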

  • Affero GPL is the answer. Developers and publishers that want to close the loophole can do so through Affero. There’s no need to incorporate these changes into the GPL. If anything, it will just cause GPL v2 licensers not to upgrade to v3 (because they elected to use GPL, not Affero GPL, in the first place).

    I’m glad Affero exists, and I used it to license a PHP project I worked on back in ’05. But, I think Affero’s existence alone is enough.

  • Tim,
    Thanks for the very well-reasoned article. Wikipedia is a model of what “open source” could look like in the web world.

    I don’t view software in the moral terms a la Richard Stallman, because it is so obvious that software is now pretty much *everything* – so would Stallman avoid driving a car or even toasting a piece of bread because some proprietary software is running in those software-disguised-as-machines? This is a relevant question: after all, there is a moral (if you believe in global warming) consequence to the precise way in which your engine is programmed to burn gasoline, and a GPL type of argument can be made that the car owner should have access to that program.

    Instead, I would argue that the emergence of open source software was very much a rational, adaptive economic response to the monopoly Microsoft had built, and not based on any moral qualm about proprietary software. Economic efficiency won.

    As a perhaps unintended consequence, the availability of such software enabled Yahoo and Google to emerge and challenge the established order. It is that consequence GPL is grappling with, and I agree with you that the danger in “closing that loophole” is that GPL will lose its relevance.

    As the Microsoft threat has faded (it has faded!), and new kinds of monopoly threats emerge, the rational economic response is taking the shape of a Wikipedia.

    Sridhar

  • Thomas,

    Provocative insights. I wrote my own take on the relationship between open source and commoditization back in 2004 in The Open Source Paradigm Shift, but that thinking definitely needs to be updated to take into account the kind of commodity provisioning you talk about with S3.

    Your comment suggests the depth and complexity of the issues, and why we need to think holistically about the changes in the way software is deployed, and not try to constrain it to what are fundamentally 1980’s-era models.

Sridhar,

    I agree that wikipedia is the closest thing we’ve got to an open source web 2.0 play. (Well, maybe CraigsList is a close second.) But it’s not clear how the wikipedia model is extensible to other types of projects, and we certainly don’t have a codification of the principles that could be used by other projects.

    Again, though, a good direction for study and reflection.

I’m not sure I’d describe the SaaS situation as a “loophole”. It really is a new and different circumstance that has arisen as a consequence of the maturation of technology and the IT industry.

    However before you can consider GPL as it relates to SaaS, there are three fundamental entities that need to be considered:

1. The OPENness of application source code and any web APIs – e.g. Google Maps. The web APIs might be open but the actual application code is not.

2. The FREEness of the solution – i.e., do you pay for either the SaaS application code (if you want to host the app yourself) or for the service that application provides?

3. The “openness” and “freeness” of the DATA used in the service – e.g., how free are you to migrate your data from one service to an alternative, with respect to ease and cost? Gmail/Yahoo/Hotmail may be free to use, but how free am I to export my mail from one to the other? Also, it’s not beyond the realms of possibility that I get charged for doing so. Think “early termination” fee.

Currently the GPL is closest to #1 and a long, long way from #3.

Think you’re bang on, Tim.

    In a future with SaaS at the core of non-commercial user applications, I suggest that the key things that users will care about are the openness of:

    1] their data

    2] the application GUI

As you mention, the source to Gmail will not greatly help me replicate Gmail. But an open GUI API will help me (or someone else) create things like Better Gmail without (messy & brittle) hacks.

In a networked world, open services and open data are indeed the key to rediscovering the GPL freedoms for the future service-oriented information society. A very important point of departure for scouting this new territory is keeping the user, the application, and the data separate and distributed. This prevents the creation of a “holy grail of information” that one or a few entities would try to build and own, which would of course kill any “open” approach needed to guarantee those freedoms.

  • Tim,
I mentioned Wikipedia in the context of the “Wikipedia eats Google” meme. Squidoo, Mahalo etc. are all commercial attempts at the same thing: use organic search results in a way that directs people to other authoritative indexes of information, compiled using voluntary/paid human effort. Just as we saw in the software universe, there are varying types of trade-offs made between openness and control, but they all do have one end in common: undermining Google’s position as “the” authoritative answer to every query.

    Sridhar

  • Thomas Lord

Google’s data is an interesting case and perhaps a fairly unique one. The main source of their competitive power is that they closely hold their raw data (the page cache, the book scans, etc). On that basis, their hackers have a privileged platform on which to launch closely held software applications. In this view, the results-ranking algorithms or ad-placement algorithms of Google aren’t obviously any better than “usable”, because they are the only apps we’ve ever seen on this platform. Were there competition for the platform, with non-Googler researchers inventing applications, perhaps we’d find that Google search and Google ads are brilliant, or perhaps we’d find they contain lots of clumsy “beginner’s errors”.

It’s very hard to imagine an open form of Google’s privileged platform. Collecting the raw data is easy enough — nothing but bandwidth costs there. But serving the product data quickly, and efficiently running platform facilities like map-reduce, really requires very customized hosting at a very large scale. What incentive could there be to build such a platform other than to hoard it?

    Here is my prediction:

Google itself points the way to its own demise. From their proliferation of experiments with competing beta projects, we can see that Google has (1) engineered their platform to permit the concurrent execution of mutually isolated and non-interfering applications (their internal platform is a time-sharing, multi-“user” system — really “multi-application”); (2) tried to optimize use of that platform by creating an internal, competitive marketplace to win installs and cycles on that platform (cf. Google’s flat structure and particular form of a culture of innovation).

    Just as S3 attempts to commoditize storage, other services can attempt to commoditize things like “map-reduce (in our safe language) over a global page cache” or things like “create an index for the page cache”.

Such commodities won’t be cheap anytime soon, but here’s the twist: Google isn’t alone in building out solutions for huge data centers and global bandwidth assurances. Yet none of the players have any especially bright ideas yet about how they’ll keep these centers busy in the long run. Many of Google’s competitors are in that position. None of them can, through some heroic innovation, easily overcome Google’s brand and leveraging of Google’s currently privileged platform. Therefore, all of these competitors have incentive to play spoilers: if they can’t make heaps of money taking Google’s apps on directly, they can at least make sure that nobody, including Google, can make heaps of money in that market. They can do that by forcing that market to be efficient. They can force that market to be efficient by offering a Google-like platform up for rent and experimentation, so that every little super-genius dweeb and his brother can start hacking in head-to-head, public-facing competition with the hackers at Google who turn the platform into search results and ad placements. In other words: Google’s pricing power is doomed to converge on no more than the cost of producing various search and ad placement results. (In which regard, people thinking of buying Google today as a long-term value play, as if they didn’t already have enough reasons, ought to pause and think: how efficiently, exactly, is Google’s spending today raising the baseline cost of the profitable services they float?)

Finally, I have to make a pitch here. I make all of this platform-commoditization stuff sound easy. It isn’t rocket science and it’s very far from “out of reach”, but it also isn’t a done deal: it’s going to require actual, banal, ordinary R&D in programming language design, systems programming, and network engineering. Some of that is in progress by small groups at various universities, but it will be to the shame of today’s free software firms if they are latecomers to the upcoming party because the traditional-style R&D labs at certain proprietary vendors beat them to the punch.

    -t

  • I’m firmly in the camp that GPL requires no modification. (at least for SaaS)

The confusion comes in the equal weighting of the S’s in the acronym. But Software as a Service isn’t Software at all; it’s a Service, period.

    Salesforce.com no more sells software than does Goldman Sachs. Both, I assume, use and modify open source code to deliver service to their customers.

And it’s not merely a “deliverability” semantic. Salesforce sells services that help me optimize my sales. The fact that I used to have to buy software to do that myself is irrelevant.

  • Mike —

On the one hand I agree. When a vendor delivers SaaS, they are delivering a service, not the software itself. And that’s why the GPL and other licenses conditioned on the act of distribution have no force. But if we step back, think about the issue of freedom, and ask ourselves what freedoms we’ll be wanting in, say, a world in which Google has the monopoly power that Microsoft had in the 90s, they are very different freedoms than the ones originally sought by the GPL, but no less important. The freedoms to improve and modify are truly valuable — the question is how to make those freedoms available in a world of SaaS, and more importantly, of SaaS in which the service includes not just software but collective intelligence in the form of vast databases — databases that include data I’ve contributed, and that is no longer under my control…

  • Tim and Don,

The main reason that the original AGPL did not see more adoption was that it had several deep flaws in its wording, and because of those flaws, stood no chance of being approved as OSD-compliant. Affero (as an organization) did not really have the resources to re-draft the license and then submit it for approval. Without that imprimatur, and considering the AGPL’s incompatibility with GPLv2, there was never much chance of widespread adoption.

    I believe that the newly drafted GAGPLv3 will pass OSD muster, and that (in part because of the explicit compatibility with GPLv3) we will see a growing number of web applications (and perhaps more importantly, web-app components) released under the new license (in large part because it enables the now reasonably well-understood dual licensing business model for web applications). I myself intend to release some code under the GAGPL.

    In general, I think we will see web-app developers align themselves into similar rhetorical camps as we currently see dividing copyleft vs. permissive licensing, and that in general, over time, web-app software that is closer to the end-user will be relatively more likely to be licensed GAGPL than infrastructural software that is closer to the metal.

    I suspect that we will also see some projects fork (perhaps rather spectacularly) around the issue of relicensing under the GAGPLv3.

    Note that there does not seem to be any demand for a license that bears the same relationship to the GAGPL as the LGPL does to GPL, as the GPL itself partially fills this role.

Because of this, as software functionality is delivered more and more over the network and the GPL itself becomes a de facto permissive license from this point of view, I have a feeling that, as a side effect, this may end up somewhat marginalizing the de jure permissive licenses and the projects that surround them, such as the *BSDs.

  • > Your thoughts?

    My only concern is the spurious use of the term “Web 2.0”. What is it? Nobody is able to even define it properly (let alone *consistently*).

  • Outstanding post and fabulous points raised.

    I also agree with much of what Thomas Lord has commented on.

My company has been providing a utility computing environment since before March ’06 and we planned to open source it at OSCON ’07. Unfortunately this isn’t going to happen now – for reasons published – but I did examine the whole GPLv3 vs. AGPL question, so I thought I’d add my view.

A number of utility computing environments use Xen, or at least a modified version of Xen, in the background. This is an example of HaaS (hardware as a service), but it suffers from the same problem the AGPL identifies: the modifications made don’t need to be released to the community.

With Zimki, we wanted to create a competitive utility computing market, the essential component of which is portability from one provider to another. However, how does a user know there is portability if each provider modifies the basic engine and does not release the change? So the AGPL seemed like a natural choice.

But a competitive market should also encourage improvements. Obviously, if the engine is being updated, there is a natural pressure (the cost of constantly re-applying modifications to each release) to return changes to the source; however, this doesn’t mean you want to put barriers in the way of companies attempting to improve the engine to gain some advantage over one another.

The key issue, therefore, is compliance with a standard. In the utility computing world, as long as a user can freely move their application and data from one provider to another and know that the application and data will run, any enhancements or improvements a single provider wishes to make don’t need to be released back to the source. If anything, it creates further competition among providers. So GPLv3 now makes more sense.

So how do you ensure that a provider complies with the standard, and inform the user that they can freely transfer their data and applications (a more extended version of Wesabe’s data bill of rights) from one provider to another? You can use trademarking to certify standards compliance.

Whether the standard engine is a specific language-based development platform, a raw compute resource platform, or a SaaS – in order to generate a competitive utility market (which is where much of this will eventually head) you’ll need both open source and standards compliance.

    Why do I push the theme of a competitive utility market? Well as per the discussions by Artur on S3 etc – the issue with SaaS / HaaS is exit cost, potential lock-in, SLAs, DR and so forth. All of these problems can be solved with multiple providers of the same standard and portability between them.

The argument for use of any SaaS / HaaS is more compelling as a marketplace: a Google S3, a Microsoft S3, and an Amazon S3 together are a more attractive proposition to users, because of the portability, than any single provider.

    A market with choice and freedom is always more attractive than any single vendor.

So is GPLv3 irrelevant? I have to disagree – I actually feel it is the right approach, as it allows for competitive improvements whilst creating the normal pressure to return changes to the source through the cost of constant modification.

    As utility based SaaS / HaaS platforms are open sourced, the attractiveness of a market will create pressure for standard bodies and compatibility assurance (whether the original group or not).

So is GPLv3 relevant to a networked world? From a utility computing point of view, I believe it is ideal.

Tim – Ahh yes, we need a new acronym – DaaS – to go along with the new license style. As both a provider of and a beneficiary of data services (free and paid), I’m in the thick of it.

    I can tell you’ve hit a nerve with me, because I’ve been trying all day long to solve the whole problem in this little comment text box. I may have to carve out time for the next Open Source Briefing…

  • adam hyde

    Just to take a slightly different tack which relates back to something Tim said in the opening entry :
    “I believe that there will come a time when we will need to rediscover for Web 2.0 the freedoms that led Richard Stallman to the GPL”

I agree, and I would take this a step further. The solution to the freedom of Web 2.0 content is in the GPL (v2), and does not lie in the default Web 2.0 license family, Creative Commons.

    If all content was licensed under the GPL, including Wikipedia (as the FDL is not a free license in my opinion), then we would have absolute interoperability of content. The GPL can be applied to software or “any other work”, and if the implementation principles of the CC (easy to understand, easy to use) had been focused on working with the GPL, then Lessig would have created the CC wrapper for the GPL and stopped there (http://creativecommons.org/license/cc-gpl).

We would then have content that is easily tagged as free, guaranteed to remain free, and easily distributable.

Providing ‘sources’ is the only hiccup, but not so difficult (provide a link to archive.org’s Wayback Machine in the license tag, for example).

What we have now is one big mess of incompatible freedoms. Most CC licenses are not compatible with other CC licenses. It’s easy to see how far this mess extends by looking at the official list of CC-compatible licenses:
    http://creativecommons.org/compatiblelicenses

    …there aren’t any.

    There are no universal freedoms in web2.0 if the content is licensed with CC or the FDL.

What we have is user-generated content housed within platforms where interchange of content is only possible internally. This plays nicely into the hands of those platforms, as they can appear free by waving the banners of freedom (CC licenses) and generate goodwill amongst their user base, while at the same time knowing that real exchange of content between platforms is not a possibility because of the lack of compatibility between licenses.

    I think CC is the biggest barrier when it comes to data exchange. It has muddied the water of ‘freedoms’ that were pursued by the FSF, and made a real mess of the license world.

The FSF is not innocent either, and they are more and more backing away into the GNU fortress. Their inability, for example, to take their own words seriously has meant that they are headed towards obsolescence. They claim the GPL can in fact be used for any content, not just software, but then they retreat into a world that consists only of source code.

In a world where software and content are merging and increasingly hard to differentiate, they are polarising themselves into obscurity. The problem is that the FSF do not see that the same freedoms should exist for content, as can easily be seen through the lens of their licenses. Why have an FDL at all? Why not just use the GPL for everything and preserve the same freedoms for all content?

The FDL exists because documentation (‘content’) is uncomfortably close to software, some of it is even included in source code, and yet it is _not_ software. And the freedom of non-source-code content is not so important to the FSF, so we have the FDL. The FDL itself does not guarantee the free (libre) development, exchange, and improvement of material. Rather, it exists to preserve a business model for manual writers:
    “the GFDL has clauses that help publishers of free manuals make a profit”
    http://www.gnu.org/licenses/gpl-faq.html#WhyNotGPLForManuals

    So, we have a real problem. The solution, in my opinion, comes from the GPL and ironically not from the FSF or CC.

I don’t think things will grow to be significantly different than they are today.

Certain web apps/sites run on lots of FOSS components but, in general, do not contribute tons of code back. Some services/sites are better than others — but more are getting better all the time.

The advantage of collaboration is the key factor here — more minds are better than few, new ideas come from openness, and you get to watch the things you originally designed used in ways you never intended.

Open source succeeds because it works, not because it is mandatorily viral — look at the BSD license for a counterexample.

    Even when these companies are not huge FOSS project farms, many users there still contribute in terms of testing, bug reports, and feature requests — these too, are important parts of the community.