If Web 2.0 was so hot, how about Web 3.0? This has been a recurrent theme of would-be meme-engineers who want to position their startup as the next big thing. Nova Spivack started it by describing the as-yet-to-be-revealed Radar Networks as Web 3.0, but now Jason Calacanis has his competing definition, neatly tailored to fit his own mahalo.com. The resulting storm of derision is entirely to be expected.
Now, I of all people should be hesitant to say “Web 3.0 is a stupid idea” because of course, that same criticism was leveled at “Web 2.0.” But there are a couple of important distinctions:
- Web 2.0 started out as the name of a conference! And that name had a very specific purpose: to signify that the web was roaring back after the dot-com bust! The 2.0 bit wasn’t about the technology, but about the resurgence of interest in the web. When we came up with the idea back in 2003, a lot of programmers were out of work, and there was a general lack of interest in web applications. But we saw a resurgence coming, and designed a conference to tell the story of what was going to be different this time.
- I then spent some serious time trying to identify the characteristics of the companies that had survived the dot-com bust, and of the best of the new companies and sites I saw coming up. That paper, “What Is Web 2.0?”, was a retrospective description based on a broad swath of successful companies, not tailor-made for a single company or project that has yet to make its mark.
So for starters, I’d say that for “Web 3.0” to be meaningful we’ll need to see a serious discontinuity from the previous generation of technology. That might be another bust and resurgence, or more likely, it will be something qualitatively different. I like Stowe Boyd’s musings on the subject:
Personally, I feel the vague lineaments of something beyond Web 2.0, and they involve some fairly radical steps. Imagine a Web without browsers. Imagine breaking completely away from the document metaphor, or a true blurring of application and information. That’s what Web 3.0 will be, but I bet we will call it something else.
I’m with Stowe. There’s definitely something new brewing, but I bet we will call it something other than Web 3.0. And it’s increasingly likely that it will be far broader and more pervasive than the web, as mobile technology, sensors, speech recognition, and many other new technologies make computing far more ambient than it is today.
But in any event, the next meme to take hold will be broad based, with many proof points, each showing another aspect of the discontinuity. Anyone who says his startup is the sign of this next revolution is just out of touch.
I find myself particularly irritated by definitions of “Web 3.0” that are basically descriptions of Web 2.0 (i.e., new forms of collective intelligence applications) that justify themselves as breakthroughs only by pretending that Web 2.0 is somehow about AJAX, mashups, and other client-side technologies. For example, see Nova Spivack’s post today in response to Jason’s:
Web 3.0, in my opinion is best defined as the third-decade of the Web (2010 – 2020), during which time several key technologies will become widely used. Chief among them will be RDF and the technologies of the emerging Semantic Web. While Web 3.0 is not synonymous with the Semantic Web (there will be several other important technology shifts in that period), it will be largely characterized by semantics in general.
Web 3.0 is an era in which we will upgrade the back-end of the Web, after a decade of focus on the front-end (Web 2.0 has mainly been about AJAX, tagging, and other front-end user-experience innovations).
I have some sympathy with Nova’s attempt to rescue the Web 3.0 term by tying it to a timeline rather than to any particular technology (Windows 95 anyone?), but I find the idea that Web 2.0 is about “front end” technologies to be so ridiculous as to discredit the whole idea. Google is the pre-eminent Web 2.0 success story, and it’s all back-end! Every major Web 2.0 play is a back-end story. It’s all about building applications that harness network effects to get better the more people use them, and you can only do that with a richer back end. Nova is right that Semantic Web technologies may come increasingly into play in some sites, but I don’t think that’s a given.
As I wrote in a comment on Nova’s blog:
Alas, I find the Web 3.0 arguments as clear evidence that the proponents don’t understand Web 2.0 at all. Web 2.0 is not about front end technologies. It’s precisely about back-end, and it’s about meaning and intelligence in the back end.
The real difference between Web 2.0 and the Semantic Web is that the Semantic Web seems to think we need to add new kinds of markup to data in order to make it more meaningful to computers, while Web 2.0 seeks to identify areas where the meaning is already encoded, albeit in hidden ways. For example, Google found meaning in link structure (a natural RDF triple); Wesabe is finding it in spending patterns.
There are sites (geni.com comes to mind) that create narrow-purpose cases where people add structured meaning, and I think we’ll find lots more of these. But I think that the big difference is in the amount of noise you accept in your meaningful data, and whether you think grammar evolves from data or is imposed upon it. Web 2.0 applications are fundamentally statistical in nature, collective intelligence as derived from lots and lots of input at global scale.
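The link-structure point can be made concrete with a toy sketch. Every hyperlink is already a (subject, predicate, object) triple, and a purely statistical pass over many such triples yields a ranking signal without anyone adding new markup. The pages and links below are invented for illustration, and the in-link count is only a crude stand-in for something like PageRank:

```python
from collections import Counter

# A hyperlink is already a (subject, predicate, object) triple.
# All pages and links here are invented for illustration.
links = [
    ("blog.example/a", "links-to", "oreilly.example"),
    ("blog.example/b", "links-to", "oreilly.example"),
    ("blog.example/b", "links-to", "blog.example/a"),
    ("spam.example",   "links-to", "spam.example"),
]

# Crude statistical signal: count inbound links, ignoring self-links.
# No one labeled any page "important"; the meaning emerges from the
# aggregate structure of many individually dumb links.
inbound = Counter(
    target for source, _, target in links if source != target
)

ranked = inbound.most_common()
print(ranked)  # oreilly.example ranks first with 2 inbound links
```

The point of the sketch is that the “grammar” (which pages matter) is derived from the data, not imposed on it by extra annotation.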
See my various posts on Web 2.0 vs. the Semantic Web.
Meanwhile, Web 2.0 was a pretty crappy name for what’s happening (Microsoft’s name, Live Software, is probably the best term I’ve seen), so I don’t see why we’d want to increment it to Web 3.0. But when people ask me what I think Web 3.0 will be, I don’t think of the semantic web at all.
What are things that will give a qualitative leap beyond what we experience today?
I think it’s the breaking of the keyboard/screen paradigm, and the world in which collective intelligence emerges not from people typing on keyboards but from the instrumentation of our activities.
In this sense, I’d say that Wesabe and Mint, which turn our credit cards into sensors telling us about the tracks we’re leaving in the real world, or Jaiku, which turns our phone into a sensor for a smart address book, or Norwich Union’s “Pay as you drive” insurance, are more early signals of something I’d call “Web 3.0” than Semantic Web applications are.
Let’s just call the Semantic Web the Semantic Web, and not muddy the water by trying to call it Web 3.0, especially when the points of contrast are actually the same points that I used to distinguish Web 2.0 from Web 1.5. (I’ve always said that Web 2.0 = Web 1.0, with the dot-com bust being a side trip that got it wrong.)
Nova did have a great response to this comment, which he sent to me in email, and which I reproduce here with his permission:
I would actually say that I agree with much of what you state in your comment on my post. EXCEPT for one thing. The Semantic Web is completely orthogonal to the issue of collective intelligence. It can in fact be used as a better backend for existing “Web 2.0” folksonomies, or it could be used for expert systems — it is not just a top-down framework.
It would not be technically correct to say that the Semantic Web is not about statistics, or that it is not about deriving structure from what is already there in the data — the Semantic Web is just a way of encoding whatever it is that you know (it could have been derived, or not).
So you could use statistics, or mining, or the wisdom of crowds, to mark up data — but then where do you store and share what you have learned about that data? The Semantic Web proposes a richer framework for storing and publishing that metadata. It is completely independent of how the metadata is generated. It’s just a better way to share that metadata.
String tags and microformats, or XML tags for that matter, are just different ways of marking up data. RDF and OWL are also just different ways of marking up data — but they are BETTER ways of doing it. They have much more power, they are more open, they are more extensible; in fact, they support bottom-up collective intelligence better.
This is why I propose that if we MUST use ridiculous terms like Web 1.0, Web 2.0, Web 3.0, then let’s not tie them to a particular technology. Let’s just tie them to decades, in which many technologies happen together.
Let’s face it: the world is not as cut-and-dried as people would like to make it seem. RDF started in Web 1.0, in fact!
I think that there is a distinct difference in the structure of the Web over time, however. RDF enables us to move the Web from a file-server to something more like a database. It enables a web of data. It does for data what hypertext does for text — I call that hyperdata. This is certainly something new and very useful, but it will depend on what people ultimately do with it.
At Radar we are taking a Web 2.0 approach to Web 3.0. Essentially we are making use of user-generated content and the wisdom of crowds, as well as statistical analysis, mining and machine learning. Combined we have something much more powerful than either on its own: a true platform for collective intelligence. The fact that we happen to store the data using the Semantic Web is a convenience — it makes our data more extensible and reusable by others. But ultimately the data itself comes from users.
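The contrast Nova draws between flat string tags and RDF-style markup can be sketched in miniature. In the sketch below, all the resource names and URIs are invented for illustration: a string tag attaches a bare label to a resource, while an RDF-style triple identifies subject, predicate, and object, so the same derived metadata carries structure that other systems can interpret and extend:

```python
# Hypothetical illustration of the tags-vs-triples contrast.
# All names and URIs below are invented for illustration.

# Flat string tag: just a label attached to a resource.
# Nothing says what "sunset" means or how it relates to the photo.
flat_tag = ("http://example.org/photo/42", "sunset")

# RDF-style triple: subject, predicate, and object are all
# identified resources, so the statement is self-describing
# and can be merged with metadata from other sources.
triple = (
    "http://example.org/photo/42",         # subject
    "http://example.org/vocab#depicts",    # predicate
    "http://example.org/concepts/Sunset",  # object
)

# As Nova notes, the encoding is independent of how the metadata
# was generated: hand markup, statistics, or mining could all
# produce the same triple.
print(flat_tag)
print(triple)
```

The triple costs more to write down, which is the trade-off the whole debate turns on: richer shared structure versus the cheap, noisy labels that statistical approaches thrive on.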
Some of this makes sense to me. He’s certainly right that the Semantic Web may prove very useful for many classes of intelligent applications. But the proof of the pudding is in the eating, as my mother used to say.