Mon, Nov 13, 2006

Tim O'Reilly

Web 3.0? Maybe when we get there.

John Markoff just published a story in the Times about the future of the web, suggesting that "From the billions of documents that form the World Wide Web and the links that weave them together, computer scientists and a growing collection of start-up companies are finding new ways to mine human intelligence." If you've been reading this blog, you know I totally agree that building systems that combine human and machine intelligence is a huge part of the oncoming future.

But I was surprised to see Markoff referring to this as "Web 3.0", when that very fact is the heart of what we've been calling Web 2.0. Markoff limits Web 2.0 to "the ability to seamlessly connect applications (like geographic mapping) and services (like photo-sharing) over the Internet," which seems rather surprising to me, given that "harnessing collective intelligence" has been a key part of the Web 2.0 definition from the beginning.

That being said, we're a long way from the full realization of the potential of intelligent systems, and there will no doubt be a tipping point where the systems get smart enough that we'll be ready to say, "this is qualitatively different. Let's call it Web 3.0."


tags: web 2.0

1 TrackBack

» Web 3.0: the age of agents from LIVEdigitally

John Markoff stirred up the pot this weekend by launching Web 3.0, even while good ol’ Web 2.0 is still in beta (or alpha). And I didn’t even get to go to the launch party, although I did get to spend some time in the halls of the Web 2.... Read More

Comments: 15

  Steve Loughran [11.13.06 07:33 AM]

No, the article is about the Semantic Web.

Web 2.0 != SemWeb. I don't want to get into an argument about the merits of either, but the Semantic Web is built on ontologies and ubiquitous RDF, whereas Web 2.0 is more about trackbacks and folksonomies. The only RDF found in Web 2.0 is in RSS 1.0 data feeds, and those are considered legacy compared to Atom data sources.

Perhaps the rebranding of SemWeb work as Web 3.0 is an attempt by the Semantic Web community to stay relevant, or it was just a witty title by the article's author. We will have to see if the SOAP infrastructure tries to become Web 4.0 next.
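The ontology-vs-folksonomy contrast Steve draws can be sketched in a few lines of Python; all URIs and data below are illustrative, not drawn from any real feed:

```python
# An RDF-style statement is a (subject, predicate, object) triple whose
# terms come from shared vocabularies identified by URIs.
rdf_triple = (
    "http://example.org/posts/web30",          # subject (hypothetical post)
    "http://purl.org/dc/elements/1.1/creator", # predicate (Dublin Core)
    "Tim O'Reilly",                            # object
)

# A folksonomy is free-form user tagging: (user, item, tag), where tags
# are whatever words users happen to choose.
folksonomy = [
    ("alice", "http://example.org/posts/web30", "web2.0"),
    ("bob",   "http://example.org/posts/web30", "semweb"),
]

# Tags can be aggregated statistically, but they carry no formal
# semantics; the triple's predicate has a defined meaning.
from collections import Counter
tag_counts = Counter(tag for _, _, tag in folksonomy)
```

An RDF predicate is a URI with a defined meaning in a shared vocabulary; a folksonomy tag is just a string whose meaning emerges statistically from how people use it.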

  Xavier Cazin [11.13.06 09:47 AM]

SemWeb = 3.0; SOAP = 4.0... That could have been the case if "2.0" captured an improvement upon the original Web technologies, but I don't think it does.

Doesn't it rather capture the fact that anyone can contribute a little in order to get a lot, whereas in the days of the unnamed 1.0, only a happy few enjoyed the power of this pleasant asymmetry?

  Kingsley Idehen [11.13.06 04:09 PM]

Tim,

A few things:

  1. We are in an innovation continuum.
  2. The Web as a medium of innovation will evolve forever.
  3. Different commentators have different views about the monikers associated with these innovations.
  4. To say Web 3.0 (aka the Data Web or Semantic Web - Layer 1) is what Web 2.0's collective intelligence is all about is a little inaccurate (IMHO); Web 2.0 doesn't provide "Open Data Access".
  5. Web 2.0 is primarily a "Web of Services", a dimension of "Web Interaction" defined by interaction with services.
  6. Web 3.0 ("Data Web" or "Web of Databases" or "Semantic Web - Layer 1") is a Web dimension that provides "Open Data Access", exemplified by the transition from "Mash-ups" (brute-force data joining) to "Mesh-ups" (natural data joining).

The original "Web of Hypertext" or "Interactive Web", the current "Web of Services", and the emerging "Data Web" or "Web of Databases" collectively provide dimensions of interaction in the innovation continuum called the Web.

There are many more dimensions to come. Monikers come and go, but the retrospective "Long Shadow" of Innovation is ultimately timeless.

Mutual Inclusivity is a critical requirement for truly perceiving these dimensions ("Participation", if I recall). Mutual Exclusivity, on the other hand, simply leads to obscuring reality with Versionitis, as exemplified by the ongoing Web 1.0 vs 2.0 vs 3.0 debates.
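Kingsley's contrast between "Mash-ups" (brute-force data joining) and "Mesh-ups" (natural data joining) can be illustrated with a toy sketch; all of the data, URIs, and function names below are hypothetical:

```python
# Mash-up style: each source uses its own ad-hoc keys, so joining means
# brute-force matching on strings that merely look alike.
photos = [{"place": "San Francisco, CA", "url": "photo1.jpg"}]
events = [{"city": "san francisco", "name": "Web 2.0 Summit"}]

def mashup_join(photos, events):
    joined = []
    for p in photos:
        for e in events:
            # Fragile heuristic: lowercase substring comparison.
            if e["city"] in p["place"].lower():
                joined.append((p["url"], e["name"]))
    return joined

# Mesh-up style: both sources identify the place by the same URI, so the
# join is a natural equality test on a shared global key.
photos_uri = [{"place": "http://dbpedia.org/resource/San_Francisco",
               "url": "photo1.jpg"}]
events_uri = [{"place": "http://dbpedia.org/resource/San_Francisco",
               "name": "Web 2.0 Summit"}]

def meshup_join(photos, events):
    return [(p["url"], e["name"])
            for p in photos for e in events
            if p["place"] == e["place"]]
```

The first join only works for as long as the string heuristics happen to hold; the second works for any two sources that agree on shared identifiers, which is the "Open Data Access" point above.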

  odoncaoa [11.14.06 11:21 AM]


> fact is the heart of what we've been calling Web 2.0.
> "harnessing collective intelligence" has been a key part of the Web 2.0 definition
Not certain where that is coming from. Because, as you are well aware, O'Reilly, that is the differentiator between 2.0 and 3.0: 2.0 caters to the whims and fancies of its human users, while 3.0 facilitates the ability to have computational agents working among themselves, on our behalf (even autonomously), once it gets fully rolled out. An obvious differentiator between 2.0 and 3.0 concerns the differences between data and information. 2.0, by and large, requires its users to qualify Web-homed data by adding relevant metadata before an amount of data becomes information. With 2.0, the onus is on the user to translate all the data that s/he encounters along the way into information; there is no infrastructure to have such activity take place in an automated fashion. 3.0 will not necessarily require human qualification before data can be transformed into information: the "mechanics" are being crafted into the infrastructure, which will automatically facilitate such transformation.

Come on admit it T.O., you were just pissed because you liked the term "Grid", better than "Utility Computing", and you're not so comfortable being identified as a "Geek"; yeah? ;^)

  Karthik Gomadam [11.18.06 11:04 PM]

The blog captured a popular perception and belief that Web 2.0, as defined by Tim in his blog, is indeed a big first realistic step towards realizing the Semantic Web. I agree and disagree with the note that Steve wrote. Web 2.0 as it stands today is not the Semantic Web in its entirety. However, it is a first and important step. Technology concepts like Ajax should not be seen as the only defining points of Web 2.0. A lot of web applications use a fancy UI and rebrand themselves as Web 2.0, conveniently ignoring the important aspect called "harnessing collective intelligence." Again, the Semantic Web is not just OWL or RDF, but rather modeling that collective intelligence in a manner that is usable, understandable, and extensible by everyone else.
It's a bit surprising that someone like Steve would project the impact of SOAP as far as Web 4.0. It is not hard to see SOAP being used as the XML format of choice for REST-style communication.

  Paul Walsh [12.10.06 05:32 AM]

Why is everyone referencing O'Reilly for the correct definition of Web 2.0? I never could get my head around this. I personally think that his definition of Web 2.0 isn't actually a definition. He basically came up with some analogies which people later used to define what 'they' thought Web 2.0 was. If O'Reilly had actually defined it, would there be so much debate?

So, what’s Web 3.0?

Well, IMHO, it can be anything that we want it to be. I don't care much for naming conventions of this kind. However, coining a phrase does have benefits: it's good for us folk to help benchmark where we are on the curve today, and it also helps us articulate where (we think) we're heading.

The Web is in “permanent beta” as we improve it to meet and exceed users’ expectations. So Web 2.0, 3.0 and so on is great. Let’s hope Microsoft doesn’t take over or we’ll end up with Web 75 service pack 1092.

I think the Semantic Web is the most sustainable vision, and you can't really argue against it. The Semantic Web doesn't equal RDF and ontologies. It is, in my opinion, about creating a common structure underneath the Web to improve how users find the content they're looking for, no matter what Web site it sits on. Web 2.0, as I see it, falls short of this because it's more about Web pages and applications which aren't standardised to ensure they can *all* talk to each other in the future. Web 2.0 = version 2 of the Semantic Web – it's not quite there but a good start…

To use another analogy, we should focus on getting all the plumbing in the hotel to use the same standard materials. It doesn’t matter if the plumbers use metric or imperial as long as they can convert from one to the other.

So, every time we extend the building to accommodate more rooms, we don’t have to worry about extra maintenance using additional water tanks and pipes to the mains, or even some people not getting hot water at all. Ok, I’m probably going a little too far now.

As George Orwell once said, “RDF is more equal than XML most of the time” ;)

  Paul Walsh [12.10.06 05:33 AM]

What's up with the format of my previous post? I didn't use a text editor...

  Tim O'Reilly [12.10.06 07:57 AM]

Paul, you're probably on a Mac with "smart quotes" turned on...

As to the substance of your comment, here's a brief definition of Web 2.0:

Web 2.0 is the business revolution in the computer industry caused by the move to the internet as platform, and an attempt to understand the rules for success on that new platform. Chief among those rules is this: Build applications that harness network effects to get better the more people use them. (This is what I've elsewhere called "harnessing collective intelligence.")

Other rules (which mostly fall out of this one) include:

* Don't treat software as an artifact, but as a process of engagement with your users. ("The perpetual beta")

* Open your data and services for re-use by others, and re-use the data and services of others whenever possible ("Small pieces loosely joined")

* Don't think of applications that reside on either client or server, but build applications that reside in the space between devices ("Software above the level of a single device")

* Remember that in a network environment, open APIs and standard protocols win, but this doesn't mean that the idea of competitive advantage goes away (Clayton Christensen: "The law of conservation of attractive profits")

* Chief among the future sources of lock in and competitive advantage will be data, whether through increasing returns from user-generated data (eBay, Amazon reviews, audioscrobbler info in last.fm, email/IM/phone traffic data as soon as someone who owns a lot of that data figures out that's how to use it to enable social networking apps, GPS and other location data), through owning a namespace (Gracenote/CDDB, Network Solutions), or through proprietary file formats (Microsoft Office, iTunes).

"Defining" a business model transition is always hard. We had a "personal computer" era long before the business rules were clear. A deeper understanding of the new rules of business in the PC era, and a ruthless application of them before anyone else understood them as well, is what made Microsoft the king of the hill in that era.

A lot of what I'm trying to do with my thinking on Web 2.0 is to make the rules apparent to everyone, so that the industry isn't blindsided. Perhaps a hopeless effort, but I've gotten some traction...

  Steve Cauffman [03.07.07 07:15 AM]

Just an FYI - There is an article concerning 'Web 3.0' in the March/April 2007 Technology Review. It's called "A Smarter Web" by John Borland and starts on page 64. It seems to focus on the work of MIT-affiliated Eric Miller.

  Sujatha [05.22.07 05:04 AM]

Very interesting analogy on Web 3.0. Web 2.0's principle was to harness collective intelligence; implementing this in the next era will be a good intersection point for artificial intelligence.

  dan [08.25.07 05:28 AM]

check my web site.

  Sherwin Shao [10.21.07 07:47 AM]

Human intelligence is very personal; it's not a collective. This grand vision of Web 3.0 involving intelligent machines will never happen, because humans will never be surpassed by machines in judgmental intelligence. Machines are not interested in certain things; interest is a uniquely human trait. Belonging to groups is another human trait. So machines can't really extract MEANING.

So What’s Next?

Different words have different meanings to different people. Different people are identified by different demographics. There's no need for the machine to understand all that meaning stuff, as long as people understand and communicate in the smallest possible unit of related meaning: the question and answer.

By simply matching questions with other similar questions and grouping the answers from all the experts, users are given access to the best possible knowledge available from all perspectives. So you can learn at a much faster rate, bypassing the uninteresting noise for the interesting nuggets of knowledge.
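A minimal sketch of the matching idea described here (hypothetical code, not HelpGlobe's actual method): score a new question against stored questions by word overlap, then pool the answers of the closest matches.

```python
def jaccard(a: str, b: str) -> float:
    """Similarity of two questions as the overlap of their word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def pooled_answers(new_question, qa_pairs, threshold=0.3):
    """Collect answers attached to stored questions similar to new_question."""
    return [answer for question, answer in qa_pairs
            if jaccard(new_question, question) >= threshold]

# Toy knowledge base of (question, expert answer) pairs.
qa = [
    ("what is web 2.0", "A platform of services that improve as more people use them."),
    ("how do i install linux", "Download an ISO image and boot from it."),
]
```

Here `pooled_answers("what is web 3.0", qa)` returns only the first answer, since the second stored question shares no words with the query. A real system would add stemming, answer ratings, and the demographic weighting described above.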

The next generation of the web will make use of what we like, what we know, and what we've done to give us what we need: based on our recent search history, our demographics, and our ratings, questions, and answers.

Given your usage, the system should know what you're interested in and can show you questions based on the community you naturally belong to. So you will not have to avoid people you find annoying; the system will segregate you naturally. Also, other people similar to you will, through their ratings, constantly be finding things that are interesting to you. The system is adaptive, so that as your interests change, your search results change with you.

Try http://www.helpglobe.com. You'll see what I mean.

  Tim O'Reilly [10.21.07 09:14 AM]

Sherwin --

Despite your opening disclaimer, what you describe is very much in line with what I call "harnessing collective intelligence" as a key aspect of Web 2.0. What we've discovered is that computer algorithms can extract patterns from human choices, thus creating a new fusion of man and machine. That's precisely what I mean by Web 2.0.

  Thomas Lord [10.21.07 01:17 PM]

"computer algorithms can extract patterns from human chioces, thus creating a new fusion of man and machine"


Yes, it creates a feedback circuit. It shreeks like a guitar leaned carelessly against its live amp. If you care about human freedom, you make sure the squelch knob is close at hand to just about everyone.

Stop pimping for a ubiquitous skinner box, please.

-t

  leah [04.11.08 02:25 PM]

I've been experimenting with various project management tools and have discovered an excellent site. It is a very user friendly, web-based application that is well worth taking the time to explore. Take a few minutes and look at Projjex.com. The tutorials are excellent & you don't need to be a Rocket Scientist to figure out how to use it. It even offers a free version so you can try it on for size.
