Fri, Nov 10, 2006

Tim O'Reilly

Harnessing Collective Intelligence

I've long made the assertion that one of the central differences between the PC era and the Web 2.0 era is that once the internet becomes the platform, rather than just an add-on to the PC, you can build applications that harness network effects, so that they get better the more people use them. I've used the phrase "harnessing collective intelligence" to frame this phenomenon.
 

Yesterday at the Web 2.0 Conference, I hosted a panel on this topic. It featured Jim Buckmaster of Craigslist, Toni Schneider of Automattic (WordPress), Owen Van Natta of Facebook, and Richard Rosenblatt, former chairman of MySpace and now CEO of Demand Media.

One of the threads we focused on in the discussion was the difference between "user generated content," which many people focus on, and a far broader, more thought-provoking understanding of how collective intelligence is put to work. Craig Kaplan of predictwallstreet.com asked a question during the panel, and followed up with an email, which I thought was worth sharing:

As I mentioned in my question to the panelists, I feel there is a big difference between user generated content and collective intelligence.
 

For example, PredictWallStreet.com focuses one million unique monthly visitors on predicting whether a stock will close up or down. With the help of our algorithms the community can outperform the market -- something most analysts can't do. That's not user-generated content, that's a cognitive community exhibiting super intelligent behavior.

Wikipedia exhibits super intelligent behavior when it is more comprehensive and more up to date than the Encyclopedia Britannica. Britannica has the brand, but Wikipedia has the brains on board. And with very minimal software, Wikipedia directs millions of minds to create a new and better kind of encyclopedia. That's not just user-generated content. It's a cognitive community exhibiting super intelligent behavior.

Together we form a super intelligence that is a lot smarter than any one of us alone. As you say, Web 2.0 truly is just the froth before the wave. I believe networks of super intelligent cognitive communities are our future.
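The kind of aggregation Kaplan describes can be sketched as a weighted vote over many users' up/down calls. This is a hypothetical illustration only — PredictWallStreet's actual algorithms are not public, and the weighting scheme here is an assumption:

```python
from collections import defaultdict

def aggregate_predictions(votes, weights=None):
    """Combine many users' up/down calls into one forecast per ticker.

    votes:   list of (user, ticker, "up" or "down") tuples
    weights: optional dict of per-user weights (e.g. past accuracy);
             every user defaults to equal weight 1.0
    Returns {ticker: "up" | "down"} by weighted majority (ties go "up").
    """
    weights = weights or {}
    tally = defaultdict(float)
    for user, ticker, call in votes:
        w = weights.get(user, 1.0)
        tally[ticker] += w if call == "up" else -w
    return {t: ("up" if s >= 0 else "down") for t, s in tally.items()}

votes = [("a", "ACME", "up"), ("b", "ACME", "up"), ("c", "ACME", "down")]
print(aggregate_predictions(votes))  # {'ACME': 'up'}
```

Weighting users by their track record is one simple way "our algorithms" could help a community beat its average member: good forecasters gradually count for more.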

While I'm not sure I'd use the phrase "super-intelligent," I agree very much with Craig that there's a lot more to harnessing collective intelligence (the new HCI) than user generated content (UGC). Google's PageRank is HCI, but not UGC (although it is derived from the user-generated content of the WWW itself); WordPress's Akismet anti-spam plugin is HCI but not UGC; Craigslist's user moderation features are HCI applied to the problem of unbridled UGC.
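PageRank is a good example of HCI without UGC because it harvests intelligence users never meant to contribute: every link is an implicit vote. The core idea can be sketched with power iteration — a toy illustration of the published algorithm, not Google's production system:

```python
def pagerank(links, damping=0.85, iters=50):
    """Rank pages by the link graph's implicit votes (power iteration).

    links: dict mapping each page to the list of pages it links to.
    Each page splits its score among its outlinks; the damping factor
    models a surfer who sometimes jumps to a random page instead.
    """
    pages = set(links) | {p for outs in links.values() for p in outs}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for page, outs in links.items():
            for target in outs:
                new[target] += damping * rank[page] / len(outs)
        # score held by dangling pages (no outlinks) is spread evenly
        dangling = damping * sum(rank[p] for p in pages if not links.get(p))
        for p in pages:
            new[p] += dangling / n
        rank = new
    return rank
```

No one writes content *for* PageRank; the collective intelligence is a by-product of millions of independent linking decisions.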
 

Even sites that are very explicitly based on user-generated content, like MySpace, at their best bring in other aspects of HCI. During the panel, I mentioned Kathy Sierra's recent observation about MySpace, courtesy of her daughter: "myspace keeps doing what everybody really wants, and it happens instantly.... As soon as you think of something, it's in there.... It's always evolving. It changes constantly. There's always something new." Richard responded that at MySpace, "product development is marketing." They test features on real customers in real time, trying to learn what they like by offering it to them, keeping what works and changing what doesn't.

As you can see from Richard's comment, Web 2.0 concepts like harnessing collective intelligence and lightweight, responsive software development are intimately related. They require new competencies, new development models, and new attitudes towards the application development process.

tags: web 2.0 | comments: 5


1 TrackBacks

TrackBack URL for this entry: http://blogs.oreilly.com/cgi-bin/mt/mt-t.cgi/5049

» Search and the Dumbness of Crowds from The Software Abstractions Blog

The Dumbness of Crowds: Kathy Sierra recently had a fascinating post in the Creating Passionate Users blog, The Dumbness of Crowds, in which she carefully analyzes the popular notion of The Wisdom of Crowds. Given the technology community's... Read More

Comments: 5

  Daniel Tschentscher [11.14.06 03:12 AM]

I completely agree with you that this will probably be the next generation of websites. There are already quite a few examples out there that are accumulating collective intelligence, but the real breakthrough will only come with the breakthrough of natural language extraction tools and semantic algorithms. Think of companies like Monitor110 (extracting stock opinions from the web) or ViewScore (extracting structured opinions of reviews) - and they show the direction of a new breed of machine generated intelligence that is based on decentralized user input.

  Scott [02.16.07 02:58 PM]

Good Article.
It's just too bad that Predictwallstreet.com is "undergoing routine maintenance" or "under construction" more often than the Pennsylvania Turnpike.
It would be reasonable for them to allow users to KNOW if their site is up and running or not, so we don't waste any more time.
Thanks.

  Dave Kresta [07.26.07 10:57 AM]

It is interesting to think about collective intelligence in the context of business organizations and leadership. Rather than command-and-control structures, collaborative structures which harness collective intelligence will be the hallmark of successful business organizations in the future. Indeed, the definition of a business will need to evolve and become much fuzzier in order to fully embrace HCI.

  Webmaster | Web Development [10.21.08 05:04 AM]

Hi,
I have read this Blog and you have shared good information about Harnessing Collective Intelligence
Nice Post!!!!!!!
Thanks.
Web development | Ecommerce solutions

  Ajit Jaokar [12.29.08 01:40 PM]

I have a variant on this idea .. called ..
Unharnessing collective intelligence: A business model for privacy on Mobile devices based on k-anonymity
http://opengardensblog.futuretext.com/archives/2008/12/unharnessing_co.html

In a nutshell ..
>>>
If data is anonymised at the source and is under the control of the customer, the customer will trust the provider who anonymises their data (and in turn protects them). In return for that trust, the customer could volunteer to reveal attributes about themselves which would enable the provider to create personalized advertising campaigns and also to be used in segmentation. This benefits both the providers (protection from legal action, personalized advertising, segmentation) and also the customers (anonymised data, personalised services etc)

Note, it is not about privacy - it is about anonymising data.

Also, the approach potentially provides a compelling argument for both the provider and the customer. In doing so, it is different because at the moment, advertising, segmentation etc can be implemented on a best case basis - but a trust based approach will benefit all parties.

kind rgds
Ajit
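The k-anonymity property the comment above relies on can be sketched briefly: records are generalized (e.g. exact ages coarsened into ranges) until every combination of quasi-identifying attributes is shared by at least k people, so no individual stands out. This is a generic illustration of k-anonymity, not the implementation behind the linked proposal:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every quasi-identifier combination occurs at least
    k times, so no record is uniquely identifiable by those fields."""
    combos = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in combos.values())

def generalize_age(records, bucket=10):
    """One common generalization step: coarsen exact ages into ranges."""
    out = []
    for r in records:
        lo = (r["age"] // bucket) * bucket
        out.append({**r, "age": f"{lo}-{lo + bucket - 1}"})
    return out
```

A provider anonymising at the source, as the comment suggests, would keep generalizing (ages, postcodes, and so on) until `is_k_anonymous` holds for its chosen k before using the data for segmentation.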
