Why Dell.com (was) More Enterprise 2.0 Than Dell IdeaStorm

In my keynote last week at Web 2.0 Expo New York, I made the comment that, cool as Dell IdeaStorm is, the fundamental supply-chain approach behind dell.com is actually a better example of how Web 2.0 applies to the enterprise. I also made the provocative assertion that Walmart is a Web 2.0 company (or at least a model of how Web 2.0 principles apply to the enterprise).

Based on questions I’ve heard since, I thought I should explain further.

Many people have been seduced by the idea that Web 2.0 is all about explicit collaboration, contribution, and “the wisdom of crowds.” So, for example, on his Web 2.0 Watch List, Seth Godin wrote: “For our purposes, my definition is that most of these companies are, as the wikipedia says, sites that ‘let people collaborate and share information online in a new way.’ So,” he says, “Google doesn’t make the cut, because most of their traffic comes to their search engine.”

Now, I will say categorically that any Web 2.0 definition that excludes the Google search engine is broken. But it’s broken in an instructive way, one that shows what the problem is.

Web 2.0 is ultimately about understanding the rules of business in the network era. I define Web 2.0 as the design of systems that harness network effects to get better the more people use them, or more colloquially, as “harnessing collective intelligence.” This includes explicit network-enabled collaboration, to be sure, but it should encompass every way that people connected to a network create synergistic effects. So let’s take Google:

  • The very act of creating a search engine via spidering relies on a user-contributed network. Where does Google get its raw material except from us? When I publish this blog post, I’m contributing to Google (and to every other search engine).

  • Google’s PageRank algorithm extracts an additional level of implied metadata contributed by links. All spiders follow links to discover new content; PageRank taught us that some links matter more than others. The engine became smarter by understanding a bit more about what people were contributing, even when they didn’t know they were doing so. (A toy PageRank sketch follows this list.) Many a new breakthrough in Web 2.0 (e.g. Facebook’s social graph) comes from making implicit contribution explicit in some way, gathering its benefit and amplifying it.

  • Google’s real-time ad auction is the heart of their economic engine. Their stroke of genius, which gave them their seemingly insuperable lead over the other search engines in ad monetization, was to understand that selling the top ad position to the highest bidder was actually leaving money on the table. Given that advertisers only pay for clicks, Google realized that if they could project the likely click rate on an ad, they could sell top position to the best combination of price and click rate. Over the same impressions, a $5 CPC ad clicked on three times as often as a $10 CPC ad earns $15 to the pricier ad’s $10. By instrumenting, measuring, and responding to the click rate, Google made ad auctions smarter – through harnessing implicit user contribution. (A toy sketch of this kind of ranking also follows this list.)

  • Google also measures click-through behavior, surfing habits, and everything else they can get their hands on that will help them improve search results or ad performance. All of this is fed back into a real-time loop that is constantly trying to automate and optimize the user experience. Sounds a lot like the human brain, eh? (At least one with lots of learning plasticity.)

    If there is only one thing that enterprises ought to learn from Web 2.0, it’s this: building information systems that let you adjust in real time, based on interaction with your customers, is the true mark of a networked enterprise.
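
A toy sketch may make the PageRank point concrete. This isn’t Google’s production code, just the published intuition that a link is a vote and that votes from well-linked pages count for more; the example graph, damping factor, and iteration count are illustrative assumptions.

```python
# Minimal PageRank sketch: links are implicit "votes," and a page's score
# depends on the scores of the pages linking to it. Purely illustrative.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}

    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
            else:  # dangling page: spread its weight evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# "docs" is linked from every other page, so it outranks pages with fewer or
# weaker inbound links, even though nobody filled in any metadata by hand.
graph = {
    "home": ["blog", "docs"],
    "blog": ["home", "docs"],
    "about": ["docs"],
}
print(sorted(pagerank(graph).items(), key=lambda kv: -kv[1]))
```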
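
The ad-auction point can be sketched the same way: rank ads by bid times a predicted click rate, and keep feeding observed clicks back into that prediction. The class, numbers, and smoothing prior below are illustrative assumptions, not Google’s actual auction mechanics.

```python
# Toy ad ranking by expected revenue per impression, with the click-rate
# estimate updated from observed behavior. Illustrative only.

class Ad:
    def __init__(self, name, cpc_bid):
        self.name = name
        self.cpc_bid = cpc_bid   # what the advertiser pays per click
        self.impressions = 0
        self.clicks = 0

    def predicted_ctr(self):
        # Smoothed estimate so a brand-new ad starts with a modest prior.
        return (self.clicks + 1) / (self.impressions + 20)

    def expected_revenue_per_impression(self):
        return self.cpc_bid * self.predicted_ctr()

def rank_ads(ads):
    # Top position goes to the best combination of price and click rate,
    # not simply to the highest bidder.
    return sorted(ads, key=lambda ad: ad.expected_revenue_per_impression(),
                  reverse=True)

# A $5 ad clicked three times as often out-earns a $10 ad per impression.
high_bid = Ad("ten_dollar_bid", 10.0)
high_bid.impressions, high_bid.clicks = 1000, 10   # ~1% click rate
low_bid = Ad("five_dollar_bid", 5.0)
low_bid.impressions, low_bid.clicks = 1000, 30     # ~3% click rate

for ad in rank_ads([high_bid, low_bid]):
    print(ad.name, round(ad.expected_revenue_per_impression(), 3))
```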

If you understand the following three things, you know everything you need to know about Web 2.0 and the enterprise:

  1. Harvest every bit of user contribution, not just the explicit. Your business has thousands of touch points with customers. When they buy from you, they contribute data as well as money. When your suppliers increase their prices, or change their delivery times, they contribute data to you. When you advertise, and people respond (or don’t), they contribute to you. When you introduce a new product, when you do something your customers love, or hate, and people talk about it, they contribute.

    Your data is one of your most critical business assets. Are you doing everything you can to wrest competitive advantage from it? I’ll remind you again: PageRank and the real-time AdWords auction were both hidden in plain sight. Understanding what data you have, and what meaning you can extract from it, is the holy grail of Web 2.0.

  2. The era of IT as a back-office function is over. It’s no longer good enough to gather data and analyze it, then propose and adjust strategies over the next budget cycle. You must infuse your organization with IT, so that, like Walmart, your supply chain responds every time a customer rings up an item at the cash register. This is how Walmart is like Google. No, not the website, but the live enterprise, which learns and responds. (A toy sketch of this register-driven loop follows this list.)

    That’s why in my enterprise 2.0 talks, I usually end by saying “turn your IT department inside out – or wait for some innovative startup to do it for you.” Banks could be building something like Wesabe’s Value Engine and tips feature, which extracts collective intelligence from credit card data; phone companies could be doing something like Skydeck’s extraction of your social network from your phone bill. In fact, they’d be in a way better position to build integrated services against this data than startups that first have to extract the data from corporate databases one customer at a time!

  3. Web 2.0 thrives on network effects (also known as virtuous circles): data begetting more data, services getting better in such a way that they are used more often, until you are so far ahead of the next guy that he can’t catch up. That network effect is enhanced by letting other people use and build on your data, not by keeping it private. What we’ve seen is that the first company to create network effects in a particular class of data tends to end up owning that data simply through having the biggest pile, or the best results, not because they have unique data. (Again, Google: Microsoft and Yahoo! have the same data for the most part; Google is better at creating value for others from it.)
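
To make the second point concrete, here is the “responds at the cash register” idea in miniature: every sale is an event that updates inventory immediately and can trigger replenishment on the spot, rather than waiting for a nightly batch report. The SKU, thresholds, and reorder logic are illustrative assumptions, not a description of Walmart’s actual systems.

```python
# Toy event-driven replenishment loop: a register scan adjusts stock and,
# if stock falls to the reorder point, raises an order immediately.

from collections import defaultdict

REORDER_POINT = 20    # reorder when on-hand stock falls to this level
REORDER_QTY = 100

inventory = defaultdict(int, {"sku-123": 25})
pending_orders = []

def record_sale(sku, qty=1):
    """Called for every register scan; reacts in real time, not per budget cycle."""
    inventory[sku] -= qty
    already_ordered = any(order["sku"] == sku for order in pending_orders)
    if inventory[sku] <= REORDER_POINT and not already_ordered:
        pending_orders.append({"sku": sku, "qty": REORDER_QTY})
        print(f"reorder triggered for {sku} at stock level {inventory[sku]}")

# Six sales later, replenishment has already been triggered.
for _ in range(6):
    record_sale("sku-123")
print(pending_orders)
```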

I will note that Dell recently announced that they were abandoning their long-heralded build-to-order methodology in favor of standardized commodity boxes. But I still stand by my statement.

There are two possibilities: first, that Dell is wrong, and their new supply chain approach will not save them, just make them more like everyone else. It could be that their “live supply chain” approach just got too crufty, too complex – the article linked above suggests more than 5000 possible configurations. Maybe what they needed to do was to make the system smarter again by streamlining and simplifying.

But it’s possible too that the competitive advantage to be wrung from a live enterprise only takes you so far, and that in certain circumstances other advantages are more important. It may well be that the PC market has reduced itself to such commodity status that standardization trumps customization. It may well be that the costs of physical goods mean that the laws of virtual networks are only partly true in that realm.

I haven’t studied Dell’s situation and market sufficiently to have a fixed opinion. But the answer is knowable – a good field of study for business school cases. It’s worth repeating something I once said about open source, but now for Web 2.0: This is science, not ideology. Our goal is to understand what works, and why.

P.S. I also talked about the ideas here in my piece from last year, What Would Google Do?
