The timeliness of this Conference is not only because “web 2.0” technologies and business models have reached critical mass in the financial markets. It is also because, as driven by the web more generally, the frontier between human and machine decision-making has become radically problematic. First, quantitative approaches to trading, pricing, valuation, and asset definition vastly expanded the domain for machine decision-making. But then the humans struck back, by refusing to act like the mindless molecules that the models driving machine decision-making required. The self-reflective, behavioral attributes of human market participants are now driving back that frontier, requiring innovations in every aspect of financial market processes, beginning with techniques of risk measurement and risk management. When price is an inverse function of liquidity and liquidity is an inverse function of price certainty, the recursive loop can only be broken by human intervention and action.
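That recursive loop can be made concrete with a toy simulation. To be clear, everything here is an illustrative assumption on my part (the coefficients, the `simulate` function, the update rules), not a model of any real market:

```python
# Toy simulation of the recursive loop described above: price falls as
# liquidity dries up, and liquidity dries up as price certainty falls.
# All coefficients are illustrative assumptions, not a market model.

def simulate(steps=8, shock=0.2):
    price, liquidity = 100.0, 1.0
    uncertainty = shock  # an initial shock to price certainty
    history = []
    for _ in range(steps):
        liquidity *= (1 - uncertainty)         # less certainty -> thinner market
        price *= (1 - 0.5 * (1 - liquidity))   # thinner market -> lower price
        uncertainty = min(0.9, uncertainty * 1.5)  # falling prices feed doubt
        history.append((price, liquidity))
    return history

# Each pass through the loop amplifies the last; nothing inside the
# model ever stops the spiral. Only an outside actor resetting
# `uncertainty` -- human intervention and action -- breaks the recursion.
```

The point of the sketch is structural: with price and liquidity each feeding the other, every iteration is strictly worse than the last, which is exactly why the loop cannot break itself.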
I’ve written quite a bit about “bionic software,” the idea that one of the distinguishing characteristics of Web 2.0 is that its applications are a new hybrid of man and machine, driven by algorithmic interpretation of aggregated human activity. Recent turmoil in financial markets shows us just how such systems can run amok.
Figuring out the right balance of man and machine is one of the great challenges of our time. We’re increasingly building complex systems that involve both, but in what proportion?
Bill Janeway will be talking at the conference with Rick Bookstaber, author of A Demon of Our Own Design. Bookstaber was the head of risk management for Morgan Stanley, and now runs a hedge fund. He argues that the very techniques originally developed to manage risk via computational means have actually increased risk. He asks whether we can put the genie back in the bottle, and whether we can afford not to.
Incidentally, this same issue is playing itself out in the world of Web 2.0 itself, with new search engines, from Jason Calacanis’ Mahalo to Jimmy Wales’ Wikia Search, making the argument that a purely algorithmic approach is fundamentally flawed. In response to yesterday’s announcement of Wikia Search, Cory Doctorow wrote, in a BoingBoing editorial entitled Wiki-inspired “transparent” search engine:
We have a notion that the traditional search engine algorithm is “neutral” — that it lacks an editorial bias and simply works to fulfill some mathematical destiny, embodying some Platonic ideal of “relevance.” Compare this to an “inorganic” paid search result of the sort that Altavista used to sell.
But ranking algorithms are editorial: they embody the biases, hopes, beliefs and hypotheses of the programmers who write and design them.
Mahalo is placing a bet on human intervention in search results; Wikia Search on the power of making its ranking algorithms open and transparent (a la open source software). But both are trying to re-draw the boundary between human and machine.
For what it’s worth, while Google strongly favors a proprietary algorithmic approach (much like hedge funds and Wall Street firms trading for their own account), they also recognize the importance of human intervention. Peter Norvig, formerly the Director of Search Quality at Google and now its Director of Research, pointed out to me that there is a small percentage of Google pages that dramatically demonstrate human intervention by the search quality team. As it turns out, a search for “O’Reilly” produces one of those special pages. Driven by PageRank and other algorithms, my company, O’Reilly Media, used to occupy most of the top spots, with a few going to Bill O’Reilly, the conservative pundit. It took human intervention to get O’Reilly Auto Parts, a Fortune 500 company, onto the first page of search results. There’s a special split-screen format for cases like this.
One also sees human vs. machine in the battle of search engines such as Google against search-engine spam. When I pinged him about the subject of this post, Peter wrote to me:
I do think you have a good point — as more money is handled by automated trading systems rather than by human traders, there is a larger risk of chaotic behavior. You are right that there is an analogy to search result manipulation, but I think we search engines actually have an easier problem because we can control the timescale and magnitude at which we make changes, whereas in markets big changes can happen very fast, and the only brake is to close the market…
It’s also intriguing to see how humans start to adapt to algorithmic models, learning their deficiencies and gaming the system. Search engine spam is a great case in point. Radar reader Eric Blossom noted recently, in a comment on my post Trading for their own account:
I find myself skipping to the second page or so of the search results list that I get from Google. I do this to avoid the heavy commercial pages that seem less pertinent. Places like Digg also seem to collect pointers to blogs etc. rather than to original sources. I’m clicking more as my eyeballs spin than I used to. Perhaps the golden age of search engines is already over from the user’s perspective. They’re still useful. Just a bit more painful.
But of course, if more and more people start to act like Eric, that very behavior could inform Google’s algorithms that some of those second-page results are actually to be preferred, leading the system to “heal” itself. This would be a self-correcting feedback effect, driven by human reaction to algorithmic misbehavior. But it could just as easily go the other way.
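The self-healing loop can be sketched as a hypothetical click-feedback adjustment. The scoring rule, the `rerank` function, and the learning rate are all my own assumptions for illustration; none of this is Google’s actual algorithm:

```python
# Hypothetical sketch of click-feedback re-ranking: results users skip
# lose score, results they click on gain it. An illustrative toy, not
# any real search engine's algorithm.

def rerank(results, clicks, rate=0.1):
    """results: list of (url, score) pairs; clicks: set of clicked urls."""
    adjusted = []
    for url, score in results:
        if url in clicks:
            score *= (1 + rate)   # users found it useful: boost it
        else:
            score *= (1 - rate)   # users skipped past it: demote it
        adjusted.append((url, score))
    return sorted(adjusted, key=lambda r: r[1], reverse=True)

# A second-page result that users keep clicking will, over enough
# rounds, overtake an unclicked first-page result.
results = [("spam.example", 1.0), ("original-source.example", 0.9)]
for _ in range(3):
    results = rerank(results, clicks={"original-source.example"})
# after three rounds the clicked result ranks first
```

The same mechanism cuts both ways, which is the point of the paragraph above: if users reward manipulated results, the identical feedback loop entrenches them instead of healing the rankings.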
One also has to wonder if there could, in the future, be a catastrophic sub-prime-like crash in the Google AdWords market. There are more dissimilarities than similarities: an AdWords keyword is not a derivative. But as I suggested in my original Release 2.0 issue on Web 2.0 and Wall Street, all it would take is a desperate second-tier search engine introducing futures pricing into keyword buys (versus today’s real-time auction) to securitize and potentially destabilize this market.
George Soros’ comment from the introduction to his book The Open Society is relevant here:
I interpret history as a reflexive process in which the participants’ biased decisions interact with a reality that is beyond their comprehension. The interaction can be self-reinforcing or self-correcting. A self-reinforcing process cannot go on forever without running into limits set by reality, but it can go on long enough and far enough to bring about substantial changes in the real world. If and when it becomes unsustainable, it may then set into motion a self-reinforcing process going in the opposite direction. Such boom-bust sequences are clearly observable in financial markets, but their extent, duration, and actual course remain uncertain.
It’s probably less valuable to speculate about specific ways that the Web 2.0 economy could go haywire than it is to recognize that financial markets, as large, networked information systems driven by a combination of algorithmic and human activity, may well be canaries in the coal mine for other similar systems (like Web 2.0 applications) as they continue to evolve.
Remember William Gibson’s dictum: “The future is here. It’s just not evenly distributed yet.”