I haven’t seen a lot of people connecting the dots between Google’s recent announcement of 411 service and Microsoft’s acquisition of Tellme.
Now obviously, there’s one connection: both are plays in local search. Providing 411 service is certainly consistent with Google’s mission to provide “access to all the world’s information,” and with Microsoft’s desire to steal a march on Google in voice-activated mobile search.
But it also seems to me that there’s a hidden story here about the speech recognition itself. I was talking recently to Eckart Walther of Yahoo!, who used to be at Tellme, and he pointed out that speech recognition took a huge leap in capability when it started being used for automated directory assistance. All of a sudden, there were millions of voices and millions of accents to train recognition systems on, and much less need for the individual user to train the system.
This is reminiscent of a comment that Peter Norvig, Director of Research at Google, made to me last year about automated translation, and why it’s getting better. “We don’t have better algorithms. We just have more data.”
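The effect Walther and Norvig describe can be illustrated with a deliberately crude sketch (this is my own toy model, not any real ASR system): a “recognizer” that just learns the average acoustic feature of each word and classifies new utterances to the nearest mean. Everything is invented for illustration — the vocabulary, the feature values, the “accent noise.” The only variable is how much training speech the model sees, and accuracy climbs with data volume alone, with no change to the algorithm.

```python
import random

random.seed(42)

WORDS = list(range(10))                   # toy vocabulary of ten "words"
TRUE_MEAN = {w: w * 10.0 for w in WORDS}  # idealized acoustic feature per word
ACCENT_NOISE = 4.0                        # per-utterance "accent" variation

def utterance(word):
    """One noisy pronunciation: the word's true feature plus accent noise."""
    return random.gauss(TRUE_MEAN[word], ACCENT_NOISE)

def train(samples_per_word):
    """Estimate each word's acoustic mean from training utterances."""
    return {
        w: sum(utterance(w) for _ in range(samples_per_word)) / samples_per_word
        for w in WORDS
    }

def accuracy(model, trials=500):
    """Fraction of fresh utterances classified to the nearest learned mean."""
    correct = 0
    for _ in range(trials):
        w = random.choice(WORDS)
        x = utterance(w)
        guess = min(WORDS, key=lambda c: abs(x - model[c]))
        correct += guess == w
    return correct / trials

def avg_accuracy(samples_per_word, runs=20):
    """Average accuracy over several independently trained models."""
    return sum(accuracy(train(samples_per_word)) for _ in range(runs)) / runs

little_data = avg_accuracy(1)        # one speaker per word
lots_of_data = avg_accuracy(1000)    # "millions of voices," in miniature
print(f"accuracy, 1 sample per word:    {little_data:.2f}")
print(f"accuracy, 1000 samples per word: {lots_of_data:.2f}")
```

The second model is identical to the first in every respect except the size of its training corpus, which is the point: whoever fields the directory-assistance traffic gets the corpus.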
If I’m right about this, we see here another demonstration of my Web 2.0 principle that “data is the Intel Inside”: many of the future battles between industry giants will be over who owns data, rather than who controls software APIs. In that battle, we’ll see all kinds of techniques deployed to “harness collective intelligence” and build value-added databases of various kinds.
One wouldn’t expect a side effect of a voice search application to be a competitive advantage in speech recognition, but I’ll lay odds that’s part of what’s in play here.
Anyone who has confirming information on this speculation, please let us know.