In the future we'll be talking, not typing

Stephan Spencer on how autonomous intelligence and language processing will transform search.

Search algorithms thus far have relied on links to serve up relevant results, but as research in artificial intelligence (AI), natural language processing (NLP), and input methods continues to progress, search algorithms and techniques will likely adapt.

In the following interview, Stephan Spencer (@sspencer), co-author of “The Art of SEO” and a speaker at the upcoming Web 2.0 Expo, discusses how next-generation advancements will influence search and computing (hint: your keyboard may soon be obsolete).

What role will artificial intelligence play in the future of search?

Stephan Spencer: I think more and more, it’ll be an autonomous intelligence — and I say “autonomous intelligence” instead of “artificial intelligence” because it will no longer be artificial. Eventually, it will be just another life form. You won’t be able to tell the difference between AI and a human being.

So, artificial intelligence will become autonomous intelligence, and it will transform the way that the search algorithms determine what is considered relevant and important. A human being can eyeball a web page and say: “This doesn’t really look like a quality piece of content. There are things about it that just don’t feel right.” An AI would be able to make those kinds of determinations with much greater sophistication than a human being. When that happens, I think it will be transformative.

What does the future of search look like to you?

Stephan Spencer: I think we’ll be talking to our computers more than typing on them. If you can ask questions and have a conversation with your computer — with the Internet, with Google — that’s a much more efficient way of extracting information and learning.

The advent of the Linguistic User Interface (LUI) will be as transformative for humanity as the Graphical User Interface (GUI) was. Remember the days of typing in MS-DOS commands? That was horrible. We’re going to think the same about typing on our computers in — who knows — five years’ time?
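To make the “linguistic user interface” idea a bit more concrete, here is a minimal sketch of a voice-driven search loop in Python. It is an illustration only: it assumes the third-party SpeechRecognition and Requests packages (plus PyAudio for microphone access), and the search endpoint shown is a placeholder, not any system Spencer describes.

    # Sketch of a "linguistic user interface" for search: listen, transcribe, query.
    # Assumes: pip install SpeechRecognition requests pyaudio
    import speech_recognition as sr
    import requests

    SEARCH_URL = "https://example.com/search"  # placeholder endpoint

    def voice_search():
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
            print("Ask your question...")
            audio = recognizer.listen(source)
        try:
            query = recognizer.recognize_google(audio)  # speech -> text
        except sr.UnknownValueError:
            print("Sorry, I didn't catch that.")
            return
        # From here on it is an ordinary text search; the "LUI" is just a front end.
        results = requests.get(SEARCH_URL, params={"q": query})
        print("You asked:", query)
        print(results.text)

    if __name__ == "__main__":
        voice_search()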


In a “future of search” blog post you mentioned “Utility Fog.” What is that?

Stephan Spencer: Utility Fog is theoretical at this point. It’s a nanotechnology that will be feasible once we reach the phase of molecular nanotechnology — where nano machines can self-replicate. That changes the game completely.

Nano machines could swarm like bees and take any shape, color, or luminosity. They could, in effect, create a three-dimensional representation of an object, of a person, of an animal — you name it. That shape would be able to respond and react.

Specific to how this would affect search engines and use of the internet, I see it as the next stage: You would have a visual three-dimensional representation of the computer that you can interact with.


  • http://www.scottpreston.com scott preston

    I don’t think this will be the case. I’ve done a lot of AI programming, and although speaking to a robot or a computer is appealing, the English language is not that efficient. In fact, speaking takes a lot more energy than typing does.

    I think interfaces will get better through touch and gestures and auto-complete for typing will get better, but talking only works for certain usage paradigms.

  • http://blog.jebdm.net Jebadiah Moore

    I disagree that voice-based interfaces are the way of the future. They are always going to be inconvenient in a wide array of circumstances: in meetings, in classes, in public places, etc. They will always be less convenient than tactile input for non-verbal input: for instance, for controlling games, photo/video/3d editing applications, etc. Voice is also inconvenient for editing existing text. Really, it only wins in a limited set of situations (for instance, the input of a big chunk of text, or hands-free control). Search is probably one of them—because it relies on textual input and could be improved by a quick series of clarifying questions which are somewhat inconvenient if you have to respond via keyboard or mouse—but I don’t think voice recognition is nearly as revolutionary as some people make it out to be.

  • http://www.codersbarn.com Anthony

    Couldn’t be more wrong. Regional dialects, accents and lisps will ensure that this is a non-starter!

  • Ian

    Yawn. This prediction has been “five years” away for three decades now. It’s not the technology that prevents it, it’s the environment and user. “LUI” will remain relegated to limited use, not adopted as a general-purpose (and certainly not the exclusive) interface.

  • http://www.neverendmedia.com Chris Kubica

    Imagine a master carpenter making a roll-top desk not with his/her own two hands but by telling an apprentice (whose native tongue is not the same as the carpenter’s, and who thus does not have an instant or perfect grasp of all the carpenter says) what to do, and the apprentice does everything. Do you think a good desk would get made, or that the carpenter would enjoy the experience?

    Now imagine that you are the master and the computer’s voice recognition software is the apprentice and that instead of doing what you want to do on your computer with your own two hands the voice recognition software attempts to interpret what you say and do what it thinks you want it to do. Do you think good, efficient work would get done and that you would be satisfied with it?

    And what if you were in a room with 50 other people all trying to talk to their computers in the same way at the same time? Or what if you were the only one talking and everyone else was quiet? Or what if you were the only one talking and you were working on an e-mail to your doctor about erectile dysfunction or herpes? Or what if you were making an online wire transfer and you say “one hundred dollars” but the computer hears “one hundred thousand dollars” because you are speaking where there is a lot of background noise? Or what if you are making a large withdrawal at a voice-operated ATM and, before you’re done, the thief waiting in the bushes for anyone to say the words “thousand” and “withdrawal” together emerges and robs you?

    Just imagine.

  • Rick

    Most of the people pointing out the problems with a voice interface seem to forget that these are the same problems that “limit” telephone conversations in general. There will never be a single, exclusive, universal human-machine interface. The notion that there must always be a winner and everyone or everything else must lose is a sappy capitalistic superstition. Many interfaces exist already and more will appear as time goes by. Some may disappear. People will use whichever one is most convenient for a given purpose. Speech interfaces will be very popular for many activities as well as shunned for many others.
    A speech interface on a cell phone will be very natural for many use cases, not least because that sort of interface defined the use of a cell phone from the beginning. Now they have cameras, sophisticated tactile interfaces, and eventually god knows what. All will be used for one purpose or another.
    Get real, folks.