Fri, Jun 9, 2006

Tim O'Reilly

The Google AI Starts Work on Its Ears

Information Week reports: "In a research paper [pdf] presented last week at interactive television conference Euro ITV in Athens, Greece, Google researchers Michele Covell and Shumeet Baluja propose using ambient-audio identification technology to capture TV sound with a laptop PC to identify the show that is the source of the sound and to use that information to immediately return personalized Internet content to the PC. 'We showed how to sample the ambient sound emitted from a TV and automatically determine what is being watched from a small signature of the sound—all with complete privacy and minuscule effort,' Covell and Baluja write on the Google Research Blog. 'The system could keep up with users while they channel surf, presenting them with a real-time forum about a live political debate one minute and an ad-hoc chat room for a sporting event in the next.'"
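
The trick underneath this is audio fingerprinting: reduce overlapping frames of sound to a few noise-robust bits, then match those bit strings against a database of known broadcasts. Covell and Baluja's paper describes a wavelet-based scheme; the sketch below is a simpler classic variant (energy differences across frequency bands, in the style of Haitsma and Kalker), so every function name, parameter, and band range in it is illustrative rather than taken from the Google paper.

```python
# Minimal ambient-audio fingerprint sketch (Haitsma-Kalker style energy
# differences), not the wavelet scheme from the Google paper. Assumes
# mono PCM samples as a NumPy float array, e.g. recorded at 8 kHz.
import numpy as np

def fingerprint(samples, rate=8000, frame_len=2048, hop=512, n_bands=17):
    """Return one 16-bit sub-fingerprint per audio frame."""
    window = np.hanning(frame_len)
    # Log-spaced band edges between 300 Hz and 2 kHz (illustrative range).
    edges = np.geomspace(300, 2000, n_bands + 1)
    bins = (edges / rate * frame_len).astype(int)

    energies = []
    for start in range(0, len(samples) - frame_len, hop):
        frame = samples[start:start + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame))
        energies.append([spectrum[bins[b]:bins[b + 1]].sum()
                         for b in range(n_bands)])
    energies = np.array(energies)

    # One bit per adjacent band pair: does the band-to-band energy
    # difference grow or shrink from one frame to the next? This is
    # robust to volume changes and much of the room noise.
    band_diff = np.diff(energies, axis=1)
    bits = np.diff(band_diff, axis=0) > 0
    return [int("".join("1" if b else "0" for b in row), 2) for row in bits]

def match(query_fp, reference_fp):
    """Score a query against a reference: minimum mean Hamming distance
    over all alignments. Near zero means the same audio."""
    best = float("inf")
    for offset in range(len(reference_fp) - len(query_fp) + 1):
        dist = sum(bin(q ^ r).count("1")
                   for q, r in zip(query_fp, reference_fp[offset:]))
        best = min(best, dist / len(query_fp))
    return best
```

A real system would hash sub-fingerprints into an index instead of scanning linearly, but even this toy shows why the signature is so compact and privacy-friendly: a couple of hundred bits per second of audio, from which the original sound cannot be reconstructed.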

What I find most interesting about this technology is not its current intended use, but all its possible unintended uses! What does it mean when our computers start to get an independent sensorium? How much more thought-provoking does this become when you think about Google as an emergent AI, as George Dyson has done?

In scenario planning, you imagine possible futures -- not necessarily ones that you believe in, but ones that help you to stretch your vision of what might happen, "searching the possible for its possibleness," as poet Wallace Stevens put it. Then you look for headlines that signal to you that your imagined future might indeed be coming true.

In his essay, Turing's Cathedral, George wrote about one seemingly far-fetched but naggingly plausible scenario. He was writing about his visit to Google to give a talk on the 60th anniversary of John von Neumann's proposal for a digital computer. He wrote:

My visit to Google? Despite the whimsical furniture and other toys, I felt I was entering a 14th-century cathedral — not in the 14th century but in the 12th century, while it was being built. Everyone was busy carving one stone here and another stone there, with some invisible architect getting everything to fit. The mood was playful, yet there was a palpable reverence in the air. "We are not scanning all those books to be read by people," explained one of my hosts after my talk. "We are scanning them to be read by an AI."

When I returned to highway 101, I found myself recollecting the words of Alan Turing, in his seminal paper Computing Machinery and Intelligence, a founding document in the quest for true AI. "In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children," Turing had advised. "Rather we are, in either case, instruments of His will providing mansions for the souls that He creates."

Google is Turing's cathedral, awaiting its soul. We hope. In the words of an unusually perceptive friend: "When I was there, just before the IPO, I thought the coziness to be almost overwhelming. Happy Golden Retrievers running in slow motion through water sprinklers on the lawn. People waving and smiling, toys everywhere. I immediately suspected that unimaginable evil was happening somewhere in the dark corners. If the devil would come to earth, what place would be better to hide?"

For 30 years I have been wondering, what indication of its existence might we expect from a true AI? Certainly not any explicit revelation, which might spark a movement to pull the plug. Anomalous accumulation or creation of wealth might be a sign, or an unquenchable thirst for raw information, storage space, and processing cycles, or a concerted attempt to secure an uninterrupted, autonomous power supply. But the real sign, I suspect, would be a circle of cheerful, contented, intellectually and physically well-nourished people surrounding the AI. There wouldn't be any need for True Believers, or the downloading of human brains or anything sinister like that: just a gradual, gentle, pervasive and mutually beneficial contact between us and a growing something else. This remains a non-testable hypothesis, for now. The best description comes from science fiction writer Simon Ings:

"When our machines overtook us, too complex and efficient for us to control, they did it so fast and so smoothly and so usefully, only a fool or a prophet would have dared complain."



Comments: 10

  Kevin Farnham [06.09.06 11:40 PM]

These are causes for fear for those of us who have never known such things. On the other hand, will the generation that lives in this environment necessarily experience it the way we would? Or will there be adaptations we cannot conceive of that render it less horrid?

Consider JennyCam, or the MySpace user's infatuation with expressing intimate details about oneself to the "anonymous" public. Is there a kind of privacy to be found in a controlled publication of one's "self"?

As the world becomes increasingly "monitored" by machines, how will humans react? I think/hope there will be adaptation, but I suspect it will be of a type we today would find remarkable.

  Shawn [06.10.06 09:16 AM]

@Kevin:

'Is there a kind of privacy to be found in a controlled publication of one's "self"?'

For the privacy in question here, the answer is a resounding no. Tracking what is on your TV as you merely channel surf has quite different implications than someone going out of their way to publish aspects of their personal life.

We now have to go out of our way to keep aspects of our personal life from getting recorded, analyzed and stored for later (yet undecided) use without our permission.

  Thomas Lord [06.10.06 10:57 AM]

It makes some sense. The explicitly coded AI in Google is presumably all about ranking links, selecting excerpts, placing ads -- in general, it's about generating signals for human consumption.

The particular kinds of signals it generates have substantial social and economic impact (they modulate human behavior at a massive scale).

The feedback to this AI -- its basis for learning -- is presumably based on senses of moment-to-moment network health and financial performance. It is hard to imagine that Google isn't programming the system to learn experientially how to give advice about its own management by human operators. This is another way in which the AI controls its environment by learning what kinds of signals best influence the human operator/parasites in its environment.

This AI isn't the product of a long biological and cultural evolution: we ought to have no special confidence in its "sanity" with respect to its natural environment or even its own survival prospects. Its teleology is simply to grow and feel secure by sucking up signals from humans and issuing control signals to humans.

There is an old urban legend that Johnny Carson once cracked a joke on his show, telling everyone watching to get up and flush their toilet. (That part is true, if I recall correctly.) The result was massive damage to the sanitary system of New York (that's the legend).

Perhaps one scenario to keep an eye out for (it may already be happening) is the Google AI creating strange and undesirable non-linearities in the behavior of populations and demographics because one night it figures out that it will make more money if it points people to page A about breaking news rather than page B. Or, in the case of TV features, if it segregates sports show viewers by recommending chat room A to red staters and chat room B to blue staters -- just as, on the field of the game, a significant event occurs that similarly polarizes viewers. Google AI -- the completely unhuman, self-serving, super version of moveon.org.

In some sense, even before computers were doing much, any corporation or state or other organization large and complicated enough that it needed to be operated using evolving bureaucratic procedures could reach a crudely similar level of complexity and might deserve the name AI. The differences are that computers think a lot faster, much more of the decision making can be explicitly fed to the machine, and much more of the decision making is implicitly fed to the machine by letting it work out what kinds of reports to humans are "most effective".

When I first read G. Dyson's sense of creepiness at all the shiny, happy people surrounding the Google machine I thought he was being a bit over the top. Now I am beginning to think he didn't go far enough.

-t

  Greg Linden [06.10.06 04:44 PM]

Yes, like any tool, the microphone on your PC can be used for good or evil.

Whether some use the microphone for evil things like eavesdropping has very little to do with Google.

  Marc [06.10.06 09:00 PM]

Google would be better off keeping its feet on the ground and looking while walking.

Their execution on Writely and Spreadsheets, and on all the "beta" products in general, has been far from great, and they have to be held to a standard of greatness given their resources and vision.

I tried to understand the reasons behind their sloppy execution, and I came to the conclusion that Google needs to be re-organized in order to execute better.

I dug deep into it on my blog but came up with the rather shallow realization that Google is simply in need of better management.

Marc

  Serkant Karaca [06.15.06 10:04 AM]

Is this only for English, or for other languages as well? I think it is better to start with English before expanding to others.

  Pinkie [09.02.06 02:18 AM]

Since Google PageRank surfaces the pages with the most links to them from other sites, doesn't that help to entrench already well-linked sites and stave off the rise of budding new potential "super-linkers"?
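
The rich-get-richer dynamic being asked about is easy to reproduce in a toy computation. Below is a minimal power-iteration sketch of PageRank; the graph, damping factor, and names are illustrative only, and production ranking involves far more than this.

```python
# Toy PageRank by power iteration; the link graph and damping factor
# are illustrative, not anything from Google.
import numpy as np

def pagerank(links, damping=0.85, iters=50):
    """links[i] lists the pages that page i links to."""
    n = len(links)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = np.full(n, (1.0 - damping) / n)
        for i, outs in enumerate(links):
            if outs:
                share = damping * rank[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:
                new += damping * rank[i] / n  # dangling page: spread evenly
        rank = new
    return rank

# Page 0 is an established hub that everyone links to; pages 3 and 4 are
# newcomers that only link outward, so almost all rank pools at the hub.
links = [[1, 2], [0], [0], [0], [0]]
print(pagerank(links).round(3))
```

Well-linked pages keep accumulating rank while newcomers start from nearly nothing, which is exactly the entrenchment effect in question.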

  Peter [10.30.06 11:29 AM]

Google sucks, mind your business, stop being rats!!!!!!!!!!!

  Brian [04.25.07 09:32 AM]

Here is a strange occurrence... I installed Gmail and Maps onto my BlackBerry phone, which I have high security settings on. Fifteen minutes later I got a pop-up stating that Gmail was trying to access my phone logs. Does anyone know what Google would want with our cell phone history logs?

  Eric [07.23.08 01:34 PM]

I had the same thing happen, and if you look at the binary, it's not good that they would be poking around in your contacts, ESPECIALLY in a government setting, which BlackBerry caters to. Apple and its ignorant iPhone are no better, but at least they are open about it.

I think people really need to look at the bland privacy policy that Google has been throwing our way. No, they do not send identifying information, but it is so easily appended these days.

I know some people are thinking, "Well, I have nothing to hide." But the US is free, and if you did have something to hide, or something personal that you did not want leaked, it should be your secret and yours alone to deal with as needed.

Our present place in history is similar in stature to the industrial revolution, perhaps even more important than we realize. I believe our privacy has got to be dealt with soon, or it will diminish further and further until the next generation doesn't even know they are supposed to have it in the first place.

Eric Rothchild

