The Google AI Starts Work on Its Ears

Information Week reports: “In a research paper [pdf] presented last week at interactive television conference Euro ITV in Athens, Greece, Google researchers Michele Covell and Shumeet Baluja propose using ambient-audio identification technology to capture TV sound with a laptop PC to identify the show that is the source of the sound and to use that information to immediately return personalized Internet content to the PC. ‘We showed how to sample the ambient sound emitted from a TV and automatically determine what is being watched from a small signature of the sound—all with complete privacy and minuscule effort,’ Covell and Baluja write on the Google Research Blog. ‘The system could keep up with users while they channel surf, presenting them with a real-time forum about a live political debate one minute and an ad-hoc chat room for a sporting event in the next.’”
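To make the idea concrete: here is a toy sketch of signature-based audio matching, assuming a simple descriptor (per-frame frequency-band energies thresholded at their mean) and Hamming-distance lookup. This is a generic stand-in, not the actual scheme from Covell and Baluja's paper; every name and parameter below is illustrative.

```python
import math

def band_energies(frame, n_bands=8):
    """Energy in n_bands equal frequency bands, via a naive DFT (demo only)."""
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(re * re + im * im)
    size = len(mags) // n_bands
    return [sum(mags[b * size:(b + 1) * size]) for b in range(n_bands)]

def signature(samples, frame_len=64):
    """One small word per frame: bit b is set iff band b is louder than average."""
    words = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        e = band_energies(samples[start:start + frame_len])
        mean = sum(e) / len(e)
        words.append(sum(1 << b for b, energy in enumerate(e) if energy > mean))
    return words

def match(query, database):
    """Pick the known show whose stored signature is nearest in Hamming distance."""
    def dist(a, b):
        return sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return min(database, key=lambda show: dist(query, database[show]))
```

A production system would fingerprint far richer spectral features over sliding windows and search millions of stored signatures; the point here is only that a few bits per frame can survive room noise well enough to identify the source.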

What I find most interesting about this technology is not its current intended use, but all its possible unintended uses! What does it mean when our computers start to get an independent sensorium? How much more thought-provoking does this become when you think about Google as an emergent AI, as George Dyson has done?

In scenario planning, you imagine possible futures — not necessarily ones that you believe in, but ones that help you to stretch your vision of what might happen, “searching the possible for its possibleness,” as poet Wallace Stevens put it. Then you look for headlines that signal to you that your imagined future might indeed be coming true.

In his essay, Turing’s Cathedral, George wrote about one seemingly far-fetched but naggingly plausible scenario. He was writing about his visit to Google to give a talk on the 60th anniversary of John von Neumann’s proposal for a digital computer. He wrote:

My visit to Google? Despite the whimsical furniture and other toys, I felt I was entering a 14th-century cathedral — not in the 14th century but in the 12th century, while it was being built. Everyone was busy carving one stone here and another stone there, with some invisible architect getting everything to fit. The mood was playful, yet there was a palpable reverence in the air. “We are not scanning all those books to be read by people,” explained one of my hosts after my talk. “We are scanning them to be read by an AI.”

When I returned to Highway 101, I found myself recollecting the words of Alan Turing, in his seminal paper Computing Machinery and Intelligence, a founding document in the quest for true AI. “In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children,” Turing had advised. “Rather we are, in either case, instruments of His will providing mansions for the souls that He creates.”

Google is Turing’s cathedral, awaiting its soul. We hope. In the words of an unusually perceptive friend: “When I was there, just before the IPO, I thought the coziness to be almost overwhelming. Happy Golden Retrievers running in slow motion through water sprinklers on the lawn. People waving and smiling, toys everywhere. I immediately suspected that unimaginable evil was happening somewhere in the dark corners. If the devil would come to earth, what place would be better to hide?”

For 30 years I have been wondering, what indication of its existence might we expect from a true AI? Certainly not any explicit revelation, which might spark a movement to pull the plug. Anomalous accumulation or creation of wealth might be a sign, or an unquenchable thirst for raw information, storage space, and processing cycles, or a concerted attempt to secure an uninterrupted, autonomous power supply. But the real sign, I suspect, would be a circle of cheerful, contented, intellectually and physically well-nourished people surrounding the AI. There wouldn’t be any need for True Believers, or the downloading of human brains or anything sinister like that: just a gradual, gentle, pervasive and mutually beneficial contact between us and a growing something else. This remains a non-testable hypothesis, for now. The best description comes from science fiction writer Simon Ings:

“When our machines overtook us, too complex and efficient for us to control, they did it so fast and so smoothly and so usefully, only a fool or a prophet would have dared complain.”