Memepunks reports on some comments that Google’s Larry Page made last week at the company’s Zeitgeist conference in London. (There’s video here, though I haven’t been able to get it to play.) Page talked in unusually frank terms about his company’s ultimate ambition. Here’s how Memepunks summarizes it:
[Larry] spoke to the future of search, a future which contains a Google AI. “People always make the assumption that we’re done with search. That’s very far from the case. We’re probably only 5 percent of the way there. We want to create the ultimate search engine that can understand anything … some people could call that artificial intelligence.”
Larry’s remarks didn’t end there. He hinted that such things were already afoot at Google. He refused to predict when Google would achieve its goal of an AI, but he did say that “a lot of our systems already use learning techniques.” Larry noted how powerful an AI-powered search engine would be: “The ultimate search engine would understand everything in the world. It would understand everything that you asked it and give you back the exact right thing instantly,” adding, “You could ask ‘what should I ask Larry?’ and it would tell you.”
Like Memepunks, I couldn’t help but think back to George Dyson’s enigmatic essay on Google and AI, which has (as regular Rough Type readers know all too well) become one of my personal touchstones. Memepunks points to one relevant passage from that essay:
For 30 years I have been wondering, what indication of its existence might we expect from a true AI? Certainly not any explicit revelation, which might spark a movement to pull the plug. Anomalous accumulation or creation of wealth might be a sign, or an unquenchable thirst for raw information, storage space, and processing cycles, or a concerted attempt to secure an uninterrupted, autonomous power supply. But the real sign, I suspect, would be a circle of cheerful, contented, intellectually and physically well-nourished people surrounding the AI.
I would also point to another passage, in which Dyson quotes a friend who had recently visited the Googleplex:
“When I was there, just before the IPO, I thought the coziness to be almost overwhelming. Happy Golden Retrievers running in slow motion through water sprinklers on the lawn. People waving and smiling, toys everywhere. I immediately suspected that unimaginable evil was happening somewhere in the dark corners. If the devil would come to earth, what place would be better to hide?”
Let’s hope that Dyson’s friend was merely suffering from an overactive imagination. Google couldn’t possibly create a machine that “understands everything in the world.” Could it?
My own, more immediate concern is a simpler one and doesn’t require the appearance of the devil (in any more than metaphorical form, anyway). It’s this: Will we, through an increasing reliance on the kind of powerful knowledge-automation tools that Google develops and the web disseminates, naturally come to embrace Page’s sense of what it means to “understand something,” a sense that separates the act of understanding from the individual thinker?