There’s a new book out called The Google Story, subtitled “Inside the Hottest Business, Media and Technology Success of Our Time.” I haven’t read it, but I did read a review in this morning’s New York Times. The reviewer describes a passage that comes at the end of the book:
Sergey Brin, one of the search engine’s founders, is marveling, as he and his co-founder, Larry Page, are wont to do, about their product’s awesome computational powers. Having hatched a plan to download the world’s libraries and begun a research effort aimed at cataloging people’s genes, Mr. Brin hungers, with the boundless appetite of a man who has obtained great success at a tender age, for the one place Google has yet to directly penetrate – your mind. “Why not improve the brain?” he muses. “Perhaps in the future, we can attach a little version of Google that you just plug into your brain.”
Visionary? Scary? Cute? Hey, give a kid a Fabulous Money Printing Machine, and he’s bound to get a little excited.
What struck me, though, is how Brin’s words echo something that a Google engineer said to technology historian George Dyson when he recently visited the company’s headquarters: “We are not scanning all those books to be read by people. We are scanning them to be read by an AI.” I wasn’t quite sure when I first read that quote how serious the engineer was being. Now, I’m sure. Forget the read-write web; the Google Brain Plug-In promises the read-write mind.
The theme that computers can help bring human beings to a more perfect state is a common one in writings on artificial intelligence, as David Noble documents in his book The Religion of Technology. Here’s AI pioneer Earl Cox: “Technology will soon enable human beings to turn into something else altogether [and] escape the human condition … Humans may be able to transfer their minds into the new cybersystems and join the cybercivilization … We will download our minds into vessels created by our machine children and, with them, explore the universe …”
Here’s computer guru Danny Hillis explaining the underlying philosophy more explicitly:
“We’re the metabolic thing, which is the monkey that walks around, and we’re the intelligent thing, which is a set of ideas and culture. And those two things have coevolved together, because they helped each other. But they’re fundamentally different things. What’s valuable about us, what’s good about humans, is the idea thing. It’s not the animal thing … I guess I’m not overly perturbed by the prospect that there might be something better than us that might replace us … We’ve got a lot of bugs, sorts of bugs left over from history, back from when we were animals.”
As I described in The Amorality of Web 2.0, this ethic is alive and well today, and clearly it’s held not only by the internet’s philosopher class but by those who are actually writing the code that, more and more, guides how we live, interact and, yes, think.
Plug me in, Sergey. I’m ready to be debugged.