Reading that new Playboy interview with Ray Kurzweil sent me back to the notorious interview Playboy did with Larry Page and Sergey Brin in 2004. (The interview nearly torpedoed Google’s IPO, you’ll recall.) Page and Brin’s anticipation of a grand computer-human mind-meld neatly prefigures Kurzweil’s speculations:
PAGE: The more information you have, the better.
PLAYBOY: Yet more isn’t necessarily better.
BRIN: Exactly. This is why it’s a complex problem we’re solving. You want access to as much as possible so you can discern what is most relevant and correct. The solution isn’t to limit the information you receive. Ultimately you want to have the entire world’s knowledge connected directly to your mind.
PLAYBOY: Is that what we have to look forward to?
BRIN: Well, maybe. I hope so. At least a version of that. We probably won’t be looking up everything on a computer.
PLAYBOY: Is your goal to have the entire world’s knowledge connected directly to our minds?
BRIN: To get closer to that — as close as possible.
PLAYBOY: At some point doesn’t the volume become overwhelming?
BRIN: Your mind is tremendously efficient at weighing an enormous amount of information. We want to make smarter search engines that do a lot of the work for us. The smarter we can make the search engine, the better. Where will it lead? Who knows? But it’s credible to imagine a leap as great as that from hunting through library stacks to a Google session, when we leap from today’s search engines to having the entirety of the world’s information as just one of our thoughts.
What’s telling is that twelve years have gone by — twelve years of enormous advances in digital computing and networking — and yet, at a practical level, we’re no closer to hooking computers and brains together in the way Page, Brin, and Kurzweil imagine. In fact, we have no clue as to how you’d even go about such a project. As Kevin Kelly once observed, “the singularity is always near.” And that is where it is likely to remain.
Ten years ago, Larry Page and Sergey Brin couldn’t stop talking about their excitement at the prospect of extending or replacing the human brain with computers. For the last several years, they’ve been much quieter about their mind-disruption project. I sense that a soldier in Google’s flack army warned them that in voicing their fantasies they risked weirding people out. Renovating the species is a job best done on the sly.
Still, we’ll always have Ray Kurzweil. (I mean that literally.) When, in 2012, Google hired the inventor and immortalist as a director of engineering, it also gained a new mouthpiece for its boldest ambitions. Playboy has just published a wide-ranging interview with Kurzweil in which he discusses everything from his hobbies (“I like to take naps”) to his anxieties (“unstructured social situations make me nervous”). The big thrust, though, is the imminent upgrading of Homo sapiens:
By the 2030s we will have nanobots that can go into a brain non-invasively through the capillaries, connect to our neocortex and basically connect it to a synthetic neocortex that works the same way in the cloud. So we’ll have an additional neocortex, just like we developed an additional neocortex 2 million years ago, and we’ll use it just as we used the frontal cortex: to add additional levels of abstraction. We’ll create more profound forms of communication than we’re familiar with today, more profound music and funnier jokes. We’ll be funnier. We’ll be sexier. We’ll be more adept at expressing loving sentiments.
He brings the discussion down to earth with an example:
Let’s say I’m walking along and I see my boss at Google, Larry Page, approaching. I have three seconds to come up with something clever to say, and the 300 million modules in my neocortex won’t cut it. I need a billion modules for two seconds. I’ll be able to access that in the cloud just as we can access additional computation in the cloud for our mobile phones, and I’ll be able to say exactly the right thing.
I think there’s a flaw in Kurzweil’s logic here. He fails to anticipate the inevitable arms race in cleverness. Larry Page is going to be plugged into that enormous cloud neocortex, too, so surely Page’s standards for what qualifies as a clever remark will have gone up exponentially. Even with his new brain, Kurzweil will still be in exactly the same boat, floundering to muster the wit necessary to impress the boss. Funny and sexy are relative terms.
As to the expression of loving sentiments, the interview goes deep into Kurzweil’s views on the future of copulation, which would appear to be indistinguishable from the future of onanism. I’ll spare you the details, but when the interviewer, David Hochman, asks Kurzweil whether there’s “anyone whose body you would like to inhabit” in order to have sex in virtual reality, Kurzweil replies, “Probably some attractive woman. If I had to pick one? Amy Adams. I like the perky way she uses her body.”
Facebook has a problem. Its members aren’t sharing as much as they used to. At least they’re not sharing firsthand the way they used to. Instead of posting notices about what they’re doing or thinking, or where they are, or whom they’re hanging out with, they’re just recycling secondhand stuff — news stories, songs, other people’s photos or tweets, YouTube videos, etc. The nature of what they share on the network is changing from the personal to the impersonal, from the informal to the formal, from the subjective to the objective. To put it into media terms, which would seem to be the appropriate terms, they are shifting their role from that of actor to that of producer or publisher or aggregator.
To the extent that people still post reports on their firsthand experiences, they’re tending to use more selective networks, like Snapchat, that offer more precise audience control. People are retreating from public displays of experience to more private displays. They’re shifting from mass media to narrower media that, in their intimacy, more closely resemble traditional social settings.
Because Facebook feeds on personal sharing the way a vampire feeds on blood — the more intimate the information you publish, the more Facebook knows about you, and the more precisely it can tailor ads and other messages — any decline in personal sharing is ominous for the company. It’s no surprise that Facebook is now trying to figure out some interface tweaks and tricks that will, as a company spokesperson puts it, “make sharing on Facebook more fun and dynamic.” It’s hard not to hear a hint of desperation in that statement.
Facebook employees, according to a Bloomberg story, refer to the curtailment of personal sharing as “context collapse.” But that’s completely wrong. Context collapse is a sociological term of art that describes the way social media tend to erase the boundaries that once defined people’s social lives. Before social media came along, your social life played out in different and largely separate spheres. You had your friends in one sphere, your family members in another sphere, your coworkers in still another sphere, and so on. The spheres overlapped, but they remained distinct. The self you presented to your family was not the same self you presented to your friends, and the self you presented to your friends was not the one you presented to the people you worked with or went to school with. With a social network like Facebook, all these spheres merge into a single sphere. Everybody sees what you’re doing. Context collapses.
When Mark Zuckerberg infamously said, “You have one identity; the days of you having a different image for your work friends or your co-workers and for the people you know are probably coming to an end pretty quickly,” he was celebrating context collapse. Context collapse is a wonderful thing for a company like Facebook because a uniform self, a self without context, is easy to package as a commodity. The protean self is a fly in the Facebook ointment.
Facebook’s problem now is not context collapse but its opposite: context restoration. When people start backing away from broadcasting intimate details about themselves, it’s a sign that they’re looking to reestablish some boundaries in their social lives, to mend the walls that social media have broken. It’s an acknowledgment that the collapse of multiple social contexts into a single one-size-fits-all context circumscribes a person’s freedom. There’s only so much fun you can have if you know that your mom, your boss, and your weird neighbor are all watching. The protean self, we’re rediscovering, is a more comfortable self than the uniform self. Being forced into “one identity” is a drag.
There’s something else going on here, too. We’re learning how difficult and exhausting it is to sustain a mass-media presence. The problem with broadcasting everyday experience is that everyday experience is inevitably repetitive, and repetitiveness is, in a media context, the kiss of death. To remain interesting when viewed at a distance, when viewed through media, a person has to display continuing novelty — novelty of experience, novelty of thought. Very few of us can do that for very long. I imagine that, on Facebook, even Oscar Wilde and Dorothy Parker would have worn out their welcomes after a while.
The repetitiveness of our lives remains interesting to our family and our close friends, but outside that intimate context it gets boring. As reality TV stars, we all face declining ratings and, in the end, cancellation.
Twenty years ago, as the commercial internet took form, the web’s default setting was switched to “surveillance” when it might have been switched to “privacy.” As is often the case with defaults, no one much noticed at the time. Today, with the Silicon Valley surveillance complex set to expand further through the Internet of Things, we have another opportunity to think carefully about digital surveillance and its consequences for how we live. That opportunity, as I argue in a Los Angeles Times op-ed, probably won’t be open for long. A new default setting is about to be established.
“Americans live their lives on their phones now.” So wrote 15 prominent technology companies, including Google, Facebook, Amazon and Snapchat, in a legal brief supporting Apple in its now-moot fight with the Justice Department over unlocking the San Bernardino killer’s iPhone. Our phones have become “an extension of our memories,” the companies argued, and “to access someone’s cellphone is to access their innermost thoughts and their most private affairs.”
Although the companies are right, their earnest defense of privacy is deeply ironic, if not hypocritical. They are, after all, in the business of surveillance. They collect personal data on a scale that would make most law enforcement agencies blush. The very existence of firms like Google and Facebook hinges on their ability to monitor our innermost thoughts and our most intimate affairs, to tap into our digital memory pretty much continuously. . . .
Up ahead in the distance, I saw a shimmering light.
Who would have guessed that the Eagles would prove our most reliable prophets?
Nikil Saval, author of Cubed: A Secret History of the Workplace, traces the planned new headquarters of Google and Apple — Googledome (above) and Mothership Apple (below) — to their origins in what’s been called Hippie Modernism. The aggressive futurism of the two campuses “is in fact rooted in the past,” Saval writes. “It comes, transfigured, from the wrecked dreams of communal living, of back-to-the-land utopias, of expanding plastic spheres and geodesic domes that populated the landscape of Northern California around the time (and around the same place) that the first semiconductors were being perfected.” It’s Bucky Fuller all over again.
The central building of Googledome is wrapped in “a sinuous glass membrane, a protective bubble or amniotic sac,” Saval notes. “In aerial renderings it looks like larvae, incubating a new and possibly terrifying future.” It’s always the same: The more utopian an artistic portrayal of the future, the creepier it seems.
The feeling has something to do, I think, with the absence of angles — all those soft and welcoming curves, undulating like the coils of a snake. There’s also the Panopticon effect. The transparent enclosures of both Googledome and Mothership Apple can be read as monuments to a surveillance culture. You’re being watched, they whisper, but you need not fret — the watcher is benign, generous, loving.
“Relax,” said the night man. “We are programmed to receive.”
I have a review of When We Are No More: How Digital Memory Will Shape Our Future, Abby Smith Rumsey’s meditation on the fragility of cultural memory, in the Washington Post. It begins:
In the spring of 1997, the Library of Congress opened an ambitious exhibit featuring several hundred of the most historically significant items in its collection. One of the more striking of the artifacts was the “rough draught” of the Declaration of Independence. Over Thomas Jefferson’s original, neatly penned script ran edits by John Adams, Benjamin Franklin and other Founding Fathers. Words were crossed out, inserted and changed, the revisions providing a visual record of debate and compromise. A boon to historians, the four-page manuscript provides even the casual viewer with a keen sense of the drama of a nation being born.
Imagine if the Declaration were composed today. It would almost certainly be written on a computer screen rather than with ink and paper, and the edits would be made electronically, through email exchanges or a file shared on the Internet. If we were lucky, a modern-day Jefferson would turn on the word processor’s track-changes function and print copies of the document as it progressed. We’d at least know who wrote what, even if the generic computer type lacked the expressiveness of handwriting. More likely, the digital file would come to be erased or rendered unreadable by changes in technical standards. We’d have the words, but the document itself would have little resonance. . . .