“This future man, whom the scientists tell us they will produce in no more than a hundred years, seems to be possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself.” –Hannah Arendt, 1958
“Human beings are ashamed to have been born instead of made.” –Günther Anders, 1956
Now that we’ve branded every consumer good with a computer chip “smart,” the inevitable next step is for robots to start thinking big thoughts, turn us into their menials, and mind-meld into a higher form of life, or lifeyness. Or so we’re told by an (oddly enthusiastic) chorus of putatively rational doomsayers. Forget dirty bombs, climate change, and rogue microbes. AI is now the greatest existential threat to humanity.
Pardon me for yawning. The odds of computers becoming thoughtful enough to decide they want to take over the world, hatch a nefarious plan to do so, and then execute said plan remain exquisitely small. Yes, it’s in the realm of the possible. No, it’s not in the realm of the probable. If you want to worry about existential threats, I would suggest that the old-school Biblical ones — flood, famine, pestilence, plague, war — are still the best place to set your sights.
Rob Walker interviewed me about The Glass Cage for Yahoo Tech, and we touched on this topic:
You don’t spend much time on the idea that the march of artificial intelligence is “summoning the demon” that will destroy humanity, as Elon Musk recently worried aloud. And he’s not the only smart person to frame the issue in apocalyptic, sci-fi terms; it’s become an almost trendy fear. What do you make of that?
It’s probably overblown. All those apocalyptic AI fears are based on an assumption that computers will achieve consciousness, or at least some form of self-awareness. But we have yet to see any evidence of that happening, and because we don’t even know how our own minds achieve consciousness, we have no reliable idea of how to go about building self-aware machines.
There seem to be two theories about how computers will attain consciousness. The first is that computers will gain so much speed and so many connections that consciousness will somehow magically “emerge” from their operations. The second is that we’ll be able to replicate the neuronal structure of our own brains in software, creating an artificial mind.
Now, it’s possible that one of those approaches might work, but there’s no rational reason to assume they’ll work. They’re shots in the dark. Even if we’re able to construct a complete software model of a human brain — and that itself is far from a given — we can’t assume that it will actually function the way a brain functions. The mind may be more than a data-processing system, or at least more than one that can be transferred from biological components to manufactured ones.
The people who expect a “singularity” of machine consciousness to happen in the near future — whether it’s Elon Musk or Ray Kurzweil or whoever — are basing their arguments on faith, not reason. I’d argue that the real threat to humanity is our own misguided tendency to put the interests of technology ahead of the interests of people and other living things.
I have an essay in tomorrow’s Wall Street Journal in which I examine how an overdependence on software is sapping the talents of professionals and argue for a more humanistic approach to programming and automation. The piece begins:
Artificial intelligence has arrived. Today’s computers are discerning and sharp. They can sense the environment, untangle knotty problems, make subtle judgments and learn from experience. They don’t think the way we think—they’re still as mindless as toothpicks—but they can replicate many of our most prized intellectual talents. Dazzled by our brilliant new machines, we’ve been rushing to hand them all sorts of sophisticated jobs that we used to do ourselves.
But our growing reliance on computer automation may be exacting a high price. Worrisome evidence suggests that our own intelligence is withering as we become more dependent on the artificial variety. Rather than lifting us up, smart software seems to be dumbing us down. …
Jenny Shank interviews me about The Glass Cage over at MediaShift. The conversation gets into some topics that haven’t been covered much elsewhere, including my suggestion that Roomba, the automated vacuum cleaner, provides an early and ever so slightly ominous example of robot morality (or lack thereof). “Roomba makes no distinction between a dust bunny and an insect,” I write in the book. “It gobbles both, indiscriminately. If a cricket crosses its path, the cricket gets sucked to its death. A lot of people, when vacuuming, will also run over the cricket. They place no value on a bug’s life, at least not when the bug is an intruder in their home. But other people will stop what they’re doing, pick up the cricket, carry it to the door, and set it loose. … When we set Roomba loose on a carpet, we cede to it the power to make moral choices on our behalf.”
Here’s the relevant bit from the interview:
Shank: “The Glass Cage” made explicit for me a number of problems with automation that I had been vaguely worried about. But one thing that I had never worried about until reading “The Glass Cage” was the morality of the Roomba. You write, “Roomba makes no distinction between a dust bunny and an insect.” Why is it so easy to overlook the fact, as I did, that when a Roomba vacuums indiscriminately, it’s following a moral code?
Carr: It’s easier not to think about it, frankly. The workings of automated machines often raise tricky moral questions. We tend to ignore those gray areas in order to enjoy the conveniences the machines provide without suffering any guilt. But I don’t think we’re going to be able to remain blind to the moral complexities raised by robots and other autonomous machines much longer. As soon as you allow robots, or software programs, to act freely in the world, they’re going to run up against ethically fraught situations and face hard choices that can’t be resolved through statistical models. That will be true of self-driving cars, self-flying drones, and battlefield robots, just as it’s already true, on a lesser scale, with automated vacuum cleaners and lawnmowers. We’re going to have to figure out how to give machines moral codes even if it’s not something we want to think about.
Computers think straight. People think crookedly. Despite all the frustrations that come with thinking crookedly, we have it much better than our calculating kin. Thinking crookedly is more interesting, more rewarding, flat-out more fun than thinking straight. Emotion, pleasure, art, ingenuity, daring, wit, funkiness, love: pretty much everything good is a byproduct of crooked thinking. To think crookedly — to be conscious and self-aware and kind of fucked-up — is a harder feat by far than to think straight. That’s why it’s been fairly easy for us to get machines to think straight, while we still have no idea how to get them to think crookedly.
“Certainly if you had … an artificial brain that was smarter than your brain, you’d be better off,” Sergey Brin once said. Certainly Sergey Brin was wrong. He was thinking too straight. The conscious human mind is buggy, impurely smart, and that’s its greatest feature.
Still, thinking straight, really straight, is a useful skill. After all, it provides a perfect complement to our own way of thinking. That’s why we made computers, and it’s why computers are so valuable in so many situations. For a crooked thinker, there’s nothing like being able to call on a straight thinker from time to time.
In an essay about artificial intelligence in Wired, Kevin Kelly makes an incisive point: for computers, consciousness would be a disaster — a bug-as-bug, not a bug-as-feature. What we want our AI aides to be, writes Kelly, are “nerdily autistic, supersmart specialists”:
In fact, this won’t really be intelligence, at least not as we’ve come to think of it. Indeed, intelligence may be a liability — especially if by “intelligence” we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness. We want our self-driving car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. The synthetic Dr. Watson at our hospital should be maniacal in its work, never wondering whether it should have majored in English instead. As AIs develop, we might have to engineer ways to prevent consciousness in them.
All along, our all-too-human AI boffins have been pursuing the wrong goal. If the value of our computers lies in the complementary nature of their intelligence, the last thing we’d want to do is turn them into crooked thinkers like ourselves. Who wants a fucked-up computer?
“In 1881, when Monte Grover, a Wyoming prostitute, pasted published poetry into her scrapbook, she followed a common practice of using clippings to construct an idealized life by isolating a set of values that she found around her. She preserved marks of her inner identity and her best self within a scrapbook. People today, more than a hundred years later, find their identities recorded and inscribed in bureaucratic files and data banks; their official human identities are found in X rays, birth certificates, driver’s licenses, and DNA samples. But a scrapbook represents a construction of identity outside these formalized and authoritative records. It is the self that guides the scissors and assembles the scraps.” —Susan Tucker, Katherine Ott and Patricia P. Buckler, The Scrapbook in American Life, 2006
It struck me, as I was scrolling through some guy’s Tumblr today, that the scrapbook has become our essential cultural form, the artifact that defines the time. Watching TV shows and films, reading books and articles, listening to songs: they all still have their places in our lives, sure. But it’s scrapbooking, particularly of the unbound, online variety, that consumes us. If we’re not arranging our own scraps, we’re rummaging through the scraps of others.
“Cut-and-paste”: the scrapbooking metaphor has long suffused our experience of computers. Now, the scrapbook is the interface. The cloud is our great shared scrapbook.
Pinterest makes its scrapbooky nature most explicit, but, really, all social networking platforms are scrapbooks: Facebook, Twitter, Tumblr, Instagram, Flickr, Ello, YouTube, LinkedIn. Even the more basic communications media — email, texting, etc. — feel more and more scrappy, now that we don’t bother to delete the messages. (“It deepens like a coastal shelf,” wrote Philip Larkin, and indeed it does.) Blogs are scrapbooks. Medium’s a scrapbook. A tap of a Like button is nothing if not a quick scissoring.
Scrapbooking and data-mining are the yang and the yin of the web: light and dark, aboveground and underground, exposed and hidden. Today’s scrapbooks serve both as a counterweight to the bureaucratic file and as part of the file’s contents. The Eloi’s pastime is fodder for the Morlocks.
Inherently retrospective — a means of preemptively packaging the present as memory — the scrapbook is a melancholy form. Pressed insistently forward, we spend our time arranging the bits and pieces of our lives into something we think looks something like us. If the material scrapbook of old was familial and semiprivate, the new scrapbook is social and altogether public. It’s still a melancholy form, but now it’s an anxious one, too. It’s one thing to construct an idealized life, a “best self,” for your own consumption; it’s another thing to construct one for all to see.
“It appears, then, that scrapbook-making as a ritualized, order-inducing gesture is both an acknowledgement of and a response to the heightened sense of fragmentation which has attended the experience of modernity,” wrote Tamar Katriel and Thomas Farrell in their 1991 article “Scrapbooks as Cultural Texts.” They may be right. And maybe the appeal of the digital form of scrapbooking is that it’s all-encompassing and never-ending: as long as you’re arranging your fragments, you don’t have time to realize that they’re fragments. The lack of coherence just means that a piece is still missing.
The lightbulb, Marshall McLuhan wrote at the start of his 1964 book Understanding Media, is an example of a medium without content. Walk into a dark room and hit the light switch, and the bulb generates a new environment for you even though the bulb transmits no information. The idea of a medium without content is hard to grasp — it doesn’t make sense in the context of our assumptions about media — but it’s fundamental to understanding McLuhan’s contention that the medium is the message, i.e., that the medium creates an environment independent of the content or information it transmits.
So what are we to make of the smartphone, the medium of the moment, our portable environment? If, as McLuhan argued, the content of any new medium is an old medium, the content of the smartphone would seem to be all media: telephone, television, radio, cinema, printed book, electronic book, comic book, record, MP3, newspaper, magazine, letter, newsletter, email, telegraph, conversation, peep show, library, school, lecture, ATM, desktop, laptop, love note, medical record, rap sheet. Contentwise, the smartphone is Whitmanesque: it contains multitudes. The smartphone is what happens when the architecture of media collapses. It’s a black hole full of light: information supercompressed but radiant. In its singularity, it might be described as the first post-media medium. Its circuitry dissolves plurality; the media becomes the medium.
Bursting with information, the smartphone is, in McLuhan’s terms, a hot medium, maybe the hottest imaginable. It invades the sensorium of its user with an absolute imperialist zeal. Flooding the visual sense, it allows no signal but its own. To look into the screen of a smartphone is to be lost to the world. Like every hot medium, the smartphone isolates and fragments the self. It individualizes, alienates. Not only does it reverse what McLuhan described as the coolness of the aural phone, turning it into a superheated visual medium, but it reverses the entire re-tribalization pattern that McLuhan saw emerging from electric media. The smartphone out-de-tribalizes even the printed book. The smartphone’s “interactivity” is a ruse, for the only activity it allows is the activity it mediates. Its dominance precludes involvement and participation.
But that can’t be right. What does one do with a smartphone but participate — interact, converse, communicate, shop, create, get involved? Here we find the conundrum of the smartphone, the conundrum of our new artificial environment — and the conundrum that wraps around McLuhan’s hot/cool media dialectic.
In a 1967 essay, the critic Richard Kostelanetz wrote that McLuhan’s books “offer a cool experience in a hot medium.” The lo-def ambiguity of the writing fights against the hi-def clarity of the printed word; the information demands the reader’s involvement while the medium forbids it. It may be that the smartphone is of a similar nature, hot and cool at once (but never lukewarm). At the very least, one could say that the smartphone creates an environment that encourages participation at a distance: participation as performance. The smartphone re-tribalizes by putting us always on display, by eating away at our sense of the private self, but it de-tribalizes by isolating us in an abstract world, a world of our own. You hit the light switch, and the bulb comes on and you find yourself in an empty room full of people. To put it another way: participation is the content of the smartphone, and the content, as McLuhan wrote, is “the juicy piece of meat carried by the burglar to distract the watchdog of the mind.” The illusion of involvement conceals its absence. Here comes Walt Whitman, alone and isolated, dreaming dreams of connection, turning a barbaric yawp into silent words on a flat page.
This year marks the 50th anniversary of the publication of Marshall McLuhan’s best known work, Understanding Media. To mark the occasion, I’m republishing some thoughts on the man and the book that originally appeared here in 2011. I also had an opportunity to chat about McLuhan’s legacy with Brooke Gladstone in a segment of On the Media airing this weekend, which you can listen to here. The image above is a detail from a MAD magazine cover.
One of my favorite YouTube videos is a clip from a 1968 Canadian TV show featuring a debate between Norman Mailer and Marshall McLuhan. The two men, both icons of the sixties, could hardly be more different. Leaning forward in his chair, Mailer is pugnacious, animated, engaged. McLuhan, abstracted and smiling wanly, seems to be on autopilot. He speaks in canned riddles. “The planet is no longer nature,” he declares, to Mailer’s uncomprehending stare; “it’s now the content of an art work.”
Watching McLuhan, you can’t quite decide whether he was a genius or just had a screw loose. Both impressions, it turns out, are valid. As the novelist Douglas Coupland argued in his recent biography, Marshall McLuhan: You Know Nothing of My Work!, McLuhan’s mind was probably situated at the mild end of the autism spectrum. He also suffered from a couple of major cerebral traumas. In 1960, he had a stroke so severe that he was given his last rites. In 1967, just a few months before the Mailer debate, surgeons removed a tumor the size of a small apple from the base of his brain. A later procedure revealed that McLuhan had an extra artery pumping blood into his cranium.
Between the stroke and the tumor, McLuhan managed to write a pair of extravagantly original books. The Gutenberg Galaxy, published in 1962, explored the cultural and personal consequences of the invention of the printing press, arguing that Gutenberg’s invention shaped the modern mind. Two years later, Understanding Media extended the analysis to the electric media of the twentieth century, which, McLuhan argued, were destroying the individualist ethic of print culture and turning the world into a tightly networked global village. The ideas in both books drew heavily on the works of other thinkers, including such contemporaries as Harold Innis, Albert Lord, and Wyndham Lewis, but McLuhan’s synthesis was, in content and tone, unlike anything that had come before.
When you read McLuhan today, you find all sorts of reasons to be impressed by his insight into media’s far-reaching effects and by his anticipation of the course of technological progress. When he looked at a Xerox machine in 1966, he didn’t just see the ramifications of cheap photocopying, as great as they were. He foresaw the transformation of the book from a manufactured object into an information service: “Instead of the book as a fixed package of repeatable and uniform character suited to the market with pricing, the book is increasingly taking on the character of a service, an information service, and the book as an information service is tailor-made and custom-built.” That must have sounded outrageous a half century ago. Today, with books shedding their physical skins and turning into software programs, it sounds like a given.
You also realize that McLuhan got a whole lot wrong. One of his central assumptions was that electric communication technologies would displace the phonetic alphabet from the center of culture, a process that he felt was well under way in his own lifetime. “Our Western values, built on the written word, have already been considerably affected by the electric media of telephone, radio, and TV,” he wrote in Understanding Media. He believed that readers, because their attention is consumed by the act of interpreting the visual symbols of alphabetic letters, become alienated from their other senses, sacrifice their attachment to other people, and enter a world of abstraction, individualism, and rigorously linear thinking. This, for McLuhan, was the story of Western civilization, particularly after the arrival of Gutenberg’s press.
By freeing us from our single-minded focus on the written word, new technologies like the telephone and the television would, he argued, broaden our sensory and emotional engagement with the world and with others. We would become more integrated, more “holistic,” at both a sensory and a social level, and we would recoup some of our primal nature. But McLuhan failed to anticipate that, as the speed and capacity of communication networks grew, what they would end up transmitting more than anything else is text. The written word would invade electric media. If McLuhan were to come back to life today, the sight of people using their telephones as reading and writing devices would blow his mind. He would also be amazed to discover that the fuzzy, low-definition TV screens that he knew (and on which he based his famous distinction between hot and cool media) have been replaced by crystal-clear, high-definition monitors, which more often than not are crawling with the letters of the alphabet. Our senses are more dominated by the need to maintain a strong, narrow visual focus than ever before. Electric media are social media, but they are also media of isolation. If the medium is the message, then the message of electric media has turned out to be far different from what McLuhan supposed.
Of course, the fact that some of his ideas didn’t pan out wouldn’t have bothered McLuhan much. He was far more interested in playing with ideas than nailing them down. He intended his writings to be “probes” into the present and the future. He wanted his words to knock readers out of their intellectual comfort zones, to get them to entertain the possibility that their accepted patterns of perception might need reordering. Fortunately for him, he arrived on the scene at a rare moment in history when large numbers of people wanted nothing more than to have their minds messed with.
McLuhan was a scholar of literature, with a doctorate from Cambridge, and his interpretation of the intellectual and social effects of media was richly allusive and erudite. But what particularly galvanized the public and the press was the weirdness of his prose. Perhaps as a consequence of his unusual mind, he had a knack for writing sentences that sounded at once clinical and mystical. His books read like accounts of acid trips written by a bureaucrat. That kaleidoscopic, almost psychedelic style made him a darling of the counterculture — the bearded and the Birkenstocked embraced him as a guru — but it alienated him from his colleagues in academia. To them, McLuhan was a celebrity-seeking charlatan.
Neither his fans nor his foes saw him clearly. The central fact of McLuhan’s life was his conversion, at the age of twenty-five, to Catholicism, and his subsequent devotion to the religion’s rituals and tenets. He became a daily mass-goer. Though he never discussed it, his faith forms the moral and intellectual backdrop to all his mature work. What lay in store, McLuhan believed, was the timelessness of eternity. The earthly conceptions of past, present, and future were by comparison of little consequence. His role as a thinker was not to celebrate or denigrate the world but simply to understand it, to recognize the patterns that would unlock history’s secrets and thus provide hints of God’s design. His job was not dissimilar, as he saw it, from that of the artist.
That’s not to say that McLuhan was without secular ambition. Coming of age at the dawn of mass media, he very much wanted to be famous. “I have no affection for the world,” he wrote to his brother in the late thirties, at the start of his academic career. But in the same letter he disclosed the “large dreams” he harbored for “the bedazzlement of men.” Modern media needed its own medium, the voice that would explain its transformative power to the world, and he would be it.
The tension between McLuhan’s craving for earthly attention and his distaste for the material world would never be resolved. Even as he came to be worshipped as a techno-utopian seer in the mid-sixties, he had already, writes Coupland, lost all hope “that the world might become a better place with new technology.” He heralded the global village, and was genuinely excited by its imminence and its possibilities, but he also saw its arrival as the death knell for the literary culture he revered. The electronically connected society would be the setting not for the further flourishing of civilization but for the return of tribalism, if on a vast new scale. “And as our senses [go] outside us,” he wrote, “Big Brother goes inside.” Always on display, always broadcasting, always watched, we would become mediated, technologically and socially, as never before. The intellectual detachment that characterizes the solitary thinker — and that was the hallmark of McLuhan’s own work — would be replaced by the communal excitements, and constraints, of what we have today come to call “interactivity.”
McLuhan also saw, with biting clarity, how all mass media are fated to become tools of commercialism and consumerism — and hence instruments of control. The more intimately we weave media into our lives, the more tightly we become locked in a corporate embrace: “Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left.” Has a darker vision of modern media ever been expressed?
“Many people seem to think that if you talk about something recent, you’re in favor of it,” McLuhan explained during an uncharacteristically candid interview in 1966. “The exact opposite is true in my case. Anything I talk about is almost certain to be something I’m resolutely against, and it seems to me the best way of opposing it is to understand it, and then you know where to turn off the button.” Though the founders of Wired magazine would posthumously appoint McLuhan as the “patron saint” of the digital revolution, the real McLuhan was as much a Luddite as a technophile. He would have found the collective banality of Facebook abhorrent, if also fascinating.
In the fall of 1979, McLuhan suffered another major stroke, but this was one from which he would not recover. Though he regained consciousness, he remained unable to read, write, or speak until his death a little more than a year later. A lover of words — his favorite book was Joyce’s Finnegans Wake — he died in a state of wordlessness. He had fulfilled his own prophecy and become post-literary.
Portions of this essay appeared originally in the New Republic. Photo of texters by Susan NYC.