In “Beautiful Lies: The Art of the Deep Fake,” an essay in the Los Angeles Review of Books, I examine the rise and ramifications of deep fakes through a review of two books, photographer Jonas Bendiksen’s The Book of Veles and mathematician Noah Giansiracusa’s How Algorithms Create and Prevent Fake News. As Bendiksen’s work shows, deep-fake technology gives artists a new tool for probing reality. As for the rest of us, the technology promises to turn reality into art.
Here’s a bit from the essay:
The spread of ever more realistic deep fakes will make it even more likely that people will be taken in by fake news and other lies. The havoc of the last few years is probably just the first act of a long misinformation crisis. Eventually, though, we’ll all begin to take deep fakes for granted. We’ll come to take it as a given that we can’t believe our eyes. At that point, deep fakes will start to have a very different and even more disorienting effect. They’ll amplify not our gullibility but our skepticism. As we lose trust in the information we receive, we’ll begin, in Giansiracusa’s words, to “doubt reality itself.” We’ll go from a world where our bias was to take everything as evidence — the world Susan Sontag described in On Photography — to one where our bias is to take nothing as evidence.
The question is, what happens to “the truth” — the quotation marks seem mandatory now — when all evidence is suspect?
I like to think of Marc Andreessen as the metaverse’s Statue of Liberty. He stands just outside the virtual world’s golden door, illuminating the surrounding darkness with a holographic torch, welcoming the downtrodden to a new and better life.
You might remember the colorful interview Andreessen gave to Substack trickster Niccolo Soldo last spring. At one point in the exchange, the high-browed venture capitalist sketches out his vision of the metaverse and makes a passionate case for its superiority to what he calls “the quote-unquote real world.” His words have taken on new weight now, in the wake of Mark Zuckerberg’s announcement that Facebook is changing its name to Meta and embarking on the construction of an all-encompassing virtual world. Andreessen, an early Facebook investor and one of its directors since 2008, is a pal of Zuckerberg’s and has long had the entrepreneur’s ear. He is, it’s been said, “something of an Obi-Wan to Zuckerberg’s Luke Skywalker.”
In describing the metaverse, Zuckerberg has stressed the anodyne. There will be virtual surfing, virtual fencing, virtual poker nights. We’ll be able to see and smile at our colleagues even while working alone in our homes. We’ll be able to fly over cities and through buildings. David Attenborough will stop by for the odd chat. Andreessen’s vision is far darker and far more radical, eschatological even. He believes the metaverse is where the vast majority of humanity will end up, and should end up. If the metaverse Zuckerberg presents for public consumption seems like a tricked-out open-world videogame, Andreessen’s metaverse comes off as a cross between an amusement park and a concentration camp.
But I should let him explain it. When Soldo asks, “Are we TOO connected these days?,” Andreessen responds:
Your question is a great example of what I call Reality Privilege. … A small percent of people live in a real-world environment that is rich, even overflowing, with glorious substance, beautiful settings, plentiful stimulation, and many fascinating people to talk to, and to work with, and to date. These are also *all* of the people who get to ask probing questions like yours. Everyone else, the vast majority of humanity, lacks Reality Privilege — their online world is, or will be, immeasurably richer and more fulfilling than most of the physical and social environment around them in the quote-unquote real world.
The Reality Privileged, of course, call this conclusion dystopian, and demand that we prioritize improvements in reality over improvements in virtuality. To which I say: reality has had 5,000 years to get good, and is clearly still woefully lacking for most people; I don’t think we should wait another 5,000 years to see if it eventually closes the gap. We should build — and we are building — online worlds that make life and work and love wonderful for everyone, no matter what level of reality deprivation they find themselves in.
It’s tempting to dismiss all this as just more bad craziness from Big Tech’s fiercely adolescent mind. But that would be a mistake. For one thing, Andreessen is revealing his worldview and his ultimate goals here, and he has the influence and the resources to, if not create the future, at least push the future in the direction he prefers. As Tad Friend pointed out in “Tomorrow’s Advance Man,” a 2015 New Yorker profile of Andreessen, power in Silicon Valley accrues to those who can “not just see the future but summon it.” That’s a very small group, and Andreessen is in it. For another thing, Big Tech’s bad craziness has a tendency, as we’ve seen over the past twenty-odd years, to migrate into our everyday lives. We ignore it at our eventual peril.
In Andreessen’s view, society is condemned, by natural law, to radical inequality. In a world where material goods are scarce and human will and talent unequally distributed, society will always be divided into two groups: a small elite who lead rich lives and the masses who live impoverished ones. A few eat cake; the rest get, at best, crumbs. The entire history of civilization — Andreessen’s “5,000 years” — bears this out. Any attempt, political or economic, to overcome society’s natural bias toward extreme inequality is futile. It’s just magical thinking. The only way out, the only solution, is to overturn natural law, to escape the quote-unquote real world. That was never possible — until now. Computers have given us the chance to invent a new world of virtual abundance, where history’s have-nots can experience a simulation of the “glorious substance” that history’s haves have always enjoyed. With the metaverse, civilization is at last liberated from nature and its constraints.
The migration from the real world to the virtual world, some would argue, is already well under way. The masses — at least those who can afford computers and lots of network bandwidth — are voting with their thumbs. Most American teenagers today say they would rather hang out with their friends online than in person. And large numbers of people, particularly boys and young men, are choosing to spend as much time as possible in the hyper-stimulating virtual worlds of videogames rather than in the relative tedium of the physical world. In her influential 2011 book Reality Is Broken, Jane McGonigal argues that this choice is entirely rational:
The real world just doesn’t offer up as easily the carefully designed pleasures, the thrilling challenges, and the powerful social bonding afforded by virtual environments. Reality doesn’t motivate us as effectively. Reality isn’t engineered to maximize our potential. Reality wasn’t designed from the bottom up to make us happy. … Reality, compared to games, is broken.
McGonigal holds out hope that reality can be “fixed” (by making it more gamelike), but Andreessen would dismiss that as just another example of magical thinking. What you really want to do is speed up the out-of-reality migration — and don’t look back.
It’s important to note that Andreessen is not actually suggesting the metaverse will close the economic gap between haves and have-nots. At a material level, there’s every reason to believe that the gap will widen as the metaverse grows. It’s the Reality Privileged, or at least its Big Tech wing, who are, as Andreessen emphasizes, building the metaverse. They will also be the ones who own it and profit from it. Andreessen may expect the Reality Deprived to see the metaverse as a gift bestowed upon them by the Reality Privileged, a cosmic act of noblesse oblige, but it’s self-interest that motivates him, Zuckerberg, and the other world-builders.
Not only would the metaverse expand their wealth, it would also get the Reality Deprived out of their hair. With the have-nots spending more and more of their time experiencing a simulation of glorious substance through their VR headsets, the haves would have the actual glorious substance all the more to themselves. The beaches would be emptier, the streets cleaner. Best of all, the haves would be able to shed all responsibility, and guilt, for the problems of the real world. When Andreessen argues that we should no longer bother to “prioritize improvements in reality,” he’s letting himself off the hook. Let them eat virtual cake.
Even within the faux-rich confines of the metaverse, there’s every reason to believe that inequality would continue to reign. The metaverse, as envisioned by Andreessen and Zuckerberg, is fundamentally consumerist — it’s the world remade in the image of the experience economy. As Zuckerberg promised in his Facebook Connect keynote, the Meta metaverse will, within ten years, “host hundreds of billions of dollars of digital commerce.” Money will still exist in the virtual world, and it will be as unequally distributed as ever. That means that we will quickly see a division open up between the Virtuality Privileged and the Virtuality Deprived. While Zuckerberg was giving his keynote, Nike was, as the Wall Street Journal reported, filing trademark applications for “digital versions of its sneakers, clothing and other goods stamped with its swoosh logo.” In the metaverse, the rich kids will still get the cool kicks.
The paradox of Andreessen’s metaverse is that, despite its immateriality, it’s essentially materialist. Andreessen can’t imagine people aspiring to anything more than having the things and the experiences that money can buy. If the peasants are given a simulation of the worldly pleasures of the rich, their lives will suddenly become “wonderful.” They won’t actually own anything, but their existence will be “immeasurably richer and more fulfilling.”
When we take up residence in the metaverse, we’ll all be living the dream. It won’t be our dream, though. It will be the dream of Marc Andreessen and Mark Zuckerberg.
This is the third installment in the series “Meanings of the Metaverse,” which began here and continued here. The fourth installment, “World Enough and Time,” will appear here shortly.
Q: “Will I be able to bring my body into the metaverse?”
A: “You bring your body into your dreams, don’t you?”
Even today, nearly two years into the pandemic, one holds onto certain expectations about how a Big Tech company’s Big Reveal event will unfold. There will be flashing lights. There will be loud, bass-heavy music. There will be a crowded auditorium. The CEO, dressed in some version of Steve Jobs garb, will stroll onto a large stage. The audience of fanboys will erupt in raucous applause.
So it was disconcerting last week when Facebook Connect opened with a quiet, domestic tableau: Mark Zuckerberg sitting alone on a neutral-toned armchair in a neutral-toned living room. He made a few introductory comments — blandly grandiose, as always — then stood up and started walking slowly around the room. Behind him, propped carefully and conspicuously against a wall, a bicycle came into view. And then, a few seconds later, a surfboard appeared, also placed prominently in the camera’s field of view. Très sportif, I thought. And then it struck me: Those aren’t sporting goods. Those are symbols.
Symbols of what? Symbols of physicality. Symbols of the outdoors, the open road, sea and shore. Symbols of bodies in motion, in friendly combat with nature. Symbols of fitness, healthfulness, ruddiness, sweat. In short: Symbols of embodiment.
“Embodiment” has replaced “community” as Zuckerberg’s go-to word. It’s on a constant loop in his brain. “You can think about the metaverse,” he told The Verge in July, “as an embodied internet, where instead of just viewing content — you are in it.” “Since I was in middle school,” he went on, “one of the things that I really wanted to build was basically the sense of an embodied internet.” He hit the same note in his Stratechery interview last month: “I think the metaverse is this embodied Internet, where instead of looking at the Internet, you’re in it.” And he hit it again in describing the metaverse in his keynote: “It’s just a fundamentally different experience from staring at a screen, this quality of being physically embodied and able to interact with the world and move around inside it.”
This all comes off as typical Zuckerberg b.s. — lofty rhetoric that makes sense as marketing-speak but is otherwise absurd. I mean, how does one become “physically embodied” in a virtual world? A “virtual body” is an oxymoron. Right?
One of the most interesting things about computers is the way they hold a mirror up to us, a mirror that reflects not nature but our conception of nature. Attempts to create artificial intelligence force us to grapple with questions about our own natural intelligence — what it is, where it comes from, what its limits are. Programs for natural language processing raise hard questions about the origins and character of natural language. And in our attempts to create virtual worlds with virtual inhabitants — the metaverse, for instance — we confront profound questions about our being: What is a world? What does it mean to be in a world? What’s the relationship of mind and body? As Michael Heim wrote in his 1991 essay “The Erotic Ontology of Cyberspace,” collected in the book Cyberspace: First Steps, “cyberspace is a metaphysical laboratory, a tool for examining our very sense of reality.”
A human body, as we experience it from inside, is actually two bodies. It is the physical body (the flesh and the blood), and it is the mind’s representation of that body (which draws on the brain’s complex neuronal map of the body). Normally, we feel no divide between the physical body and its mental representation; the two act as one. But when we dream, they separate. We feel fully embodied in our dream, and yet our actual body lies more or less inert on the bed. Although the mind requires a body to create a representation of the body, once that representation exists, the mind seems able to create a virtual body that can have, so to speak, a life of its own.
It may be that the mind wants a body — that it is by nature a body-maker — and that when given the opportunity, or the necessity, it will happily conjure up a body to be its instrument. Anyone who has spent a long time controlling an avatar in a well-designed first-person videogame knows how the mind will habituate itself to a virtual body and begin to make that body feel real. It’s one of the closest experiences we now have to being in a waking dream. That transference happens with just a two-dimensional screen and a handheld controller. Imagine what the mind will do when set loose in an elaborate three-dimensional simulation and flooded with artificial sensory stimuli.
So maybe the idea of virtual embodiment is not as absurd as it seems. Maybe Zuckerberg is onto something.
Still, it would be an error, a profound ontological error, to think that virtual embodiment is the same as actual embodiment. A mental representation of a physical body is not a physical body, even if it feels like one. Walter J. Ong’s concept of “secondary orality” becomes helpful here. In his 1982 book Orality and Literacy: The Technologizing of the Word, Ong examined the popular notion that electronic technologies like the telephone and the television were returning society to an “oral culture” — like the one that existed for most of human history, until the invention of reading and writing brought “literate culture” into dominance. Ong showed that while the “secondary orality” engendered by modern electronic media shares certain important characteristics with preliterate “primary orality,” it is nonetheless a fundamentally different phenomenon. Underlying it is a different state of consciousness. Once technologized, neither speech nor consciousness can be de-technologized.
Virtual embodiment may be best understood as secondary embodiment. It may seem like natural, or primary, embodiment, but it is fundamentally different. I don’t think we know what all the differences are yet, but one of the major ones, I would suggest, will manifest itself in our social relations. When embodied as an avatar in virtual space, we may feel as though we have a physical body, but because that feeling of embodiment is purely a projection of our own mind, we will not experience other avatars as physical, full beings. They will remain shadows, cartoon figures — like the characters in videogames. Virtual embodiment, in other words, is essentially and inescapably solipsistic. Present only to ourselves, we will be embodied but estranged.
We are adaptable creatures, mentally and physically. The danger with secondary embodiment is that, indulged in too long, it may come to supplant primary embodiment. It may become our way of being. “The more we mistake the cyberbodies for ourselves,” warned Heim, with considerable prescience, “the more the machine twists ourselves into the prostheses we are wearing.” The metaverse will be the only world we know, and we will be alone in it.
This is the second installment in the series “Meanings of the Metaverse.” The first installment, “Productizing Reality,” is here.
Facebook, it’s now widely accepted, has been a calamity for the world. The obvious solution, most people would agree, is to get rid of Facebook. Mark Zuckerberg has a different idea: Get rid of the world.
Cyberutopians have been dreaming about replacing the physical world with a virtual one since Zuckerberg was in OshKosh B’gosh overalls. The desire is rooted in misanthropy — meatspace, yuck — but it is also deeply idealistic, Platonic even. The world as we know it, the thinking goes, is messy and chaotic, illogical and unpredictable. It is a place of death and decay, where mind — the true essence of the human — is subordinate to the vagaries of the flesh. Cyberspace liberates the mind from its bodily trappings. It is a place of pure form. Everything in it reflects the logic and order inherent to computer programming.
Hints of that old cyberian idealism float through Zuckerberg’s conception of the metaverse — he’s big on teleportation — but despite his habit of reminding us that he took philosophy and classics courses in college, Zuckerberg is no metaphysician. A Mammonist rather than a Platonist, he’s in it for the money. His goal with the metaverse is not just to create a virtual world that is more encompassing, more totalizing, than what we experience today with social media and videogames. It’s to turn reality itself into a product. In the metaverse, nothing happens that is not computable. That also means that, assuming the computers doing the computing are in private hands, nothing happens that is not a market transaction, a moment of monetization, either directly through an exchange of money or indirectly through the capture of data. With the metaverse, capital subsumes reality. It’s money all the way down.
Zuckerberg’s public embrace of the metaverse, culminating in last week’s Meta rebranding, has been widely seen as a cynical ploy to distract the public from the mess Facebook has made for itself and everyone else. There’s truth in that view, but it would be a mistake to think that the metaverse is just a change-the-subject tactic. It’s a coldly calculated, high-stakes, speculative bet on the future. Zuckerberg believes that several trends are coming together now, commercial, technological, and social, that justify big investments in an all-encompassing virtual sphere. He knows that Facebook — er, Meta — needs to act quickly if it’s to become the dominant player in what could be the biggest of all markets. As one of his lieutenants wrote in a recent memo, “The Metaverse is ours to lose.”
For Meta, Facebook and Instagram are cash cows — established, mature businesses that throw off a lot of cash. The company will milk those social media platforms to fund billions of dollars of investment in metaverse technologies ($10 billion this year alone). Much of that money will go into hardware, including virtual-reality headsets, artificial-reality glasses, hologram projectors, and a myriad of digital sensor systems. Facebook’s greatest vulnerability has always been its dependence on competitors — Apple, Google, Microsoft — to provide the hardware and associated operating systems required to access its sites and apps. The extent of that vulnerability was made clear this year when Apple instituted its data blockade, curtailing Facebook’s ability to track people online and hence making its ads less effective.
If Meta can control the hardware and operating systems people use to frolic in the metaverse, it will neutralize the threat posed by Apple and its other rivals. It will disintermediate the intermediaries. Beyond the hardware, though, the very structure of the metaverse, as envisioned by Zuckerberg, would make it hard if not impossible to prevent a company like Meta from collecting personal data. That’s because, as Zuckerberg emphasized in his Facebook Connect keynote Thursday, a universal metaverse requires universal interoperability. Being in the metaverse needs to be as seamless an experience as being in the real world. That can only happen if all data is shared. Gaps in the flow of data become holes in reality.
And what data! Two of the most revealing, and unsettling, moments in Zuckerberg’s keynote came when he was describing work now being done in the company’s “Reality Labs.” (Does Facebook have a Senior Vice President of Dystopian Branding?) He showed a demo of a woman walking through her home while wearing a pair of Meta AR glasses. The glasses mapped, automatically and in precise detail, everything she looked at. Such digital mapping will allow Meta to create, as Reality Labs Chief Scientist Michael Abrash explained, “an index” of “every single object” in a person’s home, “including not only location, but also the texture, geometry, and function.” The maps will become the basis for “contextual AI” that will be able to anticipate a person’s intentions and desires by tracking eye movements. What you look at, after all, is what you’re interested in. “Ultimately,” said Abrash, “her AR glasses will tell her what her available actions are at any time.” The advertising opportunities are endless.
But that’s just the start. Meta has designs on our bodies that go well beyond eye-tracking. Zuckerberg explained that Reality Labs is at work on “neural interfaces” that will tap directly into the nervous system:
We believe that neural interfaces are going to be an important part of how we interact with AR glasses, and more specifically EMG [electromyography] input from the muscles on your wrist combined with contextualized AI. It turns out that we all have unused neuromotor pathways, and with simple and perhaps even imperceptible gestures, sensors will one day be able to translate those neuromotor signals into digital commands that enable you to control your devices. It’s pretty wild.
Wild, indeed. If Facebook’s ability to collect, analyze, and monetize your personal data makes you nervous now, wait till you see what Meta has in store. There are no secrets in the metaverse.
There is, however, private property. One of the obstacles to the computerized productization of reality has always been the difficulty in establishing and enforcing property rights in cyberspace. Fifteen years ago, a company called Linden Lab took a stab at building a proto-metaverse in the form of the much-hyped videogame Second Life. The company promised its users, including many of the world’s biggest businesses, that they would be able to buy, sell, and own virtual goods in Second Life. What it failed to mention was that those goods, being composed purely of data, could be easily and perfectly copied. And that’s exactly what happened. Second Life was invaded by the so-called CopyBot, a software program that could replicate any object in the virtual world, including people’s avatars. An orgy of piracy ensued, dooming Second Life to irrelevance. Today, thanks to blockchains, cryptocurrencies, and non-fungible tokens (NFTs), the copyability problem seems to have been solved. Property rights, including identity rights, will be able to be enforced in the metaverse, which vastly expands its commercial potential.
Just because Zuckerberg wants a universal metaverse to exist doesn’t mean that it will exist. Anyone who’s been on a Zoom call knows that, even at a pretty basic level, we’re a long way from the kind of seamless, perfectly synchronized virtual existence that Meta is promising. As Michael Abrash himself cautioned, “It’s going to take about a dozen major technological breakthroughs to get to the next-generation metaverse.” That’s a lot of breakthroughs, and no breakthrough is foreordained.
But Zuckerberg has one thing on his side: When given the opportunity, people have shown themselves to be willing, even eager, to choose a simulation over the real thing. The metaverse, should it arrive, may feel like home, only better.
Now that it’s broadly understood that Facebook is a social disease, what’s to be done? In “How to Fix Social Media,” an essay in the new issue of The New Atlantis, I suggest a way forward. It begins by seeing social media companies for what they are. Companies like Facebook, Google, and Twitter are engaged in two very different communication businesses. They transmit personal messages between individuals, and they broadcast information to the masses. They’re mailbox, and they’re megaphone. The mailbox business is a common carriage business; the megaphone business is a business with a public calling. Disentangling the two businesses opens the way for a two-pronged regulatory approach built on well-established historical precedents.
Here’s a taste of the essay:
For most of the twentieth century, advances in communication technology proceeded along two separate paths. The “one-to-one” systems used for correspondence and conversation remained largely distinct from the “one-to-many” systems used for broadcasting. The distinction was manifest in every home: When you wanted to chat with someone, you’d pick up the telephone; when you wanted to view or listen to a show, you’d switch on the TV or radio. The technological separation of the two modes of communication underscored the very different roles they played in people’s lives. Everyone saw that personal communication and public communication entailed different social norms, presented different sets of risks and benefits, and merited different legal, regulatory, and commercial responses.
The fundamental principle governing personal communication was privacy: Messages transmitted between individuals should be shielded from others’ eyes and ears. The principle had deep roots. It stemmed from a European common-law doctrine, known as the secrecy of correspondence, established centuries ago to protect the confidentiality of letters sent through the mail. For early Americans, the doctrine had special importance. In the years leading up to the War of Independence, the British government routinely intercepted and read letters sent from the colonies to England. Incensed, the colonists responded by establishing their own “constitutional post,” with a strict requirement that mail be carried “under lock and key.” At the moment of the country’s birth, the secrecy of correspondence became a democratic ideal.
Late Tuesday night, just as the Red Sox were beginning a top-of-the-eleventh rally against the Rays, my smart TV decided to ask me a question of deep ontological import:
Are you still there?
To establish my thereness (and thus be permitted to continue watching the game), I would need to “interact with the remote,” my TV informed me. I would need to respond to its signal with a signal of my own. At first, as I spent a harried few seconds finding the remote and interacting with it, I was annoyed by the interruption. But I quickly came to see it as endearing. Not because of the TV’s solicitude — the solicitude of a machine is just a gentle form of extortion — but because of the TV’s cluelessness. Though I was sitting just ten feet away from the set, peering intently into its screen, my smart TV couldn’t tell that I was watching it. It didn’t know where I was or what I was doing or even if I existed at all. That’s so cute.
I had found a gap in the surveillance system, but I knew it would soon be plugged. Media used to be happy to transmit signals in a human-readable format. But as soon as it was given the ability to collect signals, in a machine-readable format, media got curious. It wanted to know, and then it wanted to know everything, and then it wanted to know everything without having to ask. If a smart device asks you a question, you know it’s not working properly. Further optimization is required. And you know, too, that somebody is working on the problem.
Rumor has it that most smart TVs already have cameras secreted inside them — somewhere in the top bezel, I would guess, not far from the microphone. The cameras generally haven’t been activated yet, but that will change. In a few years, all new TVs will have operational cameras. All new TVs will watch the watcher. This will be pitched as an attractive new feature. We’ll be told that, thanks to the embedded cameras and their facial-recognition capabilities, televisions will henceforth be able to tailor content to individual viewers automatically. TVs will know who’s on the couch without having to ask. More than that, televisions will be able to detect medical and criminal events in the home and alert the appropriate authorities. Televisions will begin to save lives, just as watches and phones and doorbells already do. It will feel comforting to know that our TVs are watching over us. What good is a TV that can’t see?
We’ll be the show then. We’ll be the show that watches the show. We’ll be the show that watches the show that watches the show. In the end, everything turns into an Escher print.
“If you’re not paying for the product, you are the product.” If I have to hear that sentence again, I swear I’ll barf. As Shoshana Zuboff has pointed out, it doesn’t even have the benefit of being true. A product has dignity as a made thing. A product is desirable in itself. That doesn’t describe what we have come to represent to the operators of the machines that gather our signals. We’re the sites out of which industrial inputs are extracted, little seams in the universal data mine. But unlike mineral deposits, we continuously replenish our supply. The more we’re tapped, the more we produce.
The game continues. My smart TV tells me the precise velocity and trajectory of every pitch. To know is to measure, to measure is to know. As the system incorporates me into its workings, it also seeks to impose on me its point of view. It wants me to see the game — to see the world, to see myself — as a stream of discrete, machine-readable signals.
The way we describe our digitally mediated selves, the ones that whirl through computer screens like silks through a magician’s hands, has changed during the pandemic. The change is more than just a matter of terminology. It signals a shift in perspective and perhaps in attitude. “Virtual” told us that distance doesn’t matter; “remote” says that it matters a lot. “Virtual” suggested freedom; “remote” suggests incarceration.
The idea of virtuality-as-liberation came to the fore in Silicon Valley after the invention of the World Wide Web in 1989, but its origins go back to the beginnings of the computer age. In the 1940s and 1950s, as N. Katherine Hayles describes in How We Became Posthuman, the pioneers of digital computing — Turing, Shannon, Wiener, et al. — severed mind from body. They defined intelligence as “a property of the formal manipulation of symbols rather than enaction in the human life-world.” Our essence as thinking beings, they implied, is independent of our bodies. It lies in patterns of information and hence can be represented through electronic data processing. The self can be abstracted, virtualized.
Though rigorously materialist in its conception, this new mind-body dualism soon took on the characteristics of a theology. Not only would we be able to represent our essence through data, the argument went, but the transfer of the self to a computer would be an act of transcendence. It would free us from the constraints of the physical — from the body and its fixed location in space. As virtual beings, we would exist everywhere all at once. We would experience the “bodiless exultation of cyberspace,” as William Gibson put it in his 1984 novel Neuromancer. The sense of disembodiment as a means of emancipation was buttressed by the rise of schools of social critics who argued that “identity” could and should be separated from biology. If the self is a pattern of data, then the self is a “construct” that is infinitely flexible.
The arrival of social media seemed to bring us closer to the virtual ideal. It gave everyone easy access to multimedia software tools for creating rich representations of the self, and it provided myriad digital theaters, or “platforms,” for these representations to perform in. More and more, self-expression became a matter of symbol-processing, of information-patterning. The content of our character became the character of our content, and vice versa.
The pandemic has brought us back to our bodies, with a vengeance. It has done this not through re-embodiment but, paradoxically, through radical disembodiment. We’ve been returned to our bodies by being forced into further separation from them, by being cut off from, to quote Hayles again, “enaction in the human life-world.” As we retreated from the physical world, social media immediately expanded to subsume everyday activities that traditionally lay outside the scope of media. The computer — whether in the form of phone, laptop, or desktop — became our most important piece of personal protective equipment. It became the sterile enclosure, the prophylactic, that enabled us to go about the business of our lives — work, school, meetings, appointments, socializing, shopping — without actually inhabiting our lives. It allowed us to become remote.
In many ways, this has been a good thing. Without the tools of social media, and our experience in using them, the pandemic would have been even more of a trial. We would have felt even more isolated, our agency more circumscribed. Social media schooled us in the arts of social distancing before those arts became mandatory. But the pandemic has also given us a lesson, a painful one, in the limits of remoteness. In promising to eliminate distance, virtuality also promised to erase the difference between presence and absence. We would always be there, wherever “there” happened to be. That seemed plausible when our virtual selves were engaged in the traditional pursuits of media — news and entertainment, play and performance, information production and information gathering — but it was revealed to be an illusion as soon as social media became our means of living. Being remote is a drag. The state of absence, a physical state but also a psychic one, is a state of loneliness and frustration, angst and ennui.
What the pandemic has revealed is that when taken to an extreme — the extreme Silicon Valley saw as an approaching paradise — virtuality does not engender a sense of liberation and exultation. It engenders a sense of confinement and despair. Absence will never be presence. A body in isolation is a self in isolation.
Think about the cramped little cells in which we appear when we’re on Zoom. It’s hard to imagine a better metaphor for our situation. The architecture of Zoom is the architecture of the Panopticon, but it comes with a twist that Jeremy Bentham never anticipated. On Zoom, each of us gets to play the roles of both jailer and jailed. We are the watcher and the watched, simultaneously. Each role is an exercise in remoteness, and each is demeaning. Each makes us feel small.
What happens when the pandemic subsides? We almost certainly will rejoice in our return to the human life-world — the world of embodiment, presence, action. We’ll celebrate our release from remoteness. But will we rebel against social media and its continuing encroachment on our lives? I have my doubts. As the research of Sherry Turkle and others has shown, one of the attractions of virtualization has always been the sense of safety it provides. Even without a new virus on the prowl, the embodied world, the world of people and things, presents threats, not just physical but also social and psychological. Presence is also exposure. When we socialize through a screen, we feel protected from many of those threats — less fearful, more in control — even if we also feel more isolated and constrained and adrift.
If, in the wake of the pandemic, we end up feeling more vulnerable to the risks inherent in being physically in the world, we may, despite our immediate relief, continue to seek refuge in our new habits of remoteness. We won’t feel liberated, but at least we’ll feel protected.