At the Concord station (for Leo Marx)

Leo Marx has died, at the mighty age of 102. His work, particularly The Machine in the Garden, inspired many people who write on the cultural consequences of technological progress, myself included. As a small tribute, I’m posting this excerpt from The Shallows, in which Marx’s influence is obvious.

It was a warm summer morning in Concord, Massachusetts. The year was 1844. Nathaniel Hawthorne was sitting in a small clearing in the woods, a particularly peaceful spot known around town as Sleepy Hollow. Deep in concentration, he was attending to every passing impression, turning himself into what Ralph Waldo Emerson, the leader of Concord’s transcendentalist movement, had eight years earlier termed a “transparent eyeball.”

Hawthorne saw, as he would record in his notebook later that day, how “sunshine glimmers through shadow, and shadow effaces sunshine, imaging that pleasant mood of mind where gayety and pensiveness intermingle.” He felt a slight breeze, “the gentlest sigh imaginable, yet with a spiritual potency, insomuch that it seems to penetrate, with its mild, ethereal coolness, through the outward clay, and breathe upon the spirit itself, which shivers with gentle delight.” He smelled on the breeze a hint of “the fragrance of the white pines.” He heard “the striking of the village clock” and “at a distance mowers whetting their scythes,” though “these sounds of labor, when at a proper remoteness, do but increase the quiet of one who lies at his ease, all in a mist of his own musings.”

Abruptly, his reverie was broken:

But, hark! there is the whistle of the locomotive,—the long shriek, harsh above all other harshness, for the space of a mile cannot mollify it into harmony. It tells a story of busy men, citizens from the hot street, who have come to spend a day in a country village,—men of business,—in short, of all unquietness; and no wonder that it gives such a startling shriek, since it brings the noisy world into the midst of our slumbrous peace.

Leo Marx opens The Machine in the Garden, his classic 1964 study of technology’s influence on American culture, with a recounting of Hawthorne’s morning in Sleepy Hollow. The writer’s real subject, Marx argues, is “the landscape of the psyche” and in particular “the contrast between two conditions of consciousness.” The quiet clearing in the woods provides the solitary thinker with “a singular insulation from disturbance,” a protected space for reflection. The clamorous arrival of the train, with its load of “busy men,” brings “the psychic dissonance associated with the onset of industrialism.” The contemplative mind is overwhelmed by the noisy world’s mechanical busyness.

The stress that Google and other Internet companies place on the efficiency of information exchange as the key to intellectual progress is nothing new. It’s been, at least since the start of the Industrial Revolution, a common theme in the history of the mind. It provides a strong and continuing counterpoint to the very different view, promulgated by the American transcendentalists as well as the earlier English romantics, that true enlightenment comes only through contemplation and introspection. The tension between the two perspectives is one manifestation of the broader conflict between, in Marx’s terms, “the machine” and “the garden”—the industrial ideal and the pastoral ideal—that has played such an important role in shaping modern society.

When carried into the realm of the intellect, the industrial ideal of efficiency poses, as Hawthorne understood, a potentially mortal threat to the pastoral ideal of contemplative thought. That doesn’t mean that promoting the rapid discovery and retrieval of information is bad. The development of a well-rounded mind requires both an ability to find and quickly parse a wide range of information and a capacity for open-ended reflection. There needs to be time for efficient data collection and time for inefficient contemplation, time to operate the machine and time to sit idly in the garden. We need to work in what Google calls the “world of numbers,” but we also need to be able to retreat to Sleepy Hollow. The problem today is that we’re losing our ability to strike a balance between those two very different states of mind. Mentally, we’re in perpetual locomotion.

Even as the printing press, invented by Johannes Gutenberg in the fifteenth century, made the literary mind the general mind, it set in motion the process that now threatens to render the literary mind obsolete. When books and periodicals began to flood the marketplace, people for the first time felt overwhelmed by information. Robert Burton, in his 1628 masterwork The Anatomy of Melancholy, described the “vast chaos and confusion of books” that confronted the seventeenth-century reader: “We are oppressed with them, our eyes ache with reading, our fingers with turning.” Decades earlier, in 1600, another English writer, Barnaby Rich, had complained, “One of the great diseases of this age is the multitude of books that doth so overcharge the world that it is not able to digest the abundance of idle matter that is every day hatched and brought into the world.”

Ever since, we have been seeking, with mounting urgency, new ways to bring order to the confusion of information we face every day. For centuries, the methods of personal information management tended to be simple, manual, and idiosyncratic—filing and shelving routines, alphabetization, annotation, notes and lists, catalogues and concordances, indexes, rules of thumb. There were also the more elaborate, but still largely manual, institutional mechanisms for sorting and storing information found in libraries, universities, and commercial and governmental bureaucracies. During the twentieth century, as the information flood swelled and data-processing technologies advanced, the methods and tools for both personal and institutional information management became more complex, more systematic, and increasingly automated. We began to look to the very machines that exacerbated information overload for ways to alleviate the problem.

Vannevar Bush sounded the keynote for our modern approach to managing information in his much-discussed article “As We May Think,” which appeared in the Atlantic Monthly in 1945. Bush, an electrical engineer who had served as Franklin Roosevelt’s science adviser during World War II, worried that progress was being held back by scientists’ inability to keep abreast of information relevant to their work. The publication of new material, he wrote, “has been extended far beyond our present ability to make use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.”

But a technological solution to the problem of information overload was, Bush argued, on the horizon: “The world has arrived at an age of cheap complex devices of great reliability; and something is bound to come of it.” He proposed a new kind of personal cataloguing machine, called a memex, that would be useful not only to scientists but to anyone employing “logical processes of thought.” Incorporated into a desk, the memex, Bush wrote, “is a device in which an individual stores [in compressed form] all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility.” On top of the desk are “translucent screens” onto which are projected images of the stored materials as well as “a keyboard” and “sets of buttons and levers” to navigate the database. The “essential feature” of the machine is its use of “associative indexing” to link different pieces of information: “Any item may be caused at will to select immediately and automatically another.” This process “of tying two things together is,” Bush emphasized, “the important thing.”

With his memex, Bush anticipated both the personal computer and the hypermedia system of the internet. His article inspired many of the original developers of PC hardware and software, including such early devotees of hypertext as the famed computer engineer Douglas Engelbart and HyperCard’s inventor, Bill Atkinson. But even though Bush’s vision has been fulfilled to an extent beyond anything he could have imagined in his own lifetime—we are surrounded by the memex’s offspring—the problem he set out to solve, information overload, has not abated. In fact, it’s worse than ever. As David Levy has observed, “The development of personal digital information systems and global hypertext seems not to have solved the problem Bush identified but exacerbated it.”

In retrospect, the reason for the failure seems obvious. By dramatically reducing the cost of creating, storing, and sharing information, computer networks have placed far more information within our reach than we ever had access to before. And the powerful tools for discovering, filtering, and distributing information developed by companies like Google ensure that we are forever inundated by information of immediate interest to us—and in quantities well beyond what our brains can handle. As the technologies for data processing improve, as our tools for searching and filtering become more precise, the flood of relevant information only intensifies. More of what is of interest to us becomes visible to us. Information overload has become a permanent affliction, and our attempts to cure it just make it worse. The only way to cope is to increase our scanning and our skimming, to rely even more heavily on the wonderfully responsive machines that are the source of the problem. Today, more information is “available to us than ever before,” writes Levy, “but there is less time to make use of it—and specifically to make use of it with any depth of reflection.” Tomorrow, the situation will be worse still.

It was once understood that the most effective filter of human thought is time. “The best rule of reading will be a method from nature, and not a mechanical one,” wrote Emerson in his 1858 essay “Books.” All writers must submit “their performance to the wise ear of Time, who sits and weighs, and ten years hence out of a million of pages reprints one. Again, it is judged, it is winnowed by all the winds of opinion, and what terrific selection has not passed on it, before it can be reprinted after twenty years, and reprinted after a century!” We no longer have the patience to await time’s slow and scrupulous winnowing. Inundated at every moment by information of immediate interest, we have little choice but to resort to automated filters, which grant their privilege, instantaneously, to the new and the popular. On the net, the winds of opinion have become a whirlwind.

Once the train had disgorged its cargo of busy men and steamed out of the Concord station, Hawthorne tried, with little success, to return to his deep state of concentration. He glimpsed an anthill at his feet and, “like a malevolent genius,” tossed a few grains of sand onto it, blocking the entrance. He watched “one of the inhabitants,” returning from “some public or private business,” struggle to figure out what had become of his home: “What surprise, what hurry, what confusion of mind, are expressed in his movement! How inexplicable to him must be the agency which has effected this mischief!” But Hawthorne was soon distracted from the travails of the ant. Noticing a change in the flickering pattern of shade and sun, he looked up at the clouds “scattered about the sky” and discerned in their shifting forms “the shattered ruins of a dreamer’s Utopia.”

The automatic muse

In the fall of 1917, the Irish poet William Butler Yeats, then in middle age and having had marriage proposals turned down, first by his great love Maud Gonne and later by Gonne’s daughter Iseult, offered his hand to a well-off young Englishwoman named Georgie Hyde-Lees. She accepted, and the two were wed a few weeks later, on October 20, in a small ceremony in London.

Hyde-Lees was a psychic, and four days into their honeymoon she gave her husband a demonstration of her ability to channel the words of spirits through automatic writing. Yeats was fascinated by the messages that flowed through his wife’s pen, and in the ensuing years the couple held more than 400 such seances, the poet poring over each new script. At one point, Yeats announced that he would devote the rest of his life to interpreting the messages. “No,” the spirits responded, “we have come to give you metaphors for poetry.” And so they did, in abundance. Many of Yeats’s great late poems, with their gyres, staircases, and phases of the moon, were inspired by his wife’s mystical scribbles.

One way to think about AI-based text-generation tools like OpenAI’s GPT-3 is as clairvoyants. They are mediums that bring the words of the past into the present in a new arrangement. GPT-3 is not creating text out of nothing, after all. It is drawing on a vast corpus of human expression and, through a quasi-mystical statistical procedure (no one can explain exactly what it is doing), synthesizing all those old words into something new, something intelligible to and requiring interpretation by its interlocutor. When we talk to GPT-3, we are, in a way, communing with the dead. One of Hyde-Lees’ spirits said to Yeats, “this script has its origin in human life — all religious systems have their origin in God & descend to man — this ascends.” The same could be said of the script generated by GPT-3. It has its origin in human life; it ascends.

It’s telling that one of the first commercial applications of GPT-3, Sudowrite, is being marketed as a therapy for writer’s block. If you’re writing a story or essay and you find yourself stuck, you can plug the last few sentences of your work into Sudowrite, and it will generate the next few sentences, in a variety of versions. It may not give you metaphors for poetry (though it could), but it will give you some inspiration, stirring thoughts and opening possible new paths. It’s an automatic muse, a mechanical Georgie Hyde-Lees.
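
For readers curious about the mechanics, here is a minimal sketch of the kind of prompt-and-continue request a tool like Sudowrite might make under the hood, written against the GPT-3-era Completion endpoint of OpenAI’s Python library. The sample draft, engine name, and parameter values are illustrative assumptions, not a description of Sudowrite’s actual, proprietary pipeline.

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required

    # The last few sentences of a stalled draft (invented for illustration).
    stalled_draft = (
        "The train had left the station hours ago, and still she sat on the "
        "platform bench, unable to say why."
    )

    # Ask GPT-3 for three alternative continuations of the passage.
    response = openai.Completion.create(
        engine="davinci",    # GPT-3 base engine of that era
        prompt=stalled_draft,
        max_tokens=80,       # roughly a few sentences
        n=3,                 # return several versions to choose from
        temperature=0.8,     # higher values give more varied suggestions
    )

    for i, choice in enumerate(response.choices, start=1):
        print(f"--- Continuation {i} ---")
        print(choice.text.strip())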

Sudowrite, and GPT-3 in general, has already been used for a lot of stunts. Kevin Roose, the New York Times technology columnist, recently used it to generate a substantial portion of a review of a mediocre new book on artificial intelligence. (The title of the review was, naturally, “A Robot Wrote This Book Review.”) Commenting on Sudowrite’s output, Roose wrote, “within a few minutes, the AI was coming up with impressively cogent paragraphs of analysis — some, frankly, better than what I could have generated on my own.”

But the potential of these AI-powered automatic writers goes far beyond journalistic parlor tricks. They promise to serve as new tools for the creation of art. One of the most remarkable pieces of writing I read this year was Vauhini Vara’s story “Ghosts” in The Believer. While locked down in 2020, Vara became obsessed with GPT-3. “I sought out examples of GPT-3’s work, and they astonished me,” she writes in an introduction to her piece. “Some of them could easily be mistaken for texts written by a human hand. In others, the language was weird, off-kilter — but often poetically so, almost truer than writing any human would produce.” Yeats would have understood.

Vara’s older sister contracted cancer in high school and died a few years later. The experience left Vara traumatized, and, though an accomplished writer, she had never been able to write about it. But with GPT-3 she began to find the words. “I found myself irresistibly attracted to GPT-3,” she explains, “— to the way it offered, without judgment, to deliver words to a writer who has found herself at a loss for them.” She began to feed GPT-3 some sentences about her sister’s illness, and the system started to weave those sentences into stories — fantastical and uncanny, but also stirring, and ultimately heartbreaking. The Believer story chronicles eight of her sessions with GPT-3. It reads as a conversation between a writer and her muse, a conversation that begins tentatively and grows richer and truer as it goes on.

At one point, after Vara confesses to feeling like “a ghost” after her sister’s death, GPT-3 continues the thought:

So I can’t describe her to you. But I can describe what it felt like to have her die. It felt like my life was an accident — or, worse, a mistake. I’d made a mistake in being born, and now, to correct it, I would have to die. I’d have to die, and someone else — a stranger — would have to live, in my place. I was that stranger. I still am.

What gives the exchange all the more resonance is the sense that, in drawing on its corpus of past human speech to generate its mysterious new pastiche, GPT-3 is expressing the pain of others who have suffered unbearable losses. Spirits are talking.

Social media as pseudo-community

In 1987, a year after the publication of The Control Revolution, his seminal study of the role information systems play in society, James Beniger published an article called “Personalization of Mass Media and the Growth of Pseudo-Community” in the journal Communication Research. Beniger’s subject was the shift from “interpersonal communication” to “mass communication” as the basis of human relations. The shift had begun in the nineteenth century, with the introduction of high-speed printing presses and the proliferation of widely circulating newspapers and magazines; had accelerated with the arrival of broadcasting in the first half of the twentieth century; and was taking a new turn with the rise of digital media.

Beniger argued that interpersonal, or face-to-face, communication encourages the development of small, tightly knit, tightly controlled communities where individual interests are subordinate to group interests. For most of human history, society was structured along these intimate lines. Mass communication, more efficient but less intimate, encourages the development of large, loosely knit, loosely controlled communities where individual interests take precedence over group interests. As mass communication became ever more central to human experience in the second half of the twentieth century, thanks to the enormous popularity of radio and television, society restructured itself, with individualism and personal freedom becoming the governing ethos. The trend seemed to culminate in the free-wheeling, self-indulgent 1970s.

The arrival of the personal computer around 1980 put a twist in the story. By enabling mass media messages to be personalized, computers began to make mass communication feel as intimate as interpersonal communication, while also making mass communication even more efficient.* Imbuing broadcasting with an illusion of intimacy, computers expanded media’s power to structure and control human relations. Observed Beniger:

Gradually each of us has become enmeshed in superficially interpersonal relations that confuse personal with mass messages and increasingly include interactions with machines that write, speak, and even “think” with success steadily approaching that of humans. The change constitutes nothing less than a transformation of traditional community into impersonal association — toward an unimagined hybrid of the two extremes that we might call pseudo-community.

Beniger emphasized that, for broadcasters and advertisers, contriving a sense of intimacy had always been a central goal, as it served to give their programs and messages greater influence over the audience. Even during the early days of radio and TV, the performers who seemed most sincere to listeners and viewers tended to have the greatest success — whether their sincerity was real or feigned. With computer personalization, Beniger understood, individuals’ sense of personal connection with mass-media messages would strengthen. The glue of pseudo-community would be pseudo-intimacy. 

Although Beniger wrote his article several years before the invention of the web and long before the arrival of social media, he was remarkably prescient about what lay ahead:

The capacity of such [digital] mass media for simulating interpersonal communication is limited only by their output technologies, computing power, and artificial intelligence; their capacity for personalization is limited only by the size and quality of data sets on the households and individuals to which they are linked.

The power of “sincerity” — today we would be more likely to use the terms “authenticity” and “relatability” — would also intensify, Beniger saw. Overwhelmed with personalized messages, people would put their trust and faith in whatever human or machine broadcaster felt most real, most genuine to them.

Mass communication skills would thereby prove as effective in influencing attitudes and behavior as would the corresponding interpersonal skills in a true “community of values.” Electorates of large nation states might even entrust mass media personalities with high public office as a consequence of this dynamic.

Beniger did not live long enough to see the rise of social media, but it seems clear he would have viewed its expansion and automation of personalized broadcasts as the fulfillment of his vision of pseudo-community. Digital media’s blurring of interpersonal and mass communication, he concluded in his article, was establishing a “new infrastructure” for societal control, on a scale far greater than was possible before. The infrastructure could be used, he wrote, “for evil or for good.”

________
*For a different take on the consequences of the blurring of personal and mass communication, see my recent New Atlantis article “How to Fix Social Media.”

Deep Fake State

In “Beautiful Lies: The Art of the Deep Fake,” an essay in the Los Angeles Review of Books, I examine the rise and ramifications of deep fakes through a review of two books, photographer Jonas Bendiksen’s The Book of Veles and mathematician Noah Giansiracusa’s How Algorithms Create and Prevent Fake News. As Bendiksen’s work shows, deep-fake technology gives artists a new tool for probing reality. As for the rest of us, the technology promises to turn reality into art.

Here’s a bit from the essay:

The spread of ever more realistic deep fakes will make it even more likely that people will be taken in by fake news and other lies. The havoc of the last few years is probably just the first act of a long misinformation crisis. Eventually, though, we’ll all begin to take deep fakes for granted. We’ll come to take it as a given that we can’t believe our eyes. At that point, deep fakes will start to have a very different and even more disorienting effect. They’ll amplify not our gullibility but our skepticism. As we lose trust in the information we receive, we’ll begin, in Giansiracusa’s words, to “doubt reality itself.” We’ll go from a world where our bias was to take everything as evidence — the world Susan Sontag described in On Photography — to one where our bias is to take nothing as evidence.

The question is, what happens to “the truth” — the quotation marks seem mandatory now — when all evidence is suspect?

Read it.

The mailbox and the megaphone

Now that it’s broadly understood that Facebook is a social disease, what’s to be done? In “How to Fix Social Media,” an essay in the new issue of The New Atlantis, I suggest a way forward. It begins by seeing social media companies for what they are. Companies like Facebook, Google, and Twitter are engaged in two very different communication businesses. They transmit personal messages between individuals, and they broadcast information to the masses. They’re mailbox, and they’re megaphone. The mailbox business is a common carriage business; the megaphone business is a business with a public calling. Disentangling the two businesses opens the way for a two-pronged regulatory approach built on well-established historical precedents.

Here’s a taste of the essay:

For most of the twentieth century, advances in communication technology proceeded along two separate paths. The “one-to-one” systems used for correspondence and conversation remained largely distinct from the “one-to-many” systems used for broadcasting. The distinction was manifest in every home: When you wanted to chat with someone, you’d pick up the telephone; when you wanted to view or listen to a show, you’d switch on the TV or radio. The technological separation of the two modes of communication underscored the very different roles they played in people’s lives. Everyone saw that personal communication and public communication entailed different social norms, presented different sets of risks and benefits, and merited different legal, regulatory, and commercial responses.

The fundamental principle governing personal communication was privacy: Messages transmitted between individuals should be shielded from others’ eyes and ears. The principle had deep roots. It stemmed from a European common-law doctrine, known as the secrecy of correspondence, established centuries ago to protect the confidentiality of letters sent through the mail. For early Americans, the doctrine had special importance. In the years leading up to the War of Independence, the British government routinely intercepted and read letters sent from the colonies to England. Incensed, the colonists responded by establishing their own “constitutional post,” with a strict requirement that mail be carried “under lock and key.” At the moment of the country’s birth, the secrecy of correspondence became a democratic ideal.

Read on.

Are you still there?

Late Tuesday night, just as the Red Sox were beginning a top-of-the-eleventh rally against the Rays, my smart TV decided to ask me a question of deep ontological import:

Are you still there?

To establish my thereness (and thus be permitted to continue watching the game), I would need to “interact with the remote,” my TV informed me. I would need to respond to its signal with a signal of my own. At first, as I spent a harried few seconds finding the remote and interacting with it, I was annoyed by the interruption. But I quickly came to see it as endearing. Not because of the TV’s solicitude — the solicitude of a machine is just a gentle form of extortion — but because of the TV’s cluelessness. Though I was sitting just ten feet away from the set, peering intently into its screen, my smart TV couldn’t tell that I was watching it. It didn’t know where I was or what I was doing or even if I existed at all. That’s so cute.

I had found a gap in the surveillance system, but I knew it would soon be plugged. Media used to be happy to transmit signals in a human-readable format. But as soon as it was given the ability to collect signals, in a machine-readable format, media got curious. It wanted to know, and then it wanted to know everything, and then it wanted to know everything without having to ask. If a smart device asks you a question, you know it’s not working properly. Further optimization is required. And you know, too, that somebody is working on the problem.

Rumor has it that most smart TVs already have cameras secreted inside them — somewhere in the top bezel, I would guess, not far from the microphone. The cameras generally haven’t been activated yet, but that will change. In a few years, all new TVs will have operational cameras. All new TVs will watch the watcher. This will be pitched as an attractive new feature. We’ll be told that, thanks to the embedded cameras and their facial-recognition capabilities, televisions will henceforth be able to tailor content to individual viewers automatically. TVs will know who’s on the couch without having to ask. More than that, televisions will be able to detect medical and criminal events in the home and alert the appropriate authorities. Televisions will begin to save lives, just as watches and phones and doorbells already do. It will feel comforting to know that our TVs are watching over us. What good is a TV that can’t see?

We’ll be the show then. We’ll be the show that watches the show. We’ll be the show that watches the show that watches the show. In the end, everything turns into an Escher print.

“If you’re not paying for the product, you are the product.” If I have to hear that sentence again, I swear I’ll barf. As Shoshana Zuboff has pointed out, it doesn’t even have the benefit of being true. A product has dignity as a made thing. A product is desirable in itself. That doesn’t describe what we have come to represent to the operators of the machines that gather our signals. We’re the sites out of which industrial inputs are extracted, little seams in the universal data mine. But unlike mineral deposits, we continuously replenish our supply. The more we’re tapped, the more we produce.

The game continues. My smart TV tells me the precise velocity and trajectory of every pitch. To know is to measure, to measure is to know. As the system incorporates me into its workings, it also seeks to impose on me its point of view. It wants me to see the game — to see the world, to see myself — as a stream of discrete, machine-readable signals.

Are you still there?

Honestly, I have no idea.

Not being there: from virtuality to remoteness

I used to be virtual. Now I’m remote.

The way we describe our digitally mediated selves, the ones that whirl through computer screens like silks through a magician’s hands, has changed during the pandemic. The change is more than just a matter of terminology. It signals a shift in perspective and perhaps in attitude. “Virtual” told us that distance doesn’t matter; “remote” says that it matters a lot. “Virtual” suggested freedom; “remote” suggests incarceration.

The idea of virtuality-as-liberation came to the fore in Silicon Valley after the invention of the World Wide Web in 1989, but its origins go back to the beginnings of the computer age. In the 1940s and 1950s, as Katherine Hayles describes in How We Became Posthuman, the pioneers of digital computing — Turing, Shannon, Wiener, et al. — severed mind from body. They defined intelligence as “a property of the formal manipulation of symbols rather than enaction in the human life-world.” Our essence as thinking beings, they implied, is independent of our bodies. It lies in patterns of information and hence can be represented through electronic data processing. The self can be abstracted, virtualized.

Though rigorously materialist in its conception, this new mind-body dualism soon took on the characteristics of a theology. Not only would we be able to represent our essence through data, the argument went, but the transfer of the self to a computer would be an act of transcendence. It would free us from the constraints of the physical — from the body and its fixed location in space. As virtual beings, we would exist everywhere all at once. We would experience the “bodiless exultation of cyberspace,” as William Gibson put it in his 1984 novel Neuromancer. The sense of disembodiment as a means of emancipation was buttressed by the rise of schools of social critics who argued that “identity” could and should be separated from biology. If the self is a pattern of data, then the self is a “construct” that is infinitely flexible.

The arrival of social media seemed to bring us closer to the virtual ideal. It gave everyone easy access to multimedia software tools for creating rich representations of the self, and it provided myriad digital theaters, or “platforms,” for these representations to perform in. More and more, self-expression became a matter of symbol-processing, of information-patterning. The content of our character became the character of our content, and vice versa.

The pandemic has brought us back to our bodies, with a vengeance. It has done this not through re-embodiment but, paradoxically, through radical disembodiment. We’ve been returned to our bodies by being forced into further separation from them, by being cut off from, to quote Hayles again, “enaction in the human life-world.” As we retreated from the physical world, social media immediately expanded to subsume everyday activities that traditionally lay outside the scope of media. The computer — whether in the form of phone, laptop, or desktop — became our most important piece of personal protective equipment. It became the sterile enclosure, the prophylactic, that enabled us to go about the business of our lives — work, school, meetings, appointments, socializing, shopping — without actually inhabiting our lives. It allowed us to become remote.

In many ways, this has been a good thing. Without the tools of social media, and our experience in using them, the pandemic would have been even more of a trial. We would have felt even more isolated, our agency more circumscribed. Social media schooled us in the arts of social distancing before those arts became mandatory. But the pandemic has also given us a lesson, a painful one, in the limits of remoteness. In promising to eliminate distance, virtuality also promised to erase the difference between presence and absence. We would always be there, wherever “there” happened to be. That seemed plausible when our virtual selves were engaged in the traditional pursuits of media — news and entertainment, play and performance, information production and information gathering — but it was revealed to be an illusion as soon as social media became our means of living. Being remote is a drag. The state of absence, a physical state but also a psychic one, is a state of loneliness and frustration, angst and ennui.

What the pandemic has revealed is that when taken to an extreme — the extreme Silicon Valley saw as an approaching paradise — virtuality does not engender a sense of liberation and exultation. It engenders a sense of confinement and despair. Absence will never be presence. A body in isolation is a self in isolation.

Think about the cramped little cells in which we appear when we’re on Zoom. It’s hard to imagine a better metaphor for our situation. The architecture of Zoom is the architecture of the Panopticon, but it comes with a twist that Jeremy Bentham never anticipated. On Zoom, each of us gets to play the roles of both jailer and jailed. We are the watcher and the watched, simultaneously. Each role is an exercise in remoteness, and each is demeaning. Each makes us feel small.

What happens when the pandemic subsides? We almost certainly will rejoice in our return to the human life-world — the world of embodiment, presence, action. We’ll celebrate our release from remoteness. But will we rebel against social media and its continuing encroachment on our lives? I have my doubts. As the research of Sherry Turkle and others has shown, one of the attractions of virtualization has always been the sense of safety it provides. Even without a new virus on the prowl, the embodied world, the world of people and things, presents threats, not just physical but also social and psychological. Presence is also exposure. When we socialize through a screen, we feel protected from many of those threats — less fearful, more in control — even if we also feel more isolated and constrained and adrift.

If, in the wake of the pandemic, we end up feeling more vulnerable to the risks inherent in being physically in the world, we may, despite our immediate relief, continue to seek refuge in our new habits of remoteness. We won’t feel liberated, but at least we’ll feel protected.