
At the Concord station (for Leo Marx)

Leo Marx has died, at the mighty age of 102. His work, particularly The Machine in the Garden, inspired many people who write on the cultural consequences of technological progress, myself included. As a small tribute, I’m posting this excerpt from The Shallows, in which Marx’s influence is obvious.

It was a warm summer morning in Concord, Massachusetts. The year was 1844. Nathaniel Hawthorne was sitting in a small clearing in the woods, a particularly peaceful spot known around town as Sleepy Hollow. Deep in concentration, he was attending to every passing impression, turning himself into what Ralph Waldo Emerson, the leader of Concord’s transcendentalist movement, had eight years earlier termed a “transparent eyeball.”

Hawthorne saw, as he would record in his notebook later that day, how “sunshine glimmers through shadow, and shadow effaces sunshine, imaging that pleasant mood of mind where gayety and pensiveness intermingle.” He felt a slight breeze, “the gentlest sigh imaginable, yet with a spiritual potency, insomuch that it seems to penetrate, with its mild, ethereal coolness, through the outward clay, and breathe upon the spirit itself, which shivers with gentle delight.” He smelled on the breeze a hint of “the fragrance of the white pines.” He heard “the striking of the village clock” and “at a distance mowers whetting their scythes,” though “these sounds of labor, when at a proper remoteness, do but increase the quiet of one who lies at his ease, all in a mist of his own musings.”

Abruptly, his reverie was broken:

But, hark! there is the whistle of the locomotive,—the long shriek, harsh above all other harshness, for the space of a mile cannot mollify it into harmony. It tells a story of busy men, citizens from the hot street, who have come to spend a day in a country village,—men of business,—in short, of all unquietness; and no wonder that it gives such a startling shriek, since it brings the noisy world into the midst of our slumbrous peace.

Leo Marx opens The Machine in the Garden, his classic 1964 study of technology’s influence on American culture, with a recounting of Hawthorne’s morning in Sleepy Hollow. The writer’s real subject, Marx argues, is “the landscape of the psyche” and in particular “the contrast between two conditions of consciousness.” The quiet clearing in the woods provides the solitary thinker with “a singular insulation from disturbance,” a protected space for reflection. The clamorous arrival of the train, with its load of “busy men,” brings “the psychic dissonance associated with the onset of industrialism.” The contemplative mind is overwhelmed by the noisy world’s mechanical busyness.

The stress that Google and other Internet companies place on the efficiency of information exchange as the key to intellectual progress is nothing new. It’s been, at least since the start of the Industrial Revolution, a common theme in the history of the mind. It provides a strong and continuing counterpoint to the very different view, promulgated by the American transcendentalists as well as the earlier English romantics, that true enlightenment comes only through contemplation and introspection. The tension between the two perspectives is one manifestation of the broader conflict between, in Marx’s terms, “the machine” and “the garden”—the industrial ideal and the pastoral ideal—that has played such an important role in shaping modern society.

When carried into the realm of the intellect, the industrial ideal of efficiency poses, as Hawthorne understood, a potentially mortal threat to the pastoral ideal of contemplative thought. That doesn’t mean that promoting the rapid discovery and retrieval of information is bad. The development of a well-rounded mind requires both an ability to find and quickly parse a wide range of information and a capacity for open-ended reflection. There needs to be time for efficient data collection and time for inefficient contemplation, time to operate the machine and time to sit idly in the garden. We need to work in what Google calls the “world of numbers,” but we also need to be able to retreat to Sleepy Hollow. The problem today is that we’re losing our ability to strike a balance between those two very different states of mind. Mentally, we’re in perpetual locomotion.

Even as the printing press, invented by Johannes Gutenberg in the fifteenth century, made the literary mind the general mind, it set in motion the process that now threatens to render the literary mind obsolete. When books and periodicals began to flood the marketplace, people for the first time felt overwhelmed by information. Robert Burton, in his 1628 masterwork The Anatomy of Melancholy, described the “vast chaos and confusion of books” that confronted the seventeenth-century reader: “We are oppressed with them, our eyes ache with reading, our fingers with turning.” A few years earlier, in 1613, another English writer, Barnaby Rich, had complained, “One of the great diseases of this age is the multitude of books that doth so overcharge the world that it is not able to digest the abundance of idle matter that is every day hatched and brought into the world.”

Ever since, we have been seeking, with mounting urgency, new ways to bring order to the confusion of information we face every day. For centuries, the methods of personal information management tended to be simple, manual, and idiosyncratic—filing and shelving routines, alphabetization, annotation, notes and lists, catalogues and concordances, indexes, rules of thumb. There were also the more elaborate, but still largely manual, institutional mechanisms for sorting and storing information found in libraries, universities, and commercial and governmental bureaucracies. During the twentieth century, as the information flood swelled and data-processing technologies advanced, the methods and tools for both personal and institutional information management became more complex, more systematic, and increasingly automated. We began to look to the very machines that exacerbated information overload for ways to alleviate the problem.

Vannevar Bush sounded the keynote for our modern approach to managing information in his much-discussed article “As We May Think,” which appeared in the Atlantic Monthly in 1945. Bush, an electrical engineer who had served as Franklin Roosevelt’s science adviser during World War II, worried that progress was being held back by scientists’ inability to keep abreast of information relevant to their work. The publication of new material, he wrote, “has been extended far beyond our present ability to make use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.”

But a technological solution to the problem of information overload was, Bush argued, on the horizon: “The world has arrived at an age of cheap complex devices of great reliability; and something is bound to come of it.” He proposed a new kind of personal cataloguing machine, called a memex, that would be useful not only to scientists but to anyone employing “logical processes of thought.” Incorporated into a desk, the memex, Bush wrote, “is a device in which an individual stores [in compressed form] all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility.” On top of the desk are “translucent screens” onto which are projected images of the stored materials as well as “a keyboard” and “sets of buttons and levers” to navigate the database. The “essential feature” of the machine is its use of “associative indexing” to link different pieces of information: “Any item may be caused at will to select immediately and automatically another.” This process “of tying two things together is,” Bush emphasized, “the important thing.”

With his memex, Bush anticipated both the personal computer and the hypermedia system of the internet. His article inspired many of the original developers of PC hardware and software, including such early devotees of hypertext as the famed computer engineer Douglas Engelbart and HyperCard’s inventor, Bill Atkinson. But even though Bush’s vision has been fulfilled to an extent beyond anything he could have imagined in his own lifetime—we are surrounded by the memex’s offspring—the problem he set out to solve, information overload, has not abated. In fact, it’s worse than ever. As David Levy has observed, “The development of personal digital information systems and global hypertext seems not to have solved the problem Bush identified but exacerbated it.”

In retrospect, the reason for the failure seems obvious. By dramatically reducing the cost of creating, storing, and sharing information, computer networks have placed far more information within our reach than we ever had access to before. And the powerful tools for discovering, filtering, and distributing information developed by companies like Google ensure that we are forever inundated by information of immediate interest to us—and in quantities well beyond what our brains can handle. As the technologies for data processing improve, as our tools for searching and filtering become more precise, the flood of relevant information only intensifies. More of what is of interest to us becomes visible to us. Information overload has become a permanent affliction, and our attempts to cure it just make it worse. The only way to cope is to increase our scanning and our skimming, to rely even more heavily on the wonderfully responsive machines that are the source of the problem. Today, more information is “available to us than ever before,” writes Levy, “but there is less time to make use of it—and specifically to make use of it with any depth of reflection.” Tomorrow, the situation will be worse still.

It was once understood that the most effective filter of human thought is time. “The best rule of reading will be a method from nature, and not a mechanical one,” wrote Emerson in his 1858 essay “Books.” All writers must submit “their performance to the wise ear of Time, who sits and weighs, and ten years hence out of a million of pages reprints one. Again, it is judged, it is winnowed by all the winds of opinion, and what terrific selection has not passed on it, before it can be reprinted after twenty years, and reprinted after a century!” We no longer have the patience to await time’s slow and scrupulous winnowing. Inundated at every moment by information of immediate interest, we have little choice but to resort to automated filters, which grant their privilege, instantaneously, to the new and the popular. On the net, the winds of opinion have become a whirlwind.

Once the train had disgorged its cargo of busy men and steamed out of the Concord station, Hawthorne tried, with little success, to return to his deep state of concentration. He glimpsed an anthill at his feet and, “like a malevolent genius,” tossed a few grains of sand onto it, blocking the entrance. He watched “one of the inhabitants,” returning from “some public or private business,” struggle to figure out what had become of his home: “What surprise, what hurry, what confusion of mind, are expressed in his movement! How inexplicable to him must be the agency which has effected this mischief!” But Hawthorne was soon distracted from the travails of the ant. Noticing a change in the flickering pattern of shade and sun, he looked up at the clouds “scattered about the sky” and discerned in their shifting forms “the shattered ruins of a dreamer’s Utopia.”

Meanings of the metaverse: The people of the metaverse

Rachael

Through deep-learning algorithms, computers are learning to simulate us — the way we look, the way we speak, the way we move, the words we use. They are becoming experts at pastiche. They collect the traces of ourselves that we leave behind online — the data of beingness — and they weave that data into something new that resembles us. The real is the raw material of the fake.

Our computers, in other words, are learning to do what we have already learned to do. For many years now, we have spent our days consuming the data of beingness — all those digitized images and videos and words, all those facial expressions and microexpressions, those poses and posturings, those intonations of voice, those opinions and beliefs and emotions, those behaviors, those affects. Out of that vast, ever-evolving online databank of human specifications a pattern emerges — a pattern that suits us, that represents the self we desire to present to others. We cobble together a simulation of a person that we present as the person who we are. We become deep fakes that pass, in the media world that has become the world, for real people.

The child is no longer father to the man. The data is father to the man.

Rob Horning, in a new essay in Real Life, describes how he happened upon an online trove of snapshots taken in the 1980s. That was the last pre-internet decade, of course, and the faded, yellowing, flash-saturated shots might as well have been taken on a different planet. The people portrayed in them have a relationship to photography, and to media in general, that is alien to our own. “The subjects usually know that they are being watched,” writes Horning, “but they can’t imagine, even in theory, that it could be everyone watching. … It is as though who they were in general was more fixed and objective, less fluid and discursive. Though they are anonymous, they register more concretely as specific people, unpatterned by the grammar of gestures and looks that posting images to networks seems to impose.”

Horning is entranced, and disoriented, by the pictures because he sees something that no longer exists: a gap between image and being. Before we began to construct ourselves as patterns of data to be consumed through media by a general audience, the image of a person, as, for instance, captured in a snapshot, and the person were still separate. The image and the self had not yet merged. This is what gives old photographs of people their poignancy and their power, as well as their strangeness. We know, as Horning emphasizes, that back then people were self-conscious — they were aware of themselves as objects seen by others, and they composed their looks and behavior with viewers in mind — but the scale of the audience, and hence of the performance, was entirely different. The people in these photographs were not yet digitized. Their existence was not yet mediated in the way ours is.

It’s revealing that, before the arrival of the net, people didn’t talk about “authenticity” as we do today. They didn’t have to. They understood, implicitly, that there was something solid behind whatever show they might put on for public consumption. The show was not everything. The anxiety of the deep fake had not yet taken hold of the subconscious. The reason we talk so much about authenticity now is that authenticity is no longer available to us. At best, we simulate authenticity: we imbue our deep fakeness with the qualities that people associate with the authentic. We assemble a self that fits the pattern of authenticity, and the ever-present audience applauds the pattern as “authentic.” The likes roll in, the views accumulate. Our production is validated. If we’re lucky, we rise to the level of influencer. What is an influencer but the perfection of the deep-fake self?

I know, I know. You disagree. You reject my argument. You rebel against my “reductionist” speculations. You think I’m nuts. I can almost hear you screaming, “I am not a deep fake! I am a human being!” But that’s what you would think, and that’s what you would scream. After all, you have created for yourself a deep fake that believes, above all else, that it is real.

The metaverse may not yet have arrived, but we are prepared for it. We are, already, the people of the metaverse.

________________________
This is the fifth installment in the series “Meanings of the Metaverse,” which began here.

Meanings of the metaverse: Reality surfing

The metaverse promises to bring us an abundance of realities. There’ll be the recalcitrant old status-quo-ante reality — the hard-edged one that Dr. Johnson encountered when he kicked that rock to refute Bishop Berkeley’s theory of immaterialism. (Let’s call that one “OG Reality.”) Then there’ll be Virtual Reality, the 3-D dreamscape you’ll enter when you strap on VR goggles or, somewhat further in the future, tap your temple thrice to activate your Oculus Soma brain plug-in. Then there’ll be Augmented Reality, where OG Reality will be overlaid with a transparent, interactive digital-interface layer that will act kind of like the X-Ray Specs you used to be able to order through ads at the back of comic books, but with better optics. And there’ll be something called Mixed Reality, which actually encompasses a spectrum of realities with different blends of OG, Augmented, and Virtual. These will be the four main categories of what might be termed Shared Realities — realities that can be inhabited by many people (or their avatars) simultaneously. Along with the Shared Realities there will be a more or less infinite number of Personal Realities — ones of the Berkeleian type that will be inhabited or otherwise experienced by only a single mind, either embodied or disembodied. (Things get a little tricky here, as a Personal Reality can, and often will, be coterminous with a Shared Reality.) All of these realities will also exist in a plethora of brand-name variations — Apple Augmented, Meta Augmented, Microsoft Augmented, Google Augmented, QAnon Augmented, and so on. I suspect that there will also be a wide array of Deep Fake Realities ginned up by neural-net algorithms for various political or commercial purposes. Maybe OpenAI will even come up with an online Deep Fake Reality Generator that will democratize reality creation.

If T.S. Eliot was correct when he wrote, in Four Quartets, that “humankind cannot bear very much reality,” then we’re going to be screwed. I mean, I got a headache just writing that last paragraph. But maybe what Eliot really meant has more to do with quality than quantity. Maybe he was saying that what we can’t bear is too much depth in reality, not too many variations of reality. If that’s the case, then everything should be cool. The reality explosion will suit us just fine. The metaverse will do for reality what the web did for information: give us so many options that we don’t have to experience any of them very deeply at all. We’ll be able to reality surf, zipping out of a reality whenever it becomes too “heavy,” as the hippies used to say. Remember how happy Zuckerberg’s avatar looked when he was flying around the metaverse during that Facebook Connect keynote last fall? That’ll be us. Untethered, aloof, free. The great thing about the metaverse is that when you kick a rock in it, nothing is refuted.

_________

This is the fourth installment in the series “Meanings of the Metaverse,” which began here and continued here and here.

The automatic muse

In the fall of 1917, the Irish poet William Butler Yeats, now in middle age and having twice had marriage proposals turned down, first by his great love Maud Gonne and next by Gonne’s daughter Iseult, offered his hand to a well-off young Englishwoman named Georgie Hyde-Lees. She accepted, and the two were wed a few weeks later, on October 20, in a small ceremony in London.

Hyde-Lees was a psychic, and four days into their honeymoon she gave her husband a demonstration of her ability to channel the words of spirits through automatic writing. Yeats was fascinated by the messages that flowed through his wife’s pen, and in the ensuing years the couple held more than 400 such seances, the poet poring over each new script. At one point, Yeats announced that he would devote the rest of his life to interpreting the messages. “No,” the spirits responded, “we have come to give you metaphors for poetry.” And so they did, in abundance. Many of Yeats’s great late poems, with their gyres, staircases, and phases of the moon, were inspired by his wife’s mystical scribbles.

One way to think about AI-based text-generation tools like OpenAI’s GPT-3 is as clairvoyants. They are mediums that bring the words of the past into the present in a new arrangement. GPT-3 is not creating text out of nothing, after all. It is drawing on a vast corpus of human expression and, through a quasi-mystical statistical procedure (no one can explain exactly what it is doing), synthesizing all those old words into something new, something intelligible to and requiring interpretation by its interlocutor. When we talk to GPT-3, we are, in a way, communing with the dead. One of Hyde-Lees’ spirits said to Yeats, “this script has its origin in human life — all religious systems have their origin in God & descend to man — this ascends.” The same could be said of the script generated by GPT-3. It has its origin in human life; it ascends.

It’s telling that one of the first commercial applications of GPT-3, Sudowrite, is being marketed as a therapy for writer’s block. If you’re writing a story or essay and you find yourself stuck, you can plug the last few sentences of your work into Sudowrite, and it will generate the next few sentences, in a variety of versions. It may not give you metaphors for poetry (though it could), but it will give you some inspiration, stirring thoughts and opening possible new paths. It’s an automatic muse, a mechanical Georgie Hyde-Lees.

Sudowrite, and GPT-3 in general, has already been used for a lot of stunts. Kevin Roose, the New York Times technology columnist, recently used it to generate a substantial portion of a review of a mediocre new book on artificial intelligence. (The title of the review was, naturally, “A Robot Wrote This Book Review.”) Commenting on Sudowrite’s output, Roose wrote, “within a few minutes, the AI was coming up with impressively cogent paragraphs of analysis — some, frankly, better than what I could have generated on my own.”

But the potential of these AI-powered automatic writers goes far beyond journalistic parlor tricks. They promise to serve as new tools for the creation of art. One of the most remarkable pieces of writing I read this year was Vauhini Vara’s story “Ghosts” in The Believer. While locked down in 2020, Vara became obsessed with GPT-3. “I sought out examples of GPT-3’s work, and they astonished me,” she writes in an introduction to her piece. “Some of them could easily be mistaken for texts written by a human hand. In others, the language was weird, off-kilter — but often poetically so, almost truer than writing any human would produce.” Yeats would have understood.

Vara’s older sister contracted cancer in high school, and died a few years later. The experience left Vara traumatized, and, though an accomplished writer, she had never been able to write about it. But with GPT-3 she began to find the words. “I found myself irresistibly attracted to GPT-3,” she explains, “to the way it offered, without judgment, to deliver words to a writer who has found herself at a loss for them.” She began to feed GPT-3 some sentences about her sister’s illness, and the system started to weave those sentences into stories — fantastical and uncanny, but also stirring, and ultimately heartbreaking. The Believer story chronicles eight of her sessions with GPT-3. It reads as a conversation between a writer and her muse, a conversation that begins tentatively and grows richer and truer as it goes on.

At one point, after Vara confesses to feeling like “a ghost” after her sister’s death, GPT-3 continues the thought:

So I can’t describe her to you. But I can describe what it felt like to have her die. It felt like my life was an accident — or, worse, a mistake. I’d made a mistake in being born, and now, to correct it, I would have to die. I’d have to die, and someone else — a stranger — would have to live, in my place. I was that stranger. I still am.

What gives the exchange all the more resonance is the sense that, in drawing on its corpus of past human speech to generate its mysterious new pastiche, GPT-3 is expressing the pain of others who have suffered unbearable losses. Spirits are talking.

Social media as pseudo-community

In 1987, a year after the publication of The Control Revolution, his seminal study of the role information systems play in society, James Beniger published an article called “Personalization of Mass Media and the Growth of Pseudo-Community” in the journal Communication Research. Beniger’s subject was the shift from “interpersonal communication” to “mass communication” as the basis of human relations. The shift had begun in the nineteenth century, with the introduction of high-speed printing presses and the proliferation of widely circulating newspapers and magazines; had accelerated with the arrival of broadcasting in the middle of the twentieth century; and was taking a new turn with the rise of digital media.

Beniger argued that interpersonal, or face-to-face, communication encourages the development of small, tightly knit, tightly controlled communities where individual interests are subordinate to group interests. For most of human history, society was structured along these intimate lines. Mass communication, more efficient but less intimate, encourages the development of large, loosely knit, loosely controlled communities where individual interests take precedence over group interests. As mass communication became ever more central to human experience in the second half of the twentieth century, thanks to the enormous popularity of radio and television, society restructured itself, with individualism and personal freedom becoming the governing ethos. The trend seemed to culminate in the free-wheeling, self-indulgent 1970s.

The arrival of the personal computer around 1980 put a twist in the story. By enabling mass media messages to be personalized, computers began to make mass communication feel as intimate as interpersonal communication, while also making mass communication even more efficient.* Imbuing broadcasting with an illusion of intimacy, computers expanded media’s power to structure and control human relations. Observed Beniger:

Gradually each of us has become enmeshed in superficially interpersonal relations that confuse personal with mass messages and increasingly include interactions with machines that write, speak, and even “think” with success steadily approaching that of humans. The change constitutes nothing less than a transformation of traditional community into impersonal association — toward an unimagined hybrid of the two extremes that we might call pseudo-community.

Beniger emphasized that, for broadcasters and advertisers, contriving a sense of intimacy had always been a central goal, as it served to give their programs and messages greater influence over the audience. Even during the early days of radio and TV, the performers who seemed most sincere to listeners and viewers tended to have the greatest success — whether their sincerity was real or feigned. With computer personalization, Beniger understood, individuals’ sense of personal connection with mass-media messages would strengthen. The glue of pseudo-community would be pseudo-intimacy. 

Although Beniger wrote his article several years before the invention of the web and long before the arrival of social media, he was remarkably prescient about what lay ahead:

The capacity of such [digital] mass media for simulating interpersonal communication is limited only by their output technologies, computing power, and artificial intelligence; their capacity for personalization is limited only by the size and quality of data sets on the households and individuals to which they are linked.

The power of “sincerity” — today we would be more likely to use the terms “authenticity” and “relatability” — would also intensify, Beniger saw. Overwhelmed with personalized messages, people would put their trust and faith in whatever human or machine broadcaster felt most real, most genuine to them.

Mass communication skills would thereby prove as effective in influencing attitudes and behavior as would the corresponding interpersonal skills in a true “community of values.” Electorates of large nation states might even entrust mass media personalities with high public office as a consequence of this dynamic.

Beniger did not live long enough to see the rise of social media, but it seems clear he would have viewed its expansion and automation of personalized broadcasts as the fulfillment of his vision of pseudo-community. Digital media’s blurring of interpersonal and mass communication, he concluded in his article, was establishing a “new infrastructure” for societal control, on a scale far greater than was possible before. The infrastructure could be used, he wrote, “for evil or for good.”

________
*For a different take on the consequences of the blurring of personal and mass communication, see my recent New Atlantis article “How to Fix Social Media.”

Deep Fake State

In “Beautiful Lies: The Art of the Deep Fake,” an essay in the Los Angeles Review of Books, I examine the rise and ramifications of deep fakes through a review of two books, photographer Jonas Bendiksen’s The Book of Veles and mathematician Noah Giansiracusa’s How Algorithms Create and Prevent Fake News. As Bendiksen’s work shows, deep-fake technology gives artists a new tool for probing reality. As for the rest of us, the technology promises to turn reality into art.

Here’s a bit from the essay:

The spread of ever more realistic deep fakes will make it even more likely that people will be taken in by fake news and other lies. The havoc of the last few years is probably just the first act of a long misinformation crisis. Eventually, though, we’ll all begin to take deep fakes for granted. We’ll come to take it as a given that we can’t believe our eyes. At that point, deep fakes will start to have a very different and even more disorienting effect. They’ll amplify not our gullibility but our skepticism. As we lose trust in the information we receive, we’ll begin, in Giansiracusa’s words, to “doubt reality itself.” We’ll go from a world where our bias was to take everything as evidence — the world Susan Sontag described in On Photography — to one where our bias is to take nothing as evidence.

The question is, what happens to “the truth” — the quotation marks seem mandatory now — when all evidence is suspect?

Read it.

Meanings of the metaverse: The Andreessen solution

We’ll be happier there.

I like to think of Marc Andreessen as the metaverse’s Statue of Liberty. He stands just outside the virtual world’s golden door, illuminating the surrounding darkness with a holographic torch, welcoming the downtrodden to a new and better life.

You might remember the colorful interview Andreessen gave to Substack trickster Niccolo Soldo last spring. At one point in the exchange, the high-browed venture capitalist sketches out his vision of the metaverse and makes a passionate case for its superiority to what he calls “the quote-unquote real world.” His words have taken on new weight now, in the wake of Mark Zuckerberg’s announcement that Facebook is changing its name to Meta and embarking on the construction of an all-encompassing virtual world. Andreessen, an early Facebook investor and one of its directors since 2008, is a pal of Zuckerberg’s and has long had the entrepreneur’s ear. He is, it’s been said, “something of an Obi-Wan to Zuckerberg’s Luke Skywalker.”

In describing the metaverse, Zuckerberg has stressed the anodyne. There will be virtual surfing, virtual fencing, virtual poker nights. We’ll be able to see and smile at our colleagues even while working alone in our homes. We’ll be able to fly over cities and through buildings. David Attenborough will stop by for the odd chat. Andreessen’s vision is far darker and far more radical, eschatological even. He believes the metaverse is where the vast majority of humanity will end up, and should end up. If the metaverse Zuckerberg presents for public consumption seems like a tricked-out open-world videogame, Andreessen’s metaverse comes off as a cross between an amusement park and a concentration camp.

But I should let him explain it. When Soldo asks, “Are we TOO connected these days?,” Andreessen responds:

Your question is a great example of what I call Reality Privilege. … A small percent of people live in a real-world environment that is rich, even overflowing, with glorious substance, beautiful settings, plentiful stimulation, and many fascinating people to talk to, and to work with, and to date. These are also *all* of the people who get to ask probing questions like yours. Everyone else, the vast majority of humanity, lacks Reality Privilege — their online world is, or will be, immeasurably richer and more fulfilling than most of the physical and social environment around them in the quote-unquote real world.

The Reality Privileged, of course, call this conclusion dystopian, and demand that we prioritize improvements in reality over improvements in virtuality. To which I say: reality has had 5,000 years to get good, and is clearly still woefully lacking for most people; I don’t think we should wait another 5,000 years to see if it eventually closes the gap. We should build — and we are building — online worlds that make life and work and love wonderful for everyone, no matter what level of reality deprivation they find themselves in.

It’s tempting to dismiss all this as just more bad craziness from Big Tech’s fiercely adolescent mind. But that would be a mistake. For one thing, Andreessen is revealing his worldview and his ultimate goals here, and he has the influence and the resources to, if not create the future, at least push the future in the direction he prefers. As Tad Friend pointed out in “Tomorrow’s Advance Man,” a 2015 New Yorker profile of Andreessen, power in Silicon Valley accrues to those who can “not just see the future but summon it.” That’s a very small group, and Andreessen is in it. For another thing, Big Tech’s bad craziness has a tendency, as we’ve seen over the past twenty-odd years, to migrate into our everyday lives. We ignore it at our eventual peril.

In Andreessen’s view, society is condemned, by natural law, to radical inequality. In a world where material goods are scarce and human will and talent unequally distributed, society will always be divided into two groups: a small elite who lead rich lives and the masses who live impoverished ones. A few eat cake; the rest get, at best, crumbs. The entire history of civilization — Andreessen’s “5,000 years” — bears this out. Any attempt, political or economic, to overcome society’s natural bias toward extreme inequality is futile. It’s just magical thinking. The only way out, the only solution, is to overturn natural law, to escape the quote-unquote real world. That was never possible — until now. Computers have given us the chance to invent a new world of virtual abundance, where history’s have-nots can experience a simulation of the “glorious substance” that history’s haves have always enjoyed. With the metaverse, civilization is at last liberated from nature and its constraints.

The migration from the real world to the virtual world, some would argue, is already well under way. The masses — at least those who can afford computers and lots of network bandwidth — are voting with their thumbs. Most American teenagers today say they would rather hang out with their friends online than in person. And large numbers of people, particularly boys and young men, are choosing to spend as much time as possible in the hyper-stimulating virtual worlds of videogames rather than in the relative tedium of the physical world. In her influential 2011 book Reality Is Broken, Jane McGonigal argues that this choice is entirely rational:

The real world just doesn’t offer up as easily the carefully designed pleasures, the thrilling challenges, and the powerful social bonding afforded by virtual environments. Reality doesn’t motivate us as effectively. Reality isn’t engineered to maximize our potential. Reality wasn’t designed from the bottom up to make us happy. … Reality, compared to games, is broken.

McGonigal holds out hope that reality can be “fixed” (by making it more gamelike), but Andreessen would dismiss that as just another example of magical thinking. What you really want to do is speed up the out-of-reality migration — and don’t look back.

Andreessen is not actually suggesting that the metaverse will close the economic gap between haves and have-nots, it’s important to note. At a material level, there’s every reason to believe that the gap will widen as the metaverse grows. It’s the Reality Privileged, or at least its Big Tech wing, who are, as Andreessen emphasizes, building the metaverse. They will also be the ones who own it and profit from it. Andreessen may expect the Reality Deprived to see the metaverse as a gift bestowed upon them by the Reality Privileged, a cosmic act of noblesse oblige, but it’s self-interest that motivates him, Zuckerberg, and the other world-builders.

Not only would the metaverse expand their wealth, it would also get the Reality Deprived out of their hair. With the have-nots spending more and more of their time experiencing a simulation of glorious substance through their VR headsets, the haves would have the actual glorious substance all the more to themselves. The beaches would be emptier, the streets cleaner. Best of all, the haves would be able to shed all responsibility, and guilt, for the problems of the real world. When Andreessen argues that we should no longer bother to “prioritize improvements in reality,” he’s letting himself off the hook. Let them eat virtual cake.

Even within the faux-rich confines of the metaverse, there’s every reason to believe that inequality would continue to reign. The metaverse, as envisioned by Andreessen and Zuckerberg, is fundamentally consumerist — it’s the world remade in the image of the experience economy. As Zuckerberg promised in his Facebook Connect keynote, the Meta metaverse will, within ten years, “host hundreds of billions of dollars of digital commerce.” Money will still exist in the virtual world, and it will be as unequally distributed as ever. That means that we will quickly see a division open up between the Virtuality Privileged and the Virtuality Deprived. While Zuckerberg was giving his keynote, Nike was, as the Wall Street Journal reported, filing trademark applications for “digital versions of its sneakers, clothing and other goods stamped with its swoosh logo.” In the metaverse, the rich kids will still get the cool kicks.

The paradox of Andreessen’s metaverse is that, despite its immateriality, it’s essentially materialist. Andreessen can’t imagine people aspiring to anything more than having the things and the experiences that money can buy. If the peasants are given a simulation of the worldly pleasures of the rich, their lives will suddenly become “wonderful.” They won’t actually own anything, but their existence will be “immeasurably richer and more fulfilling.”

When we take up residence in the metaverse, we’ll all be living the dream. It won’t be our dream, though. It will be the dream of Marc Andreessen and Mark Zuckerberg.

_________

This is the third installment in the series “Meanings of the Metaverse,” which began here and continued here. The fourth installment, “Reality Surfing,” is here.