Culture vultures

I’m not tearing up over Elon Musk’s termination, with extreme prejudice, of Twitter. Kill the blue bird, gut it, stuff it, and stick it in a media museum to collect dust. Think of all the extra time journalists will now have for journalism.

But there is something ominous about a superbillionaire taking over what had become a sort of public square, a center of discourse, for crying out loud, and doing with it what he pleases, including some pretty perverted acts. I mean, that X logo? Virginia Heffernan compares it to “the skull and crossbones on cartoon bottles of poison.” To me, it looks like something that a cop might spray-paint on a floor to mark the spot where a corpse lay before it was removed—the corpse in this case being the bird’s.

Musk’s toying dismemberment of Twitter feels even more unsettling in the wake of the announcement yesterday that private-equity giant KKR is buying Simon & Schuster, publisher of Catch-22 and Den of Thieves, among other worthy titles, for a measly billion and a half. Says S&S CEO Jon Karp: “They plan to invest in us and make us even greater than we already are. What more could a publishing company want?” That would have made a funny tweet.

Both gambits are asset plays, or, maybe a better term, asset undertakings. I don’t understand everything Musk’s doing—manic episodes have their own logic—but he does get an established social-media platform and a big pile of content to feed into the large language model he’s building at xAI. (Fun game: connect the Xs.) KKR gets its own pile of content to, uh, leverage. Its intentions probably aren’t entirely literary.

Well-turned sentences had a decent run, but after TikTok they’ve become depreciating assets. Traditional word-based culture—and, sure, I’ll stick Twitter into that category—is beginning to look like a feeding ground for vultures. Tell Colleen Hoover to turn out the lights when she leaves.

Vision Pro’s big reveal

At first glance, there doesn’t seem to be much to connect Meta’s $500 Quest 3 face strap-on for gamer-proles with Apple’s $3,500 Vision Pro face tiara for elite beings of a hypothetical nature, but the devices do have one important thing in common: redundancy. Both offer a set of features that lag far behind our already well-established psychic capabilities: kludgy imitations of what our minds now do effortlessly. Our reality has been augmented, virtual, and mixed for a long time, and we’re at home in it. Bulky headgear that projects images onto fields of vision feels like a leap backwards.

Baudrillard explained it all thirty years ago in The Perfect Crime:

The virtual camera is in our heads. No need of a medium to reflect our problems in real time: every existence is telepresent to itself. The TV and the media long since left their media space to invest “real” life from the inside, precisely as a virus does a normal cell. No need of the headset and the data suit: it is our will that ends up moving about the world as though inside a computer-generated image.

Who needs real goggles when we already wear virtual ones?

Vision Pro’s value seems to lie largely in the realm of metaphor. There’s that brilliant little reality dial—the “digital crown”—that allows you to fade in and out of the world, an analog rendering of the way our consciousness now wavers between presence and absence, here and not-here. And there’s the projection of your eyes onto the outer surface of the lens, so those around you can judge your degree of social and emotional availability at any given moment. Your eyes disappear, Apple explains, as you become more “immersed,” as you retreat from your physical surroundings into the screen’s captivating images. See you later. Your fingers keep moving, though, worrying their virtual worry beads, the body reduced to interface. In its metaphors, Vision Pro reveals us for what we have become: avatars in the uncanny valley.

Apple presents its Vision line as the next logical step in the progression of computing: from desktop computing to mobile computing to, now, “spatial computing.” Apps float in the air. The invisible data streams that already swirl around us become visible. The world is the computer. Maybe that is the future of computing. Maybe not. In most situations, the smartphone still seems more practical, flexible, and user-friendly than something that, like the xenomorph in Alien, commandeers the better part of your face.

The vision that Vision offers us seems more retrospective than prospective. It shows us a time when entering a virtual world required a gizmo. That’s the past, not the future.

Meanings of the metaverse: Liquid death in life

“Liquid Death”: if the metaverse ever gets rebranded into something more consumable, that’s the name I’d suggest. It’s edgy, it’s memorable, and it hits the bullseye.

Liquid Death, the edgy canned water, has already proclaimed itself, in one of its edgy TikToks, the “official water of the metaverse.”*


That’s apt. Liquid Death is, after all, the first product to exist entirely in the metaverse. In fact, calling it a “product” seems anachronistic. It just reveals how ill-suited our vocabulary is to the metaverse. Words are too tied to things; we’re going to need a new language. I guess you could say LD is a “metaproduct,” but that doesn’t seem quite right either. It suggests that, behind the metaproduct, there is an actual, primary “product.” And that’s not true at all.

Let me explain. We used to think that avatars were virtual representations of actual objects, digital symbols of “real” things, but LD turns that old assumption on its head. The real LD is a symbol, and the stuff you pour down your neck from the can is just a physical representation of the symbol, a derivative. The water is the avatar. The actual “product” — everything is going to need to be put into scare quotes soon — is the sum of the billions of digital Liquid Death messages and images that pour continuously through billions of streams. The actual product is nothing.

Jean Baudrillard, philosopher of the hyperreal, would have put it like this:

Liquid Death: more watery than water

That’s why LD can market itself as an alcoholic beverage — the “latest innovation in beer,” as the Wall Street Journal described it — even though it’s just water. In the metaverse, a tallboy of water is every bit as intoxicating as a double IPA. More intoxicating, actually, if you get the branding right. And if you’re still partying in the “quote-unquote real world,” as Marc Andreessen puts it, drinking a symbol of an alcoholic beverage without actually drinking an alcoholic beverage is the first step to becoming a symbol yourself. Liquid Death is the metaverse’s gateway drug.

Liquid Death: more boozy than booze

Whether you call it a product or a “product” or TBD, one thing’s for sure: Liquid Death is a prophecy. Mark Zuckerberg says that his immediate goal is to “get a billion people into the metaverse doing hundreds of dollars apiece in digital commerce.” That’s his “north star.” Meeting the goal is going to require that commerce accelerate its long-term shift from goods, as traditionally defined, to symbols. Which in turn will require a psychic shift on the part of consumers, a kind of caterpillar-to-butterfly transubstantiation. We’ll need to do to the self what Liquid Death has done to booze: shift its essence from the thing to the representation of the thing. The avatar becomes the person, the non-fungible token of the self. The body turns into an avatar of the symbol, a derivative of a derivative.

Liquid Death operates a virtual country club—called the Liquid Death Country Club—which you can join, it says, by “selling your soul.” That’s what I love about Liquid Death. It tells you the truth about the metaverse.

________
*When edginess achieves cultural centrality, is it still edgy?

This is the sixth installment in the series “Meanings of the Metaverse,” which began here.

At the Concord station (for Leo Marx)

Leo Marx has died, at the mighty age of 102. His work, particularly The Machine in the Garden, inspired many people who write on the cultural consequences of technological progress, myself included. As a small tribute, I’m posting this excerpt from The Shallows, in which Marx’s influence is obvious.

It was a warm summer morning in Concord, Massachusetts. The year was 1844. Nathaniel Hawthorne was sitting in a small clearing in the woods, a particularly peaceful spot known around town as Sleepy Hollow. Deep in concentration, he was attending to every passing impression, turning himself into what Ralph Waldo Emerson, the leader of Concord’s transcendentalist movement, had eight years earlier termed a “transparent eyeball.”

Hawthorne saw, as he would record in his notebook later that day, how “sunshine glimmers through shadow, and shadow effaces sunshine, imaging that pleasant mood of mind where gayety and pensiveness intermingle.” He felt a slight breeze, “the gentlest sigh imaginable, yet with a spiritual potency, insomuch that it seems to penetrate, with its mild, ethereal coolness, through the outward clay, and breathe upon the spirit itself, which shivers with gentle delight.” He smelled on the breeze a hint of “the fragrance of the white pines.” He heard “the striking of the village clock” and “at a distance mowers whetting their scythes,” though “these sounds of labor, when at a proper remoteness, do but increase the quiet of one who lies at his ease, all in a mist of his own musings.”

Abruptly, his reverie was broken:

But, hark! there is the whistle of the locomotive,—the long shriek, harsh above all other harshness, for the space of a mile cannot mollify it into harmony. It tells a story of busy men, citizens from the hot street, who have come to spend a day in a country village,—men of business,—in short, of all unquietness; and no wonder that it gives such a startling shriek, since it brings the noisy world into the midst of our slumbrous peace.

Leo Marx opens The Machine in the Garden, his classic 1964 study of technology’s influence on American culture, with a recounting of Hawthorne’s morning in Sleepy Hollow. The writer’s real subject, Marx argues, is “the landscape of the psyche” and in particular “the contrast between two conditions of consciousness.” The quiet clearing in the woods provides the solitary thinker with “a singular insulation from disturbance,” a protected space for reflection. The clamorous arrival of the train, with its load of “busy men,” brings “the psychic dissonance associated with the onset of industrialism.” The contemplative mind is overwhelmed by the noisy world’s mechanical busyness.

The stress that Google and other Internet companies place on the efficiency of information exchange as the key to intellectual progress is nothing new. It’s been, at least since the start of the Industrial Revolution, a common theme in the history of the mind. It provides a strong and continuing counterpoint to the very different view, promulgated by the American transcendentalists as well as the earlier English romantics, that true enlightenment comes only through contemplation and introspection. The tension between the two perspectives is one manifestation of the broader conflict between, in Marx’s terms, “the machine” and “the garden”—the industrial ideal and the pastoral ideal—that has played such an important role in shaping modern society.

When carried into the realm of the intellect, the industrial ideal of efficiency poses, as Hawthorne understood, a potentially mortal threat to the pastoral ideal of contemplative thought. That doesn’t mean that promoting the rapid discovery and retrieval of information is bad. The development of a well-rounded mind requires both an ability to find and quickly parse a wide range of information and a capacity for open-ended reflection. There needs to be time for efficient data collection and time for inefficient contemplation, time to operate the machine and time to sit idly in the garden. We need to work in what Google calls the “world of numbers,” but we also need to be able to retreat to Sleepy Hollow. The problem today is that we’re losing our ability to strike a balance between those two very different states of mind. Mentally, we’re in perpetual locomotion.

Even as the printing press, invented by Johannes Gutenberg in the fifteenth century, made the literary mind the general mind, it set in motion the process that now threatens to render the literary mind obsolete. When books and periodicals began to flood the marketplace, people for the first time felt overwhelmed by information. Robert Burton, in his 1628 masterwork The Anatomy of Melancholy, described the “vast chaos and confusion of books” that confronted the seventeenth-century reader: “We are oppressed with them, our eyes ache with reading, our fingers with turning.” Earlier, in 1600, another English writer, Barnaby Rich, had complained, “One of the great diseases of this age is the multitude of books that doth so overcharge the world that it is not able to digest the abundance of idle matter that is every day hatched and brought into the world.”

Ever since, we have been seeking, with mounting urgency, new ways to bring order to the confusion of information we face every day. For centuries, the methods of personal information management tended to be simple, manual, and idiosyncratic—filing and shelving routines, alphabetization, annotation, notes and lists, catalogues and concordances, indexes, rules of thumb. There were also the more elaborate, but still largely manual, institutional mechanisms for sorting and storing information found in libraries, universities, and commercial and governmental bureaucracies. During the twentieth century, as the information flood swelled and data-processing technologies advanced, the methods and tools for both personal and institutional information management became more complex, more systematic, and increasingly automated. We began to look to the very machines that exacerbated information overload for ways to alleviate the problem.

Vannevar Bush sounded the keynote for our modern approach to managing information in his much-discussed article “As We May Think,” which appeared in the Atlantic Monthly in 1945. Bush, an electrical engineer who had served as Franklin Roosevelt’s science adviser during World War II, worried that progress was being held back by scientists’ inability to keep abreast of information relevant to their work. The publication of new material, he wrote, “has been extended far beyond our present ability to make use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.”

But a technological solution to the problem of information overload was, Bush argued, on the horizon: “The world has arrived at an age of cheap complex devices of great reliability; and something is bound to come of it.” He proposed a new kind of personal cataloguing machine, called a memex, that would be useful not only to scientists but to anyone employing “logical processes of thought.” Incorporated into a desk, the memex, Bush wrote, “is a device in which an individual stores [in compressed form] all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility.” On top of the desk are “translucent screens” onto which are projected images of the stored materials as well as “a keyboard” and “sets of buttons and levers” to navigate the database. The “essential feature” of the machine is its use of “associative indexing” to link different pieces of information: “Any item may be caused at will to select immediately and automatically another.” This process “of tying two things together is,” Bush emphasized, “the important thing.”

With his memex, Bush anticipated both the personal computer and the hypermedia system of the internet. His article inspired many of the original developers of PC hardware and software, including such early devotees of hypertext as the famed computer engineer Douglas Engelbart and HyperCard’s inventor, Bill Atkinson. But even though Bush’s vision has been fulfilled to an extent beyond anything he could have imagined in his own lifetime—we are surrounded by the memex’s offspring—the problem he set out to solve, information overload, has not abated. In fact, it’s worse than ever. As David Levy has observed, “The development of personal digital information systems and global hypertext seems not to have solved the problem Bush identified but exacerbated it.”

In retrospect, the reason for the failure seems obvious. By dramatically reducing the cost of creating, storing, and sharing information, computer networks have placed far more information within our reach than we ever had access to before. And the powerful tools for discovering, filtering, and distributing information developed by companies like Google ensure that we are forever inundated by information of immediate interest to us—and in quantities well beyond what our brains can handle. As the technologies for data processing improve, as our tools for searching and filtering become more precise, the flood of relevant information only intensifies. More of what is of interest to us becomes visible to us. Information overload has become a permanent affliction, and our attempts to cure it just make it worse. The only way to cope is to increase our scanning and our skimming, to rely even more heavily on the wonderfully responsive machines that are the source of the problem. Today, more information is “available to us than ever before,” writes Levy, “but there is less time to make use of it—and specifically to make use of it with any depth of reflection.” Tomorrow, the situation will be worse still.

It was once understood that the most effective filter of human thought is time. “The best rule of reading will be a method from nature, and not a mechanical one,” wrote Emerson in his 1858 essay “Books.” All writers must submit “their performance to the wise ear of Time, who sits and weighs, and ten years hence out of a million of pages reprints one. Again, it is judged, it is winnowed by all the winds of opinion, and what terrific selection has not passed on it, before it can be reprinted after twenty years, and reprinted after a century!” We no longer have the patience to await time’s slow and scrupulous winnowing. Inundated at every moment by information of immediate interest, we have little choice but to resort to automated filters, which grant their privilege, instantaneously, to the new and the popular. On the net, the winds of opinion have become a whirlwind.

Once the train had disgorged its cargo of busy men and steamed out of the Concord station, Hawthorne tried, with little success, to return to his deep state of concentration. He glimpsed an anthill at his feet and, “like a malevolent genius,” tossed a few grains of sand onto it, blocking the entrance. He watched “one of the inhabitants,” returning from “some public or private business,” struggle to figure out what had become of his home: “What surprise, what hurry, what confusion of mind, are expressed in his movement! How inexplicable to him must be the agency which has effected this mischief!” But Hawthorne was soon distracted from the travails of the ant. Noticing a change in the flickering pattern of shade and sun, he looked up at the clouds “scattered about the sky” and discerned in their shifting forms “the shattered ruins of a dreamer’s Utopia.”

Meanings of the metaverse: The people of the metaverse

Rachael

Through deep-learning algorithms, computers are learning to simulate us — the way we look, the way we speak, the way we move, the words we use. They are becoming experts at pastiche. They collect the traces of ourselves that we leave behind online — the data of beingness — and they weave that data into something new that resembles us. The real is the raw material of the fake.

Our computers, in other words, are learning to do what we have already learned to do. For many years now, we have spent our days consuming the data of beingness — all those digitized images and videos and words, all those facial expressions and microexpressions, those poses and posturings, those intonations of voice, those opinions and beliefs and emotions, those behaviors, those affects. Out of that vast, ever-evolving online databank of human specifications a pattern emerges — a pattern that suits us, that represents the self we desire to present to others. We cobble together a simulation of a person that we present as the person who we are. We become deep fakes that pass, in the media world that has become the world, for real people.

The child is no longer father to the man. The data is father to the man.

Rob Horning, in a new essay in Real Life, describes how he happened upon an online trove of snapshots taken in the 1980s. That was the last pre-internet decade, of course, and the faded, yellowing, flash-saturated shots might as well have been taken on a different planet. The people portrayed in them have a relationship to photography, and to media in general, that is alien to our own. “The subjects usually know that they are being watched,” writes Horning, “but they can’t imagine, even in theory, that it could be everyone watching. … It is as though who they were in general was more fixed and objective, less fluid and discursive. Though they are anonymous, they register more concretely as specific people, unpatterned by the grammar of gestures and looks that posting images to networks seems to impose.”

Horning is entranced, and disoriented, by the pictures because he sees something that no longer exists: a gap between image and being. Before we began to construct ourselves as patterns of data to be consumed through media by a general audience, the image of a person, as, for instance, captured in a snapshot, and the person were still separate. The image and the self had not yet merged. This is what gives old photographs of people their poignancy and their power, as well as their strangeness. We know, as Horning emphasizes, that back then people were self-conscious — they were aware of themselves as objects seen by others, and they composed their looks and behavior with viewers in mind — but the scale of the audience, and hence of the performance, was entirely different. The people in these photographs were not yet digitized. Their existence was not yet mediated in the way ours is.

It’s revealing that, before the arrival of the net, people didn’t talk about “authenticity” as we do today. They didn’t have to. They understood, implicitly, that there was something solid behind whatever show they might put on for public consumption. The show was not everything. The anxiety of the deep fake had not yet taken hold of the subconscious. The reason we talk so much about authenticity now is because authenticity is no longer available to us. At best, we simulate authenticity: we imbue our deep fakeness with the qualities that people associate with the authentic. We assemble a self that fits the pattern of authenticity, and the ever-present audience applauds the pattern as “authentic.” The likes roll in, the views accumulate. Our production is validated. If we’re lucky, we rise to the level of influencer. What is an influencer but the perfection of the deep-fake self?

I know, I know. You disagree. You reject my argument. You rebel against my “reductionist” speculations. You think I’m nuts. I can almost hear you screaming, “I am not a deep fake! I am a human being!” But that’s what you would think, and that’s what you would scream. After all, you have created for yourself a deep fake that believes, above all else, that it is real.

The metaverse may not yet have arrived, but we are prepared for it. We are, already, the people of the metaverse.

________________________
This is the fifth installment in the series “Meanings of the Metaverse,” which began here.

Meanings of the metaverse: Reality surfing

The metaverse promises to bring us an abundance of realities. There’ll be the recalcitrant old status-quo-ante reality — the hard-edged one that Dr. Johnson encountered when he kicked that rock to refute Bishop Berkeley’s theory of immaterialism. (Let’s call that one “OG Reality.”) Then there’ll be Virtual Reality, the 3-D dreamscape you’ll enter when you strap on VR goggles or, somewhat further in the future, tap your temple thrice to activate your Oculus Soma brain plug-in. Then there’ll be Augmented Reality, where OG Reality will be overlaid with a transparent, interactive digital-interface layer that will act kind of like the X-Ray Spex you used to be able to order through ads at the back of comic books, but with better optics. And there’ll be something called Mixed Reality, which actually encompasses a spectrum of realities with different blends of OG, Augmented, and Virtual. These will be the four main categories of what might be termed Shared Realities — realities that can be inhabited by many people (or their avatars) simultaneously. Along with the Shared Realities there will be a more or less infinite number of Personal Realities — ones of the Berkeleian type that will be inhabited or otherwise experienced by only a single mind, either embodied or disembodied. (Things get a little tricky here, as a Personal Reality can, and often will, be coterminous with a Shared Reality.) All of these realities will also exist in a plethora of brand-name variations — Apple Augmented, Meta Augmented, Microsoft Augmented, Google Augmented, QAnon Augmented, and so on. I suspect that there will also be a wide array of Deep Fake Realities ginned up by neural-net algorithms for various political or commercial purposes. Maybe OpenAI will even come up with an online Deep Fake Reality Generator that will democratize reality creation.

If T.S. Eliot was correct when he wrote, in Four Quartets, that “humankind cannot bear very much reality,” then we’re going to be screwed. I mean, I got a headache just writing that last paragraph. But maybe what Eliot really meant has more to do with quality than quantity. Maybe he was saying that what we can’t bear is too much depth in reality, not too many variations of reality. If that’s the case, then everything should be cool. The reality explosion will suit us just fine. The metaverse will do for reality what the web did for information: give us so many options that we don’t have to experience any of them very deeply at all. We’ll be able to reality surf, zipping out of a reality whenever it becomes too “heavy,” as the hippies used to say. Remember how happy Zuckerberg’s avatar looked when he was flying around the metaverse during that Facebook Connect keynote last fall? That’ll be us. Untethered, aloof, free. The great thing about the metaverse is that when you kick a rock in it, nothing is refuted.

_________

This is the fourth installment in the series “Meanings of the Metaverse,” which began here and continued here and here.

The automatic muse

In the fall of 1917, the Irish poet William Butler Yeats, now in middle age and having twice had marriage proposals turned down, first by his great love Maud Gonne and next by Gonne’s daughter Iseult, offered his hand to a well-off young Englishwoman named Georgie Hyde-Lees. She accepted, and the two were wed a few weeks later, on October 20, in a small ceremony in London.

Hyde-Lees was a psychic, and four days into their honeymoon she gave her husband a demonstration of her ability to channel the words of spirits through automatic writing. Yeats was fascinated by the messages that flowed through his wife’s pen, and in the ensuing years the couple held more than 400 such seances, the poet poring over each new script. At one point, Yeats announced that he would devote the rest of his life to interpreting the messages. “No,” the spirits responded, “we have come to give you metaphors for poetry.” And so they did, in abundance. Many of Yeats’s great late poems, with their gyres, staircases, and phases of the moon, were inspired by his wife’s mystical scribbles.

One way to think about AI-based text-generation tools like OpenAI’s GPT-3 is as clairvoyants. They are mediums that bring the words of the past into the present in a new arrangement. GPT-3 is not creating text out of nothing, after all. It is drawing on a vast corpus of human expression and, through a quasi-mystical statistical procedure (no one can explain exactly what it is doing), synthesizing all those old words into something new, something intelligible to and requiring interpretation by its interlocutor. When we talk to GPT-3, we are, in a way, communing with the dead. One of Hyde-Lees’ spirits said to Yeats, “this script has its origin in human life — all religious systems have their origin in God & descend to man — this ascends.” The same could be said of the script generated by GPT-3. It has its origin in human life; it ascends.

It’s telling that one of the first commercial applications of GPT-3, Sudowrite, is being marketed as a therapy for writer’s block. If you’re writing a story or essay and you find yourself stuck, you can plug the last few sentences of your work into Sudowrite, and it will generate the next few sentences, in a variety of versions. It may not give you metaphors for poetry (though it could), but it will give you some inspiration, stirring thoughts and opening possible new paths. It’s an automatic muse, a mechanical Georgie Hyde-Lees.

Sudowrite, like GPT-3 in general, has already been used for a lot of stunts. Kevin Roose, the New York Times technology columnist, recently used it to generate a substantial portion of a review of a mediocre new book on artificial intelligence. (The title of the review was, naturally, “A Robot Wrote this Book Review.”) Commenting on Sudowrite’s output, Roose wrote, “within a few minutes, the AI was coming up with impressively cogent paragraphs of analysis — some, frankly, better than what I could have generated on my own.”

But the potential of these AI-powered automatic writers goes far beyond journalistic parlor tricks. They promise to serve as new tools for the creation of art. One of the most remarkable pieces of writing I read this year was Vauhini Vara’s story “Ghosts” in The Believer. While locked down in 2020, Vara became obsessed with GPT-3. “I sought out examples of GPT-3’s work, and they astonished me,” she writes in an introduction to her piece. “Some of them could easily be mistaken for texts written by a human hand. In others, the language was weird, off-kilter — but often poetically so, almost truer than writing any human would produce.” Yeats would have understood.

Vara’s older sister contracted cancer in high school, and died a few years later. The experience left Vara traumatized, and, though an accomplished writer, she had never been able to write about it. But with GPT-3 she began to find the words. “I found myself irresistibly attracted to GPT-3,” she explains, “to the way it offered, without judgment, to deliver words to a writer who has found herself at a loss for them.” She began to feed GPT-3 some sentences about her sister’s illness, and the system started to weave those sentences into stories — fantastical and uncanny, but also stirring, and ultimately heartbreaking. The Believer story chronicles eight of her sessions with GPT-3. It reads as a conversation between a writer and her muse, a conversation that begins tentatively and grows richer and truer as it goes on.

At one point, after Vara confesses to feeling like “a ghost” after her sister’s death, GPT-3 continues the thought:

So I can’t describe her to you. But I can describe what it felt like to have her die. It felt like my life was an accident — or, worse, a mistake. I’d made a mistake in being born, and now, to correct it, I would have to die. I’d have to die, and someone else — a stranger — would have to live, in my place. I was that stranger. I still am.

What gives the exchange all the more resonance is the sense that, in drawing on its corpus of past human speech to generate its mysterious new pastiche, GPT-3 is expressing the pain of others who have suffered unbearable losses. Spirits are talking.