Goodbye Rough Type, Hello New Cartographies

Rough Type has had a twenty-year run. That seems like long enough, particularly seeing as the blog has been pretty much dormant in recent years. So this will be the last Rough Type post.

But don’t shed too many tears. I’m going to continue blogging, maybe even at a faster clip, through a Substack I’ve started called New Cartographies. My first new post is up. It’s titled “Dead Labor, Dead Speech,” and here’s how it begins:

If, as Marx argued, capital is dead labor, then the products of large language models might best be understood as dead speech. Just as factory workers produce, with their “living labor,” machines and other forms of physical capital that are then used, as “dead labor,” to produce more physical commodities, so human expressions of thought and creativity—“living speech” in the forms of writing, art, photography, and music—become raw materials used to produce “dead speech” in those same forms. LLMs, to continue with Marx’s horror-story metaphor, feed “vampire-like” on human culture. Without our words and pictures and songs, they would cease to function. They would become as silent as a corpse in a casket.

Read on (and thanks).

Large Language Manglers

Who’s the dummy?

I was reading Joanna Stern’s report in the Wall Street Journal about the new AI features that Apple is rushing to complete for the iPhone 16s. (Can’t LLMs debug their own code? I thought that was a done deal.) Among the promised features is a Rewrite function that will translate your messages and other writings into different styles of prose. One style is called Professional. Stern tested it on a note she was writing to her mom. Here’s the original:

I’ll be home tomorrow. 

Here’s how it reads after the rewrite:

I anticipate returning home tomorrow.

So, if I’m getting this right, you’d use Professional mode any time you want to sound like you have a stick up your ass. I anticipate forgoing its deployment.

This is all very silly, or at least would be if we hadn’t lost our collective mind. For years now, we’ve been acclimating ourselves to having machines speak on our behalf. It began with autocorrect and autoedit functions in word processors and has continued through ever more aggressive autocomplete functions on phones. Having an app fiddle with your writing now seems normal, even necessary given how much time we all spend messaging, posting, and commenting. The endless labor of self-expression cries out for the efficiency of automation.

We don’t even care that computers, despite years of experience, still do a crappy job of what would seem to be pretty simple algorithmic work. Here’s a sloppy text that I wrote with the aid of my messaging app. It’s filled with typos, weird punctuation, and bizarre word substitutions, but I’m sure you’ll get the gist. If not, who cares? Along with speeding up exchanges, the implicit “it’s autocomplete’s fault!” excuse that now accompanies every messy text has the added benefit of covering up the fact that we can’t be bothered to spend five seconds proofreading the messages we send to friends and family members. We’ve got headlines to read, YouTubes to watch.

Since OpenAI introduced ChatGPT two years ago, people have taken to using it for all sorts of formal writing tasks, from college papers to corporate memos to government reports. I was recently talking with a Methodist bishop, and she told me that a colleague now uses generative AI to help him write sermons. Apple’s Rewrite and the similar writing tools being introduced by Google, Microsoft, Meta, and others extend the AI-based outsourcing of personal speech into more intimate areas, shaping the way we talk with the people closest to us. It may start with rewriting—to help us “deliver the right words to meet the occasion,” as Apple describes it—but it will soon expand into the automated production of condolence messages, wedding vows, and the like. LLMs give us ventriloquism in reverse. The mechanical dummy speaks through your mouth.

It’s also the next stage in the long-running industrialization of human communication—one of the subjects of my forthcoming book Superbloom. For nearly two centuries, we’ve embraced the relentless speeding up of communication by mechanical means, believing that the industrial ideals of efficiency, productivity, and optimization are as applicable to speech as to the manufacture of widgets. More recently, we’ve embraced the mechanization of editing, allowing software to replace people in choosing the information we see (and don’t see). With LLMs, the industrialization ethic moves at last into the creation of the very content of our speech.

It’s hard to know what to say. Why not make it easier? Or, as Apple Rewrite Professional puts it: The rendering of thoughts into prose is one of the most challenging endeavors in which a human being can engage. It would be advisable to subject the task to a process of simplification.

Introducing Superbloom

The poppies come out every March in Walker Canyon, an environmentally sensitive spot in the Temescal Mountains seventy miles southeast of Los Angeles, but the show they put on in early 2019 was something special. Thanks to a wet winter in the normally arid region, seeds that had long lain dormant germinated, and the poppies appeared in numbers not seen in years. The flowers covered the canyon’s slopes in carpets of vivid, almost fluorescent orange — the shade you get on hunters’ vests and caps. On social media, word of the so-called superbloom spread quickly. First on the scene were the influencers.

So begins my new book, Superbloom: How Technologies of Connection Tear Us Apart, to be published in January 2025 by W. W. Norton.

Fifteen years ago, when I was finishing up my book The Shallows: What the Internet Is Doing to Our Brains, I knew that I was telling only part of the story of the net’s effects. The book focused on the personal consequences of our entry into an artificial environment geared to agitation and distraction — the way it shapes our thoughts and perceptions, our ways of reading and sense-making. What it didn’t cover was the social and political effects of the technology. Back then — this was 2009 — smartphones were brand new, app stores had only recently opened, and social media platforms like Facebook, Twitter, and Instagram were just beginning to draw a mass audience. TikTok wouldn’t appear for nearly a decade. The social world we live in today, in short, didn’t exist. As for psychological and sociological studies of online socializing, they were few and their results were mixed.

We know a lot more now. Although we socialize through social media more than ever today, our attitude toward the experience has, for many good reasons, shifted from enthusiasm to wariness. Our view of the companies running the platforms, meanwhile, has pinballed from celebratory to contemptuous. There’s talk of warning labels, breakups, outright bans. But even as public opinion shifted over the last seven or eight years, I sensed that there was something important missing from all the debates and discussions. That sense, which strengthened in 2019 when I taught an undergraduate seminar on social media at Williams College in Massachusetts, inspired me to begin the research that led to Superbloom.

In the book, I try to put the phenomenon of social media into a broader context, one spanning the history of communication technology as well as the psychological and sociological evidence of how mediated communication works on the human psyche and influences people’s relationships. At the center of the book is a paradox that was summed up well by the Canadian scholar Harold Innis in a 1947 lecture: “Enormous improvements in communication have made understanding more difficult.” No one paid attention to the idea back then, but I think we need to pay attention to it now.

Superbloom is available for preordering. I hope you’ll read it.

photo: cultivar413 (cc).

Culture vultures

I’m not tearing up over Elon Musk’s termination, with extreme prejudice, of Twitter. Kill the blue bird, gut it, stuff it, and stick it in a media museum to collect dust. Think of all the extra time journalists will now have for journalism.

But there is something ominous about a superbillionaire taking over what had become a sort of public square, a center of discourse, for crying out loud, and doing with it what he pleases, including some pretty perverted acts. I mean, that X logo? Virginia Heffernan compares it to “the skull and crossbones on cartoon bottles of poison.” To me, it looks like something that a cop might spray-paint on a floor to mark the spot where a corpse lay before it was removed—the corpse in this case being the bird’s.

Musk’s toying dismemberment of Twitter feels even more unsettling in the wake of the announcement yesterday that private-equity giant KKR is buying Simon & Schuster, publisher of Catch-22 and Den of Thieves, among other worthy titles, for a measly billion and a half. Says S&S CEO Jon Karp: “They plan to invest in us and make us even greater than we already are. What more could a publishing company want?” That would have made a funny tweet.

Both gambits are asset plays, or, maybe a better term, asset undertakings. I don’t understand everything Musk’s doing—manic episodes have their own logic—but he does get an established social-media platform and a big pile of content to feed into the large language model he’s building at xAI. (Fun game: connect the Xs.) KKR gets its own pile of content to, uh, leverage. Its intentions probably aren’t entirely literary.

Well-turned sentences had a decent run, but after TikTok they’ve become depreciating assets. Traditional word-based culture—and, sure, I’ll stick Twitter into that category—is beginning to look like a feeding ground for vultures. Tell Colleen Hoover to turn out the lights when she leaves.

Vision Pro’s big reveal

At first glance, there doesn’t seem to be much to connect Meta’s $500 Quest 3 face strap-on for gamer-proles with Apple’s $3,500 Vision Pro face tiara for elite beings of a hypothetical nature, but the devices do have one important thing in common: redundancy. Both offer a set of features that lag far behind our already well-established psychic capabilities. They offer kludgy imitations of what our minds now do effortlessly. Our reality has been augmented, virtual, and mixed for a long time, and we’re at home in it. Bulky headgear that projects images onto fields of vision feels like a leap backwards.

Baudrillard explained it all thirty years ago in The Perfect Crime:

The virtual camera is in our heads. No need of a medium to reflect our problems in real time: every existence is telepresent to itself. The TV and the media long since left their media space to invest “real” life from the inside, precisely as a virus does a normal cell. No need of the headset and the data suit: it is our will that ends up moving about the world as though inside a computer-generated image.

Who needs real goggles when we already wear virtual ones?

Vision Pro’s value seems to lie largely in the realm of metaphor. There’s that brilliant little reality dial—the “digital crown”—that allows you to fade in and out of the world, an analog rendering of the way our consciousness now wavers between presence and absence, here and not-here. And there’s the projection of your eyes onto the outer surface of the lens, so those around you can judge your degree of social and emotional availability at any given moment. Your eyes disappear, Apple explains, as you become more “immersed,” as you retreat from your physical surroundings into the screen’s captivating images. See you later. Your fingers keep moving, though, worrying their virtual worry beads, the body reduced to interface. In its metaphors, Vision Pro reveals us for what we have become: avatars in the uncanny valley.

Apple presents its Vision line as the next logical step in the progression of computing: from desktop computing to mobile computing to, now, “spatial computing.” Apps float in the air. The invisible data streams that already swirl around us become visible. The world is the computer. Maybe that is the future of computing. Maybe not. In most situations, the smartphone still seems more practical, flexible, and user-friendly than something that, like the xenomorph in Alien, commandeers the better part of your face.

The vision that Vision offers us seems more retrospective than prospective. It shows us a time when entering a virtual world required a gizmo. That’s the past, not the future.

At the Concord station (for Leo Marx)

Leo Marx has died, at the mighty age of 102. His work, particularly The Machine in the Garden, inspired many people who write on the cultural consequences of technological progress, myself included. As a small tribute, I’m posting this excerpt from The Shallows, in which Marx’s influence is obvious.

It was a warm summer morning in Concord, Massachusetts. The year was 1844. Nathaniel Hawthorne was sitting in a small clearing in the woods, a particularly peaceful spot known around town as Sleepy Hollow. Deep in concentration, he was attending to every passing impression, turning himself into what Ralph Waldo Emerson, the leader of Concord’s transcendentalist movement, had eight years earlier termed a “transparent eyeball.”

Hawthorne saw, as he would record in his notebook later that day, how “sunshine glimmers through shadow, and shadow effaces sunshine, imaging that pleasant mood of mind where gayety and pensiveness intermingle.” He felt a slight breeze, “the gentlest sigh imaginable, yet with a spiritual potency, insomuch that it seems to penetrate, with its mild, ethereal coolness, through the outward clay, and breathe upon the spirit itself, which shivers with gentle delight.” He smelled on the breeze a hint of “the fragrance of the white pines.” He heard “the striking of the village clock” and “at a distance mowers whetting their scythes,” though “these sounds of labor, when at a proper remoteness, do but increase the quiet of one who lies at his ease, all in a mist of his own musings.”

Abruptly, his reverie was broken:

But, hark! there is the whistle of the locomotive,—the long shriek, harsh above all other harshness, for the space of a mile cannot mollify it into harmony. It tells a story of busy men, citizens from the hot street, who have come to spend a day in a country village,—men of business,—in short, of all unquietness; and no wonder that it gives such a startling shriek, since it brings the noisy world into the midst of our slumbrous peace.

Leo Marx opens The Machine in the Garden, his classic 1964 study of technology’s influence on American culture, with a recounting of Hawthorne’s morning in Sleepy Hollow. The writer’s real subject, Marx argues, is “the landscape of the psyche” and in particular “the contrast between two conditions of consciousness.” The quiet clearing in the woods provides the solitary thinker with “a singular insulation from disturbance,” a protected space for reflection. The clamorous arrival of the train, with its load of “busy men,” brings “the psychic dissonance associated with the onset of industrialism.” The contemplative mind is overwhelmed by the noisy world’s mechanical busyness.

The stress that Google and other Internet companies place on the efficiency of information exchange as the key to intellectual progress is nothing new. It’s been, at least since the start of the Industrial Revolution, a common theme in the history of the mind. It provides a strong and continuing counterpoint to the very different view, promulgated by the American transcendentalists as well as the earlier English romantics, that true enlightenment comes only through contemplation and introspection. The tension between the two perspectives is one manifestation of the broader conflict between, in Marx’s terms, “the machine” and “the garden”—the industrial ideal and the pastoral ideal—that has played such an important role in shaping modern society.

When carried into the realm of the intellect, the industrial ideal of efficiency poses, as Hawthorne understood, a potentially mortal threat to the pastoral ideal of contemplative thought. That doesn’t mean that promoting the rapid discovery and retrieval of information is bad. The development of a well-rounded mind requires both an ability to find and quickly parse a wide range of information and a capacity for open-ended reflection. There needs to be time for efficient data collection and time for inefficient contemplation, time to operate the machine and time to sit idly in the garden. We need to work in what Google calls the “world of numbers,” but we also need to be able to retreat to Sleepy Hollow. The problem today is that we’re losing our ability to strike a balance between those two very different states of mind. Mentally, we’re in perpetual locomotion.

Even as the printing press, invented by Johannes Gutenberg in the fifteenth century, made the literary mind the general mind, it set in motion the process that now threatens to render the literary mind obsolete. When books and periodicals began to flood the marketplace, people for the first time felt overwhelmed by information. Robert Burton, in his 1628 masterwork The Anatomy of Melancholy, described the “vast chaos and confusion of books” that confronted the seventeenth-century reader: “We are oppressed with them, our eyes ache with reading, our fingers with turning.” Decades earlier, in 1600, another English writer, Barnaby Rich, had complained, “One of the great diseases of this age is the multitude of books that doth so overcharge the world that it is not able to digest the abundance of idle matter that is every day hatched and brought into the world.”

Ever since, we have been seeking, with mounting urgency, new ways to bring order to the confusion of information we face every day. For centuries, the methods of personal information management tended to be simple, manual, and idiosyncratic—filing and shelving routines, alphabetization, annotation, notes and lists, catalogues and concordances, indexes, rules of thumb. There were also the more elaborate, but still largely manual, institutional mechanisms for sorting and storing information found in libraries, universities, and commercial and governmental bureaucracies. During the twentieth century, as the information flood swelled and data-processing technologies advanced, the methods and tools for both personal and institutional information management became more complex, more systematic, and increasingly automated. We began to look to the very machines that exacerbated information overload for ways to alleviate the problem.

Vannevar Bush sounded the keynote for our modern approach to managing information in his much-discussed article “As We May Think,” which appeared in the Atlantic Monthly in 1945. Bush, an electrical engineer who had served as Franklin Roosevelt’s science adviser during World War II, worried that progress was being held back by scientists’ inability to keep abreast of information relevant to their work. The publication of new material, he wrote, “has been extended far beyond our present ability to make use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.”

But a technological solution to the problem of information overload was, Bush argued, on the horizon: “The world has arrived at an age of cheap complex devices of great reliability; and something is bound to come of it.” He proposed a new kind of personal cataloguing machine, called a memex, that would be useful not only to scientists but to anyone employing “logical processes of thought.” Incorporated into a desk, the memex, Bush wrote, “is a device in which an individual stores [in compressed form] all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility.” On top of the desk are “translucent screens” onto which are projected images of the stored materials as well as “a keyboard” and “sets of buttons and levers” to navigate the database. The “essential feature” of the machine is its use of “associative indexing” to link different pieces of information: “Any item may be caused at will to select immediately and automatically another.” This process “of tying two things together is,” Bush emphasized, “the important thing.”
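
As a crude sketch of the idea (every name below is hypothetical, mine rather than Bush’s), associative indexing amounts to little more than a flat store of items plus two-way links between them, which a reader then follows as a trail:

# A toy sketch (hypothetical, not Bush's design): items stored by name,
# "tied" together with two-way associative links, and read back by
# following a trail from one item to the next.

class Memex:
    def __init__(self):
        self.items = {}   # name -> stored text (a book, record, or note)
        self.links = {}   # name -> names of associated items

    def store(self, name, text):
        self.items[name] = text
        self.links.setdefault(name, [])

    def tie(self, a, b):
        # Bush's "essential feature": tying two items together so that
        # either one may select the other "immediately and automatically."
        self.links[a].append(b)
        self.links[b].append(a)

    def trail(self, start, steps=3):
        # Follow associations outward from a starting item, skipping
        # items already visited.
        path, seen, current = [start], {start}, start
        for _ in range(steps):
            unvisited = [n for n in self.links.get(current, []) if n not in seen]
            if not unvisited:
                break
            current = unvisited[0]
            seen.add(current)
            path.append(current)
        return [(name, self.items[name]) for name in path]

m = Memex()
m.store("ships", "Notes on navigation in the days of square-rigged ships")
m.store("overload", "Clipping on the growth of the scientific record")
m.tie("ships", "overload")
print(m.trail("ships"))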

With his memex, Bush anticipated both the personal computer and the hypermedia system of the internet. His article inspired many of the original developers of PC hardware and software, including such early devotees of hypertext as the famed computer engineer Douglas Engelbart and HyperCard’s inventor, Bill Atkinson. But even though Bush’s vision has been fulfilled to an extent beyond anything he could have imagined in his own lifetime—we are surrounded by the memex’s offspring—the problem he set out to solve, information overload, has not abated. In fact, it’s worse than ever. As David Levy has observed, “The development of personal digital information systems and global hypertext seems not to have solved the problem Bush identified but exacerbated it.”

In retrospect, the reason for the failure seems obvious. By dramatically reducing the cost of creating, storing, and sharing information, computer networks have placed far more information within our reach than we ever had access to before. And the powerful tools for discovering, filtering, and distributing information developed by companies like Google ensure that we are forever inundated by information of immediate interest to us—and in quantities well beyond what our brains can handle. As the technologies for data processing improve, as our tools for searching and filtering become more precise, the flood of relevant information only intensifies. More of what is of interest to us becomes visible to us. Information overload has become a permanent affliction, and our attempts to cure it just make it worse. The only way to cope is to increase our scanning and our skimming, to rely even more heavily on the wonderfully responsive machines that are the source of the problem. Today, more information is “available to us than ever before,” writes Levy, “but there is less time to make use of it—and specifically to make use of it with any depth of reflection.” Tomorrow, the situation will be worse still.

It was once understood that the most effective filter of human thought is time. “The best rule of reading will be a method from nature, and not a mechanical one,” wrote Emerson in his 1858 essay “Books.” All writers must submit “their performance to the wise ear of Time, who sits and weighs, and ten years hence out of a million of pages reprints one. Again, it is judged, it is winnowed by all the winds of opinion, and what terrific selection has not passed on it, before it can be reprinted after twenty years, and reprinted after a century!” We no longer have the patience to await time’s slow and scrupulous winnowing. Inundated at every moment by information of immediate interest, we have little choice but to resort to automated filters, which grant their privilege, instantaneously, to the new and the popular. On the net, the winds of opinion have become a whirlwind.

Once the train had disgorged its cargo of busy men and steamed out of the Concord station, Hawthorne tried, with little success, to return to his deep state of concentration. He glimpsed an anthill at his feet and, “like a malevolent genius,” tossed a few grains of sand onto it, blocking the entrance. He watched “one of the inhabitants,” returning from “some public or private business,” struggle to figure out what had become of his home: “What surprise, what hurry, what confusion of mind, are expressed in his movement! How inexplicable to him must be the agency which has effected this mischief!” But Hawthorne was soon distracted from the travails of the ant. Noticing a change in the flickering pattern of shade and sun, he looked up at the clouds “scattered about the sky” and discerned in their shifting forms “the shattered ruins of a dreamer’s Utopia.”

The automatic muse

In the fall of 1917, the Irish poet William Butler Yeats, now in middle age and having twice had marriage proposals turned down, first by his great love Maud Gonne and next by Gonne’s daughter Iseult, offered his hand to a well-off young Englishwoman named Georgie Hyde-Lees. She accepted, and the two were wed a few weeks later, on October 20, in a small ceremony in London.

Hyde-Lees was a psychic, and four days into their honeymoon she gave her husband a demonstration of her ability to channel the words of spirits through automatic writing. Yeats was fascinated by the messages that flowed through his wife’s pen, and in the ensuing years the couple held more than 400 such seances, the poet poring over each new script. At one point, Yeats announced that he would devote the rest of his life to interpreting the messages. “No,” the spirits responded, “we have come to give you metaphors for poetry.” And so they did, in abundance. Many of Yeats’s great late poems, with their gyres, staircases, and phases of the moon, were inspired by his wife’s mystical scribbles.

One way to think about AI-based text-generation tools like OpenAI’s GPT-3 is as clairvoyants. They are mediums that bring the words of the past into the present in a new arrangement. GPT-3 is not creating text out of nothing, after all. It is drawing on a vast corpus of human expression and, through a quasi-mystical statistical procedure (no one can explain exactly what it is doing), synthesizing all those old words into something new, something intelligible to and requiring interpretation by its interlocutor. When we talk to GPT-3, we are, in a way, communing with the dead. One of Hyde-Lees’ spirits said to Yeats, “this script has its origin in human life — all religious systems have their origin in God & descend to man — this ascends.” The same could be said of the script generated by GPT-3. It has its origin in human life; it ascends.

It’s telling that one of the first commercial applications of GPT-3, Sudowrite, is being marketed as a therapy for writer’s block. If you’re writing a story or essay and you find yourself stuck, you can plug the last few sentences of your work into Sudowrite, and it will generate the next few sentences, in a variety of versions. It may not give you metaphors for poetry (though it could), but it will give you some inspiration, stirring thoughts and opening possible new paths. It’s an automatic muse, a mechanical Georgie Hyde-Lees.
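
Sudowrite’s internals aren’t public, but the basic move it sells, feeding in the last few sentences of a draft and getting back several candidate continuations, can be sketched against a general-purpose language-model API. The snippet below is only such a sketch: it assumes the OpenAI Python client, and the model name, prompt wording, and parameters are placeholders of mine, not Sudowrite’s.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

def muse(stalled_passage, versions=3, max_tokens=120):
    # Ask the model for several alternative continuations of the passage,
    # the way a blocked writer might ask Sudowrite for options.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Continue the user's draft in its own voice and style."},
            {"role": "user", "content": stalled_passage},
        ],
        n=versions,
        max_tokens=max_tokens,
    )
    return [choice.message.content for choice in response.choices]

for i, option in enumerate(muse("The train left the station without her."), start=1):
    print(f"--- version {i} ---\n{option}\n")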

Sudowrite, and GPT-3 in general, has already been used for a lot of stunts. Kevin Roose, the New York Times technology columnist, recently used it to generate a substantial portion of a review of a mediocre new book on artificial intelligence. (The title of the review was, naturally, “A Robot Wrote This Book Review.”) Commenting on Sudowrite’s output, Roose wrote, “within a few minutes, the AI was coming up with impressively cogent paragraphs of analysis — some, frankly, better than what I could have generated on my own.”

But the potential of these AI-powered automatic writers goes far beyond journalistic parlor tricks. They promise to serve as new tools for the creation of art. One of the most remarkable pieces of writing I read this year was Vauhini Vara’s story “Ghosts” in The Believer. While locked down in 2020, Vara became obsessed with GPT-3. “I sought out examples of GPT-3’s work, and they astonished me,” she writes in an introduction to her piece. “Some of them could easily be mistaken for texts written by a human hand. In others, the language was weird, off-kilter — but often poetically so, almost truer than writing any human would produce.” Yeats would have understood.

Vara’s older sister contracted cancer in high school and died a few years later. The experience left Vara traumatized, and, though an accomplished writer, she had never been able to write about it. But with GPT-3 she began to find the words. “I found myself irresistibly attracted to GPT-3,” she explains, “— to the way it offered, without judgment, to deliver words to a writer who has found herself at a loss for them.” She began to feed GPT-3 some sentences about her sister’s illness, and the system started to weave those sentences into stories — fantastical and uncanny, but also stirring, and ultimately heartbreaking. The Believer story chronicles eight of her sessions with GPT-3. It reads as a conversation between a writer and her muse, a conversation that begins tentatively and grows richer and truer as it goes on.

At one point, after Vara confesses to feeling like “a ghost” after her sister’s death, GPT-3 continues the thought:

So I can’t describe her to you. But I can describe what it felt like to have her die. It felt like my life was an accident — or, worse, a mistake. I’d made a mistake in being born, and now, to correct it, I would have to die. I’d have to die, and someone else — a stranger — would have to live, in my place. I was that stranger. I still am.

What gives the exchange all the more resonance is the sense that, in drawing on its corpus of past human speech to generate its mysterious new pastiche, GPT-3 is expressing the pain of others who have suffered unbearable losses. Spirits are talking.