Not being there: from virtuality to remoteness

I used to be virtual. Now I’m remote.

The way we describe our digitally mediated selves, the ones that whirl through computer screens like silks through a magician’s hands, has changed during the pandemic. The change is more than just a matter of terminology. It signals a shift in perspective and perhaps in attitude. “Virtual” told us that distance doesn’t matter; “remote” says that it matters a lot. “Virtual” suggested freedom; “remote” suggests incarceration.

The idea of virtuality-as-liberation came to the fore in Silicon Valley after the invention of the World Wide Web in 1989, but its origins go back to the beginnings of the computer age. In the 1940s and 1950s, as N. Katherine Hayles describes in How We Became Posthuman, the pioneers of digital computing — Turing, Shannon, Wiener, et al. — severed mind from body. They defined intelligence as “a property of the formal manipulation of symbols rather than enaction in the human life-world.” Our essence as thinking beings, they implied, is independent of our bodies. It lies in patterns of information and hence can be represented through electronic data processing. The self can be abstracted, virtualized.

Though rigorously materialist in its conception, this new mind-body dualism soon took on the characteristics of a theology. Not only would we be able to represent our essence through data, the argument went, but the transfer of the self to a computer would be an act of transcendence. It would free us from the constraints of the physical — from the body and its fixed location in space. As virtual beings, we would exist everywhere all at once. We would experience the “bodiless exultation of cyberspace,” as William Gibson put it in his 1984 novel Neuromancer. The sense of disembodiment as a means of emancipation was buttressed by the rise of schools of social critics who argued that “identity” could and should be separated from biology. If the self is a pattern of data, then the self is a “construct” that is infinitely flexible.

The arrival of social media seemed to bring us closer to the virtual ideal. It gave everyone easy access to multimedia software tools for creating rich representations of the self, and it provided myriad digital theaters, or “platforms,” for these representations to perform in. More and more, self-expression became a matter of symbol-processing, of information-patterning. The content of our character became the character of our content, and vice versa.

The pandemic has brought us back to our bodies, with a vengeance. It has done this not through re-embodiment but, paradoxically, through radical disembodiment. We’ve been returned to our bodies by being forced into further separation from them, by being cut off from, to quote Hayles again, “enaction in the human life-world.” As we retreated from the physical world, social media immediately expanded to subsume everyday activities that traditionally lay outside the scope of media. The computer — whether in the form of phone, laptop, or desktop — became our most important piece of personal protective equipment. It became the sterile enclosure, the prophylactic, that enabled us to go about the business of our lives — work, school, meetings, appointments, socializing, shopping — without actually inhabiting our lives. It allowed us to become remote.

In many ways, this has been a good thing. Without the tools of social media, and our experience in using them, the pandemic would have been even more of a trial. We would have felt even more isolated, our agency more circumscribed. Social media schooled us in the arts of social distancing before those arts became mandatory. But the pandemic has also given us a lesson, a painful one, in the limits of remoteness. In promising to eliminate distance, virtuality also promised to erase the difference between presence and absence. We would always be there, wherever “there” happened to be. That seemed plausible when our virtual selves were engaged in the traditional pursuits of media — news and entertainment, play and performance, information production and information gathering — but it was revealed to be an illusion as soon as social media became our means of living. Being remote is a drag. The state of absence, a physical state but also a psychic one, is a state of loneliness and frustration, angst and ennui.

What the pandemic has revealed is that when taken to an extreme — the extreme Silicon Valley saw as an approaching paradise — virtuality does not engender a sense of liberation and exultation. It engenders a sense of confinement and despair. Absence will never be presence. A body in isolation is a self in isolation.

Think about the cramped little cells in which we appear when we’re on Zoom. It’s hard to imagine a better metaphor for our situation. The architecture of Zoom is the architecture of the Panopticon, but it comes with a twist that Jeremy Bentham never anticipated. On Zoom, each of us gets to play the roles of both jailer and jailed. We are the watcher and the watched, simultaneously. Each role is an exercise in remoteness, and each is demeaning. Each makes us feel small.

What happens when the pandemic subsides? We almost certainly will rejoice in our return to the human life-world — the world of embodiment, presence, action. We’ll celebrate our release from remoteness. But will we rebel against social media and its continuing encroachment on our lives? I have my doubts. As the research of Sherry Turkle and others has shown, one of the attractions of virtualization has always been the sense of safety it provides. Even without a new virus on the prowl, the embodied world, the world of people and things, presents threats, not just physical but also social and psychological. Presence is also exposure. When we socialize through a screen, we feel protected from many of those threats — less fearful, more in control — even if we also feel more isolated and constrained and adrift.

If, in the wake of the pandemic, we end up feeling more vulnerable to the risks inherent in being physically in the world, we may, despite our immediate relief, continue to seek refuge in our new habits of remoteness. We won’t feel liberated, but at least we’ll feel protected.

What is it like to be a smartphone?

“The fact that we cannot expect ever to accommodate in our language a detailed description of Martian or bat phenomenology should not lead us to dismiss as meaningless the claim that bats and Martians have experiences fully comparable in richness of detail to our own.” –Thomas Nagel

What is it like to be a smartphone? In all the chatter about the future of artificial intelligence, the question has been glossed over or, worse, treated as settled. The longstanding assumption, a reflection of the anthropomorphic romanticism of computer scientists, science fiction writers, and internet entrepreneurs, has been that a self-aware computer would have a mind, and hence a consciousness, similar to our own. We, supreme programmers, would create machine consciousness in our own image.

The assumption is absurd, and not just because the sources and workings of our own consciousness remain unknown to us and hence unavailable as models for coders and engineers. Consciousness is entwined with being, and being with body, and a computer’s body and (speculatively) being have nothing in common with our own. A far more reasonable assumption is that the consciousness of a computer, should it arise, would be completely different from the consciousness of a human being. It would be so different that we probably wouldn’t even recognize it as a consciousness.

As the philosopher Thomas Nagel observed in “What Is It Like to Be a Bat?,” his classic 1974 article, we humans are unable to inhabit the consciousness of any other animal. We can’t know the “subjective character” of other animals’ experience any more than they can understand ours. We are, however, able to see that, excepting perhaps the simplest of life forms, an animal has a consciousness — or at least a beingness. The animal, we understand, is a living thing with a mind, a sensorium, a nature. We know it feels like something to be that animal, even though we can’t know what that something is.

We understand this about other animals because we share with them a genetic heritage. Because they are products of the same evolutionary process that gave rise to us and because their bodies and brains have the same essential biology, the same material substrate, as our own, they resemble us in both their physical characteristics and their behavior. It would be impossible, given this obvious likeness, to see them as anything other than living beings.

There would be no such shared heritage or shared substrate, no such likeness, between ourselves and any artificial intelligence that may spring into being through the workings of a computer or a network of computers. Our relationship to an AI, and its to us, would be characterized by radical unlikeness. Confronted with an AI, we would not only be unable to inhabit its consciousness or otherwise sense the character of its being; we would be unable to recognize that it even has a consciousness or a being. It would remain, in our perception, an inanimate thing that we have constructed.

But, you might ask, wouldn’t its being be an emanation of its programming? That might be true to some extent — though who can say where being comes from? — but even so, the programming would be of no help in understanding the character of a computer’s being. You would not be able to know what it’s like to be an AI by examining the 1s and 0s of its machine code any more than you’d be able to understand your own being by examining the As, Cs, Gs, and Ts of your genetic code. A conscious computer would likely be unaware of the routines of its software — just as we’re unaware of how our DNA shapes our body and being or even of the myriad signals that zip through our nervous system every moment. An intelligent computer may perform all sorts of practical functions, including taking our inputs and supplying us with outputs, without having any awareness that it is performing those functions. Its being may lie entirely elsewhere.

The Turing test, in all its variations, would also be useless in identifying an AI. It merely tests for a machine’s ability to feign likeness with ourselves. It provides no insight into the AI’s being, which, again, could be entirely separate from its ability to trick us into sensing it is like us. The Turing test tells us about our own skills; it says nothing about the character of the artificial being.

All of this raises another possibility. It may be that we are already surrounded by AIs but have no idea that they exist. Their beingness is invisible to us, just as ours is to them. We are both objects in the same place, but as beings we inhabit different universes. Our smartphones may right now be having, to borrow Nagel’s words, “experiences fully comparable in richness of detail to our own.”

Look at your phone. You see a mere tool, there to do your bidding, and perhaps that’s the way your phone sees you, the dutiful but otherwise unremarkable robot that from time to time plugs it into an electrical socket.

The love that lays the swale in rows

There’s a line of verse I’m always coming back to, and it’s been on my mind more than usual these last few months:

The fact is the sweetest dream that labor knows.

It’s the second to last line of one of Robert Frost’s earliest and best poems, a sonnet called “Mowing.” He wrote it just after the turn of the twentieth century, when he was a young man, in his twenties, with a young family. He was working as a farmer, raising chickens and tending a few apple trees on a small plot of land his grandfather had bought for him in Derry, New Hampshire. It was a difficult time in his life. He had little money and few prospects. He had dropped out of two colleges, Dartmouth and Harvard, without earning a degree. He had been unsuccessful in a succession of petty jobs. He was sickly. He had nightmares. His firstborn child, a son, had died of cholera at the age of three. His marriage was troubled. “Life was peremptory,” Frost would later recall, “and threw me into confusion.”

But it was during those lonely years in Derry that he came into his own as a writer and an artist. Something about farming—the long, repetitive days, the solitary work, the closeness to nature’s beauty and carelessness—inspired him. The burden of labor eased the burden of life. “If I feel timeless and immortal it is from having lost track of time for five or six years there,” he would write of his stay in Derry. “We gave up winding clocks. Our ideas got untimely from not taking newspapers for a long period. It couldn’t have been more perfect if we had planned it or foreseen what we were getting into.” In the breaks between chores on the farm, Frost somehow managed to write most of the poems for his first book, A Boy’s Will; about half the poems for his second book, North of Boston; and a good number of other poems that would find their way into subsequent volumes.

“Mowing,” from A Boy’s Will, was the greatest of his Derry lyrics. It was the poem in which he found his distinctive voice: plainspoken and conversational, but also sly and dissembling. (To really understand Frost—to really understand anything, including yourself—requires as much mistrust as trust.) As with many of his best works, “Mowing” has an enigmatic, almost hallucinatory quality that belies the simple and homely picture it paints—in this case of a man cutting a field of grass for hay. The more you read the poem, the deeper and stranger it becomes:

There was never a sound beside the wood but one,
And that was my long scythe whispering to the ground.
What was it it whispered? I knew not well myself;
Perhaps it was something about the heat of the sun,
Something, perhaps, about the lack of sound—
And that was why it whispered and did not speak.
It was no dream of the gift of idle hours,
Or easy gold at the hand of fay or elf:
Anything more than the truth would have seemed too weak
To the earnest love that laid the swale in rows,
Not without feeble-pointed spikes of flowers
(Pale orchises), and scared a bright green snake.
The fact is the sweetest dream that labor knows.
My long scythe whispered and left the hay to make.

We rarely look to poetry for instruction anymore, but here we see how a poet’s scrutiny of the world can be more subtle and discerning than a scientist’s. Frost understood the meaning of the mental state we now call “flow” long before psychologists and neurobiologists delivered the empirical evidence. His mower is not an airbrushed peasant, a rustic caricature. He’s a farmer, a man doing a hard job on a still, hot summer day. He’s not dreaming of “idle hours” or “easy gold.” His mind is on his work—the bodily rhythm of the cutting, the weight of the tool in his hands, the stalks piling up around him. He’s not seeking some greater truth beyond the work. The work is the truth.

The fact is the sweetest dream that labor knows.

There are mysteries in that line. Its power lies in its refusal to mean anything more or less than what it says. But it seems clear that what Frost is getting at, in the line and in the poem, is the centrality of action to both living and knowing. Only through work that brings us into the world do we approach a true understanding of existence, of “the fact.” It’s not an understanding that can be put into words. It can’t be made explicit. It’s nothing more than a whisper. To hear it, you need to get very near its source. Labor, whether of the body or the mind, is more than a way of getting things done. It’s a form of contemplation, a way of seeing the world face-to-face rather than through a glass. Action un-mediates perception, gets us close to the thing itself. It binds us to the earth, Frost implies, as love binds us to one another. The antithesis of transcendence, work puts us in our place.

Frost is a poet of labor. He’s always coming back to those revelatory moments when the active self blurs into the surrounding world—when, as he would write in another poem, “the work is play for mortal stakes.” Richard Poirier, in his book Robert Frost: The Work of Knowing, described with great sensitivity the poet’s view of the essence and essentialness of hard work: “Any intense labor enacted in his poetry, like mowing or apple-picking, can penetrate to the visions, dreams, myths that are at the heart of reality, constituting its articulate form for those who can read it with a requisite lack of certainty and an indifference to merely practical possessiveness.” The knowledge gained through such efforts may be as shadowy and elusive as a dream, but “in its mythic propensities, the knowledge is less ephemeral than are the apparently more practical results of labor, like food or money.”

When we embark on a task, with our bodies or our minds, on our own or alongside others, we usually have a practical goal in sight. Our eyes are looking ahead to the product of our work—a store of hay for feeding livestock, perhaps. But it’s through the work itself that we come to a deeper understanding of ourselves and our situation. The mowing, not the hay, is what matters most.

* * *

Frost is not romanticizing some distant, pre-technological past. Although he was dismayed by those who allowed themselves to become “bigoted in reliance / On the gospel of modern science,” he felt a kinship with scientists and inventors. As a poet, he shared with them a common spirit and pursuit. They were all explorers of the mysteries of earthly life, excavators of meaning from matter. They were all engaged in work that, as Poirier described it, “can extend the capability of human dreaming.” For Frost, the greatest value of “the fact”—whether apprehended in the world or expressed in a work of art or made manifest in a tool or other invention—lay in its ability to expand the scope of individual knowing and hence open new avenues of perception, action, and imagination. In the long poem “Kitty Hawk,” written near the end of his life, he celebrated the Wright brothers’ flight “Into the unknown, / Into the sublime.” In making their own “pass / At the infinite,” the brothers also made the experience of flight, and the sense of unboundedness it provides, possible for all of us.

Technology is as crucial to the work of knowing as it is to the work of production. The human body, in its native, unadorned state, is a feeble thing. It’s constrained in its strength, its dexterity, its sensory range, its calculative prowess, its memory. It quickly reaches the limits of what it can do. But the body encompasses a mind that can imagine, desire, and plan for achievements the body alone can’t fulfill. This tension between what the body can accomplish and what the mind can envision is what gave rise to and continues to propel and shape technology. It’s the spur for humankind’s extension of itself and elaboration of nature. Technology isn’t what makes us “posthuman” or “transhuman,” as some writers and scholars these days suggest. It’s what makes us human. Technology is in our nature. Through our tools we give our dreams form. We bring them into the world. The practicality of technology may distinguish it from art, but both spring from a similar, distinctly human yearning.

One of the many jobs the human body is unsuited to is cutting grass. (Try it if you don’t believe me.) What allows the mower to do his work, what allows him to be a mower, is the tool he wields, his scythe. The mower is, and has to be, technologically enhanced. The tool makes the mower, and the mower’s skill in using the tool remakes the world for him. The world becomes a place in which he can act as a mower, in which he can lay the swale in rows. This idea, which on the surface may sound trivial or even tautological, points to something elemental about life and the formation of the self.

“The body is our general means of having a world,” wrote the French philosopher Maurice Merleau-Ponty in his 1945 masterwork Phenomenology of Perception. Our physical makeup—the fact that we walk upright on two legs at a certain height, that we have a pair of hands with opposable thumbs, that we have eyes which see in a particular way, that we have a certain tolerance for heat and cold—determines our perception of the world in a way that precedes, and then molds, our conscious thoughts about the world. We see mountains as lofty not because mountains are lofty but because our perception of their form and height is shaped by our own stature. We see a stone as, among other things, a weapon because the particular construction of our hand and arm enables us to pick it up and throw it. Perception, like cognition, is embodied.

It follows that whenever we gain a new talent, we not only change our bodily capacities, we change the world. The ocean extends an invitation to the swimmer that it withholds from the person who has never learned to swim. With every skill we master, the world reshapes itself to reveal greater possibilities. It becomes more interesting, and being in it becomes more rewarding. This may be what Baruch Spinoza, the seventeenth-century Dutch philosopher who rebelled against René Descartes’ division of mind and body, was getting at when he wrote, “The human mind is capable of perceiving a great many things, and is the more capable, the more its body can be disposed in a great many ways.” John Edward Huth, a physics professor at Harvard, testifies to the regeneration that attends the mastery of a skill. A decade ago, inspired by Inuit hunters and other experts in natural wayfinding, he undertook “a self-imposed program to learn navigation through environmental clues.” Through months of rigorous outdoor observation and practice, he taught himself how to read the nighttime and daytime skies, interpret the movements of clouds and waves, decipher the shadows cast by trees. “After a year of this endeavor,” he recalled in a recent essay, “something dawned on me: the way I viewed the world had palpably changed. The sun looked different, as did the stars.” Huth’s enriched perception of the environment, gained through a kind of “primal empiricism,” struck him as being “akin to what people describe as spiritual awakenings.”

Technology, by enabling us to act in ways that go beyond our bodily limits, also alters our perception of the world and what the world signifies to us. Technology’s transformative power is most apparent in tools of discovery, from the microscope and the particle accelerator of the scientist to the canoe and the spaceship of the explorer, but the power is there in all tools, including the ones we use in our everyday lives. Whenever an instrument allows us to cultivate a new talent, the world becomes a different and more intriguing place, a setting of even greater opportunity. To the possibilities of nature are added the possibilities of culture. “Sometimes,” wrote Merleau-Ponty, “the signification aimed at cannot be reached by the natural means of the body. We must, then, construct an instrument, and the body projects a cultural world around itself.” The value of a well-made and well-used tool lies not only in what it produces for us but what it produces in us. At its best, technology opens fresh ground. It gives us a world that is at once more understandable to our senses and better suited to our intentions—a world in which we’re more at home. Used thoughtfully and with skill, a tool becomes much more than a means of production or consumption. It becomes a means of experience. It gives us more ways to lead rich and engaged lives.

Look more closely at the scythe. It’s a simple tool, but an ingenious one. Invented around 500 BC, by the Romans or the Gauls, it consists of a curved blade, forged of iron or steel, attached to the end of a long wooden pole, or snath. The snath typically has, about halfway down its length, a small wooden grip, or nib, that makes it possible to grasp and swing the implement with two hands. The scythe is a variation on the much older sickle, a similar but short-handled cutting tool that was invented in the Stone Age and came to play an essential role in the early development of agriculture and, in turn, of civilization. What made the scythe a momentous innovation in its own right is that its long snath allowed a farmer or other laborer to cut grass at ground level while standing upright. Hay or grain could be harvested, or a pasture cleared, more quickly than before. Agriculture leaped forward.

The scythe enhanced the productivity of the worker in the field, but its benefit went beyond what could be measured in yield. The scythe was a congenial tool, far better suited to the bodily work of mowing than the sickle had been. Rather than stooping or squatting, the farmer could walk with a natural gait and use both his hands, as well as the full strength of his torso, in his job. The scythe served as both an aid and an invitation to the skilled work it enabled. We see in its form a model for technology on a human scale, for tools that extend the productive capabilities of society without circumscribing the individual’s scope of action and perception. Indeed, as Frost makes clear in “Mowing,” the scythe intensifies its user’s involvement with and apprehension of the world. The mower swinging a scythe does more, but he also knows more. Despite outward appearances, the scythe is a tool of the mind as well as the body.

Not all tools are so congenial. Some deter us from skilled action. The technologies of computerization and automation that hold such sway over us today rarely invite us into the world or encourage us to develop new talents that enlarge our perceptions and expand our possibilities. They mostly have the opposite effect. They’re designed to be disinviting. They pull us away from the world. That’s a consequence not only of prevailing design practices, which place ease and efficiency above all other concerns, but also of the fact that, in our personal lives, the computer, particularly in the form of the smartphone, has become a media device, its software painstakingly programmed to grab and hold our attention. As most people know from experience, the computer screen is intensely compelling, not only for the conveniences it offers but also for the many diversions it provides. There’s always something going on, and we can join in at any moment with the slightest of effort. Yet the screen, for all its enticements and stimulations, is an environment of sparseness—fast-moving, efficient, clean, but revealing only a shadow of the world.

That’s true even of the most meticulously crafted simulations of space that we find in virtual-reality applications such as games, architectural models, three-dimensional maps, and the video-meeting tools used to mimic classrooms, conference rooms, and cocktail parties. Artificial renderings of space may provide stimulation to our eyes and to a lesser degree our ears, but they tend to starve our other senses—touch, smell, taste—and greatly restrict the movements of our bodies. A study of rodents, published in Science in 2013, indicated that the brain cells used in navigation are much less active when animals make their way through computer-generated landscapes than when they traverse the real world. “Half of the neurons just shut up,” reported one of the researchers, UCLA neurophysicist Mayank Mehta. He believes that the drop-off in mental activity likely stems from the lack of “proximal cues”—environmental smells, sounds, and textures that provide clues to location—in digital simulations of space. “A map is not the territory it represents,” the Polish philosopher Alfred Korzybski famously remarked, and a computer rendering is not the territory it represents either. When we enter the virtual world, we’re required to shed much of our body. That doesn’t free us; it emaciates us.

The world in turn is made less meaningful. As we adapt to our streamlined environment, we render ourselves incapable of perceiving what the world offers its most ardent inhabitants. We travel blindfolded. The result is existential impoverishment, as nature and culture withdraw their invitations to act and to perceive. The self can only thrive, can only grow, when it encounters and overcomes “resistance from surroundings,” wrote the American pragmatist John Dewey in Art as Experience. “An environment that was always and everywhere congenial to the straightaway execution of our impulsions would set a term to growth as sure as one always hostile would irritate and destroy. Impulsion forever boosted on its forward way would run its course thoughtless, and dead to emotion.”

Ours may be a time of material comfort and technological wonder, but it’s also a time of aimlessness and gloom. During the first decade of this century, the number of Americans taking prescription drugs to treat depression or anxiety rose by nearly a quarter. One in five adults now regularly takes such medications. Many also take sleep aids such as Ambien. The suicide rate among middle-aged Americans increased by nearly 30 percent over the same ten years, according to a report from the Centers for Disease Control and Prevention. More than 10 percent of American schoolchildren, and nearly 20 percent of high school–age boys, have been given a diagnosis of attention-deficit/hyperactivity disorder, and two-thirds of that group take drugs like Ritalin and Adderall to treat the condition. The current pandemic has only exacerbated the discontent.

The reasons for our malaise are many and only dimly understood. But one of them may be that through the pursuit of a frictionless existence, we’ve succeeded in turning the landscape of our lives into a barren place. Drugs that numb the nervous system provide a way to rein in our vital, animal sensorium, to shrink our being to a size that better suits our constricted environs.

* * *

Frost’s sonnet also contains, as one of its many whispers, a warning about technology’s ethical hazards. There’s a brutality to the mower’s scythe. It indiscriminately cuts down flowers—those tender, pale orchises—along with the stalks of grass. It frightens innocent animals, like the bright green snake. If technology embodies our dreams, it also embodies other, less benign qualities in our makeup, such as our will to power and the arrogance and insensitivity that accompany it. Frost returns to this theme a little later in A Boy’s Will, in a second lyric about cutting hay, “The Tuft of Flowers.” The poem’s narrator comes upon a freshly mown field and, while following the flight of a passing butterfly with his eyes, discovers in the midst of the cut grass a small cluster of flowers, “a leaping tongue of bloom” that “the scythe had spared”:

The mower in the dew had loved them thus,
By leaving them to flourish, not for us,
Nor yet to draw one thought of us to him,
But from sheer morning gladness at the brim.

Working with a tool is never just a practical matter, Frost is telling us, with characteristic delicacy. It always entails moral choices and has moral consequences. It’s up to us, as users and makers of tools, to humanize technology, to aim its cold blade wisely. That requires vigilance and care.

The scythe is still employed in subsistence farming in many parts of the world. But it has no place on the modern farm, the development of which, like the development of the modern factory, office, and home, has required ever-more complex and efficient equipment. The threshing machine was invented in the 1780s, the mechanical reaper appeared around 1835, the baler came a few years after that, and the combine harvester began to be produced commercially toward the end of the nineteenth century. The pace of technological advance has only accelerated in the decades since, and today the trend is reaching its logical conclusion with the computerization of agriculture. The working of the soil, which Thomas Jefferson saw as the most vigorous and virtuous of occupations, is being off-loaded almost entirely to machines. Farmhands are being replaced by “drone tractors” and other robotic systems that, using sensors, satellite signals, and software, plant seeds, fertilize and weed fields, harvest and package crops, and milk cows and tend other livestock. In development are robo-shepherds that guide flocks through pastures. Even if scythes still whispered in the fields of the industrial farm, no one would be around to hear them.

The congeniality of hand tools encourages us to take responsibility for their use. Because we sense the tools as extensions of our bodies, parts of ourselves, we have little choice but to be intimately involved in the ethical choices they present. The scythe doesn’t choose to slash or spare the flowers; the mower does. As we become more expert in the use of a tool, our sense of responsibility for it naturally strengthens. To the novice mower, a scythe may feel like a foreign object in the hands; to the accomplished mower, hands and scythe become one thing. Talent tightens the bond between an instrument and its user. This feeling of physical and ethical entanglement doesn’t have to go away as technologies become more complex. In reporting on his historic solo flight across the Atlantic in 1927, Charles Lindbergh spoke of his plane and himself as if they were a single being: “We have made this flight across the ocean, not I or it.” The airplane was a complicated system encompassing many components, but to a skilled pilot it still had the intimate quality of a hand tool. The love that lays the swale in rows is also the love that parts the clouds for the stick-and-rudder man.

Automation weakens the bond between tool and user not because computer-controlled systems are complex but because they ask so little of us. They hide their workings in secret code. They resist any involvement of the operator beyond the bare minimum. They discourage the development of skillfulness in their use. Automation ends up having an anesthetizing effect. We no longer feel our tools as parts of ourselves. In a renowned 1960 paper, “Man-Computer Symbiosis,” the psychologist and engineer J. C. R. Licklider described the shift in our relation to technology well. “In the man-machine systems of the past,” he wrote, “the human operator supplied the initiative, the direction, the integration, and the criterion. The mechanical parts of the systems were mere extensions, first of the human arm, then of the human eye.” The introduction of the computer changed all that. “‘Mechanical extension’ has given way to replacement of men, to automation, and the men who remain are there more to help than to be helped.” The more automated everything gets, the easier it becomes to see technology as a kind of implacable, alien force that lies beyond our control and influence. Attempting to alter the path of its development seems futile. We press the on switch and follow the programmed routine.

To adopt such a submissive posture, however understandable it may be, is to shirk our responsibility for managing progress. A robotic harvesting machine may have no one in the driver’s seat, but it is every bit as much a product of conscious human thought as a humble scythe is. We may not incorporate the machine into our brain maps, as we do the hand tool, but on an ethical level the machine still operates as an extension of our will. Its intentions are our intentions. If a robot scares a bright green snake (or worse), we’re still to blame. We shirk a deeper responsibility as well: that of overseeing the conditions for the construction of the self. As computer systems and software applications come to play an ever-larger role in shaping our lives and the world, we have an obligation to be more, not less, involved in decisions about their design and use—before progress forecloses our options. We should be careful about what we make.

If that sounds naive or hopeless, it’s because we have been misled by a metaphor. We’ve defined our relation with technology not as that of body and limb or even that of sibling and sibling but as that of master and slave. The idea goes way back. It took hold at the dawn of Western philosophical thought, emerging first with the ancient Athenians. Aristotle, in discussing the operation of households at the beginning of his Politics, argued that slaves and tools are essentially equivalent, the former acting as “animate instruments” and the latter as “inanimate instruments” in the service of the master of the house. If tools could somehow become animate, Aristotle posited, they would be able to substitute directly for the labor of slaves. “There is only one condition on which we can imagine managers not needing subordinates, and masters not needing slaves,” he mused, anticipating the arrival of computer automation and even machine learning. “This condition would be that each [inanimate] instrument could do its own work, at the word of command or by intelligent anticipation.” It would be “as if a shuttle should weave itself, and a plectrum should do its own harp-playing.”

The conception of tools as slaves has colored our thinking ever since. It informs society’s recurring dream of emancipation from toil. “All unintellectual labour, all monotonous, dull labour, all labour that deals with dreadful things, and involves unpleasant conditions, must be done by machinery,” wrote Oscar Wilde in 1891. “On mechanical slavery, on the slavery of the machine, the future of the world depends.” John Maynard Keynes, in a 1930 essay, predicted that mechanical slaves would free humankind from “the struggle for subsistence” and propel us to “our destination of economic bliss.” In 2013, Mother Jones columnist Kevin Drum declared that “a robotic paradise of leisure and contemplation eventually awaits us.” By 2040, he forecast, our computer slaves—“they never get tired, they’re never ill-tempered, they never make mistakes”—will have rescued us from labor and delivered us into a new Eden. “Our days are spent however we please, perhaps in study, perhaps playing video games. It’s up to us.”

With its roles reversed, the metaphor also informs society’s nightmares about technology. As we become dependent on our technological slaves, the thinking goes, we turn into slaves ourselves. From the eighteenth century on, social critics have routinely portrayed factory machinery as forcing workers into bondage. “Masses of labourers,” wrote Marx and Engels in their Communist Manifesto, “are daily and hourly enslaved by the machine.” Today, people complain all the time about feeling like slaves to their appliances and gadgets. “Smart devices are sometimes empowering,” observed The Economist in “Slaves to the Smartphone,” an article published in 2012. “But for most people the servant has become the master.” More dramatically still, the idea of a robot uprising, in which computers with artificial intelligence transform themselves from our slaves to our masters, has for a century been a central theme in dystopian fantasies about the future. The very word “robot,” coined by a science fiction writer in 1920, comes from robota, a Czech term for servitude.

The master-slave metaphor, in addition to being morally fraught, distorts the way we look at technology. It reinforces the sense that our tools are separate from ourselves, that our instruments have an agency independent of our own. We start to judge our technologies not on what they enable us to do but rather on their intrinsic qualities as products—their cleverness, their efficiency, their novelty, their style. We choose a tool because it’s new or it’s cool or it’s fast, not because it brings us more fully into the world and expands the ground of our experiences and perceptions. We become mere consumers of technology.

The metaphor encourages society to take a simplistic and fatalistic view of technology and progress. If we assume that our tools act as slaves on our behalf, always working in our best interest, then any attempt to place limits on technology becomes hard to defend. Each advance grants us greater freedom and takes us a stride closer to, if not utopia, then at least the best of all possible worlds. Any misstep, we tell ourselves, will be quickly corrected by subsequent innovations. If we just let progress do its thing, it will find remedies for the problems it creates. “Technology is not neutral but serves as an overwhelming positive force in human culture,” writes one pundit, expressing the self-serving Silicon Valley ideology that in recent years has gained wide currency. “We have a moral obligation to increase technology because it increases opportunities.” The sense of moral obligation strengthens with the advance of automation, which, after all, provides us with the most animate of instruments, the slaves that, as Aristotle anticipated, are most capable of releasing us from our labors.

The belief in technology as a benevolent, self-healing, autonomous force is seductive. It allows us to feel optimistic about the future while relieving us of responsibility for that future. It particularly suits the interests of those who have become extraordinarily wealthy through the labor-saving, profit-concentrating effects of automated systems and the computers that control them. It provides our new plutocrats with a heroic narrative in which they play starring roles: job losses may be unfortunate, but they’re a necessary evil on the path to the human race’s eventual emancipation by the computerized slaves that our benevolent enterprises are creating. Peter Thiel, a successful entrepreneur and investor who has become one of Silicon Valley’s most prominent thinkers, grants that “a robotics revolution would basically have the effect of people losing their jobs.” But, he hastens to add, “it would have the benefit of freeing people up to do many other things.” Being freed up sounds a lot more pleasant than being fired.

There’s a callousness to such grandiose futurism. As history reminds us, high-flown rhetoric about using technology to liberate workers often masks a contempt for labor. It strains credulity to imagine today’s technology moguls, with their libertarian leanings and impatience with government, agreeing to the kind of vast wealth-redistribution scheme that would be necessary to fund the self-actualizing leisure-time pursuits of the jobless multitudes. Even if society were to come up with some magic spell, or magic algorithm, for equitably parceling out the spoils of automation, there’s good reason to doubt whether anything resembling the “economic bliss” imagined by Keynes would ensue.

In a prescient passage in The Human Condition, Hannah Arendt observed that if automation’s utopian promise were actually to pan out, the result would probably feel less like paradise than like a cruel practical joke. The whole of modern society, she wrote, has been organized as “a laboring society,” where working for pay, and then spending that pay, is the way people define themselves and measure their worth. Most of the “higher and more meaningful activities” revered in the distant past have been pushed to the margin or forgotten, and “only solitary individuals are left who consider what they are doing in terms of work and not in terms of making a living.” For technology to fulfill humankind’s abiding “wish to be liberated from labor’s ‘toil and trouble’ ” at this point would be perverse. It would cast us deeper into a purgatory of malaise. What automation confronts us with, Arendt concluded, “is the prospect of a society of laborers without labor, that is, without the only activity left to them. Surely, nothing could be worse.” Utopianism, she understood, is a form of self-delusion.

* * *

A while back, I had a chance meeting on the campus of a small liberal arts college with a freelance photographer who was working on an assignment for the school. He was standing under a tree, waiting for some uncooperative clouds to get out of the way of the sun. I noticed he had a large-format film camera set up on a bulky tripod—it was hard to miss, as it looked almost absurdly old-fashioned—and I asked him why he was still using film. He told me that he had eagerly embraced digital photography a few years earlier. He had replaced his film cameras and his darkroom with digital cameras and a computer running the latest image-processing software. But after a few months, he switched back. It wasn’t that he was dissatisfied with the operation of the equipment or the resolution or accuracy of the images. It was that the way he went about his work had changed.

The constraints inherent in taking and developing pictures on film—the expense, the toil, the uncertainty—had encouraged him to work slowly when he was on a shoot, with deliberation, thoughtfulness, and a deep, physical sense of presence. Before he took a picture, he would compose the shot in his mind, attending to the scene’s light, color, framing, and form. He would wait patiently for the right moment to release the shutter. With a digital camera, he could work faster. He could take a slew of images, one after the other, and then use his computer to sort through them and crop and tweak the most promising ones. The act of composition took place after a photo was taken. The change felt intoxicating at first. But he found himself disappointed with the results. The images left him cold. Film, he realized, imposed a discipline of perception, of seeing, which led to richer, more artful, more moving photographs. Film demanded more of him. And so he went back to the older technology.

The photographer wasn’t the least bit antagonistic toward computers. He wasn’t beset by any abstract concerns about a loss of agency or autonomy. He wasn’t a crusader. He just wanted the best tool for the job—the tool that would encourage and enable him to do his finest, most fulfilling work. What he came to realize is that the newest, most automated, most expedient tool is not always the best choice. Although I’m sure he would bristle at being likened to the Luddites of the early nineteenth century, his decision to forgo the latest technology, at least in some stages of his work, was an act of rebellion resembling that of the old English machine-breakers, if without the fury. Like the Luddites, he understood that decisions about technology are also decisions about ways of working and ways of living—and he took control of those decisions rather than ceding them to others or giving way to the momentum of progress. He stepped back and thought critically about technology.

As a society, we’ve become suspicious of such acts. Out of ignorance or laziness or timidity, we’ve turned the Luddites into cartoon characters, emblems of backwardness. We assume that anyone who rejects a new tool in favor of an older one is guilty of nostalgia, of making choices sentimentally rather than rationally. But the real sentimental fallacy is the assumption that the new thing is always better suited to our purposes and intentions than the old thing. That’s the view of a child, naive and pliable. What makes one tool superior to another has nothing to do with how new it is. What matters is how it enlarges us or diminishes us, how it shapes our experience of nature and culture and one another. To cede choices about the texture of our daily lives to a grand abstraction called progress is folly.

Technology is a pillar and a glory of civilization. But it is also a test that we set for ourselves. It challenges us to think about what’s important in our lives, to ask ourselves what human being means. Computerization, as it extends its reach into the most intimate spheres of our existence, raises the stakes of the test. We can allow ourselves to be carried along by the technological current, wherever it may be taking us, or we can push against it. To resist invention is not to reject invention. It’s to humble invention, to bring progress down to earth. “Resistance is futile,” goes the glib Star Trek cliché beloved by techies. But that’s the opposite of the truth. Resistance is never futile. If the source of our vitality is, as Emerson taught us, “the active soul,” then our highest obligation is to resist any force, whether institutional or commercial or technological, that would enfeeble or enervate the active soul.

One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it. While we’re wrestling with a challenge, we may be motivated by an anticipation of the ends of our labor, but, as Frost saw, it’s the work—the means—that makes us who we are. Automation severs ends from means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?


This essay is adapted from the book The Glass Cage, published by W. W. Norton & Company. Copyright by Nicholas Carr.

The Shallows: tenth anniversary edition

My book The Shallows: What the Internet Is Doing to Our Brains turns ten this year, and to mark the occasion, my publisher, W. W. Norton, is releasing a new and expanded tenth-anniversary edition. It will be out on March 3.

Along with a new introduction, the edition includes, as an afterword, a chapter that explores relevant technological and cultural developments over the last decade, with a particular focus on the cognitive and behavioral effects of smartphones and social media. The chapter, titled “The Most Interesting Thing in the World,” also reviews salient research that’s appeared in the years since the first edition came out.

You can preorder the new edition from your local bookstore or through Amazon, Barnes & Noble, Powell’s, and other online booksellers.

Here’s a preview of the new Introduction:

Welcome to The Shallows. When I wrote this book ten years ago, the prevailing view of the Internet was sunny, often ecstatically so. We reveled in the seemingly infinite bounties of the online world. We admired the wizards of Silicon Valley and trusted them to act in our best interest. We took it on faith that computer hardware and software would make our lives better, our minds sharper. In a 2010 Pew Research survey of some 400 prominent thinkers, more than 80 percent agreed that, “by 2020, people’s use of the Internet [will have] enhanced human intelligence; as people are allowed unprecedented access to more information, they become smarter and make better choices.”

The year 2020 has arrived. We’re not smarter. We’re not making better choices.

The Shallows explains why we were mistaken about the Net. When it comes to the quality of our thoughts and judgments, the amount of information a communication medium supplies is less important than the way the medium presents the information and the way, in turn, our minds take it in. The brain’s capacity is not unlimited. The passageway from perception to understanding is narrow. It takes patience and concentration to evaluate new information — to gauge its accuracy, to weigh its relevance and worth, to put it into context — and the Internet, by design, subverts patience and concentration. When the brain is overloaded by stimuli, as it usually is when we’re peering into a network-connected computer screen, attention splinters, thinking becomes superficial, and memory suffers. We become less reflective and more impulsive. Far from enhancing human intelligence, I argue, the Internet degrades it.

Much has changed in the decade since The Shallows came out. Smartphones have become our constant companions. Social media has insinuated itself into everything we do. The dark things that can happen when everyone’s connected have happened. Our faith in Silicon Valley has been broken, yet the big Internet companies wield more power than ever. This tenth anniversary edition of The Shallows takes stock of the changes. It includes an extensive new afterword in which I examine the cognitive and cultural consequences of the rise of smartphones and social media, drawing on the large body of new research that has appeared since 2010. I have left the original text of the book largely unchanged. I’m biased, but I think The Shallows has aged well. To my eyes, it’s more relevant today than it was ten years ago. I hope you find it worthy of your attention.

From context collapse to content collapse

When social media was taking shape fifteen-odd years ago, the concept of “context collapse” helped frame and explain the phenomenon. Young scholars like Danah Boyd and Michael Wesch, building on the work of Joshua Meyrowitz, Erving Goffman, and other sociologists and media theorists, argued that networks like Friendster, MySpace, YouTube, and, later, Facebook and Twitter were dissolving the boundaries between social groups that had long shaped personal relations and identities. Before social media, you spoke to different “audiences” — family members, friends, colleagues, and so forth — in different ways. You modulated your tone of voice, your words, your behavior, and even your appearance to suit whatever social “context” you were in (workplace, home, school, nightclub, etc.) and then readjusted the presentation of yourself when you moved into another context.

On a social network, the theory went, all those different contexts collapsed into a single context. Whenever you posted a message or a photograph or a video, it could be seen by your friends, your parents, your coworkers, your bosses, and your teachers, not to mention the amorphous mass known as the general public. And, because the post was recorded, it could be seen by future audiences as well as the immediate one. When people realized they could no longer present versions of themselves geared to different audiences — it was all one audience now — they had to grapple with a new sort of identity crisis. Wesch described the experience in suitably melodramatic terms in an influential 2009 article about the pioneering vloggers on YouTube:

The problem is not a lack of context. It is context collapse: an infinite number of contexts collapsing upon one another into that single moment of recording. The images, actions, and words captured by the lens at any moment can be transported to anywhere on the planet and preserved (the performer must assume) for all time. The little glass lens becomes the gateway to a black hole sucking all of time and space — virtually all possible contexts — in on itself. The would-be vlogger, now frozen in front of this black hole of contexts, faces a crisis of self-presentation.

As everyone rushed to join Facebook and other social networks, context collapse and the attendant crisis of self-presentation became universal. In a 2010 interview with the journalist David Kirkpatrick, Facebook founder Mark Zuckerberg put it bluntly: “You have one identity. The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly.” Zuckerberg praised context collapse as a force for moral cleanliness: “Having two identities for yourself is an example of a lack of integrity.” Facebook forces us to be pure.

But just as Zuckerberg was declaring context collapse an inevitability, the public rebelled. Desiring to keep social spheres separate, people began looking for ways to reestablish the old social boundaries within the new media environment. We decided — most of us, anyway — that we don’t want all the world to be our stage, at least not all the time. We want to perform different parts on different stages for different audiences. We’re happier as character actors than as stars.

The recent history of social media isn’t a story of context collapse. It’s a story of its opposite: context restoration. Young people led the way, moving much of their online conversation from the public platform of Facebook, where parents and teachers lurked, to the more intimate platform of Snapchat, where they could restrict their audience and where messages disappeared quickly. Private accounts became popular on other social networks as well. Group chats and group texts proliferated. On Instagram, people established pseudonymous accounts — fake Instagrams, or finstas — limited to their closest friends. Responding to the trend, Facebook itself introduced tools that allow members to restrict who can see a post and to specify how long the post stays visible. (Apparently, Zuckerberg has decided he’s comfortable undermining the integrity of the public.)

Context collapse remains an important conceptual lens, but what’s becoming clear now is that a very different kind of collapse — content collapse — will be the more consequential legacy of social media. Content collapse, as I define it, is the tendency of social media to blur traditional distinctions among once distinct types of information — distinctions of form, register, sense, and importance. As social media becomes the main conduit for information of all sorts — personal correspondence, news and opinion, entertainment, art, instruction, and on and on — it homogenizes that information as well as our responses to it.

Content began collapsing the moment it began to be delivered through computers. Digitization made it possible to deliver information that had required specialized mediums — newspapers and magazines, vinyl records and cassettes, radios, TVs, telephones, cinemas, etc. — through a single, universal medium. In the process, the formal standards and organizational hierarchies inherent to the old mediums began to disappear. The computer flattened everything.

I remember, years ago, being struck by the haphazardness of the headlines flowing through my RSS reader. I’d look at the latest update to the New York Times feed, for instance, and I’d see something like this:

Dam Collapse Feared as Flood Waters Rise in Midwest
Nike’s New Sneaker Becomes Object of Lust
Britney Spears Cleans Up Her Act
Scores Dead in Baghdad Car-Bomb Attack
A Spicy New Take on Bean Dip

It wasn’t just that the headlines, free-floating, decontextualized motes of journalism ginned up to trigger reflexive mouse clicks, had displaced the stories. It was that the whole organizing structure of the newspaper, its epistemological architecture, had been junked. The news section (with its local, national, and international subsections), the sports section, the arts section, the living section, the opinion pages: they’d all been fed through a shredder, then thrown into a wind tunnel. What appeared on the screen was a jumble, high mixed with low, silly with smart, tragic with trivial. The cacophony of the RSS feed, it’s now clear, heralded a sea change in the distribution and consumption of information. The new order would be disorder.

The collapse gained momentum after Facebook introduced its News Feed in 2006. To a dog’s breakfast of news headlines, the News Feed added a dog’s breakfast of personal posts and messages and then mixed in another dog’s breakfast of sponsored posts and ads. It looked, smelled, and tasted like the meal Brad Pitt feeds his pitbull in Once Upon a Time … in Hollywood. After a brief period of complaining, with the usual and empty #deletefacebook threats, the public embraced the News Feed. The convenience of getting all content of interest through a single stream — no need to jump from site to site anymore — overrode the initial concerns. Now, everything would take the form of an “update.”

In discussing the appeal of the News Feed in that same interview with Kirkpatrick, Zuckerberg observed, “A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.” The statement is grotesque not because it’s false — it’s completely true — but because it’s a category error. It yokes together in an obscene comparison two events of radically different scale and import. And yet, in his tone-deaf way, Zuckerberg managed to express the reality of content collapse. When it comes to information, social media renders category errors obsolete.

The rise of the smartphone has completed the collapse of content. The diminutive size of the device’s screen further compacted all forms of information. The instant notifications and infinite scrolls that became the phone’s default design standards required that all information be rendered in a way that could be taken in at a glance, further blurring the old distinctions between types of content. Now all information belongs to a single category, and it all pours through a single channel.

Many of the qualities of social media that make people uneasy stem from content collapse. First, by leveling everything, social media also trivializes everything — freed of barriers, information, like water, pools at the lowest possible level. A presidential candidate’s policy announcement is given the same weight as a snapshot of your niece’s hamster and a video of the latest Kardashian contouring tutorial. Second, as all information consolidates on social media, we respond to it using the same small set of tools the platforms provide for us. Our responses become homogenized, too. That’s true of both the form of the responses (repost, retweet, like, heart, hashtag, fire emoji) and their content (Love! Hate! Cringe!). The software’s formal constraints place tight limits on our expressiveness, no matter what we’re talking about.

Third, content collapse puts all types of information into direct competition. The various producers and providers of content, from journalists to influencers to politicians to propagandists, all need to tailor their content and its presentation to the algorithms that determine what people see. The algorithms don’t make formal or qualitative distinctions; they judge everything by the same criteria. And those criteria tend to promote oversimplification, emotionalism, tendentiousness, tribalism — the qualities that make a piece of information stand out, at least momentarily, from the screen’s blur.

Finally, content collapse consolidates power over information, and conversation, into the hands of the small number of companies that own the platforms and write the algorithms. The much maligned gatekeepers of the past could exert editorial control only over a particular type of content that flowed through a particular medium — a magazine, a radio station, a TV network. Our new gatekeepers control information of all kinds. When content collapses, there’s only one gate.


TikTok and the coming of infinite media

If Instagram showed us what a world without art looks like, TikTok shows us what a world without shame looks like. The old virtues of restraint — prudence, discretion, tact — are gone. There is only one virtue: to be seen. In TikTok’s world, which more and more is our world, shamelessness has lost its negative connotations and become an asset. You may not get fifteen minutes of fame, but you will get fifteen seconds.

The rise of TikTok heralds something bigger, though: a reconfiguration of media. As mass media defined the twentieth century, so the twenty-first will be defined by infinite media. The media business has always aspired to endlessness, to securing an unbroken hold on the sense organs of the public. TikTok at last achieves it. More than YouTube, more than Facebook, more than Instagram, more than Twitter, TikTok reveals the sticky new atmosphere of our lives.

Infinite media requires endlessness on two fronts: supply and demand. Shamelessness, in this context, is best understood as a supply-side resource, a means of production. To manufacture the unlimited supply of content that an app like TikTok needs, the total productive capacity of the masses must be mobilized. That requires not just the ready availability of media-production tools (the smartphone’s camera and microphone and its editing software) and the existence of a universal broadcast network (the internet), but also a culture that encourages and celebrates self-exposure and self-promotion. Vanity must go unchecked by modesty. The showoff, once a risible figure, must become an aspirational one.

On the demand side, too, TikTok achieves endlessness. It is endless horizontally, each video an infinitely looping GIF, and it is endless vertically, the videos stacked up in an infinite scroll. There is no exit from TikTok’s cinema. One college student I know, having recently downloaded the app, told me that she now finds herself watching TikToks until her iPhone battery dies. She can’t pull her eyes away from the screen, but she can still resist the temptation to recharge the phone while the app is running. Electrical failure is the last defense against infinite media.

TikTok’s Chinese owner, ByteDance, specializes in using machine-learning algorithms to tailor content to individual appetites. (With artificial intelligence, there is accounting for taste.) “Personalised information flows will dictate the way,” the company declares in a vaguely Maoish aphorism in its mission statement. It doesn’t need to build exhaustive data profiles of its users as, say, Facebook does. It just watches what you watch, and how you watch it, and then feeds you whatever video has the highest calculated probability of tickling your fancy. You feel the frisson of discovery, but behind the scenes it’s just a machine pumping out widgets. “TikTok deals in the illusion, at least, of revelation,” New York Times critic Amanda Hess writes. Not to mention the illusion, at least, of egalitarianism, of communalism, of joy.
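
For readers who want the machine stripped of its mystique, here is a minimal sketch, in Python, of engagement-probability ranking in caricature. Everything in it (the names, the watch-fraction "model," the exploration bonus) is my own invention for illustration, not ByteDance’s system; the real thing is vastly more elaborate, but the underlying logic, feed people more of what they finished watching, is the same.

# Illustrative sketch only: a caricature of engagement-probability ranking.
# All names and numbers are invented; this is not ByteDance's recommender.
from dataclasses import dataclass
import random


@dataclass
class Video:
    video_id: str
    topic: str


def watch_score(history: dict[str, float], video: Video) -> float:
    """Score a candidate by how fully the user watched past videos on its topic.

    `history` maps a topic to the fraction of such videos the user watched to
    completion (0.0 to 1.0). Topics the user hasn't seen yet get a small random
    exploration bonus so the feed can still surprise.
    """
    if video.topic in history:
        return history[video.topic]
    return 0.1 + random.random() * 0.1


def next_video(history: dict[str, float], candidates: list[Video]) -> Video:
    """Feed the user whatever video has the highest estimated appeal."""
    return max(candidates, key=lambda v: watch_score(history, v))


if __name__ == "__main__":
    history = {"dance": 0.92, "cooking": 0.41, "news": 0.15}
    pool = [Video("a1", "news"), Video("b2", "dance"), Video("c3", "gardening")]
    print(next_video(history, pool).video_id)  # almost always the dance clip

In a toy like this, the frisson of discovery lives entirely in that little exploration bonus; everything else is a feedback loop.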

When I tap the heart on some high school kid’s weird video, I feel a flicker of pride, as if I am supporting him in some way. But all I am really doing is demanding more.

TikTok is at once a manifestation and a parody of what Stanford communication professor Fred Turner has termed the “democratic surround.” From the 1940s through the 1960s, media-minded intellectuals promoted the ideal of a polyphonic multimedia experience that would be created and consumed by the public. The democratic surround would not only free the masses from centrally controlled media, with its authoritarian aura, but would raise the collective consciousness. TikTok gives us the democratic surround, but it turns out to be a pantomime. The central authority is still there, hidden behind a mask of your face.*

Infinite media sucks in all media, from news to entertainment to communication. Look at what’s going on in pop. Each TikTok has a soundtrack, a looping clip spinning on a wee turntable in the corner of the screen. The music business, seeing TikTok’s ability to turn songs into memes, has already developed a craving for the app’s yee yee juice. As Jia Tolentino explains in the New Yorker:

Certain musical elements serve as TikTok catnip: bass-heavy transitions that can be used as punch lines; rap songs that are easy to lip-synch or include a narrative-friendly call and response. A twenty-six-year-old Australian producer named Adam Friedman, half of the duo Cookie Cutters, told me that he was now concentrating on lyrics that you could act out with your hands. “I write hooks, and I try it in the mirror—how many hand movements can I fit into fifteen seconds?” he said. “You know, goodbye, call me back, peace out, F you.”

The aural hooks amplify the visual hooks, and vice versa, to saturate the sensorium. When it comes to the infinite, more is always better.

Boomers may struggle to make sense of TikTok, but they’ll appreciate its most obvious antecedent: the Ed Sullivan Show. Squeeze old Ed through a wormhole and give him a spin in a Vitamix, and you get TikTok. There’s Liza Minnelli singing “MacArthur Park,” then there’s a guy spinning plates on the ends of sticks, then there’s Señor Wences ventriloquizing through a hand puppet. Except it’s all us. We’re Liza, we’re the plate-spinning guy, we’re Señor Wences, we’re the puppet. We’re even Ed, flicking acts on and off the stage with the capriciousness of a pagan god.

Every Sunday night during the sixties the nation found itself glued to the set, engrossed in a variety show. It was an omen.

___________
*In a recent essay, collected in the book Trump and the Media (which I reviewed), Turner argues that the democratization of media may paradoxically breed authoritarianism.


Larry and Sergey: a valediction

Photographer: “How ’bout we do the shoot in a hot tub?”

Larry and Sergey: “Sure!”

Never such innocence again.

Can billionaires be tragic figures? Lear must have been worth a billion or two, in today’s dollars. And surely the family fortunes of Hamlet and Macbeth crossed the magical ten-figure line. I’d go so far as to suggest that, these days, you have to be a billionaire to be a tragic figure. The most the rest of us can aspire to is pathos, our woes memorialized by a Crying Face emoji.

Larry Page and Sergey Brin spent the first fifteen years of their careers building the greatest information network the world has ever known and the last five trying to escape it. Having made everything visible, they made themselves invisible. Larry has even managed to keep the names of his two kids secret, an act of paternal love that is also, given Google’s mission “to organize the world’s information and make it universally accessible and useful,” an act of corporate treason.

Look at them in that hot tub. They’re as bubbly as the water. And that’s the way they appear in all the pictures of them that date back to the turn of the millennium. Larry and Sergey may well have been the last truly happy human beings on the planet. They were doing what they loved, and they were convinced that what they loved would redeem the world. That kind of happiness requires a combination of idealism and confidence that isn’t possible anymore. When, in 1965, an interviewer from Cahiers du Cinéma pointed out to Jean-Luc Godard that “there is a good deal of blood” in his movie Pierrot le Fou, Godard replied, “Not blood, red.” What the cinema did to blood, the internet has done to happiness. It turned it into an image that is repeated endlessly on screens but no longer refers to anything real.

They were prophets, Larry and Sergey. When, in their famous 1998 grad-school paper “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” they introduced Google to the world, they warned that if the search engine were ever to leave the “academic realm” and become a business, it would be corrupted. It would become “a black art” and “be advertising oriented.” That’s exactly what happened — not just to Google but to the internet as a whole. The white-robed wizards of Silicon Valley now ply the black arts of algorithmic witchcraft for power and money. 

When, in May, Larry and Sergey were spotted at one of Google’s all-company TGIF meetings, the sighting was treated as a kind of religious vision. It was the first time the duo had bothered to show up at one of the gatherings all year. Their announcement last week that they’re resigning from their managerial roles at the company they founded was a formality. Larry and Sergey have been in ghost mode for a long time now — off the map, nontransparent, unspiderable. Search for them all you want. They’re not there.