
The love that lays the swale in rows

There’s a line of verse I’m always coming back to, and it’s been on my mind more than usual these last few months:

The fact is the sweetest dream that labor knows.

It’s the second to last line of one of Robert Frost’s earliest and best poems, a sonnet called “Mowing.” He wrote it just after the turn of the twentieth century, when he was a young man, in his twenties, with a young family. He was working as a farmer, raising chickens and tending a few apple trees on a small plot of land his grandfather had bought for him in Derry, New Hampshire. It was a difficult time in his life. He had little money and few prospects. He had dropped out of two colleges, Dartmouth and Harvard, without earning a degree. He had been unsuccessful in a succession of petty jobs. He was sickly. He had nightmares. His firstborn child, a son, had died of cholera at the age of three. His marriage was troubled. “Life was peremptory,” Frost would later recall, “and threw me into confusion.”

But it was during those lonely years in Derry that he came into his own as a writer and an artist. Something about farming—the long, repetitive days, the solitary work, the closeness to nature’s beauty and carelessness—inspired him. The burden of labor eased the burden of life. “If I feel timeless and immortal it is from having lost track of time for five or six years there,” he would write of his stay in Derry. “We gave up winding clocks. Our ideas got untimely from not taking newspapers for a long period. It couldn’t have been more perfect if we had planned it or foreseen what we were getting into.” In the breaks between chores on the farm, Frost somehow managed to write most of the poems for his first book, A Boy’s Will; about half the poems for his second book, North of Boston; and a good number of other poems that would find their way into subsequent volumes.

“Mowing,” from A Boy’s Will, was the greatest of his Derry lyrics. It was the poem in which he found his distinctive voice: plainspoken and conversational, but also sly and dissembling. (To really understand Frost—to really understand anything, including yourself—requires as much mistrust as trust.) As with many of his best works, “Mowing” has an enigmatic, almost hallucinatory quality that belies the simple and homely picture it paints—in this case of a man cutting a field of grass for hay. The more you read the poem, the deeper and stranger it becomes:

There was never a sound beside the wood but one,
And that was my long scythe whispering to the ground.
What was it it whispered? I knew not well myself;
Perhaps it was something about the heat of the sun,
Something, perhaps, about the lack of sound—
And that was why it whispered and did not speak.
It was no dream of the gift of idle hours,
Or easy gold at the hand of fay or elf:
Anything more than the truth would have seemed too weak
To the earnest love that laid the swale in rows,
Not without feeble-pointed spikes of flowers
(Pale orchises), and scared a bright green snake.
The fact is the sweetest dream that labor knows.
My long scythe whispered and left the hay to make.

We rarely look to poetry for instruction anymore, but here we see how a poet’s scrutiny of the world can be more subtle and discerning than a scientist’s. Frost understood the meaning of the mental state we now call “flow” long before psychologists and neurobiologists delivered the empirical evidence. His mower is not an airbrushed peasant, a rustic caricature. He’s a farmer, a man doing a hard job on a still, hot summer day. He’s not dreaming of “idle hours” or “easy gold.” His mind is on his work—the bodily rhythm of the cutting, the weight of the tool in his hands, the stalks piling up around him. He’s not seeking some greater truth beyond the work. The work is the truth.

The fact is the sweetest dream that labor knows.

There are mysteries in that line. Its power lies in its refusal to mean anything more or less than what it says. But it seems clear that what Frost is getting at, in the line and in the poem, is the centrality of action to both living and knowing. Only through work that brings us into the world do we approach a true understanding of existence, of “the fact.” It’s not an understanding that can be put into words. It can’t be made explicit. It’s nothing more than a whisper. To hear it, you need to get very near its source. Labor, whether of the body or the mind, is more than a way of getting things done. It’s a form of contemplation, a way of seeing the world face-to-face rather than through a glass. Action un-mediates perception, gets us close to the thing itself. It binds us to the earth, Frost implies, as love binds us to one another. The antithesis of transcendence, work puts us in our place.

Frost is a poet of labor. He’s always coming back to those revelatory moments when the active self blurs into the surrounding world—when, as he would write in another poem, “the work is play for mortal stakes.” Richard Poirier, in his book Robert Frost: The Work of Knowing, described with great sensitivity the poet’s view of the essence and essentialness of hard work: “Any intense labor enacted in his poetry, like mowing or apple-picking, can penetrate to the visions, dreams, myths that are at the heart of reality, constituting its articulate form for those who can read it with a requisite lack of certainty and an indifference to merely practical possessiveness.” The knowledge gained through such efforts may be as shadowy and elusive as a dream, but “in its mythic propensities, the knowledge is less ephemeral than are the apparently more practical results of labor, like food or money.”

When we embark on a task, with our bodies or our minds, on our own or alongside others, we usually have a practical goal in sight. Our eyes are looking ahead to the product of our work—a store of hay for feeding livestock, perhaps. But it’s through the work itself that we come to a deeper understanding of ourselves and our situation. The mowing, not the hay, is what matters most.

* * *

Frost is not romanticizing some distant, pre-technological past. Although he was dismayed by those who allowed themselves to become “bigoted in reliance / On the gospel of modern science,” he felt a kinship with scientists and inventors. As a poet, he shared with them a common spirit and pursuit. They were all explorers of the mysteries of earthly life, excavators of meaning from matter. They were all engaged in work that, as Poirier described it, “can extend the capability of human dreaming.” For Frost, the greatest value of “the fact”—whether apprehended in the world or expressed in a work of art or made manifest in a tool or other invention—lay in its ability to expand the scope of individual knowing and hence open new avenues of perception, action, and imagination. In the long poem “Kitty Hawk,” written near the end of his life, he celebrated the Wright brothers’ flight “Into the unknown, / Into the sublime.” In making their own “pass / At the infinite,” the brothers also made the experience of flight, and the sense of unboundedness it provides, possible for all of us.

Technology is as crucial to the work of knowing as it is to the work of production. The human body, in its native, unadorned state, is a feeble thing. It’s constrained in its strength, its dexterity, its sensory range, its calculative prowess, its memory. It quickly reaches the limits of what it can do. But the body encompasses a mind that can imagine, desire, and plan for achievements the body alone can’t fulfill. This tension between what the body can accomplish and what the mind can envision is what gave rise to and continues to propel and shape technology. It’s the spur for humankind’s extension of itself and elaboration of nature. Technology isn’t what makes us “posthuman” or “transhuman,” as some writers and scholars these days suggest. It’s what makes us human. Technology is in our nature. Through our tools we give our dreams form. We bring them into the world. The practicality of technology may distinguish it from art, but both spring from a similar, distinctly human yearning.

One of the many jobs the human body is unsuited to is cutting grass. (Try it if you don’t believe me.) What allows the mower to do his work, what allows him to be a mower, is the tool he wields, his scythe. The mower is, and has to be, technologically enhanced. The tool makes the mower, and the mower’s skill in using the tool remakes the world for him. The world becomes a place in which he can act as a mower, in which he can lay the swale in rows. This idea, which on the surface may sound trivial or even tautological, points to something elemental about life and the formation of the self.

“The body is our general means of having a world,” wrote the French philosopher Maurice Merleau-Ponty in his 1945 masterwork Phenomenology of Perception. Our physical makeup—the fact that we walk upright on two legs at a certain height, that we have a pair of hands with opposable thumbs, that we have eyes which see in a particular way, that we have a certain tolerance for heat and cold—determines our perception of the world in a way that precedes, and then molds, our conscious thoughts about the world. We see mountains as lofty not because mountains are lofty but because our perception of their form and height is shaped by our own stature. We see a stone as, among other things, a weapon because the particular construction of our hand and arm enables us to pick it up and throw it. Perception, like cognition, is embodied.

It follows that whenever we gain a new talent, we not only change our bodily capacities, we change the world. The ocean extends an invitation to the swimmer that it withholds from the person who has never learned to swim. With every skill we master, the world reshapes itself to reveal greater possibilities. It becomes more interesting, and being in it becomes more rewarding. This may be what Baruch Spinoza, the seventeenth-century Dutch philosopher who rebelled against René Descartes’ division of mind and body, was getting at when he wrote, “The human mind is capable of perceiving a great many things, and is the more capable, the more its body can be disposed in a great many ways.” John Edward Huth, a physics professor at Harvard, testifies to the regeneration that attends the mastery of a skill. A decade ago, inspired by Inuit hunters and other experts in natural wayfinding, he undertook “a self-imposed program to learn navigation through environmental clues.” Through months of rigorous outdoor observation and practice, he taught himself how to read the nighttime and daytime skies, interpret the movements of clouds and waves, decipher the shadows cast by trees. “After a year of this endeavor,” he recalled in a recent essay, “something dawned on me: the way I viewed the world had palpably changed. The sun looked different, as did the stars.” Huth’s enriched perception of the environment, gained through a kind of “primal empiricism,” struck him as being “akin to what people describe as spiritual awakenings.”

Technology, by enabling us to act in ways that go beyond our bodily limits, also alters our perception of the world and what the world signifies to us. Technology’s transformative power is most apparent in tools of discovery, from the microscope and the particle accelerator of the scientist to the canoe and the spaceship of the explorer, but the power is there in all tools, including the ones we use in our everyday lives. Whenever an instrument allows us to cultivate a new talent, the world becomes a different and more intriguing place, a setting of even greater opportunity. To the possibilities of nature are added the possibilities of culture. “Sometimes,” wrote Merleau-Ponty, “the signification aimed at cannot be reached by the natural means of the body. We must, then, construct an instrument, and the body projects a cultural world around itself.” The value of a well-made and well-used tool lies not only in what it produces for us but what it produces in us. At its best, technology opens fresh ground. It gives us a world that is at once more understandable to our senses and better suited to our intentions—a world in which we’re more at home. Used thoughtfully and with skill, a tool becomes much more than a means of production or consumption. It becomes a means of experience. It gives us more ways to lead rich and engaged lives.

Look more closely at the scythe. It’s a simple tool, but an ingenious one. Invented around 500 BC, by the Romans or the Gauls, it consists of a curved blade, forged of iron or steel, attached to the end of a long wooden pole, or snath. The snath typically has, about halfway down its length, a small wooden grip, or nib, that makes it possible to grasp and swing the implement with two hands. The scythe is a variation on the much older sickle, a similar but short-handled cutting tool that was invented in the Stone Age and came to play an essential role in the early development of agriculture and, in turn, of civilization. What made the scythe a momentous innovation in its own right is that its long snath allowed a farmer or other laborer to cut grass at ground level while standing upright. Hay or grain could be harvested, or a pasture cleared, more quickly than before. Agriculture leaped forward.

The scythe enhanced the productivity of the worker in the field, but its benefit went beyond what could be measured in yield. The scythe was a congenial tool, far better suited to the bodily work of mowing than the sickle had been. Rather than stooping or squatting, the farmer could walk with a natural gait and use both his hands, as well as the full strength of his torso, in his job. The scythe served as both an aid and an invitation to the skilled work it enabled. We see in its form a model for technology on a human scale, for tools that extend the productive capabilities of society without circumscribing the individual’s scope of action and perception. Indeed, as Frost makes clear in “Mowing,” the scythe intensifies its user’s involvement with and apprehension of the world. The mower swinging a scythe does more, but he also knows more. Despite outward appearances, the scythe is a tool of the mind as well as the body.

Not all tools are so congenial. Some deter us from skilled action. The technologies of computerization and automation that hold such sway over us today rarely invite us into the world or encourage us to develop new talents that enlarge our perceptions and expand our possibilities. They mostly have the opposite effect. They’re designed to be disinviting. They pull us away from the world. That’s a consequence not only of prevailing design practices, which place ease and efficiency above all other concerns, but also of the fact that, in our personal lives, the computer, particularly in the form of the smartphone, has become a media device, its software painstakingly programmed to grab and hold our attention. As most people know from experience, the computer screen is intensely compelling, not only for the conveniences it offers but also for the many diversions it provides. There’s always something going on, and we can join in at any moment with the slightest of effort. Yet the screen, for all its enticements and stimulations, is an environment of sparseness—fast-moving, efficient, clean, but revealing only a shadow of the world.

That’s true even of the most meticulously crafted simulations of space that we find in virtual-reality applications such as games, architectural models, three-dimensional maps, and the video-meeting tools used to mimic classrooms, conference rooms, and cocktail parties. Artificial renderings of space may provide stimulation to our eyes and to a lesser degree our ears, but they tend to starve our other senses—touch, smell, taste—and greatly restrict the movements of our bodies. A study of rodents, published in Science in 2013, indicated that the brain cells used in navigation are much less active when animals make their way through computer-generated landscapes than when they traverse the real world. “Half of the neurons just shut up,” reported one of the researchers, UCLA neurophysicist Mayank Mehta. He believes that the drop-off in mental activity likely stems from the lack of “proximal cues”—environmental smells, sounds, and textures that provide clues to location—in digital simulations of space. “A map is not the territory it represents,” the Polish philosopher Alfred Korzybski famously remarked, and a computer rendering is not the territory it represents either. When we enter the virtual world, we’re required to shed much of our body. That doesn’t free us; it emaciates us.

The world in turn is made less meaningful. As we adapt to our streamlined environment, we render ourselves incapable of perceiving what the world offers its most ardent inhabitants. We travel blindfolded. The result is existential impoverishment, as nature and culture withdraw their invitations to act and to perceive. The self can only thrive, can only grow, when it encounters and overcomes “resistance from surroundings,” wrote the American pragmatist John Dewey in Art as Experience. “An environment that was always and everywhere congenial to the straightaway execution of our impulsions would set a term to growth as sure as one always hostile would irritate and destroy. Impulsion forever boosted on its forward way would run its course thoughtless, and dead to emotion.”

Ours may be a time of material comfort and technological wonder, but it’s also a time of aimlessness and gloom. During the first decade of this century, the number of Americans taking prescription drugs to treat depression or anxiety rose by nearly a quarter. One in five adults now regularly takes such medications. Many also take sleep aids such as Ambien. The suicide rate among middle-aged Americans increased by nearly 30 percent over the same ten years, according to a report from the Centers for Disease Control and Prevention. More than 10 percent of American schoolchildren, and nearly 20 percent of high school–age boys, have been given a diagnosis of attention-deficit/hyperactivity disorder, and two-thirds of that group take drugs like Ritalin and Adderall to treat the condition. The current pandemic has only exacerbated the discontent.

The reasons for our malaise are many and only dimly understood. But one of them may be that through the pursuit of a frictionless existence, we’ve succeeded in turning the landscape of our lives into a barren place. Drugs that numb the nervous system provide a way to rein in our vital, animal sensorium, to shrink our being to a size that better suits our constricted environs.

* * *

Frost’s sonnet also contains, as one of its many whispers, a warning about technology’s ethical hazards. There’s a brutality to the mower’s scythe. It indiscriminately cuts down flowers—those tender, pale orchises—along with the stalks of grass. It frightens innocent animals, like the bright green snake. If technology embodies our dreams, it also embodies other, less benign qualities in our makeup, such as our will to power and the arrogance and insensitivity that accompany it. Frost returns to this theme a little later in A Boy’s Will, in a second lyric about cutting hay, “The Tuft of Flowers.” The poem’s narrator comes upon a freshly mown field and, while following the flight of a passing butterfly with his eyes, discovers in the midst of the cut grass a small cluster of flowers, “a leaping tongue of bloom” that “the scythe had spared”:

The mower in the dew had loved them thus,
By leaving them to flourish, not for us,
Nor yet to draw one thought of ours to him,
But from sheer morning gladness at the brim.

Working with a tool is never just a practical matter, Frost is telling us, with characteristic delicacy. It always entails moral choices and has moral consequences. It’s up to us, as users and makers of tools, to humanize technology, to aim its cold blade wisely. That requires vigilance and care.

The scythe is still employed in subsistence farming in many parts of the world. But it has no place on the modern farm, the development of which, like the development of the modern factory, office, and home, has required ever-more complex and efficient equipment. The threshing machine was invented in the 1780s, the mechanical reaper appeared around 1835, the baler came a few years after that, and the combine harvester began to be produced commercially toward the end of the nineteenth century. The pace of technological advance has only accelerated in the decades since, and today the trend is reaching its logical conclusion with the computerization of agriculture. The working of the soil, which Thomas Jefferson saw as the most vigorous and virtuous of occupations, is being off-loaded almost entirely to machines. Farmhands are being replaced by “drone tractors” and other robotic systems that, using sensors, satellite signals, and software, plant seeds, fertilize and weed fields, harvest and package crops, and milk cows and tend other livestock. In development are robo-shepherds that guide flocks through pastures. Even if scythes still whispered in the fields of the industrial farm, no one would be around to hear them.

The congeniality of hand tools encourages us to take responsibility for their use. Because we sense the tools as extensions of our bodies, parts of ourselves, we have little choice but to be intimately involved in the ethical choices they present. The scythe doesn’t choose to slash or spare the flowers; the mower does. As we become more expert in the use of a tool, our sense of responsibility for it naturally strengthens. To the novice mower, a scythe may feel like a foreign object in the hands; to the accomplished mower, hands and scythe become one thing. Talent tightens the bond between an instrument and its user. This feeling of physical and ethical entanglement doesn’t have to go away as technologies become more complex. In reporting on his historic solo flight across the Atlantic in 1927, Charles Lindbergh spoke of his plane and himself as if they were a single being: “We have made this flight across the ocean, not I or it.” The airplane was a complicated system encompassing many components, but to a skilled pilot it still had the intimate quality of a hand tool. The love that lays the swale in rows is also the love that parts the clouds for the stick-and-rudder man.

Automation weakens the bond between tool and user not because computer-controlled systems are complex but because they ask so little of us. They hide their workings in secret code. They resist any involvement of the operator beyond the bare minimum. They discourage the development of skillfulness in their use. Automation ends up having an anesthetizing effect. We no longer feel our tools as parts of ourselves. In a renowned 1960 paper, “Man-Computer Symbiosis,” the psychologist and engineer J. C. R. Licklider described the shift in our relation to technology well. “In the man-machine systems of the past,” he wrote, “the human operator supplied the initiative, the direction, the integration, and the criterion. The mechanical parts of the systems were mere extensions, first of the human arm, then of the human eye.” The introduction of the computer changed all that. “‘Mechanical extension’ has given way to replacement of men, to automation, and the men who remain are there more to help than to be helped.” The more automated everything gets, the easier it becomes to see technology as a kind of implacable, alien force that lies beyond our control and influence. Attempting to alter the path of its development seems futile. We press the on switch and follow the programmed routine.

To adopt such a submissive posture, however understandable it may be, is to shirk our responsibility for managing progress. A robotic harvesting machine may have no one in the driver’s seat, but it is every bit as much a product of conscious human thought as a humble scythe is. We may not incorporate the machine into our brain maps, as we do the hand tool, but on an ethical level the machine still operates as an extension of our will. Its intentions are our intentions. If a robot scares a bright green snake (or worse), we’re still to blame. We shirk a deeper responsibility as well: that of overseeing the conditions for the construction of the self. As computer systems and software applications come to play an ever-larger role in shaping our lives and the world, we have an obligation to be more, not less, involved in decisions about their design and use—before progress forecloses our options. We should be careful about what we make.

If that sounds naive or hopeless, it’s because we have been misled by a metaphor. We’ve defined our relation with technology not as that of body and limb or even that of sibling and sibling but as that of master and slave. The idea goes way back. It took hold at the dawn of Western philosophical thought, emerging first with the ancient Athenians. Aristotle, in discussing the operation of households at the beginning of his Politics, argued that slaves and tools are essentially equivalent, the former acting as “animate instruments” and the latter as “inanimate instruments” in the service of the master of the house. If tools could somehow become animate, Aristotle posited, they would be able to substitute directly for the labor of slaves. “There is only one condition on which we can imagine managers not needing subordinates, and masters not needing slaves,” he mused, anticipating the arrival of computer automation and even machine learning. “This condition would be that each [inanimate] instrument could do its own work, at the word of command or by intelligent anticipation.” It would be “as if a shuttle should weave itself, and a plectrum should do its own harp-playing.”

The conception of tools as slaves has colored our thinking ever since. It informs society’s recurring dream of emancipation from toil. “All unintellectual labour, all monotonous, dull labour, all labour that deals with dreadful things, and involves unpleasant conditions, must be done by machinery,” wrote Oscar Wilde in 1891. “On mechanical slavery, on the slavery of the machine, the future of the world depends.” John Maynard Keynes, in a 1930 essay, predicted that mechanical slaves would free humankind from “the struggle for subsistence” and propel us to “our destination of economic bliss.” In 2013, Mother Jones columnist Kevin Drum declared that “a robotic paradise of leisure and contemplation eventually awaits us.” By 2040, he forecast, our computer slaves—“they never get tired, they’re never ill-tempered, they never make mistakes”—will have rescued us from labor and delivered us into a new Eden. “Our days are spent however we please, perhaps in study, perhaps playing video games. It’s up to us.”

With its roles reversed, the metaphor also informs society’s nightmares about technology. As we become dependent on our technological slaves, the thinking goes, we turn into slaves ourselves. From the eighteenth century on, social critics have routinely portrayed factory machinery as forcing workers into bondage. “Masses of labourers,” wrote Marx and Engels in their Communist Manifesto, “are daily and hourly enslaved by the machine.” Today, people complain all the time about feeling like slaves to their appliances and gadgets. “Smart devices are sometimes empowering,” observed The Economist in “Slaves to the Smartphone,” an article published in 2012. “But for most people the servant has become the master.” More dramatically still, the idea of a robot uprising, in which computers with artificial intelligence transform themselves from our slaves to our masters, has for a century been a central theme in dystopian fantasies about the future. The very word “robot,” coined by a science fiction writer in 1920, comes from robota, a Czech term for servitude.

The master-slave metaphor, in addition to being morally fraught, distorts the way we look at technology. It reinforces the sense that our tools are separate from ourselves, that our instruments have an agency independent of our own. We start to judge our technologies not on what they enable us to do but rather on their intrinsic qualities as products—their cleverness, their efficiency, their novelty, their style. We choose a tool because it’s new or it’s cool or it’s fast, not because it brings us more fully into the world and expands the ground of our experiences and perceptions. We become mere consumers of technology.

The metaphor encourages society to take a simplistic and fatalistic view of technology and progress. If we assume that our tools act as slaves on our behalf, always working in our best interest, then any attempt to place limits on technology becomes hard to defend. Each advance grants us greater freedom and takes us a stride closer to, if not utopia, then at least the best of all possible worlds. Any misstep, we tell ourselves, will be quickly corrected by subsequent innovations. If we just let progress do its thing, it will find remedies for the problems it creates. “Technology is not neutral but serves as an overwhelming positive force in human culture,” writes one pundit, expressing the self-serving Silicon Valley ideology that in recent years has gained wide currency. “We have a moral obligation to increase technology because it increases opportunities.” The sense of moral obligation strengthens with the advance of automation, which, after all, provides us with the most animate of instruments, the slaves that, as Aristotle anticipated, are most capable of releasing us from our labors.

The belief in technology as a benevolent, self-healing, autonomous force is seductive. It allows us to feel optimistic about the future while relieving us of responsibility for that future. It particularly suits the interests of those who have become extraordinarily wealthy through the labor-saving, profit-concentrating effects of automated systems and the computers that control them. It provides our new plutocrats with a heroic narrative in which they play starring roles: job losses may be unfortunate, but they’re a necessary evil on the path to the human race’s eventual emancipation by the computerized slaves that our benevolent enterprises are creating. Peter Thiel, a successful entrepreneur and investor who has become one of Silicon Valley’s most prominent thinkers, grants that “a robotics revolution would basically have the effect of people losing their jobs.” But, he hastens to add, “it would have the benefit of freeing people up to do many other things.” Being freed up sounds a lot more pleasant than being fired.

There’s a callousness to such grandiose futurism. As history reminds us, high-flown rhetoric about using technology to liberate workers often masks a contempt for labor. It strains credulity to imagine today’s technology moguls, with their libertarian leanings and impatience with government, agreeing to the kind of vast wealth-redistribution scheme that would be necessary to fund the self-actualizing leisure-time pursuits of the jobless multitudes. Even if society were to come up with some magic spell, or magic algorithm, for equitably parceling out the spoils of automation, there’s good reason to doubt whether anything resembling the “economic bliss” imagined by Keynes would ensue.

In a prescient passage in The Human Condition, Hannah Arendt observed that if automation’s utopian promise were actually to pan out, the result would probably feel less like paradise than like a cruel practical joke. The whole of modern society, she wrote, has been organized as “a laboring society,” where working for pay, and then spending that pay, is the way people define themselves and measure their worth. Most of the “higher and more meaningful activities” revered in the distant past have been pushed to the margin or forgotten, and “only solitary individuals are left who consider what they are doing in terms of work and not in terms of making a living.” For technology to fulfill humankind’s abiding “wish to be liberated from labor’s ‘toil and trouble’ ” at this point would be perverse. It would cast us deeper into a purgatory of malaise. What automation confronts us with, Arendt concluded, “is the prospect of a society of laborers without labor, that is, without the only activity left to them. Surely, nothing could be worse.” Utopianism, she understood, is a form of self-delusion.

* * *

A while back, I had a chance meeting on the campus of a small liberal arts college with a freelance photographer who was working on an assignment for the school. He was standing under a tree, waiting for some uncooperative clouds to get out of the way of the sun. I noticed he had a large-format film camera set up on a bulky tripod—it was hard to miss, as it looked almost absurdly old-fashioned—and I asked him why he was still using film. He told me that he had eagerly embraced digital photography a few years earlier. He had replaced his film cameras and his darkroom with digital cameras and a computer running the latest image-processing software. But after a few months, he switched back. It wasn’t that he was dissatisfied with the operation of the equipment or the resolution or accuracy of the images. It was that the way he went about his work had changed.

The constraints inherent in taking and developing pictures on film—the expense, the toil, the uncertainty—had encouraged him to work slowly when he was on a shoot, with deliberation, thoughtfulness, and a deep, physical sense of presence. Before he took a picture, he would compose the shot in his mind, attending to the scene’s light, color, framing, and form. He would wait patiently for the right moment to release the shutter. With a digital camera, he could work faster. He could take a slew of images, one after the other, and then use his computer to sort through them and crop and tweak the most promising ones. The act of composition took place after a photo was taken. The change felt intoxicating at first. But he found himself disappointed with the results. The images left him cold. Film, he realized, imposed a discipline of perception, of seeing, which led to richer, more artful, more moving photographs. Film demanded more of him. And so he went back to the older technology.

The photographer wasn’t the least bit antagonistic toward computers. He wasn’t beset by any abstract concerns about a loss of agency or autonomy. He wasn’t a crusader. He just wanted the best tool for the job—the tool that would encourage and enable him to do his finest, most fulfilling work. What he came to realize is that the newest, most automated, most expedient tool is not always the best choice. Although I’m sure he would bristle at being likened to the Luddites of the early nineteenth century, his decision to forgo the latest technology, at least in some stages of his work, was an act of rebellion resembling that of the old English machine-breakers, if without the fury. Like the Luddites, he understood that decisions about technology are also decisions about ways of working and ways of living—and he took control of those decisions rather than ceding them to others or giving way to the momentum of progress. He stepped back and thought critically about technology.

As a society, we’ve become suspicious of such acts. Out of ignorance or laziness or timidity, we’ve turned the Luddites into cartoon characters, emblems of backwardness. We assume that anyone who rejects a new tool in favor of an older one is guilty of nostalgia, of making choices sentimentally rather than rationally. But the real sentimental fallacy is the assumption that the new thing is always better suited to our purposes and intentions than the old thing. That’s the view of a child, naive and pliable. What makes one tool superior to another has nothing to do with how new it is. What matters is how it enlarges us or diminishes us, how it shapes our experience of nature and culture and one another. To cede choices about the texture of our daily lives to a grand abstraction called progress is folly.

Technology is a pillar and a glory of civilization. But it is also a test that we set for ourselves. It challenges us to think about what’s important in our lives, to ask ourselves what human being means. Computerization, as it extends its reach into the most intimate spheres of our existence, raises the stakes of the test. We can allow ourselves to be carried along by the technological current, wherever it may be taking us, or we can push against it. To resist invention is not to reject invention. It’s to humble invention, to bring progress down to earth. “Resistance is futile,” goes the glib Star Trek cliché beloved by techies. But that’s the opposite of the truth. Resistance is never futile. If the source of our vitality is, as Emerson taught us, “the active soul,” then our highest obligation is to resist any force, whether institutional or commercial or technological, that would enfeeble or enervate the active soul.

One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it. While we’re wrestling with a challenge, we may be motivated by an anticipation of the ends of our labor, but, as Frost saw, it’s the work—the means—that makes us who we are. Automation severs ends from means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?


This essay is adapted from the book The Glass Cage, published by W. W. Norton & Company. Copyright by Nicholas Carr.

The Shallows: tenth anniversary edition

My book The Shallows: What the Internet Is Doing to Our Brains turns ten this year, and to mark the occasion, my publisher, W. W. Norton, is releasing a new and expanded tenth-anniversary edition. It will be out on March 3.

Along with a new introduction, the edition includes, as an afterword, a chapter that explores relevant technological and cultural developments over the last decade, with a particular focus on the cognitive and behavioral effects of smartphones and social media. The chapter, titled “The Most Interesting Thing in the World,” also reviews salient research that’s appeared in the years since the first edition came out.

You can preorder the new edition from your local bookstore or through Amazon, Barnes & Noble, Powell’s, and other online booksellers.

Here’s a preview of the new Introduction:

Welcome to The Shallows. When I wrote this book ten years ago, the prevailing view of the Internet was sunny, often ecstatically so. We reveled in the seemingly infinite bounties of the online world. We admired the wizards of Silicon Valley and trusted them to act in our best interest. We took it on faith that computer hardware and software would make our lives better, our minds sharper. In a 2010 Pew Research survey of some 400 prominent thinkers, more than 80 percent agreed that, “by 2020, people’s use of the Internet [will have] enhanced human intelligence; as people are allowed unprecedented access to more information, they become smarter and make better choices.”

The year 2020 has arrived. We’re not smarter. We’re not making better choices.

The Shallows explains why we were mistaken about the Net. When it comes to the quality of our thoughts and judgments, the amount of information a communication medium supplies is less important than the way the medium presents the information and the way, in turn, our minds take it in. The brain’s capacity is not unlimited. The passageway from perception to understanding is narrow. It takes patience and concentration to evaluate new information — to gauge its accuracy, to weigh its relevance and worth, to put it into context — and the Internet, by design, subverts patience and concentration. When the brain is overloaded by stimuli, as it usually is when we’re peering into a network-connected computer screen, attention splinters, thinking becomes superficial, and memory suffers. We become less reflective and more impulsive. Far from enhancing human intelligence, I argue, the Internet degrades it.

Much has changed in the decade since The Shallows came out. Smartphones have become our constant companions. Social media has insinuated itself into everything we do. The dark things that can happen when everyone’s connected have happened. Our faith in Silicon Valley has been broken, yet the big Internet companies wield more power than ever. This tenth anniversary edition of The Shallows takes stock of the changes. It includes an extensive new afterword in which I examine the cognitive and cultural consequences of the rise of smartphones and social media, drawing on the large body of new research that has appeared since 2010. I have left the original text of the book largely unchanged. I’m biased, but I think The Shallows has aged well. To my eyes, it’s more relevant today than it was ten years ago. I hope you find it worthy of your attention.

Larry and Sergey: a valediction

Photographer: “How ’bout we do the shoot in a hot tub?”

Larry and Sergey: “Sure!”

Never such innocence again.

Can billionaires be tragic figures? Lear must have been worth a billion or two, in today’s dollars. And surely the family fortunes of Hamlet and Macbeth crossed the magical ten-figure line. I’d go so far as to suggest that, these days, you have to be a billionaire to be a tragic figure. The most the rest of us can aspire to is pathos, our woes memorialized by a Crying Face emoji.

Larry Page and Sergey Brin spent the first fifteen years of their careers building the greatest information network the world has ever known and the last five trying to escape it. Having made everything visible, they made themselves invisible. Larry has even managed to keep the names of his two kids secret, an act of paternal love that is also, given Google’s mission “to organize the world’s information and make it universally accessible and useful,” an act of corporate treason.

Look at them in that hot tub. They’re as bubbly as the water. And that’s the way they appear in all the pictures of them that date back to the turn of the millennium. Larry and Sergey may well have been the last truly happy human beings on the planet. They were doing what they loved, and they were convinced that what they loved would redeem the world. That kind of happiness requires a combination of idealism and confidence that isn’t possible anymore. When, in 1965, an interviewer from Cahiers du Cinéma pointed out to Jean-Luc Godard that “there is a good deal of blood” in his movie Pierrot le Fou, Godard replied, “Not blood, red.” What the cinema did to blood, the internet has done to happiness. It turned it into an image that is repeated endlessly on screens but no longer refers to anything real.

They were prophets, Larry and Sergey. When, in their famous 1998 grad-school paper “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” they introduced Google to the world, they warned that if the search engine were ever to leave the “academic realm” and become a business, it would be corrupted. It would become “a black art” and “be advertising oriented.” That’s exactly what happened — not just to Google but to the internet as a whole. The white-robed wizards of Silicon Valley now ply the black arts of algorithmic witchcraft for power and money. They wanted most of all to be Gandalf, but they became Saruman.

When, in May, Larry and Sergey were spotted at one of Google’s all-company TGIF meetings, the sighting was treated as a kind of religious vision. It was the first time the duo had bothered to show up at one of the gatherings all year. Their announcement last week that they’re resigning from their managerial roles at the company they founded was a formality. Larry and Sergey have been in ghost mode for a long time now — off the map, nontransparent, unspiderable. Search for them all you want. They’re not there.

From public intellectual to public influencer

The corpse of the public intellectual has been much chewed upon. But only now is its full historical context coming into view. What seemed a death, we’re beginning to see, was but the larval stage of a metamorphosis. The public intellectual has been reborn as the public influencer.

The parallels are clear. Both the public intellectual and the public influencer play a quasi-independent role separate from but still dependent on a traditional, culturally powerful institution. Both, in other words, remake a private, institutional role as a public, personal one. In the case of the public intellectual, the institution was the academy and the role was thinking. In the case of the public influencer, the institution is the corporation and the role is marketing. The shift makes sense. Marketing, after all, has displaced thinking as our primary culture-shaping activity, the source of what we perceive ourselves to be. The public square having moved from the metaphorical marketplace of ideas to the literal marketplace of goods, it’s only natural that we should look to a new kind of guru to guide us.

Both the public intellectual and the public influencer gain their cultural cachet from their mastery of the dominant media of the day. For the public intellectual, it was the printed page. For the public influencer, it’s the internet, especially social media. The tool of the public intellectual was the pen; the product, the word. The tool of the public influencer is the smartphone camera; the product, the image. Instagram is the new Partisan Review. But while the medium has changed, the way the cultural maestro exerts influence remains the same. It’s by understanding and wielding the power of media to gain attention and shape perception.

Both the public intellectual and the public influencer play an instrumental role in shaping cultural ideals and tying them to the individual’s sense of self. When the public intellectual was ascendant, cultural ideals revolved around the public good. Today, they revolve around the consumer good. The idea that the self emerges from the construction of a set of values and beliefs has faded. What the public influencer understands more sharply than most is that the path of self-definition now winds through the aisles of a cultural supermarket. We shop for our identity as we shop for our toothpaste, choosing from a wide selection of readymade products. The influencer displays the wares and links us to the purchase, always with the understanding that returns and exchanges will be easy and free.

The remnants of the public-intellectual class resent the rise of the influencer. Some of that resentment stems from the has-been’s natural envy of the is-now. But there’s a material angle to it as well. The one big difference between the public influencer and the public intellectual lies in compensation. Public intellectuals were forced to subsist on citations, the thinnest of gruel. Influencers get fame. They get cash. They get merch — stuff to wear, stuff to eat, stuff to sit on. And, the final insult, they receive in abundance what public intellectuals most craved but could never have: our hearts.

On autopilot: the dangers of overautomation

The grounding of Boeing’s popular new 737 Max 8 planes, after two recent crashes, has placed a new focus on flight automation. Here’s an excerpt from my 2014 book on automation and its human consequences, The Glass Cage, that seems relevant to the discussion.

The lives of aviation’s pioneers were exciting but short. Lawrence Sperry died in 1923 when his plane crashed into the English Channel. Wiley Post died in 1935 when his plane went down in Alaska. Antoine de Saint-Exupéry died in 1944 when his plane disappeared over the Mediterranean. Premature death was a routine occupational hazard for pilots during aviation’s early years; romance and adventure carried a high price. Passengers died with alarming frequency, too. As the airline industry took shape in the 1920s, the publisher of a U.S. aviation journal implored the government to improve flight safety, noting that “a great many fatal accidents are daily occurring to people carried in airplanes by inexperienced pilots.”

Air travel’s lethal days are, mercifully, behind us. Flying is safe now, and pretty much everyone involved in the aviation business believes that advances in automation are one of the reasons why. Together with improvements in aircraft design, airline safety routines, crew training, and air traffic control, the mechanization and computerization of flight have contributed to the sharp and steady decline in accidents and deaths over the decades. In the United States and other Western countries, fatal airliner crashes have become exceedingly rare. Of the more than seven billion people who boarded U.S. flights in the ten years from 2002 through 2011, only 153 ended up dying in a wreck, a rate of two deaths for every 100 million passengers. In the ten years from 1962 through 1971, by contrast, 1.3 billion people took flights, and 1,696 of them died, for a rate of 133 deaths per 100 million.
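A quick back-of-the-envelope check of those rates, using the passenger and fatality figures quoted above (the figures come from the excerpt; the rounding, and the Python sketch itself, are mine):

```python
# Rough check of the fatality rates cited above; figures from the excerpt, rounding approximate.
deaths_2002_2011, passengers_2002_2011 = 153, 7_000_000_000
deaths_1962_1971, passengers_1962_1971 = 1_696, 1_300_000_000

def per_100_million(deaths, passengers):
    # Deaths per 100 million passengers boarded.
    return deaths / passengers * 100_000_000

print(round(per_100_million(deaths_2002_2011, passengers_2002_2011), 1))  # ~2.2 per 100 million
print(round(per_100_million(deaths_1962_1971, passengers_1962_1971), 1))  # ~130 per 100 million
```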

But this sunny story carries a dark footnote. The overall decline in plane crashes masks the recent arrival of  “a spectacularly new type of accident,” says Raja Parasuraman, a psychology professor at George Mason University and one of the world’s leading authorities on automation. When onboard computer systems fail to work as intended or other unexpected problems arise during a flight, pilots are forced to take manual control of the plane. Thrust abruptly into what has become a rare role, they too often make mistakes. The consequences, as the Continental Connection and Air France disasters of 2009 show, can be catastrophic. Over the last 30 years, scores of psychologists, engineers, and other ergonomics, or “human factors,” researchers have studied what’s gained and lost when pilots share the work of flying with software. What they’ve learned is that a heavy reliance on computer automation can erode pilots’ expertise, dull their reflexes, and diminish their attentiveness, leading to what Jan Noyes, a human factors expert at Britain’s University of Bristol, calls “a deskilling of the crew.”

Concerns about the unintended side effects of flight automation aren’t new. They date back at least to the early days of fly-by-wire controls. A 1989 report from NASA’s Ames Research Center noted that, as computers had begun to multiply on airplanes during the preceding decade, industry and governmental researchers “developed a growing discomfort that the cockpit may be becoming too automated, and that the steady replacement of human functioning by devices could be a mixed blessing.” Despite a general enthusiasm for computerized flight, many in the airline industry worried that “pilots were becoming over-dependent on automation, that manual flying skills may be deteriorating, and that situational awareness might be suffering.”

Many studies since then have linked particular accidents or near misses to breakdowns of automated systems or to “automation-induced errors” on the part of flight crews. In 2010, the Federal Aviation Administration released some preliminary results of a major study of airline flights over the preceding ten years, which showed that pilot errors had been involved in more than 60 percent of crashes. The research further indicated, according to a report from FAA scientist Kathy Abbott, that automation has made such errors more likely. Pilots can be distracted by their interactions with onboard computers, Abbott said, and they can “abdicate too much responsibility to the automated systems.”

In the worst cases, automation can place added and unexpected demands on pilots during moments of crisis—when, for instance, the technology fails. The pilots may have to interpret computerized alarms, input data, and scan information displays even as they’re struggling to take manual control of the plane and orient themselves to their circumstances. The tasks and attendant distractions increase the odds that the aviators will make mistakes. Researchers refer to this as the “automation paradox.” As Mark Scerbo, a psychologist and human-factors expert at Virginia’s Old Dominion University, has explained, “The irony behind automation arises from a growing body of research demonstrating that automated systems often increase workload and create unsafe working conditions.”

The anecdotal and theoretical evidence collected through accident reports, surveys, and studies received empirical backing from a rigorous experiment conducted by Matthew Ebbatson, a young human factors researcher at Cranfield University, a top U.K. engineering school. Frustrated by the lack of hard, objective data on what he termed “the loss of manual flying skills in pilots of highly automated airliners,” Ebbatson set out to fill the gap. He recruited 66 veteran pilots from a British airline and had each of them get into a flight simulator and perform a challenging maneuver—bringing a Boeing 737 with a blown engine in for a landing in bad weather. The simulator disabled the plane’s automated systems, forcing the pilots to fly by hand. Some of the pilots did exceptionally well in the test, Ebbatson reported, but many of them performed poorly, barely exceeding “the limits of acceptability.”

Ebbatson then compared detailed measures of each pilot’s performance in the simulator—the pressure they exerted on the yoke, the stability of their airspeed, the degree of variation in their course—with their historical flight records. He found a direct correlation between a pilot’s aptitude at the controls and the amount of time the pilot had spent flying by hand, without the aid of automation. The correlation was particularly strong with the amount of manual flying done during the preceding two months. The analysis indicated that “manual flying skills decay quite rapidly towards the fringes of ‘tolerable’ performance without relatively frequent practice.” Particularly “vulnerable to decay,” Ebbatson noted, was a pilot’s ability to maintain “airspeed control”—a skill that’s crucial to recognizing, avoiding, and recovering from stalls and other dangerous situations.

It’s no mystery why automation takes a toll on pilot performance. Like many challenging jobs, flying a plane involves a combination of psychomotor skills and cognitive skills—thoughtful action and active thinking, in simple terms. A pilot needs to manipulate tools and instruments with precision while swiftly and accurately making calculations, forecasts, and assessments in his head. And while he goes through these intricate mental and physical maneuvers, he needs to remain vigilant, alert to what’s going on around him and adept at distinguishing important signals from unimportant ones. He can’t allow himself either to lose focus or to fall victim to tunnel vision. Mastery of such a multifaceted set of skills comes only with rigorous practice. A beginning pilot tends to be clumsy at the controls, pushing and pulling the yoke with more force than is necessary. He often has to pause to remember what he should do next, to walk himself methodically through the steps of a process. He has trouble shifting seamlessly between manual and cognitive tasks. When a stressful situation arises, he can easily become overwhelmed or distracted and end up overlooking a critical change in his circumstances.

In time, after much rehearsal, the novice gains confidence. He becomes less halting in his work and much more precise in his actions. There’s little wasted effort. As his experience continues to deepen, his brain develops so-called mental models—dedicated assemblies of neurons—that allow him to recognize patterns in his surroundings. The models enable him to interpret and react to stimuli as if by instinct, without getting bogged down in conscious analysis. Eventually, thought and action become seamless. Flying becomes second nature. Years before researchers began to plumb the workings of pilots’ brains, Wiley Post described the experience of expert flight in plain, precise terms. He flew, he said in 1935, “without mental effort, letting my actions be wholly controlled by my subconscious mind.” He wasn’t born with that ability. He developed it through lots of hard work.

When computers enter the picture, the nature and the rigor of the work change, as does the learning the work engenders. As software assumes moment-by-moment control of the craft, the pilot is relieved of much manual labor. This reallocation of responsibility can provide an important benefit. It can reduce the pilot’s workload and allow him to concentrate on the cognitive aspects of flight. But there’s a cost. Exercised much less frequently, the psychomotor skills get rusty, which can hamper the pilot on those rare but critical occasions when he’s required to take back the controls. There’s growing evidence that recent expansions in the scope of automation also put cognitive skills at risk. When more advanced computers begin to take over planning and analysis functions, such as setting and adjusting a flight plan, the pilot becomes less engaged not only physically but mentally. Because the precision and speed of pattern recognition appear to depend on regular practice, the pilot’s mind may become less agile in interpreting and reacting to fast-changing situations. He may suffer what Ebbatson calls “skill fade” in his mental as well as his motor abilities.

Pilots themselves are not blind to automation’s toll. They’ve always been wary about ceding responsibility to machinery. Airmen in World War I, justifiably proud of their skill in maneuvering their planes during dogfights, wanted nothing to do with the fancy Sperry autopilots that had recently been introduced. In 1959, the original Mercury astronauts famously rebelled against NASA’s plan to remove manual flight controls from spacecraft. But aviators’ concerns are more acute now. Even as they praise the enormous gains being made in flight technology, and acknowledge the safety and efficiency benefits, they worry about the erosion of their talents. As part of his research, Ebbatson surveyed commercial pilots, asking them whether “they felt their manual flying ability had been influenced by the experience of operating a highly automated aircraft.” Fully 77 percent reported that “their skills had deteriorated”; just 7 percent felt their skills had improved.

The worries seem particularly pronounced among more experienced pilots, especially those who began their careers before computers became entwined with so many aspects of aviation. Rory Kay, a long-time United Airlines captain who until recently served as the top safety official with the Air Line Pilots Association, fears the aviation industry is suffering from “automation addiction.” In a 2011 interview, he put the problem in stark terms: “We’re forgetting how to fly.”

Thieves of experience: On the rise of surveillance capitalism

This review of Shoshana Zuboff’s The Age of Surveillance Capitalism appeared originally in the Los Angeles Review of Books.

1. The Resurrection

We sometimes forget that, at the turn of the century, Silicon Valley was in a funk, economic and psychic. The great dot-com bubble of the 1990s had imploded, destroying vast amounts of investment capital along with the savings of many Americans. Trophy startups like Pets.com, Webvan, and Excite@Home, avatars of the so-called New Economy, were punch lines. Disillusioned programmers and entrepreneurs were abandoning their Bay Area bedsits and decamping. Venture funding had dried up. As a business proposition, the information superhighway was looking like a cul-de-sac.

Today, less than 20 years on, everything has changed. The top American internet companies are among the most profitable and highly capitalized businesses in history. Not only do they dominate the technology industry but they have much of the world economy in their grip. Their founders and early backers sit atop Rockefeller-sized fortunes. Cities and states court them with billions of dollars in tax breaks and other subsidies. Bright young graduates covet their jobs. Along with their financial clout, the internet giants hold immense social and cultural sway, influencing how all of us think, act, and converse.

Silicon Valley’s Phoenix-like resurrection is a story of ingenuity and initiative. It is also a story of callousness, predation, and deceit. Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.

Zuboff’s fierce indictment of the big internet firms goes beyond the usual condemnations of privacy violations and monopolistic practices. To her, such criticisms are sideshows, distractions that blind us to a graver danger: By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.

Capitalism has always been a fraught system. Capable of both tempering and magnifying human flaws, particularly the lust for power, it can expand human possibility or constrain it, liberate people or oppress them. (The same can be said of technology.) Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.

By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers. But, as Zuboff makes clear, this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders. Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience.

To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms. In contrast to the businesses of the industrial era, whose interests were by necessity entangled with those of the public, internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.

2. The Map

It all began innocently. In the 1990s, before they founded Google, Larry Page and Sergey Brin were computer-science students who shared a fascination with the arcane field of network theory and its application to the internet. They saw that by scanning web pages and tracing the links between them, they would be able to create a map of the net with both theoretical and practical value. The map would allow them to measure the importance of every page, based on the number of other pages that linked to it, and that data would, in turn, provide the foundation for a powerful search engine. Because the map could also be used to record the routes and choices of people as they traveled through the network, it would provide a finely detailed account of human behavior.
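
The link-analysis idea behind that map is simple enough to sketch. The toy graph and scoring function below are purely illustrative (a minimal PageRank-style iteration, not Google’s actual algorithm), but they show how a page’s importance can be estimated from nothing more than the links pointing to it.

```python
# A toy, PageRank-style link analysis: a simplified illustration of the idea
# described above, not Google's actual algorithm. The graph is hypothetical;
# each key is a page, each value the pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def rank(links, damping=0.85, iterations=50):
    """Iteratively estimate each page's importance from its inbound links."""
    pages = list(links)
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:             # dangling page: spread its score evenly
                for p in pages:
                    new[p] += damping * score[page] / len(pages)
            else:
                for target in outlinks:  # pass score along each outbound link
                    new[target] += damping * score[page] / len(outlinks)
        score = new
    return score

print(rank(links))  # pages with more (and better-connected) inbound links score higher
```

On this toy graph, “c” ends up with the highest score, since three of the four pages link to it.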

In Google’s early days, Page and Brin were wary of exploiting the data they collected for monetary gain, fearing it would corrupt their project. They limited themselves to using the information to improve search results, for the benefit of users. That changed after the dot-com bust. Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it. Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
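
The financial logic can be made concrete. The sketch below is a simplified, hypothetical illustration of pay-per-click ranking; the figures are invented, and Google’s real auction (a generalized second-price design with quality scores) is more elaborate. Still, it shows why better click prediction translates directly into revenue.

```python
# A simplified, hypothetical sketch of pay-per-click ad ranking: the advertiser
# pays only when a user clicks, so the expected revenue of an impression is
# bid x predicted click-through rate. Numbers and the simple ranking rule are
# illustrative only.
ads = [
    {"advertiser": "A", "bid_per_click": 2.00, "predicted_ctr": 0.010},
    {"advertiser": "B", "bid_per_click": 0.50, "predicted_ctr": 0.060},
    {"advertiser": "C", "bid_per_click": 1.25, "predicted_ctr": 0.020},
]

def expected_revenue(ad):
    # What showing this ad is worth to the platform, per impression.
    return ad["bid_per_click"] * ad["predicted_ctr"]

ranked = sorted(ads, key=expected_revenue, reverse=True)
for ad in ranked:
    print(ad["advertiser"], round(expected_revenue(ad), 4))
# B (0.03) outranks A (0.02) despite a much lower bid: a sharper prediction of
# clicks is worth real money, which is exactly the incentive described above.
```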

Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance. Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists, brand preferences, and other material desires. The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.

Once it embraced surveillance as the core of its business, Google changed. Its innocence curdled, and its idealism became a means of obfuscation.

Even as its army of PR agents and lobbyists continued to promote a cuddly Nerds-in-Toyland image for the firm, the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements. Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors. As one Google executive quoted by Zuboff put it, “Larry [Page] opposed any path that would reveal our technological secrets or stir the privacy pot and endanger our ability to gather data.”

As networked computers came to mediate more and more of people’s everyday lives, the map of the online world created by Page and Brin became far more lucrative than they could have anticipated. Zuboff reminds us that, throughout history, the charting of a new territory has always granted the mapmaker an imperial power. Quoting the historian John B. Harley, she writes that maps “are essential for the effective ‘pacification, civilization, and exploitation’ of territories imagined or claimed but not yet seized in practice. Places and people must be known in order to be controlled.” An early map of the United States bore the motto “Order upon the Land.” Should Google ever need a new slogan to replace its original, now-discarded “Don’t be evil,” it would be hard-pressed to find a better one than that.

3. The Heist

Zuboff opens her book with a look back at a prescient home-automation project undertaken in 2000 by a group of Georgia Tech computer scientists. Anticipating the arrival of “smart homes,” the scholars described how a mesh of environmental and wearable sensors, linked wirelessly to computers, would allow all sorts of domestic routines, from the dimming of bedroom lights to the dispensing of medications to the entertaining of children, to be programmed to suit a house’s occupants.

Essential to the effort would be the processing of intimate data on people’s habits, predilections, and health. Taking it for granted that such information should remain private, the researchers envisaged a leak-proof “closed loop” system that would keep the data within the home, under the purview and control of the homeowner. The project, Zuboff explains, reveals the assumptions about “datafication” that prevailed at the time: “(1) that it must be the individual alone who decides what experience is rendered as data, (2) that the purpose of the data is to enrich the individual’s life, and (3) that the individual is the sole arbiter of how the data are put to use.”

What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information. It turned the details of the lives of millions and then billions of people into its own property. The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.

Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it. The public’s naivete and apathy were only part of the story, however. Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.

Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts. Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.

The bullying style of TOS agreements also characterizes the practice, common to Google and other technology companies, of threatening users with a loss of “functionality” should they try to opt out of data sharing protocols or otherwise attempt to escape surveillance. Anyone who tries to remove a pre-installed Google app from an Android phone, for instance, will likely be confronted by a vague but menacing warning: “If you disable this app, other apps may no longer function as intended.” This is a coy, high-tech form of blackmail: “Give us your data, or the phone dies.”

In pulling off its data grab, Google also benefited from the terrorist attacks of September 11, 2001. As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations. The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,” Zuboff writes.

Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency and the Central Intelligence Agency. But they also benefited indirectly. Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public. One of the unintended consequences of this uniquely distressing moment in American history, Zuboff observes, was that “the fledgling practices of surveillance capitalism were allowed to root and grow with little regulatory or legislative challenge.” Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.

What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not. “Privacy involves the choice of the individual to disclose or to reveal what he believes, what he thinks, what he possesses,” explained Supreme Court Justice William O. Douglas in a 1967 opinion. “Those who wrote the Bill of Rights believed that every individual needs both to communicate with others and to keep his affairs to himself. That dual aspect of privacy means that the individual should have the freedom to select for himself the time and circumstances when he will share his secrets with others and decide the extent of that sharing.”

Google and other internet firms usurp this essential freedom. “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.

4. The Script

Fearing Google’s expansion and coveting its profits, other internet, media, and communications companies rushed into the prediction market, and competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives. There are the voice assistants like Alexa and Cortana, the smart speakers like Amazon Echo and Google Home, the wearable computers like Fitbit and Apple Watch. There are the navigation, banking, and health apps installed on smartphones and the new wave of automotive media and telematics systems like CarPlay, Android Auto, and Progressive’s Snapshot. And there are the myriad sensors and transceivers of smart homes, smart cities, and the so-called internet of things. Big Brother would be impressed.

But spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.

Google realized early on that the internet allowed market research to be conducted on a massive scale and at virtually no cost. Every click could become part of an experiment. The company used its research findings to fine-tune its sites and services. It meticulously designed every element of the online experience, from the color of links to the placement of ads, to provoke the desired responses from users. But it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react. The company rolled out its now ubiquitous “Like” button, for example, after early experiments showed it to be a perfect operant-conditioning device, reliably pushing users to spend more time on the site, and share more information.
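
The mechanics of that kind of experimentation are not mysterious. The sketch below is a minimal, hypothetical randomized click test of the sort described above; the variant names and click rates are invented. At sufficient scale, even a tiny difference in response becomes measurable, and therefore exploitable.

```python
# A minimal, hypothetical sketch of a large-scale click experiment: users are
# randomly assigned to one of two interface variants, clicks are logged, and
# the variant with the higher click-through rate wins. The variant names and
# underlying rates are invented for illustration.
import random

TRUE_CTR = {"control": 0.050, "variant": 0.056}  # unknown to the experimenter

def run_experiment(n_users=100_000):
    clicks = {"control": 0, "variant": 0}
    shown = {"control": 0, "variant": 0}
    for _ in range(n_users):
        arm = random.choice(["control", "variant"])  # random assignment
        shown[arm] += 1
        if random.random() < TRUE_CTR[arm]:          # simulated user response
            clicks[arm] += 1
    return {arm: clicks[arm] / shown[arm] for arm in shown}

print(run_experiment())  # observed click-through rate per variant
```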

Zuboff describes a revealing and in retrospect ominous Facebook study that was conducted during the 2010 U.S. congressional election and published in 2012 in Nature under the title “A 61-Million-Person Experiment in Social Influence and Political Mobilization.” The researchers, a group of data scientists from Facebook and the University of California at San Diego, manipulated voting-related messages displayed in Facebook users’ news feeds on election day (without the users’ knowledge). One set of users received a message encouraging them to vote, a link to information on poll locations, and an “I Voted” button. A second set saw the same information along with photos of friends who had clicked the button.

The researchers found that seeing the pictures of friends increased the likelihood that people would seek information on polling places and end up clicking the “I Voted” button themselves. “The results show,” they reported, “that [Facebook] messages directly influenced political self-expression, information seeking and real-world voting behaviour of millions of people.” Through a subsequent examination of actual voter records, the researchers estimated that, as a result of the study and its “social contagion” effect, at least 340,000 additional votes were cast in the election.

Nudging people to vote may seem praiseworthy, even if done surreptitiously. What the study revealed, though, is how even very simple social-media messages, if carefully designed, can mold people’s opinions and decisions, including those of a political nature. As the researchers put it, “online political mobilization works.” Although few heeded it at the time, the study provided an early warning of how foreign agents and domestic political operatives would come to use Facebook and other social networks in clandestine efforts to shape people’s views and votes. Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.

To Zuboff, the experiment and its aftermath carry an even broader lesson, and a grim warning. All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.” This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists. “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”

Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots. What the industries of the future will seek to manufacture is the self.

5. The Bargain

The Age of Surveillance Capitalism is a long, sprawling book, but there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on. The benefits can’t be dismissed as illusory, and the public can no longer claim ignorance about what’s sacrificed in exchange for them. Over the last two years, the press has uncovered one scandal after another involving malfeasance by big internet firms, Facebook in particular. We know who we’re dealing with.

This is not to suggest that our lives are best evaluated with spreadsheets. Nor is it to downplay the abuses inherent to a system that places control over knowledge and discourse in the hands of a few companies that have both incentive and means to manipulate what we see and do. It is to point out that a full examination of surveillance capitalism requires as rigorous and honest an accounting of its boons as of its banes.

In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way. Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one — but her case would have been stronger still had she more fully addressed the benefits side of the ledger.

The book has other, more cosmetic flaws. Zuboff is prone to wordiness and hackneyed phrasing, and she at times delivers her criticism in overwrought prose that blunts its effect. A less tendentious, more dispassionate tone would make her argument harder for Silicon Valley insiders and sympathizers to dismiss. The book is also overstuffed. Zuboff feels compelled to make the same point in a dozen different ways when a half dozen would have been more than sufficient. Here, too, stronger editorial discipline would have sharpened the message.

Whatever its imperfections, The Age of Surveillance Capitalism is an original and often brilliant work, and it arrives at a crucial moment, when the public and its elected representatives are at last grappling with the extraordinary power of digital media and the companies that control it. Like another recent masterwork of economic analysis, Thomas Piketty’s 2013 Capital in the Twenty-First Century, the book challenges assumptions, raises uncomfortable questions about the present and future, and stakes out ground for a necessary and overdue debate. Shoshana Zuboff has aimed an unsparing light onto the shadowy new landscape of our lives. The picture is not pretty.

The map and the script

Shoshana Zuboff’s epic critique of Silicon Valley, The Age of Surveillance Capitalism, is out today, and so is my review, “Thieves of Experience: How Google and Facebook Corrupted Capitalism,” in the Los Angeles Review of Books. It begins:

We sometimes forget that, at the turn of the century, Silicon Valley was in a funk, economic and psychic. The great dot-com bubble of the 1990s had imploded, destroying vast amounts of investment capital along with the savings of many Americans. Trophy startups like Pets.com, Webvan, and Excite@Home, avatars of the so-called New Economy, were punch lines. Disillusioned programmers and entrepreneurs were abandoning their Bay Area bedsits and decamping. Venture funding had dried up. As a business proposition, the information superhighway was looking like a cul-de-sac.

Today, less than 20 years on, everything has changed. The top American internet companies are among the most profitable and highly capitalized businesses in history. Not only do they dominate the technology industry but they have much of the world economy in their grip. Their founders and early backers sit atop Rockefeller-sized fortunes. Cities and states court them with billions of dollars in tax breaks and other subsidies. Bright young graduates covet their jobs. Along with their financial clout, the internet giants hold immense social and cultural sway, influencing how all of us think, act, and converse.

Silicon Valley’s Phoenix-like resurrection is a story of ingenuity and initiative. It is also a story of callousness, predation, and deceit. …

Read on.