
A smaller, nastier world

I have an essay in the Boston Globe’s Ideas section that takes a hard look at the popular notion that communication networks make the world a better place.

Here’s a taste:

If our assumption that communications technology brings people together were true, we should today be seeing a planetary outbreak of peace, love, and understanding. Thanks to the Internet and cellular networks, humanity is more connected than ever. Of the world’s 7 billion people, 6 billion have access to a mobile phone. Nearly 2 billion are on Facebook, more than a billion upload and download YouTube videos, and billions more converse through messaging apps like WhatsApp and WeChat. With smartphone in hand, everyone becomes a media hub, transmitting and receiving ceaselessly.

Yet we live in a fractious time, defined not by concord but by conflict. Xenophobia is on the rise. Political and social fissures are widening. From the White House down, public discourse is characterized by vitriol and insult. We probably shouldn’t be surprised.

For years now, psychological and sociological studies have been casting doubt on the idea that communication dissolves differences. The research suggests that the opposite is true: free-flowing information makes personal and cultural differences more salient, turning people against one another instead of bringing them together. “Familiarity breeds contempt” is one of the gloomiest of proverbs. It is also, the evidence says, one of the truest.

Read on.

Uber’s ghost map and the meaning of greyballing

Uber is not only a scofflaw. As Mike Isaac of the New York Times reported last week, the company has also been running an elaborate program to deceive and evade cops and other local officials in cities where its car service has been banned or lacks authorization to operate. The centerpiece of the scheme is a piece of software called Greyball, which uses a variety of data, including credit-card records, to identify what Uber calls “opponents.” When an opponent hails a car using the Uber app, the app presents the opponent with a fake map, filled with “ghost cars” that don’t actually exist. The map overlays a fictional story, intended to mislead, on a representation of actual city streets. Beyond the ethical and legal questions it raises, Greyball sheds important light on the digital representations of reality that we increasingly rely on to live our lives. These representations do more than mediate reality; they manufacture reality.

Traditional cartographers knew that they were creating mere representations of the world, but their goal was to achieve representational accuracy. They strove to provide map users with an objectively true, if necessarily incomplete, rendering of reality. As the semanticist Alfred Korzybski wrote in his 1933 book Science and Sanity, “A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.” There were times when mapmakers were pulled into propaganda campaigns, made to produce distorted maps to trick people for political ends, but those episodes were exceptions to the rule. The cartographic ideal was always to produce “correct” representations of the world that people could rely on for navigational or educational purposes. The mapmaker served the interests of the map user.

The digital maps that we see on our phones are different. They are created primarily for marketing rather than cartographic purposes. The interests they ultimately serve are those of the companies that create them and incorporate them into broader products or services. While a digital map can be useful to the user, its usefulness no longer derives from its accuracy or correctness in representing territory. In a digital map, the traditional map becomes a substrate on which a new, and fictionalized, representation of the world is presented. The digital map that appears on phones and other screens is at least twice removed from reality. What it tells us is that we need to refine and extend Korzybski’s famous distinction. It is no longer enough to say that the map is not the territory. What we have to say now is this: the map is not the map.

Uber’s ghost map provides a particularly stark example of the way a digital representation of the actual world can be manipulated, surreptitiously, to create a digital representation of a fictional world. As Uber itself has admitted, Greyball has been used in many different circumstances in order “to hide the standard city app view for individual riders, enabling Uber to show that same rider a different version.” In addition to deceiving authorities, the software has been used, the company says, for such purposes as “the testing of new features by employees; marketing promotions; fraud prevention; to protect our partners from physical harm; and to deter riders using the app in violation of our terms of service.” That sounds like a pretty much unbounded portfolio of potential uses. Have you been greyballed? It’s impossible to say.
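The mechanism the Times describes — tag certain riders, then serve them a fabricated map — can be made concrete with a toy sketch. This is purely illustrative and in no way Uber’s actual code; every name and value here is hypothetical.

```python
# Illustrative sketch of a "greyball"-style view fork: tagged riders
# get a fictional map populated with ghost cars; everyone else gets
# the standard city view. All identifiers are invented for this example.

GREYBALLED = {"rider_42"}  # riders the system has tagged as "opponents"

REAL_CARS = [{"id": "car_1", "lat": 42.350, "lng": -71.060}]

def ghost_cars(n=3):
    """Fabricate cars that don't exist, to populate the fake map."""
    return [
        {"id": f"ghost_{i}", "lat": 42.350 + i * 0.01, "lng": -71.060 - i * 0.01}
        for i in range(n)
    ]

def map_view(rider_id):
    """Return the standard city view for ordinary riders,
    and a fictionalized ghost map for tagged ones."""
    if rider_id in GREYBALLED:
        return {"rider": rider_id, "cars": ghost_cars()}
    return {"rider": rider_id, "cars": REAL_CARS}
```

The point of the sketch is how little it takes: one conditional separates the “correct” map from the manufactured one, and the rider has no way to tell which branch was executed.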

But even Uber’s “standard city app view” presents a fictionalized picture of the world, at once useful and seductive:

The Uber map is a media production. It presents a little, animated entertainment in which you, the user, play the starring role. You are placed at the very center of things, wherever you happen to be, and you are surrounded by a pantomime of oversized automobiles poised to fulfill your desires, to respond immediately to your beckoning. It’s hard not to feel flattered by the illusion of power that the Uber map grants you. Every time you open the app, you become a miniature superhero on a city street. You send out a bat signal, and the batmobile speeds your way. By comparison, taking a bus or a subway, or just hoofing it, feels almost insulting.

In a similar way, a Google map also sets you in a fictionalized story about a place, whether you use the map for navigation or for searching. You are given a prominent position on the map, usually, again, at its very center, and around you a city personalized to your desires takes shape. Certain business establishments and landmarks are highlighted, while other ones are not. Certain blocks are highlighted as “areas of interest”; others are not. Sometimes the highlights are paid for, as advertising; other times they reflect Google’s assessment of you and your preferences. You’re not allowed to know precisely why your map looks the way it does. The script is written in secret.

It’s not only maps. The news and message feeds presented to you by Facebook, or Apple or Google or Twitter, are also stories about the world, fictional representations manufactured both to appeal to your desires and biases and to provide a compelling context for advertising. Mark Zuckerberg may wring his hands over “fake news,” but fake news is to the usual Facebook feed what the Greyball map is to the usual Uber map: an extreme example of the norm.

When I talk about “you,” I don’t really mean you. The “you” around which the map or the news feed or any other digitized representation of the world coalesces is itself a representation. As John Cheney-Lippold explains in his forthcoming book We Are Data, companies like Facebook and Google create digital versions of their users derived through an algorithmic analysis of the data they collect about their users. The companies rely on these necessarily fictionalized representations for both technical reasons (human beings can’t be computed; to be rendered computable, you have to be turned into a digital representation) and commercial reasons (a digital representation of a person can be bought and sold). The “you” on the Uber map or in the Facebook feed is a fake — a character in a story — but it’s a useful and a flattering fake, so you accept it as an accurate portrayal of yourself: an “I” for an I.
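Cheney-Lippold’s point — that the computed “you” is a reduction built from behavioral data, not the person — can be shown in miniature. The sketch below is a hypothetical toy, not any company’s real profiling pipeline; the event data and category names are invented.

```python
# Illustrative sketch of an "algorithmic you": a click history is
# reduced to a categorical profile -- the computable stand-in for a
# person that can be matched to advertisers. Entirely hypothetical.

from collections import Counter

def algorithmic_you(events):
    """Collapse a stream of (category, action) events into a
    distribution over interest categories."""
    counts = Counter(category for category, _ in events)
    total = len(events)
    return {cat: n / total for cat, n in counts.items()}

events = [
    ("sports", "click"),
    ("sports", "click"),
    ("news", "click"),
    ("ads", "view"),
]
profile = algorithmic_you(events)  # e.g. {"sports": 0.5, "news": 0.25, "ads": 0.25}
```

Everything about the person that doesn’t fit the schema is simply discarded, which is exactly what makes the representation computable — and salable.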

Greyballing is not an aberration of the virtual world. Greyballing is the essence of virtuality.

Images: Uber.

Whose self does the self-flushing toilet flush?

The public restroom, never a pleasant place, has in recent years become a dystopia. It presents us with a preview, in microcosm, of our automated future. Motion detectors and other sensors register our presence, read our intentions, and, on our behalf, turn on the lights, flush the toilets, open the taps, squirt out the liquid soap, and dispense towelettes for drying. There is a weird tension between the primitiveness of the bodily functions being executed in the contemporary restroom and the sophistication of the technology facilitating the execution. The pee and the poop, if I may be indelicate, seem out of place in the very place designed to accommodate them. Nowhere so much as in a public restroom does one wish one were a robot.

Yet, as Ian Bogost reminds us, in an illuminating Atlantic piece, the inconvenient truth about the automated public restroom is that nothing works worth a crap. Whatever it is that has been automated here bears no resemblance to even the most rudimentary of human skills. The automated toilet flushes prematurely, often repeatedly, while we are still seated upon it, and then, once we’ve reassumed an erect posture and want nothing more than to exit the stall, it refuses to flush at all. The automated soap dispenser either doesn’t work or spits soap on our trousers. The automated faucet either doesn’t work or sprays out such a gusher that the water bounces off the sink and soaks our shirt. The automated towel dispenser hands us a strip of ugly brown paper that would be too small to dry the hands of a hamster.

We reassure ourselves, as we leave the restroom damp and shamefaced, that the entire experience, however miserable in raw human terms, has been carefully engineered to maximize efficiency and save precious resources. Our discomfort is simply the price we have to pay for advanced technology that is “green” and “smart.” But, as Bogost also reminds us, this is an illusion. Thanks to what’s called “phantom flushing,” sensor-flush toilets end up using nearly 50 percent more water than do manual-flush toilets, according to one real-world study. The reality is probably equally perverse with sensor-controlled faucets, soap dispensers, and paper-towel dispensers, which demand that the user activate them repeatedly in order to get the required amount in the required place.

Bogost argues that the automated restroom has been designed not to save resources but rather to reduce labor costs: “When a toilet flushes incessantly, or when a faucet shuts off on its own, or when a towel dispenser discharges only six inches of paper when a hand waves under it, it reduces the need for human workers to oversee, clean, and supply the restroom.” I would bet that even here the desired benefit is illusory. Automated restrooms, with their wasteful ways and wayward sprays, seem to me to be at least as filthy as manually operated restrooms, requiring at least as much janitorial labor. And the greater complexity of the fixtures means more breakdowns and more repairs, increasing maintenance labor. In short, the automated restroom fails on pretty much every measure. Yet we accept it as good and necessary because it fits the prevailing paradigm of progress, in which technological advances are viewed as social advances.

In seeing society through its bathrooms, Bogost is working in the tradition of the great Siegfried Giedion, who devoted a hundred-page chapter of Mechanization Takes Command (1948) to the industrialization and democratization of the bathroom.

The bath and its purposes have different meanings for different ages. The manner in which a civilization integrates bathing within its life, as well as the type of bathing it prefers, yields searching insight into the inner nature of the period.

Bogost sums up the broader meaning of the automated restroom this way: “Technology’s role has begun to shift, from serving human users to pushing them out of the way so that the technologized world can service its own ends. And so, with increasing frequency, technology will exist not to serve human goals, but to facilitate its own expansion.” This is the WALL-E effect. As we become more dependent on automation, we become less likely to develop the skills and common sense required to perform even the most basic of tasks in the world, and hence we become even more dependent on automation (and on the companies orchestrating the automation) and less able to judge whether the automation is even any good. In this fashion, the “self” migrates, along with its agency, from the person to the device.

Zuckerberg’s world

The word “community” appears, by my rough count, 98 times in Mark Zuckerberg’s latest message to the masses. In a post-fact world, truth is approached through repetition. The message that is transmitted most often is the fittest message, the message that wins. Verification becomes a matter of pattern recognition. It’s the epistemology of the meme, the sword by which Facebook lives and dies.

Today I want to focus on the most important question of all: are we building the world we all want?

It’s a good question, though I’m not sure there is any world that we all want, and if there is one, I’m not sure Mark Zuckerberg is the guy I’d appoint to define it. And yet, from his virtual pulpit, surrounded by his 86 million followers, the young Facebook CEO hesitates not a bit to speak for everyone, in the first person plural. There is no opt-out to his “we.” It’s the default setting and, in Zuckerberg’s totalizing utopian vision, the setting is hardwired, universal, and nonnegotiable.

Our greatest opportunities are now global — like spreading prosperity and freedom, promoting peace and understanding, lifting people out of poverty, and accelerating science. Our greatest challenges also need global responses — like ending terrorism, fighting climate change, and preventing pandemics. Progress now requires humanity coming together not just as cities or nations, but also as a global community. …

Facebook stands for bringing us closer together and building a global community. When we began, this idea was not controversial.

The reason the idea — that community-building on a planetary scale is practicable, necessary, and altogether good — did not seem controversial in the beginning was that Zuckerberg, like Silicon Valley in general, operated in a technological bubble, outside of politics, outside of history. Now that history has broken through the bubble and upset the algorithms, history must be put back in its place. Technological determinism must again be made synonymous with historical determinism.

In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.

Infrastructure is destiny. (The word “infrastructure” appears 24 times in Zuckerberg’s message.) Society is not a fluctuating arrangement of contending and at times noxious interests brought into a tenuous equilibrium through a difficult, ongoing process of negotiation and struggle. Society is itself a technology, a built thing that, correctly constructed, “works for all of us.” Get the specs right, and the human community will scale as a computer network scales. Global harmony becomes a technological inevitability.

Just as the internet is a network of networks, so society, in Zuckerberg’s view, is a community of communities. “Building a global community that works for everyone,” he writes, “starts with the millions of smaller communities and intimate social structures we turn to for our personal, emotional and spiritual needs.” He points to “churches” and “sports teams” as examples of “local groups” that “share important roles as social infrastructure.” They form the “sub-communities” that are then connected, as roads are connected to form a highway system, into the “global community.” He comes back to the same two examples a little later, when he writes of people “coming together around religion or sports.”

Zuckerberg’s conflation of religion and sports is odd but illuminating. In his view, the tenets of a religion matter no more than the rules of a game; what’s essential about a church and a sports team is that they both form social infrastructure that serves to “bring us together and reinforce our values.” It’s only by separating individual beliefs from community formation, and then pretending those beliefs don’t really matter, that Zuckerberg is able to sustain the fantasy that all sub-communities share a set of values — values that derive from community itself, independent of the members’ motivations in forming a group. These common values play the same role in building a global community that common standards play in building the internet: they enable seamless interconnectivity.

Zuckerberg remains oblivious to the fact that a sub-community, particularly a religious one, may be formed on a foundation of belief that is incompatible with, and in opposition to, the beliefs of the surrounding community. As the Wall Street Journal’s Ian Lovett writes today in an article on a traditionalist Catholic community that has grown around a Benedictine monastery in Oklahoma, “The 100 or so people living here are part of a burgeoning movement among traditional Christians. Feeling besieged by secular society, they are taking refuge in communities like this one, clustered around churches and monasteries, where faith forms the backbone of daily life.” Such communities are very different from sports teams. Their formative beliefs aren’t some sort of standardized Lego infrastructure that enables the expression of universal community values. The beliefs of the individuals in the community are the values of the community, and they are anything but common standards.

The problems with Zuckerberg’s self-serving fantasy about social relations become even more pronounced when we turn to “sub-communities” of creeps and miscreants who share poisonous beliefs — neo-Nazi groups, say, or racist groups or misogynistic groups or groups of murderous ideologues (or even groups of amoral entrepreneurs who seek to make a quick buck by spreading fake news stories through the web). Here, too, the beliefs of the individual members of the community form the values of the community — values that, thankfully, are anything but common standards. “The purpose of any community is to bring people together to do things we couldn’t do on our own,” Zuckerberg writes, without any recognition that those “things” could be bad things. Even though the actions of sociopathic groups, in particular their use of Facebook and other social networks not as a metaphorical infrastructure for global harmony but as a very real infrastructure for recruitment, propaganda, planning, and organization, would seem to be one of the spurs for Zuckerberg’s message, he is blind to the way they contradict that message. Nastiness, envy, chauvinism, mistrust, distrust, anger, vanity, greed, enmity, hatred: for Zuckerberg, these aren’t features of the human condition; they are bugs in the network.

Tension and conflict, then, become technical problems, amenable to technical solutions. And so, rather than questioning Facebook’s assumptions about society — might global community-building, pursued through media structures, end up encouraging polarization and tribalism? — and the role the company plays in society, Zuckerberg ends up back where he always ends up: with a batch of new hacks. There will be new algorithmic filters, new layers of artificial intelligence, new commenting and rating systems, new techniques for both encryption and surveillance. The bugs — bad actors and bad code — will be engineered out of the system. Zuckerberg’s program, as Ars Technica’s Annalee Newitz points out, is filled with contradictions, which he either won’t acknowledge or, thanks to his techno-utopian tunnel vision, can’t see. He makes a big deal, for instance, of a new initiative through which Facebook will provide management tools for organizing what he calls “very meaningful” communities — groups characterized by passionate members under the direction of a strong leader. The example Zuckerberg offers — a group dedicated to helping refugees find homes — sounds great, but it’s not hard to see how such tools, deployed in the context of Facebook’s emotionalist echo chamber, could be used to mobilize some very nasty groups, of just the sort that Facebook is hoping to purge from its network. “The best communities in the world have leaders,” Zuckerberg said in an interview promoting his so-called manifesto. So do the worst, Mark.

Toward the end of his message, Zuckerberg writes, “In recent campaigns around the world — from India and Indonesia across Europe to the United States — we’ve seen the candidate with the largest and most engaged following on Facebook usually wins.” One might think that this observation would inspire some soul-searching on Zuckerberg’s part. But he offers it as a boast. Facebook is never the problem; it is always the solution.

No one wants to break a butterfly on a wheel, even if the butterfly is a billionaire. And only a fool would look to an official communiqué from the CEO of a big company for honest, subtle thinking about complicated social issues. And yet, in Zuckerberg’s long message, there is one moment of clarity, when he states the plain truth: “Social media is a short-form medium where resonant messages get amplified many times. This rewards simplicity and discourages nuance.” The medium, he continues, often “oversimplifies important topics and pushes us toward extremes.” This insight might have led Zuckerberg to a forthright accounting of the limitations of Facebook as a communications system. He might have pointed out that while Facebook is well designed for some things — banter among friends, the sharing of photos and videos, the coordination of group actions (for better or worse), the circulation of information in emergencies, advertising — it is ill designed for other things. It’s lousy as a news medium. It’s terrible as a forum for political discourse. It’s not the place to go to get a deep, well-rounded view of society. As a community, it’s pretty sketchy. And, he might have concluded, if you expect Facebook to solve the problems of the world, you’ve taken me far too seriously.

Image: “Lego City: Collapse” by Eirik Newth.

Anxiety and surveillance: pillars of the new economy

The terms addiction and compulsion tend to be used loosely and often interchangeably. But in an article in the Wall Street Journal, science writer Sharon Begley draws a simple but illuminating distinction between the two psychological disorders: addiction is born of pleasure, while compulsion is anxiety’s child.

Behavioral addictions begin in pleasure. But compulsions, according to a growing body of scientific evidence, are born in anxiety and remain strangers to joy. They are repetitive behaviors that we engage in repeatedly to alleviate the angst brought on by the possibility of harmful consequences.

The drunk seeks to regain the sense of well-being that the last shot of bourbon provided. The compulsive hoarder seeks to alleviate the dread that something valuable has been lost. Compulsion is both “balm and curse,” writes Begley. A compulsive act briefly mitigates feelings of anxiety, but the very experience of relief reinforces the anxiety. The anxiety ends up feeling more real, more pressing — and even more in need of relief. Anxiety and compulsion become a self-reinforcing cycle.

Compulsions can be so severe as to be debilitating. But they also, and much more routinely, take milder forms. They alter our thoughts and behavior, sometimes in deep ways, without making us dysfunctional in society. In fact, by tempering our anxiety, they may serve as a kind of therapy that protects our social functionality. Since ours is, as Auden suggested, an age of anxiety, it’s no surprise that it is also an age of compulsion.

The near-universal compulsion of the present day is, as we all know and as behavioral studies prove, the incessant checking of the smartphone. As Begley notes, with a little poetic hyperbole, we all “feel compelled to check our phones before we get out of bed in the morning and constantly throughout the day, because FOMO — the fear of missing out — fills us with so much anxiety that it feels like fire ants swarming every neuron in our brain.” With its perpetually updating, tightly personalized messaging, networking, searching, and shopping apps, the smartphone creates the anxiety that it salves. It’s a machine almost perfectly designed to turn its owner into a compulsive.

Needless to say, a portable, pocket-sized product that spurs and sustains compulsive use can be a very lucrative product for any company able to tap into its hypnotic power. The smartphone is the perfect consumer good for the age of anxiety. It’s hardly an exaggeration to say that, from a commercial standpoint, the smartphone is to compulsion what the cigarette pack was to addiction.

In a recent post, I highlighted the business scholar Shoshana Zuboff’s idea that, with the arrival of the internet, capitalism has begun to take on a new form. Traditional product-based competition (sell an attractive good at a fair price) is being displaced by data-based competition (collect the richest store of information about the identity and behavior of individual consumers). In this new industrial system, which Zuboff calls surveillance capitalism, “profits derive from the unilateral surveillance and modification of human behavior.”

While surveillance capitalism taps the invasive powers of the Internet as the source of capital formation and wealth creation, it is now, as I have suggested, poised to transform commercial practice across the real world too. An analogy is the rapid spread of mass production and administration throughout the industrialized world in the early twentieth century, but with one major caveat. Mass production was interdependent with its populations who were its consumers and employees. In contrast, surveillance capitalism preys on dependent populations who are neither its consumers nor its employees and are largely ignorant of its procedures.

The concept of surveillance capitalism helps explain the dynamics of a growing part of the economy. But it doesn’t explain everything. It focuses on the supply side (what motivates companies) while largely ignoring the demand side (what motivates consumers). I’d suggest that the secret to understanding the demand side may lie in the anxiety-compulsion cycle. What motivates consumers is anxiety — not just the fear of missing out, but also the dread of becoming invisible or losing status, the worry that others might know something that you don’t know, the nervousness that a message might have been misconstrued, and so on — and this anxiety spurs the compulsive behavior that generates ever more personal data for surveillance capitalists to harvest. We divulge our secrets because we can’t help ourselves.

This powerful, compulsion-fueled business model may have emerged by accident — I’m pretty sure that Larry Page and Sergey Brin didn’t found Google with the intent of spreading social anxiety and then capitalizing on it through surveillance systems — but it is now sustained by design. Facebook doesn’t hire cognitive psychologists and maintain a behavioral research lab for nothing. Rewards now flow to the competitor that is best able to maximize consumer anxiety in a way that spurs more compulsive behavior that in turn generates more valuable consumer data that can, to complete the cycle, be deployed to further manipulate consumer psychology.

That’s a dark way of putting it, to be sure — it ignores the real benefits that consumers gain from many online services — but it does seem to explain the governing logic of what we once happily termed “the new economy.”

Photo: University of Alaska Anchorage.

You’ve got mail

From an essay on Radiohead by Mark Greif, in his book Against Everything:

A description of the condition of the late 1990s could go like this: At the turn of the millennium, each individual sat at a meeting point of shouted orders and appeals, the TV, the radio, the phone and cell, the billboard, the airport screen, the inbox, the paper junk mail. Each person discovered that he lived at one knot of a network, existing without his consent, which connected him to any number of recorded voices, written messages, means of broadcast, channels of entertainment, and avenues of choice. It was a culture of broadcast: an indiscriminate seeding, which needed to reach only a very few, covering vast tracts of our consciousness. To make a profit, only one message in ten thousand needed to take root; therefore messages were strewn everywhere. To live in this network felt like something, but surprisingly little in the culture of broadcast itself tried to capture what it felt like. Instead, it kept bringing pictures of an unencumbered, luxurious life, songs of ease and freedom, and technological marvels, which did not feel like the life we lived.

And if you noticed you were not represented? It felt as if one of the few unanimous aspects of this culture was that it forbade you to complain, since if you complained, you were a trivial human, a small person, who misunderstood the generosity and benignity of the message system. It existed to help you. Now, if you accepted the constant promiscuous broadcasts as normalcy, there were messages in them to inflate and pet and flatter you. If you simply said that this chatter was altering your life, killing your privacy or ending the ability to think in silence, there were alternative messages that whispered of humiliation, craziness, vanishing. What sort of crank needs silence? What could be more harmless than a few words of advice? The messages did not come from somewhere; they were not central, organized, intelligent, intentional. It was up to you to change the channel, not answer the phone, stop your ears, shut your eyes, dig a hole for yourself and get in it. Really, it was your responsibility. The metaphors in which people tried to complain about these developments, by ordinary law and custom, were pollution (as in “noise pollution”) and theft (as in “stealing our time”). But we all knew the intrusions felt like violence. Physical violence, with no way to strike back.

You’ve got mail! That old AOL audio announcement always felt perfectly anodyne — so anodyne that it almost seemed fated to become the hook for a romcom starring Tom Hanks and Meg Ryan. Yet at the same time, and this has become clearer in retrospect, it was a threat. The computerized voice, chipper, friendly, always feigning surprise and excitement at the news it delivered, carried a demanding and judgmental undertone. It was a parental voice. You’ve got mail — and you need to go to your inbox and attend to this new mail quickly. Right now, in fact. Only a churlish, sad, unsociable creep would let a new message sit unread in an inbox. You don’t want to be a churlish, sad, unsociable creep, do you?

A threat, and a prophecy. Even as AOL faded away, you’ve got mail burrowed deeper into our consciousness. It became more than a voice in our heads. It became the voice in our heads. It was never a voice of our own — it comes out of the mouth of a stranger, a stranger with an agenda — and yet it now runs through our minds as if on a continuous tape loop. Its implicit command no longer feels like a command. It feels, almost, like a natural phenomenon. We do its bidding intuitively. To be irritated by the voice, even in passing, is, as Greif suggests, to admit to a smallness of self. And so we bury ever deeper that sense of being violated. It’s not the messages that matter anymore. Messages come and go. It’s the messaging that matters. Messaging has become our state of being, the atmosphere in our heads.

And, yes, you’ve got mail.

From Fordism to Googlism

From “The Watchers,” an article by Jonathan Shaw in the new issue of Harvard Magazine:

[Shoshana] Zuboff says that corporate use of personal data has set society on a path to a new form of capitalism that departs from earlier norms of market democracy. She draws an analogy from the perfection of the assembly line: Ford engineers’ discovery a century ago, after years of trial and error, that they had created “a logic of high-volume, low-unit cost, which really had never existed before with all the pieces aligned.” Today, many corporations follow a similar trajectory by packaging personal data and behavioral information and selling it to advertisers: what she calls “surveillance capitalism.”

“Google was ground zero,” Zuboff begins. At first, information was used to benefit end users, to improve searches, just as Apple and Amazon use their customers’ data largely to customize those individuals’ online experiences. Google’s founders once said they weren’t interested in advertising. But Google “didn’t have a product to sell,” she explains, and as the 2001 bubble fell into crisis, the company was under pressure to transform investment into earnings. “They didn’t start by saying, ‘Well, we can make a lot of money assaulting privacy,’” she continues. Instead, “trial and error and experimentation and adapting their capabilities in new directions” led them to sell ads based on personal information about users. Like the tinkerers at Ford, Google engineers discovered “a way of using their capabilities in the context of search to do something utterly different from anything they had imagined when they started out.” Instead of using the personal data to benefit the sources of that information, they commodified it, using what they knew about people to match them with paying advertisers. As the advertising money flowed into Google, it became a “powerful feedback loop of almost instantaneous success in these new markets.”

“Those feedback loops become drivers themselves,” Zuboff explains. “This is how the logic of accumulation develops … and ultimately flourishes and becomes institutionalized. That it has costs, and that the costs fall on society, on individuals, on the values and principles of the liberal order for which human beings have struggled and sacrificed much over millennia—that,” she says pointedly, “is off the balance sheet.”

Privacy values in this context become externalities, like pollution or climate change, “for which surveillance capitalists are not accountable.” In fact, Zuboff believes, “Principles of individual self-determination are impediments to this economic juggernaut; they have to be vanquished. They are friction.” The resulting battles will be political. They will be fought in legislatures and in the courts, she says. Meanwhile, surveillance capitalists have learned to use all necessary means to defend their claims, she says: “through rhetoric, persuasion, threat, seduction, deceit, fraud, and outright theft. They will fight in whatever way they must for this economic machine to keep growing. … This is an economic logic that must delete privacy in order to be successful.”