
From public intellectual to public influencer

The corpse of the public intellectual has been much chewed upon. But only now is its full historical context coming into view. What seemed a death, we’re beginning to see, was but the larval stage of a metamorphosis. The public intellectual has been reborn as the public influencer.

The parallels are clear. Both the public intellectual and the public influencer play a quasi-independent role separate from but still dependent on a traditional, culturally powerful institution. Both, in other words, remake a private, institutional role as a public, personal one. In the case of the public intellectual, the institution was the academy and the role was thinking. In the case of the public influencer, the institution is the corporation and the role is marketing. The shift makes sense. Marketing, after all, has displaced thinking as our primary culture-shaping activity, the source of what we perceive ourselves to be. The public square having moved from the metaphorical marketplace of ideas to the literal marketplace of goods, it’s only natural that we should look to a new kind of guru to guide us.

Both the public intellectual and the public influencer gain their cultural cachet from their mastery of the dominant media of the day. For the public intellectual, it was the printed page. For the public influencer, it’s the internet, especially social media. The tool of the public intellectual was the pen; the product, the word. The tool of the public influencer is the smartphone camera; the product, the image. Instagram is the new Partisan Review. But while the medium has changed, the way the cultural maestro exerts influence remains the same. It’s by understanding and wielding the power of media to gain attention and shape perception.

Both the public intellectual and the public influencer play an instrumental role in shaping cultural ideals and tying them to the individual’s sense of self. When the public intellectual was ascendant, cultural ideals revolved around the public good. Today, they revolve around the consumer good. The idea that the self emerges from the construction of a set of values and beliefs has faded. What the public influencer understands more sharply than most is that the path of self-definition now winds through the aisles of a cultural supermarket. We shop for our identity as we shop for our toothpaste, choosing from a wide selection of readymade products. The influencer displays the wares and links us to the purchase, always with the understanding that returns and exchanges will be easy and free.

The remnants of the public-intellectual class resent the rise of the influencer. Some of that resentment stems from the has-been’s natural envy of the is-now. But there’s a material angle to it as well. The one big difference between the public influencer and the public intellectual lies in compensation. Public intellectuals were forced to subsist on citations, the thinnest of gruel. Influencers get fame. They get cash. They get merch — stuff to wear, stuff to eat, stuff to sit on. And, the final insult, they receive in abundance what public intellectuals most craved but could never have: our hearts.

On autopilot: the dangers of overautomation

The grounding of Boeing’s popular new 737 Max 8 planes, after two recent crashes, has placed a new focus on flight automation. Here’s an excerpt from my 2014 book on automation and its human consequences, The Glass Cage, that seems relevant to the discussion.

The lives of aviation’s pioneers were exciting but short. Lawrence Sperry died in 1923 when his plane crashed into the English Channel. Wiley Post died in 1935 when his plane went down in Alaska. Antoine de Saint-Exupéry died in 1944 when his plane disappeared over the Mediterranean. Premature death was a routine occupational hazard for pilots during aviation’s early years; romance and adventure carried a high price. Passengers died with alarming frequency, too. As the airline industry took shape in the 1920s, the publisher of a U.S. aviation journal implored the government to improve flight safety, noting that “a great many fatal accidents are daily occurring to people carried in airplanes by inexperienced pilots.”

Air travel’s lethal days are, mercifully, behind us. Flying is safe now, and pretty much everyone involved in the aviation business believes that advances in automation are one of the reasons why. Together with improvements in aircraft design, airline safety routines, crew training, and air traffic control, the mechanization and computerization of flight have contributed to the sharp and steady decline in accidents and deaths over the decades. In the United States and other Western countries, fatal airliner crashes have become exceedingly rare. Of the more than seven billion people who boarded U.S. flights in the ten years from 2002 through 2011, only 153 ended up dying in a wreck, a rate of two deaths for every 100 million passengers. In the ten years from 1962 through 1971, by contrast, 1.3 billion people took flights, and 1,696 of them died, for a rate of 133 deaths per 100 million.
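
For readers who want to check the arithmetic, here is a minimal sketch that recomputes both rates from the passenger and fatality figures cited above, using the rounded totals given in the text:

def deaths_per_100_million(deaths, passengers):
    """Fatalities per 100 million passengers carried."""
    return deaths / passengers * 100_000_000

# 2002 through 2011: 153 deaths among more than 7 billion passengers
print(round(deaths_per_100_million(153, 7_000_000_000), 1))    # prints 2.2

# 1962 through 1971: 1,696 deaths among roughly 1.3 billion passengers
print(round(deaths_per_100_million(1_696, 1_300_000_000), 1))  # prints 130.5;
# the 133 cited above reflects less heavily rounded passenger totals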

But this sunny story carries a dark footnote. The overall decline in plane crashes masks the recent arrival of  “a spectacularly new type of accident,” says Raja Parasuraman, a psychology professor at George Mason University and one of the world’s leading authorities on automation. When onboard computer systems fail to work as intended or other unexpected problems arise during a flight, pilots are forced to take manual control of the plane. Thrust abruptly into what has become a rare role, they too often make mistakes. The consequences, as the Continental Connection and Air France disasters of 2009 show, can be catastrophic. Over the last 30 years, scores of psychologists, engineers, and other ergonomics, or “human factors,” researchers have studied what’s gained and lost when pilots share the work of flying with software. What they’ve learned is that a heavy reliance on computer automation can erode pilots’ expertise, dull their reflexes, and diminish their attentiveness, leading to what Jan Noyes, a human factors expert at Britain’s University of Bristol, calls “a deskilling of the crew.”

Concerns about the unintended side effects of flight automation aren’t new. They date back at least to the early days of fly-by-wire controls. A 1989 report from NASA’s Ames Research Center noted that, as computers had begun to multiply on airplanes during the preceding decade, industry and governmental researchers “developed a growing discomfort that the cockpit may be becoming too automated, and that the steady replacement of human functioning by devices could be a mixed blessing.” Despite a general enthusiasm for computerized flight, many in the airline industry worried that “pilots were becoming over-dependent on automation, that manual flying skills may be deteriorating, and that situational awareness might be suffering.”

Many studies since then have linked particular accidents or near misses to breakdowns of automated systems or to “automation-induced errors” on the part of flight crews. In 2010, the Federal Aviation Administration released some preliminary results of a major study of airline flights over the preceding ten years, which showed that pilot errors had been involved in more than 60 percent of crashes. The research further indicated, according to a report from FAA scientist Kathy Abbott, that automation has made such errors more likely. Pilots can be distracted by their interactions with onboard computers, Abbott said, and they can “abdicate too much responsibility to the automated systems.”

In the worst cases, automation can place added and unexpected demands on pilots during moments of crisis—when, for instance, the technology fails. The pilots may have to interpret computerized alarms, input data, and scan information displays even as they’re struggling to take manual control of the plane and orient themselves to their circumstances. The tasks and attendant distractions increase the odds that the aviators will make mistakes. Researchers refer to this as the “automation paradox.” As Mark Scerbo, a psychologist and human-factors expert at Virginia’s Old Dominion University, has explained, “The irony behind automation arises from a growing body of research demonstrating that automated systems often increase workload and create unsafe working conditions.”

The anecdotal and theoretical evidence collected through accident reports, surveys, and studies received empirical backing from a rigorous experiment conducted by Matthew Ebbatson, a young human factors researcher at Cranfield University, a top U.K. engineering school. Frustrated by the lack of hard, objective data on what he termed “the loss of manual flying skills in pilots of highly automated airliners,” Ebbatson set out to fill the gap. He recruited 66 veteran pilots from a British airline and had each of them get into a flight simulator and perform a challenging maneuver—bringing a Boeing 737 with a blown engine in for a landing in bad weather. The simulator disabled the plane’s automated systems, forcing the pilots to fly by hand. Some of the pilots did exceptionally well in the test, Ebbatson reported, but many of them performed poorly, barely exceeding “the limits of acceptability.”

Ebbatson then compared detailed measures of each pilot’s performance in the simulator—the pressure they exerted on the yoke, the stability of their airspeed, the degree of variation in their course—with their historical flight records. He found a direct correlation between a pilot’s aptitude at the controls and the amount of time the pilot had spent flying by hand, without the aid of automation. The correlation was particularly strong with the amount of manual flying done during the preceding two months. The analysis indicated that “manual flying skills decay quite rapidly towards the fringes of ‘tolerable’ performance without relatively frequent practice.” Particularly “vulnerable to decay,” Ebbatson noted, was a pilot’s ability to maintain “airspeed control”—a skill that’s crucial to recognizing, avoiding, and recovering from stalls and other dangerous situations.
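
As an illustration of the kind of analysis being described, correlating recent hand-flying time with a simulator performance measure looks roughly like the sketch below. The numbers are invented for the example; they are not Ebbatson’s data, and the field names are hypothetical.

from statistics import correlation  # Python 3.10+

# Hypothetical records: (manual flying hours in the prior two months,
# airspeed deviation in the simulator, in knots -- lower is better)
pilots = [
    (2.0, 9.5), (3.0, 8.2), (5.5, 6.8), (8.0, 5.1), (11.0, 4.0), (14.0, 3.2),
]

hours = [h for h, _ in pilots]
deviation = [d for _, d in pilots]

# A coefficient near -1 would indicate that more recent hand-flying goes
# with tighter airspeed control -- the relationship Ebbatson reported.
print(round(correlation(hours, deviation), 2))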

It’s no mystery why automation takes a toll on pilot performance. Like many challenging jobs, flying a plane involves a combination of psychomotor skills and cognitive skills—thoughtful action and active thinking, in simple terms. A pilot needs to manipulate tools and instruments with precision while swiftly and accurately making calculations, forecasts, and assessments in his head. And while he goes through these intricate mental and physical maneuvers, he needs to remain vigilant, alert to what’s going on around him and adept at distinguishing important signals from unimportant ones. He can’t allow himself either to lose focus or to fall victim to tunnel vision. Mastery of such a multifaceted set of skills comes only with rigorous practice. A beginning pilot tends to be clumsy at the controls, pushing and pulling the yoke with more force than is necessary. He often has to pause to remember what he should do next, to walk himself methodically through the steps of a process. He has trouble shifting seamlessly between manual and cognitive tasks. When a stressful situation arises, he can easily become overwhelmed or distracted and end up overlooking a critical change in his circumstances.

In time, after much rehearsal, the novice gains confidence. He becomes less halting in his work and much more precise in his actions. There’s little wasted effort. As his experience continues to deepen, his brain develops so-called mental models—dedicated assemblies of neurons—that allow him to recognize patterns in his surroundings. The models enable him to interpret and react to stimuli as if by instinct, without getting bogged down in conscious analysis. Eventually, thought and action become seamless. Flying becomes second nature. Years before researchers began to plumb the workings of pilots’ brains, Wiley Post described the experience of expert flight in plain, precise terms. He flew, he said in 1935, “without mental effort, letting my actions be wholly controlled by my subconscious mind.” He wasn’t born with that ability. He developed it through lots of hard work.

When computers enter the picture, the nature and the rigor of the work changes, as does the learning the work engenders. As software assumes moment-by-moment control of the craft, the pilot is relieved of much manual labor. This reallocation of responsibility can provide an important benefit. It can reduce the pilot’s workload and allow him to concentrate on the cognitive aspects of flight. But there’s a cost. Exercised much less frequently, the psychomotor skills get rusty, which can hamper the pilot on those rare but critical occasions when he’s required to take back the controls. There’s growing evidence that recent expansions in the scope of automation also put cognitive skills at risk. When more advanced computers begin to take over planning and analysis functions, such as setting and adjusting a flight plan, the pilot becomes less engaged not only physically but mentally. Because the precision and speed of pattern recognition appear to depend on regular practice, the pilot’s mind may become less agile in interpreting and reacting to fast-changing situations. He may suffer what Ebbatson calls “skill fade” in his mental as well as his motor abilities.

Pilots themselves are not blind to automation’s toll. They’ve always been wary about ceding responsibility to machinery. Airmen in World War I, justifiably proud of their skill in maneuvering their planes during dogfights, wanted nothing to do with the fancy Sperry autopilots that had recently been introduced. In 1959, the original Mercury astronauts famously rebelled against NASA’s plan to remove manual flight controls from spacecraft. But aviators’ concerns are more acute now. Even as they praise the enormous gains being made in flight technology, and acknowledge the safety and efficiency benefits, they worry about the erosion of their talents. As part of his research, Ebbatson surveyed commercial pilots, asking them whether “they felt their manual flying ability had been influenced by the experience of operating a highly automated aircraft.” Fully 77 percent reported that “their skills had deteriorated”; just 7 percent felt their skills had improved.

The worries seem particularly pronounced among more experienced pilots, especially those who began their careers before computers became entwined with so many aspects of aviation. Rory Kay, a long-time United Airlines captain who until recently served as the top safety official with the Air Line Pilots Association, fears the aviation industry is suffering from “automation addiction.” In a 2011 interview, he put the problem in stark terms: “We’re forgetting how to fly.”

Thieves of experience: On the rise of surveillance capitalism

This review of Shoshana Zuboff’s The Age of Surveillance Capitalism appeared originally in the Los Angeles Review of Books.

1. The Resurrection

We sometimes forget that, at the turn of the century, Silicon Valley was in a funk, economic and psychic. The great dot-com bubble of the 1990s had imploded, destroying vast amounts of investment capital along with the savings of many Americans. Trophy startups like Pets.com, Webvan, and Excite@Home, avatars of the so-called New Economy, were punch lines. Disillusioned programmers and entrepreneurs were abandoning their Bay Area bedsits and decamping. Venture funding had dried up. As a business proposition, the information superhighway was looking like a cul-de-sac.

Today, less than 20 years on, everything has changed. The top American internet companies are among the most profitable and highly capitalized businesses in history. Not only do they dominate the technology industry but they have much of the world economy in their grip. Their founders and early backers sit atop Rockefeller-sized fortunes. Cities and states court them with billions of dollars in tax breaks and other subsidies. Bright young graduates covet their jobs. Along with their financial clout, the internet giants hold immense social and cultural sway, influencing how all of us think, act, and converse.

Silicon Valley’s Phoenix-like resurrection is a story of ingenuity and initiative. It is also a story of callousness, predation, and deceit. Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.

Zuboff’s fierce indictment of the big internet firms goes beyond the usual condemnations of privacy violations and monopolistic practices. To her, such criticisms are sideshows, distractions that blind us to a graver danger: By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.


Capitalism has always been a fraught system. Capable of both tempering and magnifying human flaws, particularly the lust for power, it can expand human possibility or constrain it, liberate people or oppress them. (The same can be said of technology.) Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.

By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers. But, as Zuboff makes clear, this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders. Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience.

To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms. In contrast to the businesses of the industrial era, whose interests were by necessity entangled with those of the public, internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.

2. The Map

It all began innocently. In the 1990s, before they founded Google, Larry Page and Sergey Brin were computer-science students who shared a fascination with the arcane field of network theory and its application to the internet. They saw that by scanning web pages and tracing the links between them, they would be able to create a map of the net with both theoretical and practical value. The map would allow them to measure the importance of every page, based on the number of other pages that linked to it, and that data would, in turn, provide the foundation for a powerful search engine. Because the map could also be used to record the routes and choices of people as they traveled through the network, it would provide a finely detailed account of human behavior.
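
A minimal sketch of the idea, using a made-up four-page web, is to score each page by the links pointing at it. (Google’s actual PageRank algorithm goes a step further, weighting each inbound link by the importance of the page it comes from.)

from collections import defaultdict

# Hypothetical link graph: each page and the pages it links to
links = {
    "a.example": ["b.example", "c.example"],
    "b.example": ["c.example"],
    "c.example": ["a.example"],
    "d.example": ["c.example"],
}

inbound = defaultdict(int)
for source, targets in links.items():
    for target in targets:
        inbound[target] += 1

# c.example, with three inbound links, ranks highest
for page, score in sorted(inbound.items(), key=lambda kv: -kv[1]):
    print(page, score)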

In Google’s early days, Page and Brin were wary of exploiting the data they collected for monetary gain, fearing it would corrupt their project. They limited themselves to using the information to improve search results, for the benefit of users. That changed after the dot-com bust. Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it. Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
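
The financial logic is easy to sketch. In a pay-per-click auction, expected revenue per impression is the advertiser’s bid multiplied by the predicted probability of a click, so better predictions translate directly into income. The numbers and ranking rule below are a generic illustration, not Google’s actual auction mechanics.

ads = [
    # (advertiser, bid per click in dollars, predicted click-through rate)
    ("shoe store", 2.00, 0.010),
    ("hotel chain", 5.00, 0.002),
    ("insurer", 8.00, 0.004),
]

def expected_revenue_per_impression(ad):
    _, bid, predicted_ctr = ad
    return bid * predicted_ctr

# Rank ads by expected revenue rather than by raw bid
for name, bid, ctr in sorted(ads, key=expected_revenue_per_impression, reverse=True):
    print(f"{name}: ${bid * ctr:.4f} expected per impression")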

Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance. Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists, brand preferences, and other material desires. The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.

Once it embraced surveillance as the core of its business, Google changed. Its innocence curdled, and its idealism became a means of obfuscation.

Even as its army of PR agents and lobbyists continued to promote a cuddly Nerds-in-Toyland image for the firm, the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements. Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged by investors or directors. As one Google executive quoted by Zuboff put it, “Larry [Page] opposed any path that would reveal our technological secrets or stir the privacy pot and endanger our ability to gather data.”

As networked computers came to mediate more and more of people’s everyday lives, the map of the online world created by Page and Brin became far more lucrative than they could have anticipated. Zuboff reminds us that, throughout history, the charting of a new territory has always granted the mapmaker an imperial power. Quoting the historian John B. Harley, she writes that maps “are essential for the effective ‘pacification, civilization, and exploitation’ of territories imagined or claimed but not yet seized in practice. Places and people must be known in order to be controlled.” An early map of the United States bore the motto “Order upon the Land.” Should Google ever need a new slogan to replace its original, now-discarded “Don’t be evil,” it would be hard-pressed to find a better one than that.

3. The Heist

Zuboff opens her book with a look back at a prescient project on the future of home automation, conducted in 2000 by a group of Georgia Tech computer scientists. Anticipating the arrival of “smart homes,” the scholars described how a mesh of environmental and wearable sensors, linked wirelessly to computers, would allow all sorts of domestic routines, from the dimming of bedroom lights to the dispensing of medications to the entertaining of children, to be programmed to suit a house’s occupants.

Essential to the effort would be the processing of intimate data on people’s habits, predilections, and health. Taking it for granted that such information should remain private, the researchers envisaged a leak-proof “closed loop” system that would keep the data within the home, under the purview and control of the homeowner. The project, Zuboff explains, reveals the assumptions about “datafication” that prevailed at the time: “(1) that it must be the individual alone who decides what experience is rendered as data, (2) that the purpose of the data is to enrich the individual’s life, and (3) that the individual is the sole arbiter of how the data are put to use.”

What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information. It turned the details of the lives of millions and then billions of people into its own property. The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.


Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it. The public’s naivete and apathy were only part of the story, however. Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.

Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts. Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.

The bullying style of TOS agreements also characterizes the practice, common to Google and other technology companies, of threatening users with a loss of “functionality” should they try to opt out of data sharing protocols or otherwise attempt to escape surveillance. Anyone who tries to remove a pre-installed Google app from an Android phone, for instance, will likely be confronted by a vague but menacing warning: “If you disable this app, other apps may no longer function as intended.” This is a coy, high-tech form of blackmail: “Give us your data, or the phone dies.”

In pulling off its data grab, Google also benefited from the terrorist attacks of September 11, 2001. As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations. The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,” Zuboff writes.

Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency and the Central Intelligence Agency. But they also benefited indirectly. Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public. One of the unintended consequences of this uniquely distressing moment in American history, Zuboff observes, was that “the fledgling practices of surveillance capitalism were allowed to root and grow with little regulatory or legislative challenge.” Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.

What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not. “Privacy involves the choice of the individual to disclose or to reveal what he believes, what he thinks, what he possesses,” explained Supreme Court Justice William O. Douglas in a 1967 opinion. “Those who wrote the Bill of Rights believed that every individual needs both to communicate with others and to keep his affairs to himself. That dual aspect of privacy means that the individual should have the freedom to select for himself the time and circumstances when he will share his secrets with others and decide the extent of that sharing.”

Google and other internet firms usurp this essential freedom. “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.

4. The Script

Fearing Google’s expansion and coveting its profits, other internet, media, and communications companies rushed into the prediction market, and competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives. There are the voice assistants like Alexa and Cortana, the smart speakers like Amazon Echo and Google Home, the wearable computers like Fitbit and Apple Watch. There are the navigation, banking, and health apps installed on smartphones and the new wave of automotive media and telematics systems like CarPlay, Android Auto, and Progressive’s Snapshot. And there are the myriad sensors and transceivers of smart homes, smart cities, and the so-called internet of things. Big Brother would be impressed.

But spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.

Google realized early on that the internet allowed market research to be conducted on a massive scale and at virtually no cost. Every click could become part of an experiment. The company used its research findings to fine-tune its sites and services. It meticulously designed every element of the online experience, from the color of links to the placement of ads, to provoke the desired responses from users. But it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react. The company rolled out its now ubiquitous “Like” button, for example, after early experiments showed it to be a perfect operant-conditioning device, reliably pushing users to spend more time on the site, and share more information.
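
The mechanics of such experimentation are simple to sketch: assign users at random to an interface variant, log an engagement metric, and keep whichever variant moves the metric. The code below is a generic illustration with simulated numbers, not Facebook’s or Google’s actual testing infrastructure.

import random

def assign_variant(user_id: int) -> str:
    """Deterministic split so each user always sees the same variant."""
    return "with_like_button" if user_id % 2 == 0 else "control"

# Simulated engagement logs: minutes on site per session, by variant
minutes_logged = {"with_like_button": [], "control": []}
for user_id in range(10_000):
    variant = assign_variant(user_id)
    baseline = 12.0 if variant == "with_like_button" else 11.0  # assumed effect
    minutes_logged[variant].append(random.gauss(baseline, 3.0))

for variant, minutes in minutes_logged.items():
    print(variant, round(sum(minutes) / len(minutes), 2))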


Zuboff describes a revealing and in retrospect ominous Facebook study that was conducted during the 2010 U.S. congressional election and published in 2012 in Nature under the title “A 61-Million-Person Experiment in Social Influence and Political Mobilization.” The researchers, a group of data scientists from Facebook and the University of California at San Diego, manipulated voting-related messages displayed in Facebook users’ news feeds on election day (without the users’ knowledge). One set of users received a message encouraging them to vote, a link to information on poll locations, and an “I Voted” button. A second set saw the same information along with photos of friends who had clicked the button.

The researchers found that seeing the pictures of friends increased the likelihood that people would seek information on polling places and end up clicking the “I Voted” button themselves. “The results show,” they reported, “that [Facebook] messages directly influenced political self-expression, information seeking and real-world voting behaviour of millions of people.” Through a subsequent examination of actual voter records, the researchers estimated that, as a result of the study and its “social contagion” effect, at least 340,000 additional votes were cast in the election.

Nudging people to vote may seem praiseworthy, even if done surreptitiously. What the study revealed, though, is how even very simple social-media messages, if carefully designed, can mold people’s opinions and decisions, including those of a political nature. As the researchers put it, “online political mobilization works.” Although few heeded it at the time, the study provided an early warning of how foreign agents and domestic political operatives would come to use Facebook and other social networks in clandestine efforts to shape people’s views and votes. Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.

To Zuboff, the experiment and its aftermath carry an even broader lesson, and a grim warning. All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.” This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists. “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”

Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots. What the industries of the future will seek to manufacture is the self.

5. The Bargain

The Age of Surveillance Capitalism is a long, sprawling book, but there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on. The benefits can’t be dismissed as illusory, and the public can no longer claim ignorance about what’s sacrificed in exchange for them. Over the last two years, the press has uncovered one scandal after another involving malfeasance by big internet firms, Facebook in particular. We know who we’re dealing with.

This is not to suggest that our lives are best evaluated with spreadsheets. Nor is it to downplay the abuses inherent to a system that places control over knowledge and discourse in the hands of a few companies that have both incentive and means to manipulate what we see and do. It is to point out that a full examination of surveillance capitalism requires as rigorous and honest an accounting of its boons as of its banes.

In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way. Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one — but her case would have been stronger still had she more fully addressed the benefits side of the ledger.

The book has other, more cosmetic flaws. Zuboff is prone to wordiness and hackneyed phrasing, and she at times delivers her criticism in overwrought prose that blunts its effect. A less tendentious, more dispassionate tone would make her argument harder for Silicon Valley insiders and sympathizers to dismiss. The book is also overstuffed. Zuboff feels compelled to make the same point in a dozen different ways when a half dozen would have been more than sufficient. Here, too, stronger editorial discipline would have sharpened the message.

Whatever its imperfections, The Age of Surveillance Capitalism is an original and often brilliant work, and it arrives at a crucial moment, when the public and its elected representatives are at last grappling with the extraordinary power of digital media and the companies that control it. Like another recent masterwork of economic analysis, Thomas Piketty’s 2013 Capital in the Twenty-First Century, the book challenges assumptions, raises uncomfortable questions about the present and future, and stakes out ground for a necessary and overdue debate. Shoshana Zuboff has aimed an unsparing light onto the shadowy new landscape of our lives. The picture is not pretty.

The map and the script

Shoshana Zuboff’s epic critique of Silicon Valley, The Age of Surveillance Capitalism, is out today, and so is my review, “Thieves of Experience: How Google and Facebook Corrupted Capitalism,” in the Los Angeles Review of Books. It begins:

We sometimes forget that, at the turn of the century, Silicon Valley was in a funk, economic and psychic. The great dot-com bubble of the 1990s had imploded, destroying vast amounts of investment capital along with the savings of many Americans. Trophy startups like Pets.com, Webvan, and Excite@Home, avatars of the so-called New Economy, were punch lines. Disillusioned programmers and entrepreneurs were abandoning their Bay Area bedsits and decamping. Venture funding had dried up. As a business proposition, the information superhighway was looking like a cul-de-sac.

Today, less than 20 years on, everything has changed. The top American internet companies are among the most profitable and highly capitalized businesses in history. Not only do they dominate the technology industry but they have much of the world economy in their grip. Their founders and early backers sit atop Rockefeller-sized fortunes. Cities and states court them with billions of dollars in tax breaks and other subsidies. Bright young graduates covet their jobs. Along with their financial clout, the internet giants hold immense social and cultural sway, influencing how all of us think, act, and converse.

Silicon Valley’s Phoenix-like resurrection is a story of ingenuity and initiative. It is also a story of callousness, predation, and deceit. …

Read on.

Chaos and control: the story of the web

As societies grow more complex, so too do their systems of control. At best, these systems protect personal freedom, by shielding individuals from the disruptive and sometimes violent forces of social and economic chaos. But they can also have the opposite effect. They can be used to constrain and manipulate people for the commercial or political benefit of those who own, manage, or otherwise wield power over the systems. The story of the Internet is largely a story of control — its establishment, overthrow, reestablishment, and abuse. Here’s an excerpt from my book The Big Switch: Rewiring the World, from Edison to Google, published ten years ago, that traces the dynamics of control through the history of data processing, from the punch-card tabulator to the social network. I think the story helps illuminate the current, troubled state of digital media, where, oddly enough, the forces of chaos and control now exist in symbiosis.

All living systems, from amoebas to nation-states, sustain themselves through the processing of matter, energy, and information. They take in materials from their surroundings, and they use energy to transform those materials into various useful substances, discarding the waste. This continuous turning of inputs into outputs is controlled through the collection, interpretation, and manipulation of information. The process of control itself has two thrusts. It involves measurement — the comparison of the current state of a system to its desired state. And it involves two-way communication — the transmission of instructions and the collection of feedback on results. The processing of information for the purpose of control may result in the release of a hormone into the bloodstream, the expansion of a factory’s production capacity, or the launch of a missile from a warship, but it works in essentially the same way in any living system.
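
The cycle can be reduced to a few lines. A thermostat is the textbook case: measure the gap between the current state and the desired state, send an instruction, and let the system’s response feed back into the next measurement. The example below is illustrative only; it does not appear in the book.

desired_temp = 20.0
current_temp = 16.0

def measure(temp: float) -> float:
    """Measurement: the gap between current and desired state."""
    return desired_temp - temp

def instruct(gap: float) -> str:
    """Communication: issue an instruction based on the measured gap."""
    return "heat on" if gap > 0.5 else "heat off"

for _ in range(8):
    gap = measure(current_temp)
    command = instruct(gap)
    # Feedback: the system's response changes what is measured next
    current_temp += 0.8 if command == "heat on" else -0.2
    print(f"{current_temp:4.1f}  {command}")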

When in the 1880s Herman Hollerith created the punch-card tabulator, an analogue prototype of the mainframe computer, he wasn’t just pursuing his native curiosity as an engineer and an inventor. He was responding to an imbalance between, on the one hand, the technologies for processing matter and energy and, on the other, the technologies for processing information. He was trying to help resolve what James R. Beniger, in The Control Revolution, calls a “crisis of control,” a crisis that was threatening to undermine the stability of markets and bring economic and technological progress to a halt.

Throughout the first two centuries of the Industrial Revolution, the processing of matter and energy had advanced far more rapidly than the processing of information. The steam engine, used to power ships and trains and industrial machines, allowed factories, transportation carriers, retailers, and other businesses to expand their operations and their markets far beyond what was possible when production and distribution were restricted by the limitations of muscle power. Business owners, who had previously been able to observe their operations in their entirety and control them directly, now had to rely on information from many different sources to manage their companies. But they found that they lacked the means to collect and analyze the information fast enough to make timely decisions. Measurement and communication both began to break down, hamstringing management and impeding the further growth of businesses. As the sociologist Emile Durkheim observed in 1893, “The producer can no longer embrace the market in a glance, nor even in thought. He can no longer see limits, since it is, so to speak, limitless. Accordingly production becomes unbridled and unregulated.” Government officials found themselves in a similar predicament, unable to assemble and analyze the information required to regulate commerce. The processing of materials and energy had progressed so rapidly that it had gone, quite literally, out of control.

During the second half of the nineteenth century, a series of technological advances in information processing helped administrators, in both business and government, begin to re-impose control over commerce and society, bringing order to chaos and opening the way for even larger organizations. The construction of the telegraph system, begun by Samuel F.B. Morse in 1845, allowed information to be communicated instantaneously across great distances. The establishment of time zones in 1883 allowed for more precise measurement of the flows of goods. The most important of the new control technologies, however, was bureaucracy — the organization of people into hierarchical information-processing systems. Bureaucracies had, of course, been around as long as civilization itself, but, as Beniger writes, “bureaucratic administration did not begin to achieve anything approximating its modern form until the late Industrial Revolution.” Just as the division of labor in factories provided for the more efficient processing of matter, so the division of labor in government and business offices allowed for the more efficient processing of information.

But bureaucrats alone could not keep up with the flood of data that needed to be processed — the measurement and communication requirements went beyond the capacities of even large groups of human beings. Just like their counterparts on factory floors, information workers needed new tools to do their jobs. That requirement became embarrassingly obvious inside the U.S. Census Bureau at the end of the century. During the 1870s, the federal government, struggling to administer a country and an economy that were growing rapidly in size and complexity, had demanded that the Bureau greatly expand the scope of its data collection, particularly in the areas of business and transport. The 1870 census had spanned just five subjects; the 1880 round was expanded to cover 215.

The new census turned into a disaster for the government. Even though many professional managers and clerks had been hired by the Bureau, the volume of data overwhelmed their ability to process it. By 1887, the agency found itself in the uncomfortable position of having to begin preparations for the next census even as it was still laboring to tabulate the results of the last one. It was in that context that Hollerith, who had worked on the 1880 census, rushed to invent his information-processing machine. He judged, correctly, that it would prove invaluable not only to the Census Bureau but to large companies across the nation.

The arrival of Hollerith’s tabulator was a seminal event in a new revolution — a “Control Revolution,” as Beniger terms it — that followed and was made necessary and inevitable by the Industrial Revolution. Through the Control Revolution, the technologies for processing information finally caught up with the technologies for processing matter and energy, bringing the living system of society back into equilibrium. The entire history of automated data processing, from Hollerith’s punch-card system through the digital computer and on to the modern computer network, is best understood as part of that ongoing process of reestablishing and maintaining control. “Microprocessor and computer technologies, contrary to currently fashionable opinion, are not new forces only recently unleashed upon an unprepared society,” writes Beniger, “but merely the latest installment in the continuing development of the Control Revolution.”

It should come as no surprise, then, that most of the major advances in computing and networking, from Hollerith’s time to the present, have been spurred not by a desire to liberate the masses but by a need for greater control on the part of commercial and governmental bureaucrats, often ones associated with military operations and national defense. Indeed, the very structure of a bureaucracy is replicated in the functions of a computer. A computer gathers information through its input devices, records information as files in its memory, imposes formal rules and procedures on its users through its programs, and communicates information through its output devices. It is a tool for dispensing instructions, for gathering feedback on how well those instructions are carried out, and for measuring progress toward some specified goal. In using a computer, a person becomes part of the control mechanism. He turns into a component of what the Internet pioneer J. C. R. Licklider, in the seminal 1960 paper “Man-Computer Symbiosis,” described as a system integrating man and machine into a single, programmable unit.

But while computer systems played a major role in helping businesses and governments reestablish central control over workers and citizens in the wake of the Industrial Revolution, the other side of their nature — as tools for personal empowerment — has also helped shape modern society, particularly in recent years. By shifting power from institutions to individuals, information-processing machines can disturb control as well as reinforce it. Such disturbances tend to be short-lived, however. Institutions have proven adept at reestablishing control through the development of ever more powerful information technologies. As Beniger explains, “information processing and flows need themselves to be controlled, so that informational technologies continue to be applied at higher and higher levels of control.”

The arrival of the personal computer in the 1980s, for example, posed a sudden and unexpected threat to centralized power. It initiated a new, if much more limited, crisis of control. Pioneered by countercultural hackers and hobbyists, the PC was infused from the start with a libertarian ideology. As memorably portrayed in Apple Computer’s dramatic “1984” television advertisement, the personal computer was to be a weapon against central control, a tool for destroying the Big Brother-like hegemony of the corporate mainframe. Office workers began buying PCs with their own money, bringing them to their offices, and setting them up on their desks. Bypassing corporate systems altogether, PC-empowered employees gained personal control over the data and programs they used. They gained freedom, but in the process they weakened the ability of bureaucracies to monitor and steer their work. Business executives and the IT managers that served them viewed the flood of PCs into the workplace as “a Biblical plague,” in the words of computer historian Paul Ceruzzi.

The breakdown of control proved fleeting. The client-server system, which tied all the previously autonomous PCs together into a single network connected to a central store of corporate information and software, was the means by which the bureaucrats reasserted their control over information and its processing. Together with an expansion in the size and power of IT departments, client-server systems enabled companies to restrict access to data and to limit the use of software to a set of prescribed programs. Ironically, once they were networked into a corporate system, PCs actually enabled companies to monitor, structure, and guide the work of employees more tightly than was ever possible before. “Local networking took the ‘personal’ out of personal computing,” explains Ceruzzi. “PC users in the workplace accepted this Faustian bargain. The more computer-savvy among them resisted, but the majority of office workers hardly even noticed how much this represented a shift away from the forces that drove the invention of the personal computer in the first place. The ease with which this transition took place shows that those who believed in truly autonomous, personal computing were perhaps naïve.”

The popularization of the Internet, through the World Wide Web and its browser, brought another and very similar control crisis. Although the construction of the Internet was spearheaded by the Department of Defense, a paragon of centralized power, it was designed to be a highly dispersed, loosely organized network. Since the overriding goal was to build as robust a system as possible — one that could withstand the failure of any of its parts — it was given a radically decentralized structure. Every computer, or node, operates autonomously, and communications between computers don’t have to pass through any central clearinghouse. The Net’s “internal protocols,” as New York University professor Alexander Galloway writes, “are the enemy of bureaucracy, of rigid hierarchy, and of centralization.” If a corporate computer network was akin to a railroad, with tightly scheduled and monitored traffic, the Internet was more like the highway system, with largely free-flowing and unsupervised traffic.

At work and at home, people found they could use the Web to once again bypass established centers of control, whether corporate bureaucracies, government agencies, retailing empires, or media conglomerates. Seemingly uncontrolled and uncontrollable, the Web was routinely portrayed as a new frontier, a Rousseauian wilderness in which we, as autonomous agents, were free to redefine society on our own terms. “Governments of the Industrial World,” proclaimed John Perry Barlow in his Declaration of the Independence of Cyberspace, “you are not welcome among us. You have no sovereignty where we gather.” But, as with the arrival of the PC, it didn’t take long for governments, and corporations, to begin reasserting and even extending their dominion.

The error that Barlow and many other Internet enthusiasts made was to assume that the Net’s decentralized structure is necessarily resistant to social control. They turned a technical characteristic into a metaphor for personal freedom. But, as Galloway explains, the connection of previously untethered computers into a network governed by strict protocols has actually created “a new apparatus of control.” Indeed, he writes, “the founding principle of the Net is control, not freedom — control has existed from the beginning.” As the fragmented pages of the World Wide Web turn into centrally controlled and programmed social networks and cloud-computing operations, moreover, a powerful new kind of control becomes possible. What is programming, after all, but a method of control? Even though the Internet still has no center, technically speaking, control can now be wielded, through software code, from anywhere. What’s different, in comparison to the physical world, is that acts of control become harder to detect and wielders of control more difficult to discern.

The future’s so bright, I gotta wear blinders

“This is only the beginning,” writes Kevin Kelly in an essay in Wired’s 25th anniversary issue. “The main event has barely started.” He’s talking about the internet. If his words sound familiar, it’s because “only the beginning” has become Kelly’s stock phrase, the rhetorical device he flourishes, like a magician’s cape, to draw readers’ eyes away from what’s really going on. Back in 2005, in a Wired story called “We Are the Web,” Kelly wrote, “It is only the beginning.” And then, his enthusiasm waxing, he capitalized it: “the Beginning.” He doubled down in his 2016 book The Inevitable: “The internet is still at the beginning of its beginning.” And then: “The Beginning, of course, is just beginning.”

I predict this sentence will appear in the next thing Kelly writes: “The beginning of the Beginning will be beginning shortly.”

This is not the beginning, much less the beginning of the beginning. We’ve been cozying up to computers for a long time, and the contours of the digital era are clear. Computers have been around for the better part of a century, computer networks have been around since the 1950s, personal computers have been in popular use since the late 1970s, online communities have been around at least since 1985 (when the Well launched), and the web has been around for a quarter century. Mobile text messaging was conceived in 1984 (the first SMS was sent in 1992), the first BlackBerry smartphone was released in 2002, and the iPhone arrived in 2007. The social network MySpace was popular 15 years ago, and Facebook went live in 2004. Last month, Google turned 20. In looking back over the consequences of computer-mediated connectivity since at least the turn of the century, we see differences in degree, not in kind.

A few years ago, the technology critic Michael Sacasas introduced the term “Borg Complex” to describe the attitude and rhetoric of modern-day utopians who believe that computer technology is an unstoppable force for good and that anyone who resists or even looks critically at the expanding hegemony of the digital is a benighted fool. (The Borg is an alien race in Star Trek that sucks up the minds of other races, telling its victims that “resistance is futile.”) Those afflicted with the complex, Sacasas observed, rely on a set of largely specious assertions to dismiss concerns about any ill effects of technological progress. The Borgers are quick, for example, to make grandiose claims about the coming benefits of new technologies (remember MOOCs?) while dismissing past cultural achievements with contempt (“I don’t really give a shit if literary novels go away”).

To Sacasas’s list of such obfuscating rhetorical devices, I would add the assertion that we are “only at the beginning.” By perpetually refreshing the illusion that progress is just getting under way, gadget worshippers like Kelly are able to wave away the problems that progress is causing. Any ill effect can be explained, and dismissed, as just a temporary bug in the system, which will soon be fixed by our benevolent engineers. (If you look at Mark Zuckerberg’s responses to Facebook’s problems over the years, you’ll find that they are all variations on this theme.) Any attempt to put constraints on technologists and technology companies becomes, in this view, a short-sighted and possibly disastrous obstruction of technology’s march toward a brighter future for everyone — what Kelly is still calling the “long boom.” You ain’t seen nothing yet, so stay out of our way and let us work our magic.

In his books Empire and Communications (1950) and The Bias of Communication (1951), the Canadian historian Harold Innis argued that all communication systems incorporate biases, which shape how people communicate and hence how they think. These biases can, in the long run, exert a profound influence over the organization of society and the course of history. “Bias,” it seems to me, is exactly the right word. The media we use to communicate push us to communicate in certain ways, reflecting, among other things, the workings of the underlying technologies and the financial and political interests of the businesses or governments that promulgate the technologies. (For a simple but important example, think of the way personal correspondence has been changed by the shift from letters delivered through the mail to emails delivered via the internet to messages delivered through smartphones.) A bias is an inclination. Its effects are not inevitable, but they can be strong. To temper them requires awareness and, yes, resistance.

For much of this year, I’ve been exploring the biases of digital media, trying to trace the pressures that the media exert on us as individuals and as a society. I’m far from done, but it’s clear to me that the biases exist and that at this point they have manifested themselves in unmistakable ways. Not only are we well beyond the beginning, but we can see where we’re heading — and where we’ll continue to head if we don’t consciously adjust our course.

Is there an overarching bias to the advance of communication systems? Technology enthusiasts like Kelly would argue that there is — a bias toward greater freedom, democracy, and social harmony. As a society, we’ve largely embraced this sunny view. Harold Innis had a very different take. “Improvements in communication,” he wrote in The Bias of Communication, “make for increased difficulties of understanding.” He continued: “The large-scale mechanization of knowledge is characterized by imperfect competition and the active creation of monopolies in language which prevent understanding and hasten appeals to force.” Looking over recent events, I sense that Innis may turn out to be the more reliable prophet.

Decoding INABIAF

Used to be, in the realm of software, that bugs would turn out to be features in disguise. Nowadays it more often goes the other way around: functions presented to us as features are revealed to be bugs. As part of Wired’s 25th anniversary celebration, I have a piece on the history of the catchphrase “it’s not a bug, it’s a feature.” A taste:

A quick scan of Google News reveals that, over the course of a single month earlier this year, It’s not a bug, it’s a feature appeared 146 times. Among the bugs said to be features were the decline of trade unions, the wilting of cut flowers, economic meltdowns, the gratuitousness of Deadpool 2’s post-credits scenes, monomania, the sloppiness of Neil Young and Crazy Horse, marijuana-induced memory loss, and the apocalypse. Given the right cliche, nothing is unredeemable.

Read it.
