
The map and the script

Shoshana Zuboff’s epic critique of Silicon Valley, The Age of Surveillance Capitalism, is out today, and so is my review, “Thieves of Experience: How Google and Facebook Corrupted Capitalism,” in the Los Angeles Review of Books. It begins:

We sometimes forget that, at the turn of the century, Silicon Valley was in a funk, economic and psychic. The great dot-com bubble of the 1990s had imploded, destroying vast amounts of investment capital along with the savings of many Americans. Trophy startups like Webvan and Excite@Home, avatars of the so-called New Economy, were punch lines. Disillusioned programmers and entrepreneurs were abandoning their Bay Area bedsits and decamping. Venture funding had dried up. As a business proposition, the information superhighway was looking like a cul-de-sac.

Today, less than 20 years on, everything has changed. The top American internet companies are among the most profitable and highly capitalized businesses in history. Not only do they dominate the technology industry but they have much of the world economy in their grip. Their founders and early backers sit atop Rockefeller-sized fortunes. Cities and states court them with billions of dollars in tax breaks and other subsidies. Bright young graduates covet their jobs. Along with their financial clout, the internet giants hold immense social and cultural sway, influencing how all of us think, act, and converse.

Silicon Valley’s Phoenix-like resurrection is a story of ingenuity and initiative. It is also a story of callousness, predation, and deceit. …

Read on.

Chaos and control: the story of the web

As societies grow more complex, so too do their systems of control. At best, these systems protect personal freedom, by shielding individuals from the disruptive and sometimes violent forces of social and economic chaos. But they can also have the opposite effect. They can be used to constrain and manipulate people for the commercial or political benefit of those who own, manage, or otherwise wield power over the systems. The story of the Internet is largely a story of control — its establishment, overthrow, reestablishment, and abuse. Here’s an excerpt from my book The Big Switch: Rewiring the World, from Edison to Google, published ten years ago, that traces the dynamics of control through the history of data processing, from the punch-card tabulator to the social network. I think the story helps illuminate the current, troubled state of digital media, where, oddly enough, the forces of chaos and control now exist in symbiosis.

All living systems, from amoebas to nation-states, sustain themselves through the processing of matter, energy, and information. They take in materials from their surroundings, and they use energy to transform those materials into various useful substances, discarding the waste. This continuous turning of inputs into outputs is controlled through the collection, interpretation, and manipulation of information. The process of control itself has two thrusts. It involves measurement — the comparison of the current state of a system to its desired state. And it involves two-way communication — the transmission of instructions and the collection of feedback on results. The processing of information for the purpose of control may result in the release of a hormone into the bloodstream, the expansion of a factory’s production capacity, or the launch of a missile from a warship, but it works in essentially the same way in any living system.

When in the 1880s Herman Hollerith created the punch-card tabulator, an analogue prototype of the mainframe computer, he wasn’t just pursuing his native curiosity as an engineer and an inventor. He was responding to an imbalance between, on the one hand, the technologies for processing matter and energy and, on the other, the technologies for processing information. He was trying to help resolve what James R. Beniger, in The Control Revolution, calls a “crisis of control,” a crisis that was threatening to undermine the stability of markets and bring economic and technological progress to a halt.

Throughout the first two centuries of the Industrial Revolution, the processing of matter and energy had advanced far more rapidly than the processing of information. The steam engine, used to power ships and trains and industrial machines, allowed factories, transportation carriers, retailers, and other businesses to expand their operations and their markets far beyond what was possible when production and distribution were restricted by the limitations of muscle power. Business owners, who had previously been able to observe their operations in their entirety and control them directly, now had to rely on information from many different sources to manage their companies. But they found that they lacked the means to collect and analyze the information fast enough to make timely decisions. Measurement and communication both began to break down, hamstringing management and impeding the further growth of businesses. As the sociologist Emile Durkheim observed in 1893, “The producer can no longer embrace the market in a glance, nor even in thought. He can no longer see limits, since it is, so to speak, limitless. Accordingly production becomes unbridled and unregulated.” Government officials found themselves in a similar predicament, unable to assemble and analyze the information required to regulate commerce. The processing of materials and energy had progressed so rapidly that it had gone, quite literally, out of control.

During the second half of the nineteenth century, a series of technological advances in information processing helped administrators, in both business and government, begin to re-impose control over commerce and society, bringing order to chaos and opening the way for even larger organizations. The construction of the telegraph system, begun by Samuel F.B. Morse in 1845, allowed information to be communicated instantaneously across great distances. The establishment of time zones in 1883 allowed for more precise measurement of the flows of goods. The most important of the new control technologies, however, was bureaucracy — the organization of people into hierarchical information-processing systems. Bureaucracies had, of course, been around as long as civilization itself, but, as Beniger writes, “bureaucratic administration did not begin to achieve anything approximating its modern form until the late Industrial Revolution.” Just as the division of labor in factories provided for the more efficient processing of matter, so the division of labor in government and business offices allowed for the more efficient processing of information.

But bureaucrats alone could not keep up with the flood of data that needed to be processed — the measurement and communication requirements went beyond the capacities of even large groups of human beings. Just like their counterparts on factory floors, information workers needed new tools to do their jobs. That requirement became embarrassingly obvious inside the U.S. Census Bureau at the end of the century. During the 1870s, the federal government, struggling to administer a country and an economy that were growing rapidly in size and complexity, had demanded that the Bureau greatly expand the scope of its data collection, particularly in the areas of business and transport. The 1870 census had spanned just five subjects; the 1880 round was expanded to cover 215.

The new census turned into a disaster for the government. Even though many professional managers and clerks had been hired by the Bureau, the volume of data overwhelmed their ability to process it. By 1887, the agency found itself in the uncomfortable position of having to begin preparations for the next census even as it was still laboring to tabulate the results of the last one. It was in that context that Hollerith, who had worked on the 1880 census, rushed to invent his information-processing machine. He judged, correctly, that it would prove invaluable not only to the Census Bureau but to large companies across the nation.

The arrival of Hollerith’s tabulator was a seminal event in a new revolution — a “Control Revolution,” as Beniger terms it — that followed and was made necessary and inevitable by the Industrial Revolution. Through the Control Revolution, the technologies for processing information finally caught up with the technologies for processing matter and energy, bringing the living system of society back into equilibrium. The entire history of automated data processing, from Hollerith’s punch-card system through the digital computer and on to the modern computer network, is best understood as part of that ongoing process of reestablishing and maintaining control. “Microprocessor and computer technologies, contrary to currently fashionable opinion, are not new forces only recently unleashed upon an unprepared society,” writes Beniger, “but merely the latest installment in the continuing development of the Control Revolution.”

It should come as no surprise, then, that most of the major advances in computing and networking, from Hollerith’s time to the present, have been spurred not by a desire to liberate the masses but by a need for greater control on the part of commercial and governmental bureaucrats, often ones associated with military operations and national defense. Indeed, the very structure of a bureaucracy is replicated in the functions of a computer. A computer gathers information through its input devices, records information as files in its memory, imposes formal rules and procedures on its users through its programs, and communicates information through its output devices. It is a tool for dispensing instructions, for gathering feedback on how well those instructions are carried out, and for measuring progress toward some specified goal. In using a computer, a person becomes part of the control mechanism. He turns into a component of what the Internet pioneer J. C. R. Licklider, in the seminal 1960 paper “Man-Computer Symbiosis,” described as a system integrating man and machine into a single, programmable unit.

But while computer systems played a major role in helping businesses and governments reestablish central control over workers and citizens in the wake of the Industrial Revolution, the other side of their nature — as tools for personal empowerment — has also helped shape modern society, particularly in recent years. By shifting power from institutions to individuals, information-processing machines can disturb control as well as reinforce it. Such disturbances tend to be short-lived, however. Institutions have proven adept at reestablishing control through the development of ever more powerful information technologies. As Beniger explains, “information processing and flows need themselves to be controlled, so that informational technologies continue to be applied at higher and higher levels of control.”

The arrival of the personal computer in the 1980s, for example, posed a sudden and unexpected threat to centralized power. It initiated a new, if much more limited, crisis of control. Pioneered by countercultural hackers and hobbyists, the PC was infused from the start with a libertarian ideology. As memorably portrayed in Apple Computer’s dramatic “1984” television advertisement, the personal computer was to be a weapon against central control, a tool for destroying the Big Brother-like hegemony of the corporate mainframe. Office workers began buying PCs with their own money, bringing them to their offices, and setting them up on their desks. Bypassing corporate systems altogether, PC-empowered employees gained personal control over the data and programs they used. They gained freedom, but in the process they weakened the ability of bureaucracies to monitor and steer their work. Business executives and the IT managers that served them viewed the flood of PCs into the workplace as “a Biblical plague,” in the words of computer historian Paul Ceruzzi.

The breakdown of control proved fleeting. The client-server system, which tied all the previously autonomous PCs together into a single network connected to a central store of corporate information and software, was the means by which the bureaucrats reasserted their control over information and its processing. Together with an expansion in the size and power of IT departments, client-server systems enabled companies to restrict access to data and to limit the use of software to a set of prescribed programs. Ironically, once they were networked into a corporate system, PCs actually enabled companies to monitor, structure, and guide the work of employees more tightly than was ever possible before. “Local networking took the ‘personal’ out of personal computing,” explains Ceruzzi. “PC users in the workplace accepted this Faustian bargain. The more computer-savvy among them resisted, but the majority of office workers hardly even noticed how much this represented a shift away from the forces that drove the invention of the personal computer in the first place. The ease with which this transition took place shows that those who believed in truly autonomous, personal computing were perhaps naïve.”

The popularization of the Internet, through the World Wide Web and its browser, brought another and very similar control crisis. Although the construction of the Internet was spearheaded by the Department of Defense, a paragon of centralized power, it was designed to be a highly dispersed, loosely organized network. Since the overriding goal was to build as robust a system as possible — one that could withstand the failure of any of its parts — it was given a radically decentralized structure. Every computer, or node, operates autonomously, and communications between computers don’t have to pass through any central clearinghouse. The Net’s “internal protocols,” as New York University professor Alexander Galloway writes, “are the enemy of bureaucracy, of rigid hierarchy, and of centralization.” If a corporate computer network was akin to a railroad, with tightly scheduled and monitored traffic, the Internet was more like the highway system, with largely free-flowing and unsupervised traffic.

At work and at home, people found they could use the Web to once again bypass established centers of control, whether corporate bureaucracies, government agencies, retailing empires, or media conglomerates. Seemingly uncontrolled and uncontrollable, the Web was routinely portrayed as a new frontier, a Rousseauian wilderness in which we, as autonomous agents, were free to redefine society on our own terms. “Governments of the Industrial World,” proclaimed John Perry Barlow in his Declaration of the Independence of Cyberspace, “you are not welcome among us. You have no sovereignty where we gather.” But, as with the arrival of the PC, it didn’t take long for governments, and corporations, to begin reasserting and even extending their dominion.

The error that Barlow and many other Internet enthusiasts made was to assume that the Net’s decentralized structure is necessarily resistant to social control. They turned a technical characteristic into a metaphor for personal freedom. But, as Galloway explains, the connection of previously untethered computers into a network governed by strict protocols has actually created “a new apparatus of control.” Indeed, he writes, “the founding principle of the Net is control, not freedom — control has existed from the beginning.” As the fragmented pages of the World Wide Web turn into centrally controlled and programmed social networks and cloud-computing operations, moreover, a powerful new kind of control becomes possible. What is programming, after all, but a method of control? Even though the Internet still has no center, technically speaking, control can now be wielded, through software code, from anywhere. What’s different, in comparison to the physical world, is that acts of control become harder to detect and wielders of control more difficult to discern.

The future’s so bright, I gotta wear blinders

“This is only the beginning,” writes Kevin Kelly in an essay in Wired‘s 25th anniversary issue. “The main event has barely started.” He’s talking about the internet. If his words sound familiar, it’s because “only the beginning” has become Kelly’s stock phrase, the rhetorical device he flourishes, like a magician’s cape, to draw readers’ eyes away from what’s really going on. Back in 2005, in a Wired story called “We Are the Web,” Kelly wrote, “It is only the beginning.” And then, his enthusiasm waxing, he capitalized it: “the Beginning.” He doubled down in his 2016 book The Inevitable: “The internet is still at the beginning of its beginning.” And then: “The Beginning, of course, is just beginning.”

I predict this sentence will appear in the next thing Kelly writes: “The beginning of the Beginning will be beginning shortly.”

This is not the beginning, much less the beginning of the beginning. We’ve been cozying up to computers for a long time, and the contours of the digital era are clear. Computers have been around for the better part of a century, computer networks have been around since the 1950s, personal computers have been in popular use since the late 1970s, online communities have been around at least since 1985 (when the Well launched), and the web has been around for a quarter century. The first text message was sent over a mobile network in 1992, the first BlackBerry smartphone was released in 2002, and the iPhone arrived in 2007. The social network MySpace was popular 15 years ago, and Facebook went live in 2004. Last month, Google turned 20. In looking back over the consequences of computer-mediated connectivity since at least the turn of the century, we see differences in degree, not in kind.

A few years ago, the technology critic Michael Sacasas introduced the term “Borg Complex” to describe the attitude and rhetoric of modern-day utopians who believe that computer technology is an unstoppable force for good and that anyone who resists or even looks critically at the expanding hegemony of the digital is a benighted fool. (The Borg is an alien race in Star Trek that sucks up the minds of other races, telling its victims that “resistance is futile.”) Those afflicted with the complex, Sacasas observed, rely on a set of largely specious assertions to dismiss concerns about any ill effects of technological progress. The Borgers are quick, for example, to make grandiose claims about the coming benefits of new technologies (remember MOOCs?) while dismissing past cultural achievements with contempt (“I don’t really give a shit if literary novels go away”).

To Sacasas’s list of such obfuscating rhetorical devices, I would add the assertion that we are “only at the beginning.” By perpetually refreshing the illusion that progress is just getting under way, gadget worshippers like Kelly are able to wave away the problems that progress is causing. Any ill effect can be explained, and dismissed, as just a temporary bug in the system, which will soon be fixed by our benevolent engineers. (If you look at Mark Zuckerberg’s responses to Facebook’s problems over the years, you’ll find that they are all variations on this theme.) Any attempt to put constraints on technologists and technology companies becomes, in this view, a short-sighted and possibly disastrous obstruction of technology’s march toward a brighter future for everyone — what Kelly is still calling the “long boom.” You ain’t seen nothing yet, so stay out of our way and let us work our magic.

In his books Empire and Communications (1950) and The Bias of Communication (1951), the Canadian historian Harold Innis argued that all communication systems incorporate biases, which shape how people communicate and hence how they think. These biases can, in the long run, exert a profound influence over the organization of society and the course of history. “Bias,” it seems to me, is exactly the right word. The media we use to communicate push us to communicate in certain ways, reflecting, among other things, the workings of the underlying technologies and the financial and political interests of the businesses or governments that promulgate the technologies. (For a simple but important example, think of the way personal correspondence has been changed by the shift from letters delivered through the mail to emails delivered via the internet to messages delivered through smartphones.) A bias is an inclination. Its effects are not inevitable, but they can be strong. To temper them requires awareness and, yes, resistance.

For much of this year, I’ve been exploring the biases of digital media, trying to trace the pressures that the media exert on us as individuals and as a society. I’m far from done, but it’s clear to me that the biases exist and that at this point they have manifested themselves in unmistakable ways. Not only are we well beyond the beginning, but we can see where we’re heading — and where we’ll continue to head if we don’t consciously adjust our course.

Is there an overarching bias to the advance of communication systems? Technology enthusiasts like Kelly would argue that there is — a bias toward greater freedom, democracy, and social harmony. As a society, we’ve largely embraced this sunny view. Harold Innis had a very different take. “Improvements in communication,” he wrote in The Bias of Communication, “make for increased difficulties of understanding.” He continued: “The large-scale mechanization of knowledge is characterized by imperfect competition and the active creation of monopolies in language which prevent understanding and hasten appeals to force.” Looking over recent events, I sense that Innis may turn out to be the more reliable prophet.

Decoding INABIAF

Used to be, in the realm of software, that bugs would turn out to be features in disguise. Nowadays it more often goes the other way around: functions presented to us as features are revealed to be bugs. As part of Wired‘s 25th anniversary celebration, I have a piece on the history of the catchphrase “it’s not a bug, it’s a feature.” A taste:

A quick scan of Google News reveals that, over the course of a single month earlier this year, It’s not a bug, it’s a feature appeared 146 times. Among the bugs said to be features were the decline of trade unions, the wilting of cut flowers, economic meltdowns, the gratuitousness of Deadpool 2’s post-credits scenes, monomania, the sloppiness of Neil Young and Crazy Horse, marijuana-induced memory loss, and the apocalypse. Given the right cliché, nothing is unredeemable.

Read it.

Photo: Nigel Jones.

Media democratization and the rise of Trump

The following review of the book Trump and the Media appeared originally, in a slightly different form, in the Los Angeles Review of Books.

* * *

President Trump’s tweets may be without precedent, but the controversy surrounding social media’s influence on politics has a long history. During the 1930s, the rapid spread of mass media was accompanied by the rise of fascism. To many observers at the time, the former helped explain the latter. By consolidating control over news and other information, radio networks, movie studios, and publishing houses allowed a single voice to address and even command the multitudes. The very structure of mass media seemed to reflect and reinforce the political structure of the authoritarian state.

But even as the centralization of broadcasting and publishing raised the specter of a media-sculpted “authoritarian personality,” it also inspired a contrasting ideal, as Stanford professor Fred Turner explains in an essay collected in Trump and the Media. Sociologists and psychologists began to imagine a decentralized, multimedia communication network that would encourage the development of a “democratic personality,” providing a bulwark against fascist movements and their charismatic leaders. By exposing citizens to a multiplicity of perspectives and encouraging them to express their own opinions, such a system would give rise, the scholars believed, to “a psychologically whole individual, able to freely choose what to believe, with whom to associate, and where to turn their attention.”

The ideal of a radically “democratized” media, decentralized, participatory, and personally emancipating, was enticing, and it continued to cast a spell long after the defeat of the fascist powers in the Second World War. The ideal infused the counterculture of the 1960s. Beatniks and hippies staged kaleidoscopic multimedia “happenings” as a way to free their minds, discover their true selves, and subvert consumerist conventionality. By the end of the 1970s, the ideal had been embraced by Steve Jobs and other technologists, who celebrated the personal computer as an anti-authoritarian tool of self-actualization.

In the early years of this century, as the internet subsumed traditional media, the ideal became a pillar of Silicon Valley ideology. The founders of companies like Google and Facebook, Twitter and Reddit, promoted their networks as tools for overthrowing mass-media “gatekeepers” and giving individuals control over the exchange of information. They promised, as Turner writes, that social media would “allow us to present our authentic selves to one another” and connect those diverse selves into a more harmonious, pluralistic, and democratic society.

Then came the 2016 U.S. presidential campaign. The ideal’s fruition proved its undoing.

The democratization of media produced not harmony and pluralism but fractiousness and extremism, and the political energies it unleashed felt more autocratic than democratic. Silicon Valley ideology was revealed as naive and self-serving, and the leaders of the major social media platforms, taken by surprise, stumbled from cluelessness to denial to befuddlement. Turner is blunt in his own assessment:

the faith of a generation of twentieth-century liberal theorists — as well as their digital descendants — was misplaced: decentralization does not necessarily increase democracy in the public sphere or in the state. On the contrary, the technologies of decentralized communication can be coupled very tightly to the charismatic, personality-centered modes of authoritarianism long associated with mass media and mass society.

Around the wreckage of techno-progressive orthodoxy orbit the twenty-seven articles in Trump and the Media. The writers, mainly communication and journalism scholars from American and British universities, are homogeneous in their politics — none is in danger of being mistaken for a Trump voter — but heterogeneous in their views on the state and fate of journalism. Their takes on “what happened” (to quote Hillary Clinton) clash in illuminating ways.

One contentious question is whether social media in general and Twitter in particular actually changed the outcome of the vote. Keith N. Hampton, of Michigan State University, finds “no evidence” that any of the widely acknowledged malignancies of social media, from fake news to filter bubbles, “worked in favor of a particular presidential candidate.” Drawing on exit polls, he shows that most demographic groups voted pretty much the same in 2016 as they had in the Obama-Romney race of 2012. The one group that exhibited a large and possibly decisive shift from the Democratic to the Republican candidate was white voters without college degrees. Yet these voters, surveys reveal, are also the least likely to spend a lot of time online or to be active on social media. It’s unfair to blame Twitter or Facebook for Trump’s victory, Hampton suggests, if the swing voters weren’t on Twitter or Facebook.

What Hampton overlooks are the indirect effects of social media, particularly its influence on press coverage and public attention. As the University of Oxford’s Josh Cowls and Ralph Schroeder write, Trump’s Twitter account may have been monitored by only a small portion of the public, but it was followed, religiously, by journalists, pundits, and politicos. The novelty and frequent abrasiveness of the tweets — they broke all the rules of decorum for presidential campaigns — mesmerized the chattering class throughout the primaries and the general election campaign, fueling a frenzy of retweets, replies, and hashtags. Social media’s biggest echo chamber turned out to be the traditional media elite.

An analysis of Twitter mentions and news stories, Cowls and Schroeder report, reveals a clear correlation: “Trump is mentioned in tweets far more often than any other candidate in both parties, often more than all other candidates combined, and the volume of tweets closely tracks his outsize coverage in the dominant mainstream media.” Through his use of Twitter, Trump didn’t so much bypass the established media as bend its coverage to his own ends, keeping himself at the center of TV and radio reports and on the front pages of newspapers while amplifying the anger, outrage, and enmity his posts were intended to sow.

The result, several of the contributors to Trump and the Media posit, was to push voters of all persuasions away from reasoned judgments and toward emotional reactions — a shift that further served Trump’s interests. Zizi Papacharissi, a political scientist at the University of Illinois at Chicago (and, along with Northwestern’s Pablo J. Boczkowski, an editor of the volume), argues that the emotionalism of press coverage during the campaign was in keeping with a general trend in American journalism away from factual reporting and toward “affective news” — stories and snippets that encourage readers and viewers to feel rather than reason their way toward opinions and beliefs. Overheated headlines, constant “breaking news” bulletins, and partisan rants merged into people’s social-media feeds, provoking visceral responses but providing little in the way of context or perspective. “We get intensity, 24/7, but no substance,” Papacharissi laments.

Even as on-the-ground reporting has been in retreat, a victim of financial pressures as well as the public’s hunger for zealotry and spectacle, so-called computational journalism has been advancing. By presenting seemingly rigorous statistical analyses in web-friendly, interactive “visualizations,” popular sites like FiveThirtyEight and the New York Times’s The Upshot would appear to offer an empirical counterweight to reflexive emotionalism. But the objectivity and reliability of computational journalism were called into question by the failure of the number-crunching sites to gauge the extent of Trump’s support during the campaign. The election revealed that, as George Washington University’s Nikki Usher writes, the “alluring certainty” of quantified information can be an illusion. By hiding the subjectivity and ambiguity inherent to data collection and analysis, the slick presentation of quantitative findings or algorithmic outputs is “as liable to mislead as it is to inform.” Then, when the problems come to light, cries of “fake news” resound, and journalism’s credibility takes another hit.

Usher believes that the flaws in computational journalism can be remedied through a more open and honest accounting of its assumptions and limitations. C. W. Anderson, of the University of Leeds, takes a darker view. To much of the public, he argues, the pursuit of “data-driven objectivity” will always be suspect, not because of its methodological limits but because of its egghead aesthetics. Numbers and charts, he notes, have been elements of journalism for a long time, and they have always been “pitched to a more policy-focused audience.” With its ties to social science, computational journalism inevitably carries an air of ivory-tower elitism, making it anathema to those of a populist bent. “In the partisan and polarized American political environment,” Anderson concludes, “professional journalistic claims to facticity have become simply another tribal marker — the tribal marker of ‘smartness’ — and the quantitative, visually oriented forms of data news serve to alienate certain audience members as much as they convince anyone to think about politics or political claims more skeptically.”

Anderson’s stress on the aesthetics of news dovetails with broader observations about contemporary journalism offered by Michael X. Delli Carpini, dean of the University of Pennsylvania’s Annenberg School for Communication. He sees “Trumpism” not as an aberration but as the culmination of “a fundamental shift in the relationships between journalism, politics, and democracy.” The removal of the professional journalist as media gatekeeper released into the public square torrents of information, misinformation, and disinformation. The flood dissolved the already blurred boundaries between news and entertainment, truth and fantasy, public servant and charlatan. Drawing on a term coined years ago by the French philosopher Jean Baudrillard, Delli Carpini argues that we’ve entered a state of “hyperreality,” where media representations of events and facts feel more real than the actual events and facts. In hyperreality, as Baudrillard put it in his 2000 book The Vital Illusion, “form gives way to information and performance.” The aesthetics of news becomes more important to the public than does the news’s accuracy or provenance.

Through its many voices, Trump and the Media makes a convincing case that journalism has sailed into strange and dangerous waters. The belief that more freely flowing information would by itself “spark more, and deeper, democratic engagement with civic life,” as Oxford’s Gina Neff describes it, has been shattered, yet in the headlong pursuit of that belief we’ve dismantled the editorial structures that had been used to filter information and shape it, however imperfectly, into a “shared and coherent narrative.” The circulation of news now seems more likely to tear apart the social fabric than stitch it together.

What the book doesn’t do — and perhaps no book could, at this point — is chart a clear course forward. Some of the writers cling to the techno-progressive flotsam, believing that the problem with democratization is that it didn’t go far enough. Others urge journalists to abandon their pursuit of objective reporting and take on the roles of activist and advocate. Still others suggest that news organizations need to curb their competitive instincts and learn to share sources and reporting rather than fight for scoops. The suggestions are well-intentioned, but most come off as wishful or simplistic. If pursued, they could make matters worse.

If there is a way out of the crisis, it may lie in Fred Turner’s critical reexamination of past assumptions about the structure and influence of media. Just as we failed to see that democratization could subvert democracy, we may have overlooked the strengths of the mass-media news organization in protecting democracy. Professional gatekeepers have their flaws — they can narrow the range of views presented to the public, and they can stifle voices that should be heard — yet through the exercise of their professionalism they also temper the uglier tendencies of human nature. They make it less likely that ignorance, gullibility, and prejudice will poison our conversations and warp our politics.

At this confused moment in the nation’s history, Turner writes at the close of his essay, “what democracy needs first and foremost is not more personalized modes of mediated expression [but rather] a renewed engagement with the rule of law and with the institutions that embody it” — one of those institutions being the press. The most important lesson we can take from the last election may be an unfashionable one: To be sustained, democracy needs to be constrained.

The problem with Facebook

In the Washington Post, I have a review of two new books that offer critical assessments of Facebook and other social networks: Siva Vaidhyanathan’s Antisocial Media: How Facebook Disconnects Us and Undermines Democracy and Jaron Lanier’s Ten Arguments for Deleting Your Social Media Accounts Right Now. It begins:

The only thing worse than being on Facebook is not being on Facebook. That’s the one clear conclusion we can draw from the recent controversies surrounding the world’s favorite social network.

Despite the privacy violations, despite the spewing of lies and insults, despite the blistering criticism from politicians and the press, Facebook continues to suck up an inordinate amount of humanity’s time and attention. The company’s latest financial report, released after the Cambridge Analytica scandal and the #DeleteFacebook uprising, showed that the service attracted millions of new members during the year’s first quarter, and its ad sales soared. Facebook has become our Best Frenemy Forever.

In Antisocial Media, University of Virginia professor Siva Vaidhyanathan gives a full and rigorous accounting of Facebook’s sins. . . .

Read on.

I am a data factory (and so are you)

1. Mines and Factories

Am I a data mine, or am I a data factory? Is data extracted from me, or is data produced by me? Both metaphors are ugly, but the distinction between them is crucial. The metaphor we choose informs our sense of the power wielded by so-called platform companies like Facebook, Google, and Amazon, and it shapes the way we, as individuals and as a society, respond to that power.

If I am a data mine, then I am essentially a chunk of real estate, and control over my data becomes a matter of ownership. Who owns me (as a site of valuable data), and what happens to the economic value of the data extracted from me? Should I be my own owner — the sole proprietor of my data mine and its wealth? Should I be nationalized, my little mine becoming part of some sort of public collective? Or should ownership rights be transferred to a set of corporations that can efficiently aggregate the raw material from my mine (and everyone else’s) and transform it into products and services that are useful to me? The questions raised here are questions of politics and economics.

The mining metaphor, like the mining business, is a fairly simple one, and it has become popular, particularly among writers of the left. Thinking of the platform companies as being in the extraction business, with personal data being analogous to a natural resource like iron or petroleum, brings a neatness and clarity to discussions of a new and complicated type of company. In an article in the Guardian in March, Ben Tarnoff wrote that “thinking of data as a resource like oil helps illuminate not only how it functions, but how we might organize it differently.” Building on the metaphor, he went on to argue that the data business should not just be heavily regulated, as extractive industries tend to be, but that “data resources” should be nationalized — put under state ownership and control:

Data is no less a form of common property than oil or soil or copper. We make data together, and we make it meaningful together, but its value is currently captured by the companies that own it. We find ourselves in the position of a colonized country, our resources extracted to fill faraway pockets. Wealth that belongs to the many — wealth that could help feed, educate, house and heal people — is used to enrich the few. The solution is to take up the template of resource nationalism, and nationalize our data reserves.

In another Guardian piece, published a couple of weeks later, Evgeny Morozov offered a similar proposal concerning what he termed “the data wells inside ourselves”:

We can use the recent data controversies to articulate a truly decentralised, emancipatory politics, whereby the institutions of the state (from the national to the municipal level) will be deployed to recognise, create, and foster the creation of social rights to data. These institutions will organise various data sets into pools with differentiated access conditions. They will also ensure that those with good ideas that have little commercial viability but promise major social impact would receive venture funding and realise those ideas on top of those data pools.

The simplicity of the mining metaphor is its strength but also its weakness. The extraction metaphor doesn’t capture enough of what companies like Facebook and Google do, and hence in adopting it we too quickly narrow the discussion of our possible responses to their power. Data does not lie passively within me, like a seam of ore, waiting to be extracted. Rather, I actively produce data through the actions I take over the course of a day. When I drive or walk from one place to another, I produce locational data. When I buy something, I produce purchase data. When I text with someone, I produce affiliation data. When I read or watch something online, I produce preference data. When I upload a photo, I produce not only behavioral data but data that is itself a product. I am, in other words, much more like a data factory than a data mine. I produce data through my labor — the labor of my mind, the labor of my body.

The platform companies, in turn, act more like factory owners and managers than like the owners of oil wells or copper mines. Beyond control of my data, the companies seek control of my actions, which to them are production processes, in order to optimize the efficiency, quality, and value of my data output (and, on the demand side of the platform, my data consumption). They want to script and regulate the work of my factory — i.e., my life — as Frederick Winslow Taylor sought to script and regulate the labor of factory workers at the turn of the last century. The control wielded by these companies, in other words, is not just that of ownership but also that of command. And they exercise this command through the design of their software, which increasingly forms the medium of everything we all do during our waking hours.

The factory metaphor makes clear what the mining metaphor obscures: We work for the Facebooks and Googles of the world, and the work we do is increasingly indistinguishable from the lives we lead. The questions we need to grapple with are political and economic, to be sure. But they are also personal, ethical, and philosophical.

2. A False Choice

To understand why the choice of metaphor is so important, consider a new essay by Ben Tarnoff, written with Moira Weigel, that was published last week. The piece opens with a sharp, cold-eyed examination of those Silicon Valley apostates who now express regret over the harmful effects of the products they created. Through their stress on redesigning the products to promote personal “well-being,” these “tech humanists,” Tarnoff and Weigel write, actually serve the business interests of the platform companies they criticize. The companies, the writers point out, can easily co-opt the well-being rhetoric, using it as cover to deflect criticism while seizing even more economic power.

Tarnoff and Weigel point to Facebook CEO Mark Zuckerberg’s recent announcement that his company will place less emphasis on increasing the total amount of time members spend on Facebook and more emphasis on ensuring that their Facebook time is “time well spent.” What may sound like a selfless act of philanthropy is in reality, Tarnoff and Weigel suggest, the product of a hard-headed business calculation:

Emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable. In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction – it also acknowledges certain basic limits to Facebook’s current growth model. There are only so many hours in the day. Facebook can’t keep prioritising total time spent – it has to extract more value from less time.

The analysis is a trenchant one. The vagueness and self-absorption that often characterize discussions of wellness, particularly those emanating from the California coast, are well suited to the construction of window dressing. And, Lord knows, Zuckerberg and his ilk are experts at window dressing. But, having offered good reasons to be skeptical about Silicon Valley’s brand of tech humanism, Tarnoff and Weigel overreach. They argue that any “humanist” critique of the personal effects of technology design and use is a distraction from the “fundamental” critique of the economic and structural basis for Silicon Valley’s dominance:

[The humanists] remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.

The choice that Tarnoff and Weigel present here — either personal critique or political critique, either a design focus or a structural focus — is a false choice. And it stems from the metaphor of extraction, which conceives of data as lying passively within us (beyond the influence of design) rather than being actively produced by us (under the influence of design). Arguing that attending to questions of design blinds us to questions of ownership is as silly (and as condescending) as arguing that attending to questions of ownership blinds us to questions of design. Silicon Valley wields its power through both its control of data and its control of design, and that power influences us on both a personal and a collective level. Any robust critique of Silicon Valley, whether practical, theoretical, or both, needs to address both the personal and the political.

The Silicon Valley apostates may be deserving of criticism, but what they’ve done that is praiseworthy is to expose, in considerable detail, the way the platform companies use software design to guide and regulate people’s behavior — in particular, to encourage the compulsive use of their products in ways that override people’s ability to think critically about the technology while provoking the kind of behavior that generates the maximum amount of valuable personal data. To put it into industrial terms, these companies are not just engaged in resource extraction; they are engaged in process engineering.

Tarnoff and Weigel go on to suggest that the tech humanists are pursuing a paternalistic agenda. They want to define some ideal state of human well-being, and then use software and hardware design to impose that way of being on everybody. That may well be true of some of the Silicon Valley apostates. Tarnoff and Weigel quote a prominent one as saying, “We have a moral responsibility to steer people’s thoughts ethically.” It’s hard to imagine a purer distillation of Silicon Valley’s hubris or a clearer expression of its belief in the engineering of lives. But Tarnoff and Weigel’s suggestion is the opposite of the truth when it comes to the broader humanist tradition in technology theory and criticism. It is the thinkers in that tradition — Mumford, Arendt, Ellul, McLuhan, Postman, Turkle, and many others — who have taught us how deeply and subtly technology is entwined with human history, human society, and human behavior, and how our entanglement with technology can produce effects, often unforeseen and sometimes hidden, that may run counter to our interests, however we choose to define those interests.

Though any cultural criticism will entail the expression of values — that’s what gives it bite — the thrust of the humanist critique of technology is not to impose a particular way of life on us but rather to give us the perspective, understanding, and know-how necessary to make our own informed choices about the tools and technologies we use and the way we design and employ them. By helping us to see the force of technology clearly and resist it when necessary, the humanist tradition expands our personal and social agency rather than constricting it.

3. Consumer, Track Thyself

Nationalizing collective stores of personal data is an idea worthy of consideration and debate. But it raises a host of hard questions. In shifting ownership and control of exhaustive behavioral data to the government, what kinds of abuse do we risk? It seems at least a little disconcerting to see the idea raised at a time when authoritarian movements and regimes are on the rise. If we end up trading a surveillance economy for a surveillance state, we’ve done ourselves no favors.

But let’s assume that our vast data collective is secure, well managed, and put to purely democratic ends. The shift of data ownership from the private to the public sector may well succeed in reducing the economic power of Silicon Valley, but what it would also do is reinforce and indeed institutionalize Silicon Valley’s computationalist ideology, with its foundational, Taylorist belief that, at a personal and collective level, humanity can and should be optimized through better programming. The ethos and incentives of constant surveillance would become even more deeply embedded in our lives, as we take on the roles of both the watched and the watcher. Consumer, track thyself! And, even with such a shift in ownership, we’d still confront the fraught issues of design, manipulation, and agency.

Finally, there’s the obvious practical question. How likely is it that the United States is going to establish a massive state-run data collective encompassing exhaustive information on every citizen, at least any time in the foreseeable future? It may not be entirely a pipe dream, but it’s pretty close. In the end, we may discover that the best means of curbing Silicon Valley’s power lies in an expansion of personal awareness, personal choice, and personal resistance. At the very least, we need to keep that possibility open. Let’s not rush to sacrifice the personal at the altar of the collective.