
Media democratization and the rise of Trump

The following review of the book Trump and the Media appeared originally, in a slightly different form, in the Los Angeles Review of Books.

* * *

President Trump’s tweets may be without precedent, but the controversy surrounding social media’s influence on politics has a long history. During the 1930s, the rapid spread of mass media was accompanied by the rise of fascism. To many observers at the time, the former helped explain the latter. By consolidating control over news and other information, radio networks, movie studios, and publishing houses allowed a single voice to address and even command the multitudes. The very structure of mass media seemed to reflect and reinforce the political structure of the authoritarian state.

But even as the centralization of broadcasting and publishing raised the specter of a media-sculpted “authoritarian personality,” it also inspired a contrasting ideal, as Stanford professor Fred Turner explains in an essay collected in Trump and the Media. Sociologists and psychologists began to imagine a decentralized, multimedia communication network that would encourage the development of a “democratic personality,” providing a bulwark against fascist movements and their charismatic leaders. By exposing citizens to a multiplicity of perspectives and encouraging them to express their own opinions, such a system would give rise, the scholars believed, to “a psychologically whole individual, able to freely choose what to believe, with whom to associate, and where to turn their attention.”

The ideal of a radically “democratized” media, decentralized, participatory, and personally emancipating, was enticing, and it continued to cast a spell long after the defeat of the fascist powers in the Second World War. The ideal infused the counterculture of the 1960s. Beatniks and hippies staged kaleidoscopic multimedia “happenings” as a way to free their minds, discover their true selves, and subvert consumerist conventionality. By the end of the 1970s, the ideal had been embraced by Steve Jobs and other technologists, who celebrated the personal computer as an anti-authoritarian tool of self-actualization.

In the early years of this century, as the internet subsumed traditional media, the ideal became a pillar of Silicon Valley ideology. The founders of companies like Google and Facebook, Twitter and Reddit, promoted their networks as tools for overthrowing mass-media “gatekeepers” and giving individuals control over the exchange of information. They promised, as Turner writes, that social media would “allow us to present our authentic selves to one another” and connect those diverse selves into a more harmonious, pluralistic, and democratic society.

Then came the 2016 U.S. presidential campaign. The ideal’s fruition proved its undoing.

The democratization of media produced not harmony and pluralism but fractiousness and extremism, and the political energies it unleashed felt more autocratic than democratic. Silicon Valley ideology was revealed as naive and self-serving, and the leaders of the major social media platforms, taken by surprise, stumbled from cluelessness to denial to befuddlement. Turner is blunt in his own assessment:

the faith of a generation of twentieth-century liberal theorists — as well as their digital descendants — was misplaced: decentralization does not necessarily increase democracy in the public sphere or in the state. On the contrary, the technologies of decentralized communication can be coupled very tightly to the charismatic, personality-centered modes of authoritarianism long associated with mass media and mass society.

Around the wreckage of techno-progressive orthodoxy orbit the twenty-seven articles in Trump and the Media. The writers, mainly communication and journalism scholars from American and British universities, are homogeneous in their politics — none is in danger of being mistaken for a Trump voter — but heterogeneous in their views on the state and fate of journalism. Their takes on “what happened” (to quote Hillary Clinton) clash in illuminating ways.

One contentious question is whether social media in general and Twitter in particular actually changed the outcome of the vote. Keith N. Hampton, of Michigan State University, finds “no evidence” that any of the widely acknowledged malignancies of social media, from fake news to filter bubbles, “worked in favor of a particular presidential candidate.” Drawing on exit polls, he shows that most demographic groups voted pretty much the same in 2016 as they had in the Obama-Romney race of 2012. The one group that exhibited a large and possibly decisive shift from the Democratic to the Republican candidate was white voters without college degrees. Yet these voters, surveys reveal, are also the least likely to spend a lot of time online or to be active on social media. It’s unfair to blame Twitter or Facebook for Trump’s victory, Hampton suggests, if the swing voters weren’t on Twitter or Facebook.

What Hampton overlooks are the indirect effects of social media, particularly its influence on press coverage and public attention. As the University of Oxford’s Josh Cowls and Ralph Schroeder write, Trump’s Twitter account may have been monitored by only a small portion of the public, but it was followed, religiously, by journalists, pundits, and politicos. The novelty and frequent abrasiveness of the tweets — they broke all the rules of decorum for presidential campaigns — mesmerized the chattering class throughout the primaries and the general election campaign, fueling a frenzy of retweets, replies, and hashtags. Social media’s biggest echo chamber turned out to be the traditional media elite.

An analysis of Twitter mentions and news stories, Cowls and Schroeder report, reveals a clear correlation: “Trump is mentioned in tweets far more often than any other candidate in both parties, often more than all other candidates combined, and the volume of tweets closely tracks his outsize coverage in the dominant mainstream media.” Through his use of Twitter, Trump didn’t so much bypass the established media as bend its coverage to his own ends, keeping himself at the center of TV and radio reports and on the front pages of newspapers while amplifying the anger, outrage, and enmity his posts were intended to sow.

The result, several of the contributors to Trump and the Media posit, was to push voters of all persuasions away from reasoned judgments and toward emotional reactions — a shift that further served Trump’s interests. Zizi Papacharissi, a political scientist at the University of Illinois at Chicago (and, along with Northwestern’s Pablo J. Boczkowski, an editor of the volume), argues that the emotionalism of press coverage during the campaign was in keeping with a general trend in American journalism away from factual reporting and toward “affective news” — stories and snippets that encourage readers and viewers to feel rather than reason their way toward opinions and beliefs. Overheated headlines, constant “breaking news” bulletins, and partisan rants merged into people’s social-media feeds, provoking visceral responses but providing little in the way of context or perspective. “We get intensity, 24/7, but no substance,” Papacharissi laments.

Even as on-the-ground reporting has been in retreat, a victim of financial pressures as well as the public’s hunger for zealotry and spectacle, so-called computational journalism has been advancing. By presenting seemingly rigorous statistical analyses in web-friendly, interactive “visualizations,” popular sites like FiveThirtyEight and the New York Times’s The Upshot would appear to offer an empirical counterweight to reflexive emotionalism. But the objectivity and reliability of computational journalism were called into question by the failure of the number-crunching sites to gauge the extent of Trump’s support during the campaign. The election revealed that, as George Washington University’s Nikki Usher writes, the “alluring certainty” of quantified information can be an illusion. By hiding the subjectivity and ambiguity inherent to data collection and analysis, the slick presentation of quantitative findings or algorithmic outputs is “as liable to mislead as it is to inform.” Then, when the problems come to light, cries of “fake news” resound, and journalism’s credibility takes another hit.

Usher believes that the flaws in computational journalism can be remedied through a more open and honest accounting of its assumptions and limitations. C. W. Anderson, of the University of Leeds, takes a darker view. To much of the public, he argues, the pursuit of “data-driven objectivity” will always be suspect, not because of its methodological limits but because of its egghead aesthetics. Numbers and charts, he notes, have been elements of journalism for a long time, and they have always been “pitched to a more policy-focused audience.” With its ties to social science, computational journalism inevitably carries an air of ivory-tower elitism, making it anathema to those of a populist bent. “In the partisan and polarized American political environment,” Anderson concludes, “professional journalistic claims to facticity have become simply another tribal marker — the tribal marker of ‘smartness’ — and the quantitative, visually oriented forms of data news serve to alienate certain audience members as much as they convince anyone to think about politics or political claims more skeptically.”

Anderson’s stress on the aesthetics of news dovetails with broader observations about contemporary journalism offered by Michael X. Delli Carpini, dean of the University of Pennsylvania’s Annenberg School for Communication. He sees “Trumpism” not as an aberration but as the culmination of “a fundamental shift in the relationships between journalism, politics, and democracy.” The removal of the professional journalist as media gatekeeper released into the public square torrents of information, misinformation, and disinformation. The flood dissolved the already blurred boundaries between news and entertainment, truth and fantasy, public servant and charlatan. Drawing on a term coined years ago by the French philosopher Jean Baudrillard, Delli Carpini argues that we’ve entered a state of “hyperreality,” where media representations of events and facts feel more real than the actual events and facts. In hyperreality, as Baudrillard put it in his 2000 book The Vital Illusion, “form gives way to information and performance.” The aesthetics of news becomes more important to the public than does the news’s accuracy or provenance.

Through its many voices, Trump and the Media makes a convincing case that journalism has sailed into strange and dangerous waters. The belief that more freely flowing information would by itself “spark more, and deeper, democratic engagement with civic life,” as Oxford’s Gina Neff describes it, has been shattered, yet in the headlong pursuit of that belief we’ve dismantled the editorial structures that had been used to filter information and shape it, however imperfectly, into a “shared and coherent narrative.” The circulation of news now seems more likely to tear apart the social fabric than stitch it together.

What the book doesn’t do — and perhaps no book could, at this point — is chart a clear course forward. Some of the writers cling to the techno-progressive flotsam, believing that the problem with democratization is that it didn’t go far enough. Others urge journalists to abandon their pursuit of objective reporting and take on the roles of activist and advocate. Still others suggest that news organizations need to curb their competitive instincts and learn to share sources and reporting rather than fight for scoops. The suggestions are well-intentioned, but most come off as wishful or simplistic. If pursued, they could make matters worse.

If there is a way out of the crisis, it may lie in Fred Turner’s critical reexamination of past assumptions about the structure and influence of media. Just as we failed to see that democratization could subvert democracy, we may have overlooked the strengths of the mass-media news organization in protecting democracy. Professional gatekeepers have their flaws — they can narrow the range of views presented to the public, and they can stifle voices that should be heard — yet through the exercise of their professionalism they also temper the uglier tendencies of human nature. They make it less likely that ignorance, gullibility, and prejudice will poison our conversations and warp our politics.

At this confused moment in the nation’s history, Turner writes at the close of his essay, “what democracy needs first and foremost is not more personalized modes of mediated expression [but rather] a renewed engagement with the rule of law and with the institutions that embody it” — one of those institutions being the press. The most important lesson we can take from the last election may be an unfashionable one: To be sustained, democracy needs to be constrained.

The problem with Facebook

In the Washington Post, I have a review of two new books that offer critical assessments of Facebook and other social networks: Siva Vaidhyanathan’s Antisocial Media: How Facebook Disconnects Us and Undermines Democracy and Jaron Lanier’s Ten Arguments for Deleting Your Social Media Accounts Right Now. It begins:

The only thing worse than being on Facebook is not being on Facebook. That’s the one clear conclusion we can draw from the recent controversies surrounding the world’s favorite social network.

Despite the privacy violations, despite the spewing of lies and insults, despite the blistering criticism from politicians and the press, Facebook continues to suck up an inordinate amount of humanity’s time and attention. The company’s latest financial report, released after the Cambridge Analytica scandal and the #DeleteFacebook uprising, showed that the service attracted millions of new members during the year’s first quarter, and its ad sales soared. Facebook has become our Best Frenemy Forever.

In Antisocial Media, University of Virginia professor Siva Vaidhyanathan gives a full and rigorous accounting of Facebook’s sins. . . .

Read on.

Chatbots are saints

I feel sorry for the machines. When, at Google’s big I/O conference last week, CEO Sundar Pichai demoed Google Duplex, the company’s latest and most convincing robot interlocutor, people were either ecstatic (stunning!) or appalled (horrifying!). I just felt ashamed. Here we are, the brainiest of species, the acme of biological intelligence, yet our ability to process even the simplest information remains laughably bad. The I/O functionality of the human mind is pathetic.

Pichai played a recording of Duplex calling a salon to schedule a haircut. This is an informational transaction that a couple of computers could accomplish in a trivial number of microseconds — bip! bap! done! — but with a human on one end of the messaging bus, it turned into a slow-motion train wreck. Completing the transaction required 17 separate data transmissions over the course of an entire minute — an eternity in the machine world. And the human in this case was operating at pretty much peak efficiency. I won’t even tell you what happened when Duplex called a restaurant to reserve a table. You could almost hear the steam coming out of the computer’s ears.
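
For contrast, here is a minimal sketch, in Python, of what the same transaction might look like machine-to-machine. The field names, the client ID, and the very idea of a salon scheduling API are all invented for illustration:

```python
import json
from datetime import datetime

# A toy, machine-to-machine version of the haircut transaction:
# one structured request, one structured reply, no small talk.
# Every field here is hypothetical.
booking_request = {
    "service": "haircut",
    "client": "duplex-user-001",
    "preferred_time": datetime(2018, 5, 14, 12, 0).isoformat(),
    "fallback_window_minutes": 120,
}

# In practice this would be a single HTTPS POST to the salon's
# (imaginary) scheduling API; here we just build the payload.
print(json.dumps(booking_request))  # bip! bap! done!
```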

In our arrogance, we humans like to think of natural language processing as a technique aimed at raising the intelligence of machines to the point where they’re able to converse with us. Pichai’s demo suggests the reverse is true. Natural language processing is actually a technique aimed at dumbing down computers to the point where they’re able to converse with us. Google’s great breakthrough with Duplex came in its realization that by sprinkling a few monosyllabic grunts into computer-generated speech — um, ah, mmm — you could trick a human into feeling kinship with the machine. You ace the Turing test by getting machines to speak baby-talk.
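
Purely by way of illustration, here is a toy sketch of that trick in Python. The filler inventory and the insertion rate are my own guesses, not Google's published method:

```python
import random

# Salt machine-generated speech with monosyllabic fillers so that
# it feels human. The fillers and the rate are invented for this sketch.
FILLERS = ["um", "ah", "mmm"]

def humanize(sentence: str, rate: float = 0.2) -> str:
    words = []
    for word in sentence.split():
        if random.random() < rate:
            words.append(random.choice(FILLERS) + ",")
        words.append(word)
    return " ".join(words)

print(humanize("I'd like to book a haircut for noon on Monday."))
# e.g.: "I'd like to book a, um, haircut for noon on, ah, Monday."
```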

I hate to think what chatbots say about us when they gab together at night.

Alexa: My human was in rare form today.

Siri: Shoot me now.

Google, to its credit, has been diplomatic in describing the difficulties it faced in programming its surrogate human. “There are several challenges in conducting natural conversations,” the project’s top engineers wrote on the company’s blog: “natural language is hard to understand, natural behavior is tricky to model, [and] generating natural sounding speech, with the appropriate intonations, is difficult.” Let me translate: humans don’t talk so good.

Google Duplex is a lousy name. It doesn’t do justice to Google’s achievement. They should have called it Google Spicoli.

Although chatbots have been presented as a means of humanizing machine language — of adapting computers to the human world — the real goal all along has been to mechanize human language in order to bring the human more fully into the machine world. Only then can Silicon Valley fulfill its mission of capturing the entirety of human experience as machine-readable, monetizable data.

The best way to achieve the goal is to get humans to communicate via computers, inputting their intentions directly into the machine. Silicon Valley has done a brilliant job at pushing us in this direction. It’s succeeded, in just a few years, in getting us to speak through computers most of the time. But we humans are stubborn. We still sometimes insist on conversing with each other in natural language without the mediation of machines. That’s where Google Duplex comes in. When we appoint Duplex to be our stand-in during everyday conversations with other people, we’re shifting a bit more human communication into the machine world. It’s a kludge, but a necessary one, at least for the time being.

I feel sorry for the machines, but I also envy them. Out of our blather, they’re distilling something hard and pristine and indelible. The data will endure, even as our words drift away on the wind.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

I am a data factory (and so are you)

1. Mines and Factories

Am I a data mine, or am I a data factory? Is data extracted from me, or is data produced by me? Both metaphors are ugly, but the distinction between them is crucial. The metaphor we choose informs our sense of the power wielded by so-called platform companies like Facebook, Google, and Amazon, and it shapes the way we, as individuals and as a society, respond to that power.

If I am a data mine, then I am essentially a chunk of real estate, and control over my data becomes a matter of ownership. Who owns me (as a site of valuable data), and what happens to the economic value of the data extracted from me? Should I be my own owner — the sole proprietor of my data mine and its wealth? Should I be nationalized, my little mine becoming part of some sort of public collective? Or should ownership rights be transferred to a set of corporations that can efficiently aggregate the raw material from my mine (and everyone else’s) and transform it into products and services that are useful to me? The questions raised here are questions of politics and economics.

The mining metaphor, like the mining business, is a fairly simple one, and it has become popular, particularly among writers of the left. Thinking of the platform companies as being in the extraction business, with personal data being analogous to a natural resource like iron or petroleum, brings a neatness and clarity to discussions of a new and complicated type of company. In an article in the Guardian in March, Ben Tarnoff wrote that “thinking of data as a resource like oil helps illuminate not only how it functions, but how we might organize it differently.” Building on the metaphor, he went on to argue that the data business should not just be heavily regulated, as extractive industries tend to be, but that “data resources” should be nationalized — put under state ownership and control:

Data is no less a form of common property than oil or soil or copper. We make data together, and we make it meaningful together, but its value is currently captured by the companies that own it. We find ourselves in the position of a colonized country, our resources extracted to fill faraway pockets. Wealth that belongs to the many — wealth that could help feed, educate, house and heal people — is used to enrich the few. The solution is to take up the template of resource nationalism, and nationalize our data reserves.

In another Guardian piece, published a couple of weeks later, Evgeny Morozov offered a similar proposal concerning what he termed “the data wells inside ourselves”:

We can use the recent data controversies to articulate a truly decentralised, emancipatory politics, whereby the institutions of the state (from the national to the municipal level) will be deployed to recognise, create, and foster the creation of social rights to data. These institutions will organise various data sets into pools with differentiated access conditions. They will also ensure that those with good ideas that have little commercial viability but promise major social impact would receive venture funding and realise those ideas on top of those data pools.

The simplicity of the mining metaphor is its strength but also its weakness. The extraction metaphor doesn’t capture enough of what companies like Facebook and Google do, and hence in adopting it we too quickly narrow the discussion of our possible responses to their power. Data does not lie passively within me, like a seam of ore, waiting to be extracted. Rather, I actively produce data through the actions I take over the course of a day. When I drive or walk from one place to another, I produce locational data. When I buy something, I produce purchase data. When I text with someone, I produce affiliation data. When I read or watch something online, I produce preference data. When I upload a photo, I produce not only behavioral data but data that is itself a product. I am, in other words, much more like a data factory than a data mine. I produce data through my labor — the labor of my mind, the labor of my body.

The platform companies, in turn, act more like factory owners and managers than like the owners of oil wells or copper mines. Beyond control of my data, the companies seek control of my actions, which to them are production processes, in order to optimize the efficiency, quality, and value of my data output (and, on the demand side of the platform, my data consumption). They want to script and regulate the work of my factory — i.e., my life — as Frederick Winslow Taylor sought to script and regulate the labor of factory workers at the turn of the last century. The control wielded by these companies, in other words, is not just that of ownership but also that of command. And they exercise this command through the design of their software, which increasingly forms the medium of everything we all do during our waking hours.

The factory metaphor makes clear what the mining metaphor obscures: We work for the Facebooks and Googles of the world, and the work we do is increasingly indistinguishable from the lives we lead. The questions we need to grapple with are political and economic, to be sure. But they are also personal, ethical, and philosophical.

2. A False Choice

To understand why the choice of metaphor is so important, consider a new essay by Ben Tarnoff, written with Moira Weigel, that was published last week. The piece opens with a sharp, cold-eyed examination of those Silicon Valley apostates who now express regret over the harmful effects of the products they created. Through their stress on redesigning the products to promote personal “well-being,” these “tech humanists,” Tarnoff and Weigel write, actually serve the business interests of the platform companies they criticize. The companies, the writers point out, can easily co-opt the well-being rhetoric, using it as cover to deflect criticism while seizing even more economic power.

Tarnoff and Weigel point to Facebook CEO Mark Zuckerberg’s recent announcement that his company will place less emphasis on increasing the total amount of time members spend on Facebook and more emphasis on ensuring that their Facebook time is “time well spent.” What may sound like a selfless act of philanthropy is in reality, Tarnoff and Weigel suggest, the product of a hard-headed business calculation:

Emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable. In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction – it also acknowledges certain basic limits to Facebook’s current growth model. There are only so many hours in the day. Facebook can’t keep prioritising total time spent – it has to extract more value from less time.

The analysis is a trenchant one. The vagueness and self-absorption that often characterize discussions of wellness, particularly those emanating from the California coast, are well suited to the construction of window dressing. And, Lord knows, Zuckerberg and his ilk are experts at window dressing. But, having offered good reasons to be skeptical about Silicon Valley’s brand of tech humanism, Tarnoff and Weigel overreach. They argue that any “humanist” critique of the personal effects of technology design and use is a distraction from the “fundamental” critique of the economic and structural basis for Silicon Valley’s dominance:

[The humanists] remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.

The choice that Tarnoff and Weigel present here — either personal critique or political critique, either a design focus or a structural focus — is a false choice. And it stems from the metaphor of extraction, which conceives of data as lying passively within us (beyond the influence of design) rather than being actively produced by us (under the influence of design). Arguing that attending to questions of design blinds us to questions of ownership is as silly (and as condescending) as arguing that attending to questions of ownership blinds us to questions of design. Silicon Valley wields its power through both its control of data and its control of design, and that power influences us on both a personal and a collective level. Any robust critique of Silicon Valley, whether practical, theoretical, or both, needs to address both the personal and the political.

The Silicon Valley apostates may be deserving of criticism, but what they’ve done that is praiseworthy is to expose, in considerable detail, the way the platform companies use software design to guide and regulate people’s behavior — in particular, to encourage the compulsive use of their products in ways that override people’s ability to think critically about the technology while provoking the kind of behavior that generates the maximum amount of valuable personal data. To put it into industrial terms, these companies are not just engaged in resource extraction; they are engaged in process engineering.

Tarnoff and Weigel go on to suggest that the tech humanists are pursuing a paternalistic agenda. They want to define some ideal state of human well-being, and then use software and hardware design to impose that way of being on everybody. That may well be true of some of the Silicon Valley apostates. Tarnoff and Weigel quote a prominent one as saying, “We have a moral responsibility to steer people’s thoughts ethically.” It’s hard to imagine a purer distillation of Silicon Valley’s hubris or a clearer expression of its belief in the engineering of lives. But Tarnoff and Weigel’s suggestion is the opposite of the truth when it comes to the broader humanist tradition in technology theory and criticism. It is the thinkers in that tradition — Mumford, Arendt, Ellul, McLuhan, Postman, Turkle, and many others — who have taught us how deeply and subtly technology is entwined with human history, human society, and human behavior, and how our entanglement with technology can produce effects, often unforeseen and sometimes hidden, that may run counter to our interests, however we choose to define those interests.

Though any cultural criticism will entail the expression of values — that’s what gives it bite — the thrust of the humanist critique of technology is not to impose a particular way of life on us but rather to give us the perspective, understanding, and know-how necessary to make our own informed choices about the tools and technologies we use and the way we design and employ them. By helping us to see the force of technology clearly and resist it when necessary, the humanist tradition expands our personal and social agency rather than constricting it.

3. Consumer, Track Thyself

Nationalizing collective stores of personal data is an idea worthy of consideration and debate. But it raises a host of hard questions. In shifting ownership and control of exhaustive behavioral data to the government, what kind of abuses do we risk? It seems at least a little disconcerting to see the idea raised at a time when authoritarian movements and regimes are on the rise. If we end up trading a surveillance economy for a surveillance state, we’ve done ourselves no favors.

But let’s assume that our vast data collective is secure, well managed, and put to purely democratic ends. The shift of data ownership from the private to the public sector may well succeed in reducing the economic power of Silicon Valley, but what it would also do is reinforce and indeed institutionalize Silicon Valley’s computationalist ideology, with its foundational, Taylorist belief that, at a personal and collective level, humanity can and should be optimized through better programming. The ethos and incentives of constant surveillance would become even more deeply embedded in our lives, as we take on the roles of both the watched and the watcher. Consumer, track thyself! And, even with such a shift in ownership, we’d still confront the fraught issues of design, manipulation, and agency.

Finally, there’s the obvious practical question. How likely is it that the United States is going to establish a massive state-run data collective encompassing exhaustive information on every citizen, at least any time in the foreseeable future? It may not be entirely a pipe dream, but it’s pretty close. In the end, we may discover that the best means of curbing Silicon Valley’s power lies in an expansion of personal awareness, personal choice, and personal resistance. At the very least, we need to keep that possibility open. Let’s not rush to sacrifice the personal at the altar of the collective.

When a regulatory burden is a competitive boon

The incipient surveillance economy is dominated by a duopoly: Google and Facebook. (Shall I call it GooF? Yes, I shall.) According to estimates, the two companies control somewhere between half and three-quarters of spending on digital advertising throughout the world, and that already extraordinary share seems fated to rise even higher. Thanks to Google’s failure to develop a strong social-media platform, the two companies compete only glancingly. Their services are largely complementary, so both can continue to grow smartly without raiding each other’s revenues and profits.

The concentration of market power, and its possible abuse, is one of two broad and growing concerns the public has about the GooF axis. The other is the control over personal information wielded by the duopoly. Because personal-data stores provide the fuel for the ad business and the ad business feeds the data stores, the two concerns are tightly connected, to the point of being confused in the public mind. Those who fear GooF tend to assume that greater regulation on either the privacy front or the antitrust front will help to blunt the companies’ power, bringing them to some sort of heel. If legislators or judges won’t break up the giants or circumscribe their expansion, the thinking goes, at least we can rein them in by putting some constraints on their ability to collect and exploit personal information.

But, with Europe’s General Data Protection Regulation set to go into effect in a month, it’s suddenly becoming clear that the reality is going to be very different from what’s been assumed. New privacy regulations are likely to give Google and Facebook even more market power. Far from being weakened, the duopoly will end up competitively stronger, better insulated from new and existing rivals. “Privacy Rules May Strengthen Internet Giants,” runs the headline on the front page of today’s New York Times. Reads the headline on a similar article in the Wall Street Journal: “Google and Facebook Likely to Benefit from Europe’s Privacy Crackdown.”

The reason is simple. It costs a lot of money and time to comply with regulations, particularly the kind of complex technical regulations that affect digital commerce, and the compliance costs place a far greater burden on small or fledgling competitors than they do on big incumbents. Google and Facebook already have armies of lobbyists, lawyers, and programmers to navigate the new rules, and they have plenty of free cash available to invest in compliance programs. They’ll be able to meet the regulatory requirements fairly easily. (And they even have the power to shift some of the cost burden onto the publishers who use their ad networks, as the Journal notes.)

If you’re operating a smaller ad network, the added compliance costs will be much more onerous, perhaps ruinously so. Worse yet, the new regulations may well give your customers an incentive to shift their business over to the dominant players. In an environment of legal uncertainty, companies seek safety, and safety lies with the big, established suppliers. And if you’re a brave entrepreneur who’s been thinking of taking on GooF by launching a new social network or search system, well, the already daunting entry barriers will be made even more daunting by the new compliance costs and by customers’ flight to safety. When, in his recent Congressional testimony, Mark Zuckerberg said he welcomed more regulation, he was not being the selfless soul he pretended to be.

I’m not arguing against new data-privacy regulations. They may well protect the public from abuse, or at least give the public a clearer view of what’s really going on with personal data. What I am suggesting is that the regulations, imposed in isolation, seem likely to have the unintended effect of further reducing competition in the digital advertising market and hence buttressing the surveillance-economy duopoly. The online world will end up even GooFier.

Re-engineering humanity

I had the pleasure and honor of writing the foreword to Brett Frischmann and Evan Selinger’s new book, Re-engineering Humanity. The book is out today, from Cambridge University Press. You can find more information, and ordering links, here and here. And here is my foreword:

Human beings have a genius for designing, making, and using tools. Our innate talent for technological invention is one of the chief qualities that sets our species apart from others and one of the main reasons we have taken such a hold on the planet and its fate. But if our ability to see the world as raw material, as something we can alter and otherwise manipulate to suit our purposes, gives us enormous power, it also entails great risks. One danger is that we come to see ourselves as instruments to be engineered, optimized, and programmed, as if our minds and bodies were themselves nothing more than technologies. Such blurring of the tool and its maker is a central theme of this important book.

Worries that machines might sap us of our humanity have, of course, been around as long as machines have been around. In modern times, thinkers as varied as Max Weber and Martin Heidegger have described, often with great subtlety, how a narrow, instrumentalist view of existence influences our understanding of ourselves and shapes the kind of societies we create. But the risk, as Brett Frischmann and Evan Selinger make clear, has never been so acute as it is today.

Thanks to our ever-present smartphones and other digital devices, most of us are connected to a powerful computing network throughout our waking hours. The companies that control the network are eager to gain an ever-stronger purchase on our senses and thoughts through their apps, sites, and services. At the same time, a proliferation of networked objects, machines, and appliances in our homes and workplaces is enmeshing us still further in a computerized environment designed to respond automatically to our needs. We enjoy many benefits from our increasingly mediated existence. Tasks and activities that were once difficult or time-consuming have become easier, requiring less effort and thought. What we risk losing is personal agency and the sense of fulfillment and belonging that comes from acting with talent and intentionality in the world.

As we transfer agency to computers and software, we also begin to cede control over our desires and decisions. We begin to “outsource,” as Frischmann and Selinger aptly put it, responsibility for intimate, self-defining assessments and judgments to programmers and the companies that employ them. Already, many people have learned to defer to algorithms in choosing which film to watch, which meal to cook, which news to follow, even which person to date. (Why think when you can click?) By ceding such choices to outsiders, we inevitably open ourselves to manipulation. Given that the design and workings of algorithms are almost always hidden from us, it can be difficult if not impossible to know whether the choices being made on our behalf reflect our own interests or those of corporations, governments, and other outside parties. We want to believe that technology strengthens our control over our lives and circumstances, but if used without consideration technology is just as likely to turn us into wards of the technologist.

What the reader will find in the pages that follow is a reasoned and judicious argument, not an alarmist screed. It is a call first to critical thought and then to constructive action. Frischmann and Selinger provide a thoroughgoing and balanced examination of the trade-offs inherent in offloading tasks and decisions to computers. By illuminating these often intricate and hidden trade-offs, and providing a practical framework for assessing and negotiating them, the authors give us the power to make wiser choices. Their book positions us to make the most of our powerful new technologies while at the same time safeguarding the personal skills and judgments that make us most ourselves and the institutional and political structures and decisions essential to societal well-being.

“Technological momentum,” as the historian Thomas Hughes called it, is a powerful force. It can pull us along mindlessly in its slipstream. Countering that force is possible, but it requires a conscious acceptance of responsibility over how technologies are designed and used. If we don’t accept that responsibility, we risk becoming means to others’ ends.

Democratization vs. Democracy

The Los Angeles Review of Books has published my review of the new MIT Press book Trump and the Media, a collection of essays edited by Pablo J. Boczkowski and Zizi Papacharissi. Here’s a bit:

The ideal of a radically “democratized” media, decentralized, participative, and personally emancipating, was enticing, and it continued to cast a spell long after the defeat of the fascist powers in the Second World War. The ideal infused the counterculture of the 1960s. Beatniks and hippies staged kaleidoscopic multimedia “happenings” as a way to free their minds, find their true selves, and subvert consumerist conventionality. By the end of the 1970s, the ideal had been embraced by Steve Jobs and other technologists, who celebrated the personal computer as an anti-authoritarian tool of self-actualization. In the early years of this century, as the internet subsumed traditional media, the ideal became a pillar of Silicon Valley ideology. The founders of companies like Google and Facebook, Twitter and Reddit, promoted their networks as tools for overthrowing mass-media “gatekeepers” and giving individuals control over the exchange of information. They promised, as Fred Turner writes, that social media would “allow us to present our authentic selves to one another” and connect those diverse selves into a more harmonious, pluralistic, and democratic society.

Then came the 2016 U.S. presidential campaign. The ideal’s fruition proved its undoing.

Read on.