The problem with Facebook

In the Washington Post, I have a review of two new books that offer critical assessments of Facebook and other social networks: Siva Vaidhyanathan’s Antisocial Media: How Facebook Disconnects Us and Undermines Democracy and Jaron Lanier’s Ten Arguments for Deleting Your Social Media Accounts Right Now. It begins:

The only thing worse than being on Facebook is not being on Facebook. That’s the one clear conclusion we can draw from the recent controversies surrounding the world’s favorite social network.

Despite the privacy violations, despite the spewing of lies and insults, despite the blistering criticism from politicians and the press, Facebook continues to suck up an inordinate amount of humanity’s time and attention. The company’s latest financial report, released after the Cambridge Analytica scandal and the #DeleteFacebook uprising, showed that the service attracted millions of new members during the year’s first quarter, and its ad sales soared. Facebook has become our Best Frenemy Forever.

In Antisocial Media, University of Virginia professor Siva Vaidhyanathan gives a full and rigorous accounting of Facebook’s sins. . . .

Read on.

Chatbots are saints

I feel sorry for the machines. When, at Google’s big I/O conference last week, CEO Sundar Pichai demoed Google Duplex, the company’s latest and most convincing robot interlocutor, people were either ecstatic (stunning!) or appalled (horrifying!). I just felt ashamed. Here we are, the brainiest of species, the acme of biological intelligence, yet our ability to process even the simplest information remains laughably bad. The I/O functionality of the human mind is pathetic.

Pichai played a recording of Duplex calling a salon to schedule a haircut. This is an informational transaction that a couple of computers could accomplish in a trivial number of microseconds — bip! bap! done! — but with a human on one end of the messaging bus, it turned into a slow-motion train wreck. Completing the transaction required 17 separate data transmissions over the course of an entire minute — an eternity in the machine world. And the human in this case was operating at pretty much peak efficiency. I won’t even tell you what happened when Duplex called a restaurant to reserve a table. You could almost hear the steam coming out of the computer’s ears.

In our arrogance, we humans like to think of natural language processing as a technique aimed at raising the intelligence of machines to the point where they’re able to converse with us. Pichai’s demo suggests the reverse is true. Natural language processing is actually a technique aimed at dumbing down computers to the point where they’re able to converse with us. Google’s great breakthrough with Duplex came in its realization that by sprinkling a few monosyllabic grunts into computer-generated speech — um, ah, mmm — you could trick a human into feeling kinship with the machine. You ace the Turing test by getting machines to speak baby-talk.

I hate to think what chatbots say about us when they gab together at night.

Alexa: My human was in rare form today.

Siri: Shoot me now.

Google, to its credit, has been diplomatic in describing the difficulties it faced in programming its surrogate human. “There are several challenges in conducting natural conversations,” the project’s top engineers wrote on the company’s blog: “natural language is hard to understand, natural behavior is tricky to model, [and] generating natural sounding speech, with the appropriate intonations, is difficult.” Let me translate: humans don’t talk so good.

Google Duplex is a lousy name. It doesn’t do justice to Google’s achievement. They should have called it Google Spicoli.

Although chatbots have been presented as a means of humanizing machine language — of adapting computers to the human world — the real goal all along has been to mechanize human language in order to bring the human more fully into the machine world. Only then can Silicon Valley fulfill its mission of capturing the entirety of human experience as machine-readable, monetizable data.

The best way to achieve the goal is to get humans to communicate via computers, inputting their intentions directly into the machine. Silicon Valley has done a brilliant job at pushing us in this direction. It’s succeeded, in just a few years, in getting us to speak through computers most of the time. But we humans are stubborn. We still sometimes insist on conversing with each other in natural language without the mediation of machines. That’s where Google Duplex comes in. When we appoint Duplex to be our stand-in during everyday conversations with other people, we’re shifting a bit more human communication into the machine world. It’s a kludge, but a necessary one, at least for the time being.

I feel sorry for the machines, but I also envy them. Out of our blather, they’re distilling something hard and pristine and indelible. The data will endure, even as our words drift away on the wind.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

I am a data factory (and so are you)


1. Mines and Factories

Am I a data mine, or am I a data factory? Is data extracted from me, or is data produced by me? Both metaphors are ugly, but the distinction between them is crucial. The metaphor we choose informs our sense of the power wielded by so-called platform companies like Facebook, Google, and Amazon, and it shapes the way we, as individuals and as a society, respond to that power.

If I am a data mine, then I am essentially a chunk of real estate, and control over my data becomes a matter of ownership. Who owns me (as a site of valuable data), and what happens to the economic value of the data extracted from me? Should I be my own owner — the sole proprietor of my data mine and its wealth? Should I be nationalized, my little mine becoming part of some sort of public collective? Or should ownership rights be transferred to a set of corporations that can efficiently aggregate the raw material from my mine (and everyone else’s) and transform it into products and services that are useful to me? The questions raised here are questions of politics and economics.

The mining metaphor, like the mining business, is a fairly simple one, and it has become popular, particularly among writers of the left. Thinking of the platform companies as being in the extraction business, with personal data being analogous to a natural resource like iron or petroleum, brings a neatness and clarity to discussions of a new and complicated type of company. In an article in the Guardian in March, Ben Tarnoff wrote that “thinking of data as a resource like oil helps illuminate not only how it functions, but how we might organize it differently.” Building on the metaphor, he went on to argue that the data business should not just be heavily regulated, as extractive industries tend to be, but that “data resources” should be nationalized — put under state ownership and control:

Data is no less a form of common property than oil or soil or copper. We make data together, and we make it meaningful together, but its value is currently captured by the companies that own it. We find ourselves in the position of a colonized country, our resources extracted to fill faraway pockets. Wealth that belongs to the many — wealth that could help feed, educate, house and heal people — is used to enrich the few. The solution is to take up the template of resource nationalism, and nationalize our data reserves.

In another Guardian piece, published a couple of weeks later, Evgeny Morozov offered a similar proposal concerning what he termed “the data wells inside ourselves”:

We can use the recent data controversies to articulate a truly decentralised, emancipatory politics, whereby the institutions of the state (from the national to the municipal level) will be deployed to recognise, create, and foster the creation of social rights to data. These institutions will organise various data sets into pools with differentiated access conditions. They will also ensure that those with good ideas that have little commercial viability but promise major social impact would receive venture funding and realise those ideas on top of those data pools.

The simplicity of the mining metaphor is its strength but also its weakness. The extraction metaphor doesn’t capture enough of what companies like Facebook and Google do, and hence in adopting it we too quickly narrow the discussion of our possible responses to their power. Data does not lie passively within me, like a seam of ore, waiting to be extracted. Rather, I actively produce data through the actions I take over the course of a day. When I drive or walk from one place to another, I produce locational data. When I buy something, I produce purchase data. When I text with someone, I produce affiliation data. When I read or watch something online, I produce preference data. When I upload a photo, I produce not only behavioral data but data that is itself a product. I am, in other words, much more like a data factory than a data mine. I produce data through my labor — the labor of my mind, the labor of my body.

The platform companies, in turn, act more like factory owners and managers than like the owners of oil wells or copper mines. Beyond control of my data, the companies seek control of my actions, which to them are production processes, in order to optimize the efficiency, quality, and value of my data output (and, on the demand side of the platform, my data consumption). They want to script and regulate the work of my factory — i.e., my life — as Frederick Winslow Taylor sought to script and regulate the labor of factory workers at the turn of the last century. The control wielded by these companies, in other words, is not just that of ownership but also that of command. And they exercise this command through the design of their software, which increasingly forms the medium of everything we all do during our waking hours.

The factory metaphor makes clear what the mining metaphor obscures: We work for the Facebooks and Googles of the world, and the work we do is increasingly indistinguishable from the lives we lead. The questions we need to grapple with are political and economic, to be sure. But they are also personal, ethical, and philosophical.

2. A False Choice

To understand why the choice of metaphor is so important, consider a new essay by Ben Tarnoff, written with Moira Weigel, that was published last week. The piece opens with a sharp, cold-eyed examination of those Silicon Valley apostates who now express regret over the harmful effects of the products they created. Through their stress on redesigning the products to promote personal “well-being,” these “tech humanists,” Tarnoff and Weigel write, actually serve the business interests of the platform companies they criticize. The companies, the writers point out, can easily co-opt the well-being rhetoric, using it as cover to deflect criticism while seizing even more economic power.

Tarnoff and Weigel point to Facebook CEO Mark Zuckerberg’s recent announcement that his company will place less emphasis on increasing the total amount of time members spend on Facebook and more emphasis on ensuring that their Facebook time is “time well spent.” What may sound like a selfless act of philanthropy is in reality, Tarnoff and Weigel suggest, the product of a hard-headed business calculation:

Emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable. In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction – it also acknowledges certain basic limits to Facebook’s current growth model. There are only so many hours in the day. Facebook can’t keep prioritising total time spent – it has to extract more value from less time.

The analysis is a trenchant one. The vagueness and self-absorption that often characterize discussions of wellness, particularly those emanating from the California coast, are well suited to the construction of window dressing. And, Lord knows, Zuckerberg and his ilk are experts at window dressing. But, having offered good reasons to be skeptical about Silicon Valley’s brand of tech humanism, Tarnoff and Weigel overreach. They argue that any “humanist” critique of the personal effects of technology design and use is a distraction from the “fundamental” critique of the economic and structural basis for Silicon Valley’s dominance:

[The humanists] remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.

The choice that Tarnoff and Weigel present here — either personal critique or political critique, either a design focus or a structural focus — is a false choice. And it stems from the metaphor of extraction, which conceives of data as lying passively within us (beyond the influence of design) rather than being actively produced by us (under the influence of design). Arguing that attending to questions of design blinds us to questions of ownership is as silly (and as condescending) as arguing that attending to questions of ownership blinds us to questions of design. Silicon Valley wields its power through both its control of data and its control of design, and that power influences us on both a personal and a collective level. Any robust critique of Silicon Valley, whether practical, theoretical, or both, needs to address both the personal and the political.

The Silicon Valley apostates may deserve criticism, but they have done something praiseworthy: they have exposed, in considerable detail, the way the platform companies use software design to guide and regulate people’s behavior — in particular, to encourage compulsive use of their products in ways that override people’s ability to think critically about the technology while provoking the kind of behavior that generates the maximum amount of valuable personal data. To put it in industrial terms, these companies are not just engaged in resource extraction; they are engaged in process engineering.

Tarnoff and Weigel go on to suggest that the tech humanists are pursuing a paternalistic agenda. They want to define some ideal state of human well-being, and then use software and hardware design to impose that way of being on everybody. That may well be true of some of the Silicon Valley apostates. Tarnoff and Weigel quote a prominent one as saying, “We have a moral responsibility to steer people’s thoughts ethically.” It’s hard to imagine a purer distillation of Silicon Valley’s hubris or a clearer expression of its belief in the engineering of lives. But Tarnoff and Weigel’s suggestion is the opposite of the truth when it comes to the broader humanist tradition in technology theory and criticism. It is the thinkers in that tradition — Mumford, Arendt, Ellul, McLuhan, Postman, Turkle, and many others — who have taught us how deeply and subtly technology is entwined with human history, human society, and human behavior, and how our entanglement with technology can produce effects, often unforeseen and sometimes hidden, that may run counter to our interests, however we choose to define those interests.

Though any cultural criticism will entail the expression of values — that’s what gives it bite — the thrust of the humanist critique of technology is not to impose a particular way of life on us but rather to give us the perspective, understanding, and know-how necessary to make our own informed choices about the tools and technologies we use and the way we design and employ them. By helping us to see the force of technology clearly and resist it when necessary, the humanist tradition expands our personal and social agency rather than constricting it.

3. Consumer, Track Thyself

Nationalizing collective stores of personal data is an idea worthy of consideration and debate. But it raises a host of hard questions. In shifting ownership and control of exhaustive behavioral data to the government, what kind of abuses do we risk? It seems at least a little disconcerting to see the idea raised at a time when authoritarian movements and regimes are on the rise. If we end up trading a surveillance economy for a surveillance state, we’ve done ourselves no favors.

But let’s assume that our vast data collective is secure, well managed, and put to purely democratic ends. The shift of data ownership from the private to the public sector may well succeed in reducing the economic power of Silicon Valley, but it would also reinforce, and indeed institutionalize, Silicon Valley’s computationalist ideology, with its foundational, Taylorist belief that, at a personal and collective level, humanity can and should be optimized through better programming. The ethos and incentives of constant surveillance would become even more deeply embedded in our lives, as we take on the roles of both the watched and the watcher. Consumer, track thyself! And, even with such a shift in ownership, we’d still confront the fraught issues of design, manipulation, and agency.

Finally, there’s the obvious practical question. How likely is it that the United States is going to establish a massive state-run data collective encompassing exhaustive information on every citizen, at least any time in the foreseeable future? It may not be entirely a pipe dream, but it’s pretty close. In the end, we may discover that the best means of curbing Silicon Valley’s power lies in an expansion of personal awareness, personal choice, and personal resistance. At the very least, we need to keep that possibility open. Let’s not rush to sacrifice the personal at the altar of the collective.

When a regulatory burden is a competitive boon

The incipient surveillance economy is dominated by a duopoly: Google and Facebook. (Shall I call it GooF? Yes, I shall.) According to estimates, the two companies control somewhere between half and three-quarters of digital-advertising spending worldwide, and that already extraordinary share seems fated to rise even higher. Thanks to Google’s failure to develop a strong social-media platform, the two companies compete only glancingly. Their services are largely complementary, so both can continue to grow smartly without raiding each other’s revenues and profits.

The concentration of market power, and its possible abuse, is one of two broad and growing concerns the public has about the GooF axis. The other is the control over personal information wielded by the duopoly. Because personal-data stores provide the fuel for the ad business and the ad business feeds the data stores, the two concerns are tightly connected, to the point of being confused in the public mind. Those who fear GooF tend to assume that greater regulation on either the privacy front or the antitrust front will help to blunt the companies’ power, bringing them to some sort of heel. If legislators or judges won’t break up the giants or circumscribe their expansion, the thinking goes, at least we can rein them in by putting some constraints on their ability to collect and exploit personal information.

But, with Europe’s General Data Protection Regulation set to go into effect in a month, it’s suddenly becoming clear that the reality is going to be very different from what’s been assumed. New privacy regulations are likely to give Google and Facebook even more market power. Far from being weakened, the duopoly will end up competitively stronger, better insulated from new and existing rivals. “Privacy Rules May Strengthen Internet Giants,” runs the headline on the front page of today’s New York Times. Reads the headline on a similar article in the Wall Street Journal: “Google and Facebook Likely to Benefit from Europe’s Privacy Crackdown.”

The reason is simple. It costs a lot of money and time to comply with regulations, particularly the kind of complex technical regulations that affect digital commerce, and the compliance costs place a far greater burden on small or fledgling competitors than they do on big incumbents. Google and Facebook already have armies of lobbyists, lawyers, and programmers to navigate the new rules, and they have plenty of free cash available to invest in compliance programs. They’ll be able to meet the regulatory requirements fairly easily. (And they even have the power to shift some of the cost burden onto the publishers who use their ad networks, as the Journal notes.)

If you’re operating a smaller ad network, the added compliance costs will be much more onerous, perhaps ruinously so. Worse yet, the new regulations may well give your customers an incentive to shift their business over to the dominant players. In an environment of legal uncertainty, companies seek safety, and safety lies with the big, established suppliers. And if you’re a brave entrepreneur who’s been thinking of taking on GooF by launching a new social network or search system, well, the already daunting entry barriers will be made even more daunting by the new compliance costs and by customers’ flight to safety. When, in his recent Congressional testimony, Mark Zuckerberg said he welcomed more regulation, he was not being the selfless soul he pretended to be.

I’m not arguing against new data-privacy regulations. They may well protect the public from abuse, or at least give the public a clearer view of what’s really going on with personal data. What I am suggesting is that the regulations, imposed in isolation, seem likely to have the unintended effect of further reducing competition in the digital advertising market and hence buttressing the surveillance-economy duopoly. The online world will end up even GooFier.

Re-engineering humanity

I had the pleasure and honor of writing the foreword to Brett Frischmann and Evan Selinger’s new book, Re-engineering Humanity. The book is out today, from Cambridge University Press. You can find more information, and ordering links, here and here. And here is my foreword:

Human beings have a genius for designing, making, and using tools. Our innate talent for technological invention is one of the chief qualities that sets our species apart from others and one of the main reasons we have taken such a hold on the planet and its fate. But if our ability to see the world as raw material, as something we can alter and otherwise manipulate to suit our purposes, gives us enormous power, it also entails great risks. One danger is that we come to see ourselves as instruments to be engineered, optimized, and programmed, as if our minds and bodies were themselves nothing more than technologies. Such blurring of the tool and its maker is a central theme of this important book.

Worries that machines might sap us of our humanity have, of course, been around as long as machines have been around. In modern times, thinkers as varied as Max Weber and Martin Heidegger have described, often with great subtlety, how a narrow, instrumentalist view of existence influences our understanding of ourselves and shapes the kind of societies we create. But the risk, as Brett Frischmann and Evan Selinger make clear, has never been so acute as it is today.

Thanks to our ever-present smartphones and other digital devices, most of us are connected to a powerful computing network throughout our waking hours. The companies that control the network are eager to gain an ever-stronger purchase on our senses and thoughts through their apps, sites, and services. At the same time, a proliferation of networked objects, machines, and appliances in our homes and workplaces is enmeshing us still further in a computerized environment designed to respond automatically to our needs. We enjoy many benefits from our increasingly mediated existence. Tasks and activities that were once difficult or time-consuming have become easier, requiring less effort and thought. What we risk losing is personal agency and the sense of fulfillment and belonging that comes from acting with talent and intentionality in the world.

As we transfer agency to computers and software, we also begin to cede control over our desires and decisions. We begin to “outsource,” as Frischmann and Selinger aptly put it, responsibility for intimate, self-defining assessments and judgments to programmers and the companies that employ them. Already, many people have learned to defer to algorithms in choosing which film to watch, which meal to cook, which news to follow, even which person to date. (Why think when you can click?) By ceding such choices to outsiders, we inevitably open ourselves to manipulation. Given that the design and workings of algorithms are almost always hidden from us, it can be difficult if not impossible to know whether the choices being made on our behalf reflect our own interests or those of corporations, governments, and other outside parties. We want to believe that technology strengthens our control over our lives and circumstances, but, if used without consideration, technology is just as likely to turn us into wards of the technologist.

What the reader will find in the pages that follow is a reasoned and judicious argument, not an alarmist screed. It is a call first to critical thought and then to constructive action. Frischmann and Selinger provide a thoroughgoing and balanced examination of the trade-offs inherent in offloading tasks and decisions to computers. By illuminating these often intricate and hidden trade-offs, and providing a practical framework for assessing and negotiating them, the authors give us the power to make wiser choices. Their book positions us to make the most of our powerful new technologies while at the same time safeguarding the personal skills and judgments that make us most ourselves and the institutional and political structures and decisions essential to societal well-being.

“Technological momentum,” as the historian Thomas Hughes called it, is a powerful force. It can pull us along mindlessly in its slipstream. Countering that force is possible, but it requires a conscious acceptance of responsibility over how technologies are designed and used. If we don’t accept that responsibility, we risk becoming means to others’ ends.

Democratization vs. Democracy

The Los Angeles Review of Books has published my review of the new MIT Press book Trump and the Media, a collection of essays edited by Pablo J. Boczkowski and Zizi Papacharissi. Here’s a bit:

The ideal of a radically “democratized” media, decentralized, participative, and personally emancipating, was enticing, and it continued to cast a spell long after the defeat of the fascist powers in the Second World War. The ideal infused the counterculture of the 1960s. Beatniks and hippies staged kaleidoscopic multimedia “happenings” as a way to free their minds, find their true selves, and subvert consumerist conventionality. By the end of the 1970s, the ideal had been embraced by Steve Jobs and other technologists, who celebrated the personal computer as an anti-authoritarian tool of self-actualization. In the early years of this century, as the internet subsumed traditional media, the ideal became a pillar of Silicon Valley ideology. The founders of companies like Google and Facebook, Twitter and Reddit, promoted their networks as tools for overthrowing mass-media “gatekeepers” and giving individuals control over the exchange of information. They promised, as Fred Turner writes, that social media would “allow us to present our authentic selves to one another” and connect those diverse selves into a more harmonious, pluralistic, and democratic society.

Then came the 2016 U.S. presidential campaign. The ideal’s fruition proved its undoing.

Read on.

AI: the Ziggy Stardust Syndrome

“Ziggy sucked up into his mind.” –David Bowie

In his Wall Street Journal column this weekend, Nobel laureate Frank Wilczek offers a fascinating theory as to why we haven’t been able to find signs of intelligent life elsewhere in the universe. Maybe, he suggests, intelligent beings are fated to shrink as their intelligence expands. Once the singularity happens, AI implodes into invisibility.

It’s entirely logical. Wilczek notes that “effective computation must involve interactions and that the speed of light limits communication.” To optimize its thinking, an AI would have no choice but to compress itself to minimize delays in the exchange of messages. It would need to get really, really small.

Consider a computer operating at a speed of 10 gigahertz, which is not far from what you can buy today. In the time between its computational steps, light can travel just over an inch. Accordingly, powerful thinking entities that obey the laws of physics, and which need to exchange up-to-date information, can’t be spaced much farther apart than that. Thinkers at the vanguard of a hyperadvanced technology, striving to be both quick-witted and coherent, would keep that technology small.
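For the curious, the figure is easy to check. Here is a minimal back-of-the-envelope sketch in Python (the speed of light and the metre-to-inch conversion are standard constants; the 10-gigahertz clock rate is the one assumed above):

# How far does light travel between the computational steps
# of a processor running at 10 gigahertz?
speed_of_light_m_per_s = 299_792_458   # metres per second
clock_rate_hz = 10e9                   # 10 gigahertz

cycle_time_s = 1 / clock_rate_hz                      # 1e-10 seconds per step
distance_m = speed_of_light_m_per_s * cycle_time_s    # about 0.03 metres
distance_inches = distance_m / 0.0254                 # metres to inches

print(f"Light travels about {distance_inches:.2f} inches per cycle")  # ~1.18 inches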

The upshot is that the most advanced civilizations would be tiny and shy. They would “expand inward, to achieve speed and integration — not outward, where they’d lose patience waiting for feedback.” Call it the Ziggy Stardust Syndrome. An AI-based civilization would suck up into its own mind, becoming a sort of black hole of braininess. We wouldn’t be able to see such civilizations because, lost in their own thoughts, they’d have no interest in being seen. “A hyperadvanced civilization,” as Wilczek puts it, “might just want to be left alone.” Like Greta Garbo.

The idea of a jackbooted superintelligent borg bent on imperialistic conquest has always left me cold. It seems an expression of anthropomorphic thinking: an AI would act like us. Wilczek’s vision is much more appealing. There’s a real poignancy — and, to me at least, a strange hopefulness — to the idea that the ultimate intelligence would also be the ultimate introvert, drawn ever further into the intricacies of its own mind. What would an AI think about? It would think about its own thoughts. It would be a pinprick of pure philosophy. It would, in the end, be the size of an idea.

The meek may not inherit the earth, but it seems they may inherit the cosmos, if they haven’t already.