I am a data factory (and so are you)

1. Mines and Factories

Am I a data mine, or am I a data factory? Is data extracted from me, or is data produced by me? Both metaphors are ugly, but the distinction between them is crucial. The metaphor we choose informs our sense of the power wielded by so-called platform companies like Facebook, Google, and Amazon, and it shapes the way we, as individuals and as a society, respond to that power.

If I am a data mine, then I am essentially a chunk of real estate, and control over my data becomes a matter of ownership. Who owns me (as a site of valuable data), and what happens to the economic value of the data extracted from me? Should I be my own owner — the sole proprietor of my data mine and its wealth? Should I be nationalized, my little mine becoming part of some sort of public collective? Or should ownership rights be transferred to a set of corporations that can efficiently aggregate the raw material from my mine (and everyone else’s) and transform it into products and services that are useful to me? The questions raised here are questions of politics and economics.

The mining metaphor, like the mining business, is a fairly simple one, and it has become popular, particularly among writers of the left. Thinking of the platform companies as being in the extraction business, with personal data being analogous to a natural resource like iron or petroleum, brings a neatness and clarity to discussions of a new and complicated type of company. In an article in the Guardian in March, Ben Tarnoff wrote that “thinking of data as a resource like oil helps illuminate not only how it functions, but how we might organize it differently.” Building on the metaphor, he went on to argue that the data business should not just be heavily regulated, as extractive industries tend to be, but that “data resources” should be nationalized — put under state ownership and control:

Data is no less a form of common property than oil or soil or copper. We make data together, and we make it meaningful together, but its value is currently captured by the companies that own it. We find ourselves in the position of a colonized country, our resources extracted to fill faraway pockets. Wealth that belongs to the many — wealth that could help feed, educate, house and heal people — is used to enrich the few. The solution is to take up the template of resource nationalism, and nationalize our data reserves.

In another Guardian piece, published a couple of weeks later, Evgeny Morozov offered a similar proposal concerning what he termed “the data wells inside ourselves”:

We can use the recent data controversies to articulate a truly decentralised, emancipatory politics, whereby the institutions of the state (from the national to the municipal level) will be deployed to recognise, create, and foster the creation of social rights to data. These institutions will organise various data sets into pools with differentiated access conditions. They will also ensure that those with good ideas that have little commercial viability but promise major social impact would receive venture funding and realise those ideas on top of those data pools.

The simplicity of the mining metaphor is its strength but also its weakness. The extraction metaphor doesn’t capture enough of what companies like Facebook and Google do, and hence in adopting it we too quickly narrow the discussion of our possible responses to their power. Data does not lie passively within me, like a seam of ore, waiting to be extracted. Rather, I actively produce data through the actions I take over the course of a day. When I drive or walk from one place to another, I produce locational data. When I buy something, I produce purchase data. When I text with someone, I produce affiliation data. When I read or watch something online, I produce preference data. When I upload a photo, I produce not only behavioral data but data that is itself a product. I am, in other words, much more like a data factory than a data mine. I produce data through my labor — the labor of my mind, the labor of my body.

The platform companies, in turn, act more like factory owners and managers than like the owners of oil wells or copper mines. Beyond control of my data, the companies seek control of my actions, which to them are production processes, in order to optimize the efficiency, quality, and value of my data output (and, on the demand side of the platform, my data consumption). They want to script and regulate the work of my factory — i.e., my life — as Frederick Winslow Taylor sought to script and regulate the labor of factory workers at the turn of the last century. The control wielded by these companies, in other words, is not just that of ownership but also that of command. And they exercise this command through the design of their software, which increasingly forms the medium of everything we all do during our waking hours.

The factory metaphor makes clear what the mining metaphor obscures: We work for the Facebooks and Googles of the world, and the work we do is increasingly indistinguishable from the lives we lead. The questions we need to grapple with are political and economic, to be sure. But they are also personal, ethical, and philosophical.

2. A False Choice

To understand why the choice of metaphor is so important, consider a new essay by Ben Tarnoff, written with Moira Weigel, that was published last week. The piece opens with a sharp, cold-eyed examination of those Silicon Valley apostates who now express regret over the harmful effects of the products they created. Through their stress on redesigning the products to promote personal “well-being,” these “tech humanists,” Tarnoff and Weigel write, actually serve the business interests of the platform companies they criticize. The companies, the writers point out, can easily co-opt the well-being rhetoric, using it as cover to deflect criticism while seizing even more economic power.

Tarnoff and Weigel point to Facebook CEO Mark Zuckerberg’s recent announcement that his company will place less emphasis on increasing the total amount of time members spend on Facebook and more emphasis on ensuring that their Facebook time is “time well spent.” What may sound like a selfless act of philanthropy is in reality, Tarnoff and Weigel suggest, the product of a hard-headed business calculation:

Emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable. In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction – it also acknowledges certain basic limits to Facebook’s current growth model. There are only so many hours in the day. Facebook can’t keep prioritising total time spent – it has to extract more value from less time.

The analysis is a trenchant one. The vagueness and self-absorption that often characterize discussions of wellness, particularly those emanating from the California coast, are well suited to the construction of window dressing. And, Lord knows, Zuckerberg and his ilk are experts at window dressing. But, having offered good reasons to be skeptical about Silicon Valley’s brand of tech humanism, Tarnoff and Weigel overreach. They argue that any “humanist” critique of the personal effects of technology design and use is a distraction from the “fundamental” critique of the economic and structural basis for Silicon Valley’s dominance:

[The humanists] remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.

The choice that Tarnoff and Weigel present here — either personal critique or political critique, either a design focus or a structural focus — is a false choice. And it stems from the metaphor of extraction, which conceives of data as lying passively within us (beyond the influence of design) rather than being actively produced by us (under the influence of design). Arguing that attending to questions of design blinds us to questions of ownership is as silly (and as condescending) as arguing that attending to questions of ownership blinds us to questions of design. Silicon Valley wields its power through both its control of data and its control of design, and that power influences us on both a personal and a collective level. Any robust critique of Silicon Valley, whether practical, theoretical, or both, needs to address both the personal and the political.

The Silicon Valley apostates may deserve criticism, but they have done something praiseworthy: they have exposed, in considerable detail, the way the platform companies use software design to guide and regulate people’s behavior. In particular, the companies encourage the compulsive use of their products in ways that override people’s ability to think critically about the technology while provoking the kind of behavior that generates the maximum amount of valuable personal data. To put it in industrial terms, these companies are not just engaged in resource extraction; they are engaged in process engineering.

Tarnoff and Weigel go on to suggest that the tech humanists are pursuing a paternalistic agenda. They want to define some ideal state of human well-being, and then use software and hardware design to impose that way of being on everybody. That may well be true of some of the Silicon Valley apostates. Tarnoff and Weigel quote a prominent one as saying, “We have a moral responsibility to steer people’s thoughts ethically.” It’s hard to imagine a purer distillation of Silicon Valley’s hubris or a clearer expression of its belief in the engineering of lives. But Tarnoff and Weigel’s suggestion is the opposite of the truth when it comes to the broader humanist tradition in technology theory and criticism. It is the thinkers in that tradition — Mumford, Arendt, Ellul, McLuhan, Postman, Turkle, and many others — who have taught us how deeply and subtly technology is entwined with human history, human society, and human behavior, and how our entanglement with technology can produce effects, often unforeseen and sometimes hidden, that may run counter to our interests, however we choose to define those interests.

Though any cultural criticism will entail the expression of values — that’s what gives it bite — the thrust of the humanist critique of technology is not to impose a particular way of life on us but rather to give us the perspective, understanding, and know-how necessary to make our own informed choices about the tools and technologies we use and the way we design and employ them. By helping us to see the force of technology clearly and resist it when necessary, the humanist tradition expands our personal and social agency rather than constricting it.

3. Consumer, Track Thyself

Nationalizing collective stores of personal data is an idea worthy of consideration and debate. But it raises a host of hard questions. In shifting ownership and control of exhaustive behavioral data to the government, what kind of abuses do we risk? It seems at least a little disconcerting to see the idea raised at a time when authoritarian movements and regimes are on the rise. If we end up trading a surveillance economy for a surveillance state, we’ve done ourselves no favors.

But let’s assume that our vast data collective is secure, well managed, and put to purely democratic ends. The shift of data ownership from the private to the public sector may well succeed in reducing the economic power of Silicon Valley, but it would also reinforce, and indeed institutionalize, Silicon Valley’s computationalist ideology, with its foundational, Taylorist belief that, at a personal and collective level, humanity can and should be optimized through better programming. The ethos and incentives of constant surveillance would become even more deeply embedded in our lives, as we take on the roles of both the watched and the watcher. Consumer, track thyself! And, even with such a shift in ownership, we’d still confront the fraught issues of design, manipulation, and agency.

Finally, there’s the obvious practical question. How likely is it that the United States is going to establish a massive state-run data collective encompassing exhaustive information on every citizen, at least any time in the foreseeable future? It may not be entirely a pipe dream, but it’s pretty close. In the end, we may discover that the best means of curbing Silicon Valley’s power lies in an expansion of personal awareness, personal choice, and personal resistance. At the very least, we need to keep that possibility open. Let’s not rush to sacrifice the personal at the altar of the collective.