“You can see the computer age everywhere but in the productivity statistics,” remarked MIT economist Robert Solow in a 1987 book review. The quip became famous. It crystallized what had come to be called the productivity paradox — the mysterious softness in industrial productivity despite years of big corporate investments in putatively labor-saving information technology.
I think the time has come to start talking about the robot paradox. So let me offer a new twist on Solow’s words:
You can see the robot age everywhere but in the labor statistics.
In an echo of the hype surrounding IT in the 1970s and 1980s, we’ve heard over the last decade a stream of predictions about how robots, algorithms, and other automation technologies are about to unleash an unemployment crisis. Not only will most factory jobs be handed over to automatons, but the ranks of white-collar workers will be decimated by artificial intelligence programs powered by Big Data. The end of work is nigh.
In the wake of the Great Recession, when hiring stayed stagnant for years, such predictions seemed reasonable. But recent economic statistics flat-out belie the claims. As Greg Ip, the Wall Street Journal economics columnist, wrote last week, predictions of an impending job apocalypse “would be more plausible if the evidence weren’t moving in exactly the opposite direction.” Business employment has been going up for 86 straight months, pushing the U.S. unemployment rate down to just 4.4 percent, a level many economists see as representing full employment. It’s true that a lot of workers have dropped out of the labor force, but the sustained, robust job growth makes it awfully hard to argue that advances in computer automation, which have been accelerating for a long time, are poised to create an unemployment explosion.
Even more telling is the persistently weak growth in productivity. As Ip explained: “If automation were rapidly displacing workers, the productivity of the remaining workers ought to be growing rapidly. Instead, growth in productivity — worker output per hour — has been dismal in almost every sector, including manufacturing.” You can argue that our methods of measuring productivity are imperfect, but if computers were going to obliterate workers, you should by now be seeing a strong upswing in productivity. And it’s just not there.
I’m convinced that computer automation is changing the way people work, often in profound ways, and I think it’s likely that automation is playing an important role in restraining wage growth by, among other things, deskilling certain occupations, shifting employees to more contingent positions, and reducing the bargaining power of workers. But the argument that computers are going to bring extreme unemployment in coming decades — an argument that was also popular in the 1950s, the 1960s, and again in the 1990s, it’s worth remembering — sounds increasingly dubious. It runs counter to the facts. Anyone making the argument today needs to provide a lucid and rational explanation of why, despite years of rapid advances in robotics, computer power, network connectivity, and artificial intelligence techniques, we have yet to see any sign of a broad loss of jobs in the economy.
Exactly fifty years after the hippies gathered in San Francisco, another summer of love seems set to blossom. This time it’s not the flower children who are holding hands and sharing beds. It’s the titans of Big Internet.
Just this week, at its Build conference, Microsoft gave a hug to former adversaries Apple and Alphabet. “Windows PCs heart iOS and Android devices” was one of the big themes of the event — yes, the heart symbol was on display — and Microsoft announced that Apple’s iTunes app is coming to the Windows Store. Microsoft also formed a partnership with Facebook to incorporate an ad-tracking tool into Excel. Meanwhile, Apple and Amazon were engaged in their own public display of affection. They let word leak out that Amazon’s Prime Video app would soon be available on Apple TV. The once fierce rivals appear to have “reached a truce,” reported Recode.
Thanks to their technical and marketing prowess, combined with the winner-take-all dynamics of the internet, Alphabet, Amazon, Apple, Facebook, and Microsoft have emerged as the dominant companies of the consumer net (Farhad Manjoo dubs them the “frightful five”), with a combined market cap of a zillion dollars, give or take. Each now operates something of a perpetual-motion money-printing machine powered by the dollars and data that flow in such massive quantities through the net. The companies still face threats, of course, but, even as they sow disruption in other industries, their own market positions now look pretty stable and secure. They’re the winners.
While the boundaryless nature of online business means that each of the five companies competes with each of the others on many fronts, there is also now a symbiosis among them — and that symbiosis is getting stronger. Each of the five makes its profits in different ways, with Apple focusing on hardware, Google on web ads, Facebook on social-media ads, Amazon on retailing, and Microsoft on software sales and subscriptions. Their businesses overlap, but they are also complementary. And, as is often true with complementary products and services, gains by one company often help rather than hurt the businesses of the others. Each of the five is focused on expanding consumers’ dependency on the net, and as the net pie expands so does each of the five slices. At this point, being friends rather than enemies makes sense.
When it comes to business, in other words, the net is a centralizing force, not a decentralizing one as once assumed. The frightful five together form a digital-industrial complex, a nascent oligopoly set to skim the lion’s share of the profits from the consumer web for the foreseeable future. Five big pieces, loosely joined.
On Monday, the venture capitalist Jeremy Philips wrote a column intended as a rejoinder to Manjoo’s warnings about the power of the titans. Philips argued against the idea that, as he put it, “the five leading tech behemoths have turned into dangerous monopolies that stifle innovation and harm consumers.” Their businesses, he wrote, are “all converging — therefore competing — with one another.” His timing was unfortunate, as immediately after the column appeared we got the news of the new partnerships among the companies.
Philips’s argument would have sounded compelling just a few years ago. Back then, the five’s positions were not as well-established as they are now, and their relationships were defined by their skirmishes. That’s no longer the case. Yes, the businesses of the five have converged, but it’s now becoming clear that their interests have converged as well. For Big Internet, this is the dawning of the Age of Aquarius.
I have an essay in the Boston Globe’s Ideas section that takes a hard look at the popular notion that communication networks make the world a better place.
Here’s a taste:
If our assumption that communications technology brings people together were true, we should today be seeing a planetary outbreak of peace, love, and understanding. Thanks to the Internet and cellular networks, humanity is more connected than ever. Of the world’s 7 billion people, 6 billion have access to a mobile phone. Nearly 2 billion are on Facebook, more than a billion upload and download YouTube videos, and billions more converse through messaging apps like WhatsApp and WeChat. With smartphone in hand, everyone becomes a media hub, transmitting and receiving ceaselessly.
Yet we live in a fractious time, defined not by concord but by conflict. Xenophobia is on the rise. Political and social fissures are widening. From the White House down, public discourse is characterized by vitriol and insult. We probably shouldn’t be surprised.
For years now, psychological and sociological studies have been casting doubt on the idea that communication dissolves differences. The research suggests that the opposite is true: free-flowing information makes personal and cultural differences more salient, turning people against one another instead of bringing them together. “Familiarity breeds contempt” is one of the gloomiest of proverbs. It is also, the evidence says, one of the truest.
Uber is not only a scofflaw; as Mike Isaac of the New York Times reported last week, the company has been running an elaborate program to deceive and evade cops and other local officials in cities where its car service has been banned or lacks authorization to operate. The centerpiece of the scheme is a piece of software called Greyball, which uses a variety of data, including credit-card records, to identify what Uber calls “opponents.” When an opponent hails a car using the Uber app, the app presents the opponent with a fake map, filled with “ghost cars” that don’t actually exist. The map overlays a fictional story, intended to mislead, on a representation of actual city streets. Beyond the ethical and legal questions it raises, Greyball sheds important light on the digital representations of reality that we increasingly rely on to live our lives. These representations do more than mediate reality; they manufacture reality.
Traditional cartographers knew that they were creating mere representations of the world, but their goal was to achieve representational accuracy. They strove to provide map users with an objectively true, if necessarily incomplete, rendering of reality. As the semanticist Alfred Korzybski wrote in his 1933 book Science and Sanity, “A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.” There were times when mapmakers were pulled into propaganda campaigns, made to produce distorted maps to trick people for political ends, but those episodes were exceptions to the rule. The cartographic ideal was always to produce “correct” representations of the world that people could rely on for navigational or educational purposes. The mapmaker served the interests of the map user.
The digital maps that we see on our phones are different. They are created primarily for marketing rather than cartographic purposes. The interests they ultimately serve are those of the companies that create them and incorporate them into broader products or services. While a digital map can be useful to the user, its usefulness no longer derives from its accuracy or correctness in representing territory. In a digital map, the traditional map becomes a substrate on which a new, and fictionalized, representation of the world is presented. The digital map that appears on phones and other screens is at least twice removed from reality. What it tells us is that we need to refine and extend Korzybski’s famous distinction. It is no longer enough to say that the map is not the territory. What we have to say now is this: the map is not the map.
Uber’s ghost map provides a particularly stark example of the way a digital representation of the actual world can be manipulated, surreptitiously, to create a digital representation of a fictional world. As Uber itself has admitted, Greyball has been used in many different circumstances in order “to hide the standard city app view for individual riders, enabling Uber to show that same rider a different version.” In addition to deceiving authorities, the software has been used, the company says, for such purposes as “the testing of new features by employees; marketing promotions; fraud prevention; to protect our partners from physical harm; and to deter riders using the app in violation of our terms of service.” That sounds like a pretty much unbounded portfolio of potential uses. Have you been greyballed? It’s impossible to say.
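The fork in reality that Greyball creates can be caricatured in a few lines of code. This is a hypothetical sketch loosely modeled on the public descriptions of the program; every name, coordinate, and rule here is invented for illustration, and none of it is Uber’s actual code.

```python
# Hypothetical sketch of tag-based view selection, loosely modeled on
# public descriptions of Greyball. All names and logic are invented
# for illustration; this is not Uber's actual code.

import random
from dataclasses import dataclass


@dataclass
class Rider:
    rider_id: str
    tags: set  # e.g. {"greyball"} if the rider has been flagged as an "opponent"


def nearby_cars(real_fleet):
    """The standard city view: actual car positions."""
    return real_fleet


def ghost_cars(n=8):
    """A fabricated view: plausible-looking cars that don't exist."""
    return [{"car_id": f"ghost-{i}",
             "lat": 37.77 + random.uniform(-0.01, 0.01),
             "lng": -122.41 + random.uniform(-0.01, 0.01)}
            for i in range(n)]


def map_view_for(rider, real_fleet):
    # The fork in reality: a flagged rider sees a fictional map,
    # everyone else sees the standard one.
    if "greyball" in rider.tags:
        return ghost_cars()
    return nearby_cars(real_fleet)
```

The point of the sketch is how little machinery the deception requires: one tag on the rider's record, checked at render time, silently swaps the territory's map for a fiction.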
But even Uber’s “standard city app view” presents a fictionalized picture of the world, at once useful and seductive:
The Uber map is a media production. It presents a little, animated entertainment in which you, the user, play the starring role. You are placed at the very center of things, wherever you happen to be, and you are surrounded by a pantomime of oversized automobiles poised to fulfill your desires, to respond immediately to your beckoning. It’s hard not to feel flattered by the illusion of power that the Uber map grants you. Every time you open the app, you become a miniature superhero on a city street. You send out a bat signal, and the batmobile speeds your way. By comparison, taking a bus or a subway, or just hoofing it, feels almost insulting.
In a similar way, a Google map also sets you in a fictionalized story about a place, whether you use the map for navigation or for searching. You are given a prominent position on the map, usually, again, at its very center, and around you a city personalized to your desires takes shape. Certain business establishments and landmarks are highlighted, while others are not. Certain blocks are highlighted as “areas of interest”; others are not. Sometimes the highlights are paid for, as advertising; other times they reflect Google’s assessment of you and your preferences. You’re not allowed to know precisely why your map looks the way it does. The script is written in secret.
It’s not only maps. The news and message feeds presented to you by Facebook, or Apple or Google or Twitter, are also stories about the world, fictional representations manufactured both to appeal to your desires and biases and to provide a compelling context for advertising. Mark Zuckerberg may wring his hands over “fake news,” but fake news is to the usual Facebook feed what the Greyball map is to the usual Uber map: an extreme example of the norm.
When I talk about “you,” I don’t really mean you. The “you” around which the map or the news feed or any other digitized representation of the world coalesces is itself a representation. As John Cheney-Lippold explains in his forthcoming book We Are Data, companies like Facebook and Google create digital versions of their users derived through an algorithmic analysis of the data they collect about their users. The companies rely on these necessarily fictionalized representations for both technical reasons (human beings can’t be computed; to be rendered computable, you have to be turned into a digital representation) and commercial reasons (a digital representation of a person can be bought and sold). The “you” on the Uber map or in the Facebook feed is a fake — a character in a story — but it’s a useful and a flattering fake, so you accept it as an accurate portrayal of yourself: an “I” for an I.
Greyballing is not an aberration of the virtual world. Greyballing is the essence of virtuality.
The public restroom, never a pleasant place, has in recent years become a dystopia. It presents us with a preview, in microcosm, of our automated future. Motion detectors and other sensors register our presence, read our intentions, and, on our behalf, turn on the lights, flush the toilets, open the taps, squirt out the liquid soap, and dispense towelettes for drying. There is a weird tension between the primitiveness of the bodily functions being executed in the contemporary restroom and the sophistication of the technology facilitating the execution. The pee and the poop, if I may be indelicate, seem out of place in the very place designed to accommodate them. Nowhere so much as in a public restroom does one wish one were a robot.
Yet, as Ian Bogost reminds us, in an illuminating Atlantic piece, the inconvenient truth about the automated public restroom is that nothing works worth a crap. Whatever it is that has been automated here bears no resemblance to even the most rudimentary of human skills. The automated toilet flushes prematurely, often repeatedly, while we are still seated upon it, and then, once we’ve reassumed an erect posture and want nothing more than to exit the stall, it refuses to flush at all. The automated soap dispenser either doesn’t work or spits soap on our trousers. The automated faucet either doesn’t work or sprays out such a gusher that the water bounces off the sink and soaks our shirt. The automated towel dispenser hands us a strip of ugly brown paper that would be too small to dry the hands of a hamster.
We reassure ourselves, as we leave the restroom damp and shamefaced, that the entire experience, however miserable in raw human terms, has been carefully engineered to maximize efficiency and save precious resources. Our discomfort is simply the price we have to pay for advanced technology that is “green” and “smart.” But, as Bogost also reminds us, this is an illusion. Thanks to what’s called “phantom flushing,” sensor-flush toilets end up using nearly 50 percent more water than do manual-flush toilets, according to one real-world study. The reality is probably equally perverse with sensor-controlled faucets, soap dispensers, and paper-towel dispensers, which demand that the user activate them repeatedly in order to get the required amount in the required place.
Bogost argues that the automated restroom has been designed not to save resources but rather to reduce labor costs: “When a toilet flushes incessantly, or when a faucet shuts off on its own, or when a towel dispenser discharges only six inches of paper when a hand waves under it, it reduces the need for human workers to oversee, clean, and supply the restroom.” I would bet that even here the desired benefit is illusory. Automated restrooms, with their wasteful ways and wayward sprays, seem to me to be at least as filthy as manually operated restrooms, requiring at least as much janitorial labor. And the greater complexity of the fixtures means more breakdowns and more repairs, increasing maintenance labor. In short, the automated restroom fails on pretty much every measure. Yet we accept it as good and necessary because it fits the prevailing paradigm of progress, in which technological advances are viewed as social advances.
In seeing society through its bathrooms, Bogost is working in the tradition of the great Siegfried Giedion, who devoted a hundred-page chapter of Mechanization Takes Command (1948) to the industrialization and democratization of the bathroom.
The bath and its purposes have different meanings for different ages. The manner in which a civilization integrates bathing within its life, as well as the type of bathing it prefers, yields searching insight into the inner nature of the period.
Bogost sums up the broader meaning of the automated restroom this way: “Technology’s role has begun to shift, from serving human users to pushing them out of the way so that the technologized world can service its own ends. And so, with increasing frequency, technology will exist not to serve human goals, but to facilitate its own expansion.” This is the WALL-E effect. As we become more dependent on automation, we become less likely to develop the skills and common sense required to perform even the most basic of tasks in the world, and hence we become even more dependent on automation (and on the companies orchestrating the automation) and less able to judge whether the automation is even any good. In this fashion, the “self” migrates, along with its agency, from the person to the device.
The word “community” appears, by my rough count, 98 times in Mark Zuckerberg’s latest message to the masses. In a post-fact world, truth is approached through repetition. The message that is transmitted most often is the fittest message, the message that wins. Verification becomes a matter of pattern recognition. It’s the epistemology of the meme, the sword by which Facebook lives and dies.
Today I want to focus on the most important question of all: are we building the world we all want?
It’s a good question, though I’m not sure there is any world that we all want, and if there is one, I’m not sure Mark Zuckerberg is the guy I’d appoint to define it. And yet, from his virtual pulpit, surrounded by his 86 million followers, the young Facebook CEO hesitates not a bit to speak for everyone, in the first person plural. There is no opt-out to his “we.” It’s the default setting and, in Zuckerberg’s totalizing utopian vision, the setting is hardwired, universal, and nonnegotiable.
Our greatest opportunities are now global — like spreading prosperity and freedom, promoting peace and understanding, lifting people out of poverty, and accelerating science. Our greatest challenges also need global responses — like ending terrorism, fighting climate change, and preventing pandemics. Progress now requires humanity coming together not just as cities or nations, but also as a global community. …
Facebook stands for bringing us closer together and building a global community. When we began, this idea was not controversial.
The reason the idea — that community-building on a planetary scale is practicable, necessary, and altogether good — did not seem controversial in the beginning was that Zuckerberg, like Silicon Valley in general, operated in a technological bubble, outside of politics, outside of history. Now that history has broken through the bubble and upset the algorithms, history must be put back in its place. Technological determinism must again be made synonymous with historical determinism.
In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.
Infrastructure is destiny. (The word “infrastructure” appears 24 times in Zuckerberg’s message.) Society is not a fluctuating arrangement of contending and at times noxious interests brought into a tenuous equilibrium through a difficult, ongoing process of negotiation and struggle. Society is itself a technology, a built thing that, correctly constructed, “works for all of us.” Get the specs right, and the human community will scale as a computer network scales. Global harmony becomes a technological inevitability.
Just as the internet is a network of networks, so society, in Zuckerberg’s view, is a community of communities. “Building a global community that works for everyone,” he writes, “starts with the millions of smaller communities and intimate social structures we turn to for our personal, emotional and spiritual needs.” He points to “churches” and “sports teams” as examples of “local groups” that “share important roles as social infrastructure.” They form the “sub-communities” that are then connected, as roads are connected to form a highway system, into the “global community.” He comes back to the same two examples a little later, when he writes of people “coming together around religion or sports.”
Zuckerberg’s conflation of religion and sports is odd but illuminating. In his view, the tenets of a religion matter no more than the rules of a game; what’s essential about a church and a sports team is that they both form social infrastructure that serves to “bring us together and reinforce our values.” It’s only by separating individual beliefs from community formation, and then pretending those beliefs don’t really matter, that Zuckerberg is able to sustain the fantasy that all sub-communities share a set of values — values that derive from community itself, independent of the members’ motivations in forming a group. These common values play the same role in building a global community that common standards play in building the internet: they enable seamless interconnectivity.
Zuckerberg remains oblivious to the fact that a sub-community, particularly a religious one, may be formed on a foundation of belief that is incompatible with, and in opposition to, the beliefs of the surrounding community. As the Wall Street Journal’s Ian Lovett writes today in an article on a traditionalist Catholic community that has grown around a Benedictine monastery in Oklahoma, “The 100 or so people living here are part of a burgeoning movement among traditional Christians. Feeling besieged by secular society, they are taking refuge in communities like this one, clustered around churches and monasteries, where faith forms the backbone of daily life.” Such communities are very different from sports teams. Their formative beliefs aren’t some sort of standardized Lego infrastructure that enables the expression of universal community values. The beliefs of the individuals in the community are the values of the community, and they are anything but common standards.
The problems with Zuckerberg’s self-serving fantasy about social relations become even more pronounced when we turn to “sub-communities” of creeps and miscreants who share poisonous beliefs — neo-Nazi groups, say, or racist groups or misogynistic groups or groups of murderous ideologues (or even groups of amoral entrepreneurs who seek to make a quick buck by spreading fake news stories through the web). Here, too, the beliefs of the individual members of the community form the values of the community — values that, thankfully, are anything but common standards. “The purpose of any community is to bring people together to do things we couldn’t do on our own,” Zuckerberg writes, without any recognition that those “things” could be bad things. Even though the actions of sociopathic groups, in particular their use of Facebook and other social networks not as a metaphorical infrastructure for global harmony but as a very real infrastructure for recruitment, propaganda, planning, and organization, would seem to be one of the spurs for Zuckerberg’s message, he is blind to the way they contradict that message. Nastiness, envy, chauvinism, mistrust, distrust, anger, vanity, greed, enmity, hatred: for Zuckerberg, these aren’t features of the human condition; they are bugs in the network.
Tension and conflict, then, become technical problems, amenable to technical solutions. And so, rather than questioning Facebook’s assumptions about society — might global community-building, pursued through media structures, end up encouraging polarization and tribalism? — and the role the company plays in society, Zuckerberg ends up back where he always ends up: with a batch of new hacks. There will be new algorithmic filters, new layers of artificial intelligence, new commenting and rating systems, new techniques for both encryption and surveillance. The bugs — bad actors and bad code — will be engineered out of the system. Zuckerberg’s program, as Ars Technica’s Annalee Newitz points out, is filled with contradictions, which he either won’t acknowledge or, thanks to his techno-utopian tunnel vision, can’t see. He makes a big deal, for instance, of a new initiative through which Facebook will provide management tools for organizing what he calls “very meaningful” communities — groups characterized by passionate members under the direction of a strong leader. The example Zuckerberg offers — a group dedicated to helping refugees find homes — sounds great, but it’s not hard to see how such tools, deployed in the context of Facebook’s emotionalist echo chamber, could be used to mobilize some very nasty groups, of just the sort that Facebook is hoping to purge from its network. “The best communities in the world have leaders,” Zuckerberg said in an interview promoting his so-called manifesto. So do the worst, Mark.
Toward the end of his message, Zuckerberg writes, “In recent campaigns around the world — from India and Indonesia across Europe to the United States — we’ve seen the candidate with the largest and most engaged following on Facebook usually wins.” One might think that this observation would inspire some soul-searching on Zuckerberg’s part. But he offers it as a boast. Facebook is never the problem; it is always the solution.
No one wants to break a butterfly on a wheel, even if the butterfly is a billionaire. And only a fool would look to an official communiqué from the CEO of a big company for honest, subtle thinking about complicated social issues. And yet, in Zuckerberg’s long message, there is one moment of clarity, when he states the plain truth: “Social media is a short-form medium where resonant messages get amplified many times. This rewards simplicity and discourages nuance.” The medium, he continues, often “oversimplifies important topics and pushes us toward extremes.” This insight might have led Zuckerberg to a forthright accounting of the limitations of Facebook as a communications system. He might have pointed out that while Facebook is well designed for some things — banter among friends, the sharing of photos and videos, the coordination of group actions (for better or worse), the circulation of information in emergencies, advertising — it is ill designed for other things. It’s lousy as a news medium. It’s terrible as a forum for political discourse. It’s not the place to go to get a deep, well-rounded view of society. As a community, it’s pretty sketchy. And, he might have concluded, if you expect Facebook to solve the problems of the world, you’ve taken me far too seriously.
The terms addiction and compulsion tend to be used loosely and often interchangeably. But in an article in the Wall Street Journal, science writer Sharon Begley draws a simple but illuminating distinction between the two psychological disorders: addiction is born of pleasure, while compulsion is anxiety’s child.
Behavioral addictions begin in pleasure. But compulsions, according to a growing body of scientific evidence, are born in anxiety and remain strangers to joy. They are repetitive behaviors that we engage in repeatedly to alleviate the angst brought on by the possibility of harmful consequences.
The drunk seeks to regain the sense of well-being that the last shot of bourbon provided. The compulsive hoarder seeks to alleviate the dread that something valuable has been lost. Compulsion is both “balm and curse,” writes Begley. A compulsive act briefly mitigates feelings of anxiety, but the very experience of relief reinforces the anxiety. The anxiety ends up feeling more real, more pressing — and even more in need of relief. Anxiety and compulsion become a self-reinforcing cycle.
Compulsions can be so severe as to be debilitating. But they also, and much more routinely, take milder forms. They alter our thoughts and behavior, sometimes in deep ways, without making us dysfunctional in society. In fact, by tempering our anxiety, they may serve as a kind of therapy that protects our social functionality. Since ours is, as Auden suggested, an age of anxiety, it’s no surprise that it is also an age of compulsion.
The near-universal compulsion of the present day is, as we all know and as behavioral studies prove, the incessant checking of the smartphone. As Begley notes, with a little poetic hyperbole, we all “feel compelled to check our phones before we get out of bed in the morning and constantly throughout the day, because FOMO — the fear of missing out — fills us with so much anxiety that it feels like fire ants swarming every neuron in our brain.” With its perpetually updating, tightly personalized messaging, networking, searching, and shopping apps, the smartphone creates the anxiety that it salves. It’s a machine almost perfectly designed to turn its owner into a compulsive.
Needless to say, a portable, pocket-sized product that spurs and sustains compulsive use can be a very lucrative product for any company able to tap into its hypnotic power. The smartphone is the perfect consumer good for the age of anxiety. It’s hardly an exaggeration to say that, from a commercial standpoint, the smartphone is to compulsion what the cigarette pack was to addiction.
In a recent post, I highlighted the business scholar Shoshana Zuboff’s idea that, with the arrival of the internet, capitalism has begun to take on a new form. Traditional product-based competition (sell an attractive good at a fair price) is being displaced by data-based competition (collect the richest store of information about the identity and behavior of individual consumers). In this new industrial system, which Zuboff calls surveillance capitalism, “profits derive from the unilateral surveillance and modification of human behavior.”
While surveillance capitalism taps the invasive powers of the Internet as the source of capital formation and wealth creation, it is now, as I have suggested, poised to transform commercial practice across the real world too. An analogy is the rapid spread of mass production and administration throughout the industrialized world in the early twentieth century, but with one major caveat. Mass production was interdependent with its populations who were its consumers and employees. In contrast, surveillance capitalism preys on dependent populations who are neither its consumers nor its employees and are largely ignorant of its procedures.
The concept of surveillance capitalism helps explain the dynamics of a growing part of the economy. But it doesn’t explain everything. It focuses on the supply side (what motivates companies) while largely ignoring the demand side (what motivates consumers). I’d suggest that the secret to understanding the demand side may lie in the anxiety-compulsion cycle. What motivates consumers is anxiety — not just the fear of missing out, but also the dread of becoming invisible or losing status, the worry that others might know something that you don’t know, the nervousness that a message might have been misconstrued, and so on — and this anxiety spurs the compulsive behavior that generates ever more personal data for surveillance capitalists to harvest. We divulge our secrets because we can’t help ourselves.
This powerful, compulsion-fueled business model may have emerged by accident — I’m pretty sure that Larry Page and Sergey Brin didn’t found Google with the intent of spreading social anxiety and then capitalizing on it through surveillance systems — but it is now sustained by design. Facebook doesn’t hire cognitive psychologists and maintain a behavioral research lab for nothing. Rewards now flow to the competitor that is best able to maximize consumer anxiety in a way that spurs more compulsive behavior that in turn generates more valuable consumer data that can, to complete the cycle, be deployed to further manipulate consumer psychology.
That’s a dark way of putting it, to be sure — it ignores the real benefits that consumers gain from many online services — but it does seem to explain the governing logic of what we once happily termed “the new economy.”