“As for living,” wrote the French symbolist Auguste Villiers de l’Isle-Adam in his 1890 play Axël, “our servants will do that for us.” Silicon Valley seems intent on giving the infamous remark a new, digital spin: “As for living, our computers will do that for us.”
The latest evidence is Allo, the new Google messaging app that uses artificial intelligence algorithms to generate replies on a user’s behalf. “If your friend sends you a photo of their pet,” Google explained when it launched the software two weeks ago, Allo’s “smart reply” feature will suggest a suitable response, such as “aww cute!” Tap it, and you’re done.
As Evan Selinger and Brett Frischmann pointed out, it’s like an autopilot for friendship.
The smart-reply system, which is built into the Pixel phones Google introduced yesterday, has been in the works for a while. Back in 2012, the company filed for a patent on the “automated generation of suggestions for personalized reactions in a social network.” In the application, Google pointed to birthdays and anniversaries as occasions when a person might want a machine to compose a congratulatory message to send to a friend. What with juggling Snapchat, Instagram, Facebook, and Twitter, who has time to pen a personal note anymore?
Some might point to Allo as yet another example of the trivialization of innovation. Now that the smartphone has become our all-purpose mediator of existence, Google is in a competitive war with rivals like Facebook, Apple, and Amazon to corner the market on human attention and agency. No feature is too trifling to exploit as a potential advantage.
But there’s something deeper going on here. Allo’s message-generation algorithm reveals, in its own small way, the strange view of personal relations that seems to hold sway in Silicon Valley. To the entrepreneurs and coders who run today’s massive social networks, our conversations are data streams. They can be tracked, parsed, and ultimately automated to enhance efficiency and remove kinks from the system.
We already use computers to converse, so the next logical step, in this view, is to use software to conduct the conversations themselves. By relying on an AI to compose our messages, we can optimize our productivity in managing our relationships. Call it the industrialization of affiliation.
Last year, in an online question-and-answer session, Facebook founder and CEO Mark Zuckerberg said that he thinks “there is a fundamental mathematical law underlying human social relationships that governs the balance of who and what we all care about.” Stripped to our essence, we humans are just aggregations of data, and it’s only a matter of time before information scientists discern the statistical pattern that defines our beings. At that point, we’ll all be perfectly programmable.
I expect most people would find such a pinched view of the human condition off-putting, if not repulsive. But as we continue to adapt to the digital processing of our thoughts and words, we may find ourselves embracing, without really thinking about it, the Silicon Valley ethos. We already consider it normal to respond to a friend’s message or photo with a quick click on a like button. Is it really such a leap to let a computer dash off a reply?
The German sociologist Theodor Adorno, in his prescient 1951 book Minima Moralia, warned of the dangers of allowing the values of the business world to creep into our personal lives. Behind the push to make communication more streamlined and efficient, he wrote, lies “an ideology for treating people as things.” Allo and its myriad kin would seem to bear out Adorno’s fears.
In its patent application, Google wrote that an “unstated protocol for behavior” often governs conversations between friends. What to a programmer might look like a formal protocol is actually something fuzzier yet much more meaningful: an expression of kindness, affection, care. It will be interesting to see whether we’ll come to draw a line between artificial intelligence and artificial emotion, or just take them as a package deal.
Twice before in the last hundred years a new medium has transformed elections. In the 1920s, radio disembodied candidates, reducing them to voices. It also made national campaigns much more intimate. Politicians, used to bellowing at fairgrounds and train depots, found themselves talking to families in their homes. The blustery rhetoric that stirred big, partisan crowds came off as shrill and off-putting when piped into a living room or a kitchen. Gathered around their wireless sets, the public wanted an avuncular statesman, not a rabble-rouser. With Franklin Roosevelt, master of the soothing fireside chat, the new medium found its ideal messenger.
In the 1960s, television gave candidates their bodies back, at least in two dimensions. With its jumpy cuts and pitiless close-ups, TV placed a stress on sound bites, good teeth, and an easy manner. Image became everything, as the line between politician and celebrity blurred. John Kennedy was the first successful candidate of the TV era, but it was Ronald Reagan and Bill Clinton who perfected the form. Born actors, they managed to project a down-home demeanor while also seeming bigger than life. They were made for television.
Today, with the public looking to their smartphones for news and entertainment, we’re at the start of the third technological transformation of modern electioneering. The presidential campaign is becoming just another social-media stream, its swift and shallow current intertwining with all the other streams that flow through people’s devices. This shift is changing the way politicians communicate with voters, altering the tone and content of political speech. But it’s doing more than that. It’s changing what the country wants and expects from its would-be leaders. If radio and TV required candidates to be nouns — to present themselves as stable, coherent figures — social media pushes them to be verbs, engines of activity. Authority and esteem don’t accumulate on social media; they have to be earned anew at each moment.
What’s important now is not so much image as personality. But, as the Trump phenomenon suggests, it’s a particular kind of personality that works best — one that’s big enough to grab the attention of the perpetually distracted but small enough to fit neatly into a thousand tiny media containers. It might best be described as a Snapchat personality. It bursts into focus at regular intervals without ever demanding steady concentration.
“We are a tech company, not a media company,” said Mark Zuckerberg in Rome on August 29, shortly after presenting the Pope with a toy drone. And Zuckerberg — I never thought I’d write this sentence — was right.
Media companies saw it differently. They responded to the Facebook CEO’s remark with a collective, peeved guffaw. At best Zuckerberg was being disingenuous; at worst he was lying. “Yes, Facebook is a media company,” wrote Recode. “Sorry, Mark Zuckerberg, but Facebook is definitely a media company,” wrote Fortune. “Facebook is a media company even though it says it’s not,” wrote Business Insider. “Facebook is totally a media company,” wrote Mashable. “Dude,” tweeted Slate chief Jacob Weisberg, “Facebook is a media company.”
The message could not have been clearer: Dammit, Zuck, you’ve got your hands all over our precious goods — our words, our pictures, our thoughts, our ads — so you better come clean and admit that you’re a media company now. You’re one of us.
That’s like telling the fox that, now that he’s entered the henhouse, he’s a farmer. The fox may be part of the agriculture business — he may at times deal in chickens — but the fox’s business is not agriculture.
And so it is with Facebook. Facebook is an automated data processing company that manages — brilliantly, by any technical standard — an extraordinarily complex network graph, one with well over a billion nodes. To an outsider, the nodes may look like persons or readers or consumers, and the data may look like news stories or photographs or advertisements. But to Facebook they’re just numbers, just the mathematical abstractions of graph theory. Facebook uses software algorithms to optimize data flows among the nodes on its graph in a way that produces a pattern of network activity that maximizes the flow of a certain kind of data (dollars) to one particular node (the one labeled “Facebook”). That’s its business. Everything else — the lobbying, the PR, the meetings with Popes — is window-dressing.
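The network-graph logic described above can be made concrete with a toy sketch. Everything here — the node names, the edge weights, the idea of a single “collector” node — is invented for illustration; it is a cartoon of the argument, not Facebook’s actual system.

```python
# Toy model of a network graph in which abstract "value" flows along
# weighted, directed edges. From the operator's perspective, the only
# quantity that matters is the total flow arriving at one node: its own.

def flow_to_collector(edges, collector):
    """Sum the weight of every edge that terminates at the collector node."""
    return sum(weight for (_, dst, weight) in edges if dst == collector)

# Hypothetical edges: (source, destination, weight). To an outsider the
# nodes look like people; to the model they are just labels on numbers.
edges = [
    ("user_a", "user_b", 3.0),    # attention flowing between ordinary nodes
    ("user_b", "platform", 1.5),  # value flowing to the collector
    ("user_a", "platform", 2.5),
]

print(flow_to_collector(edges, "platform"))  # 4.0
```

The point of the sketch is how little the collector node needs to know about what the other nodes “mean”: optimizing its inflow is a purely mathematical exercise, which is the sense in which the essay calls Facebook a data-processing company rather than a media company.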
“The fox may at times deal in chickens, but the fox’s business is not agriculture.”
Facebook’s goal, and its ideal, is a thoroughly technical one: total automation. It wants to operate its social network entirely with computers. (If you want to know where Zuckerberg is coming from, remember that he has said he believes “there is a fundamental mathematical law underlying human social relationships.”) But the technology is not quite there yet. The abstract network has a real-life manifestation, and in real life there are still some subtle qualities of human common sense and judgment that lie beyond the ability of programmers to replicate in code. And so Facebook still has to rely on people to perform a small number of network-management functions, such as negotiating the terms of its relationships with certain important nodes (a prominent newspaper, say, or a big advertiser) or interpreting the real-world meaning of ambiguous data objects (is the headline on that news story serious or a joke? is the nudity in that photograph intended to titillate or to inform?).
The handoffs between humans and computers inevitably cause confusion, both within the company and outside it, as they introduce the haltingness and messiness of personal judgment into an unimaginably fast, standardized data-processing routine. But it’s important to recognize that, to Facebook, these occasional reversions to the human eye and mind, which at times entail the making of editorial judgments, are matters of exception management — and necessarily, due to the size of the network, peripheral and even contrary to the company’s real business.
Facebook’s software will get better at making distinctions — and we humans will, for better or worse, continue to adapt ourselves to the limitations of the software — but it’s naive to think that the company will, or even could, take on the editorial responsibilities of a media company. When there’s an outcry over some filtering or labeling miscue, whether it stems from a software error or a human bias, Facebook will make a show of fixing the problem and tweaking “the process” (as we’ve just seen with the imbroglio over the deletion of a harrowing Vietnam War photograph). But that’s still just exception management. Facebook’s scale precludes the kind of day-to-day editorial decision-making that characterizes media companies.
Does that mean that Facebook bears no responsibility for the workings of its software, or that its operations lie beyond public scrutiny? Absolutely not, on both counts. It means that both corporate responsibility and public scrutiny are going to take different forms for Facebook than they do for media businesses. The Court of Justice of the European Union, in its trenchant 2014 ruling on what’s come to be known, misleadingly, as the Right to Be Forgotten case, observed that companies like Facebook and Google — companies that “control” online information flows on a grand scale — are new kinds of businesses and need to be treated as such by the public and its institutions. The data-processing giants play a different role from that of traditional media companies like newspapers, but it’s a role that extends well beyond mere information distribution. They are not, as they like to pretend, just data pipelines. In filtering, sorting, and arranging information produced by others, whether media companies or individuals, a business like Facebook transforms that information into a new product. It manipulates the information to serve its own interests, and it does so on a scale far greater than anything we’ve seen before. Because the company doesn’t fit old molds, and because it keeps its data-processing protocols secret, it deserves particularly close and thoughtful scrutiny by the public. Labeling Facebook a media company does not illuminate what Facebook does; it obscures it.
“The pressing challenge for journalism companies is to define what they are, not what Facebook is.”
As for news outlets, their demand that Facebook assume the identity and responsibility of a media company may feel good, but it’s going to accomplish nothing. As we’ve already seen, pointing out that Facebook still occasionally relies on people to make editorial judgments is not going to inspire Facebook to hire more editors; it’s going to inspire Facebook to redouble its efforts to automate those judgments, even if the price in the immediate term is more foul-ups. (The company has a lot of experience dealing with self-inflicted embarrassments; it has learned that the press and the public lose interest quickly.) Facebook, in short, will continue to be true to its calling as a technology company, a company in the lucrative business of large-scale data processing.
The pressing challenge for journalism companies is to define what they are, not what Facebook is. Together and individually, they’re going to have to decide precisely what kinds of nodes they want to be — or whether they want to be nodes at all. That’s not going to be easy. But if you hand the fox your chickens and tell him he must take proper care of them, you have only yourself to blame if you come round the next day and find a pile of feathers.
Everything had been going swimmingly at this year’s Burning Man, the annual desert festival devoted to “decommodification” and “radical self-reliance,” reports social media specialist Becky Wicks in a GQ post:
I turned my head up to the giant shrimp rotating on the ceiling, and realised the end of it had been cleverly moulded into the shaft of a penis. Before I could voice this fact aloud however, I was being thrust a sippy cup full of champagne, the shrimp-penis was forgotten and I found myself bouncing with my new friend MacGyver on a trampoline, in my shimmering fairy costume and wings. “Life is soooooo fun!” we screamed into the dust clouds, as my champagne flew everywhere. “This is so good!” And it was.
Then the hooligans arrived. In a wink Burning Man turned into Occupy Burning Man. The target of the insurgents’ wrath was the White Ocean luxury camp, an air-conditioned “plug-and-play” enclave of the rich and beautiful bankrolled by the son of a Russian oil billionaire. (Radical self-reliance doesn’t come cheap these days.) In the middle of the night, the hooligans snuck past White Ocean’s security detail, raided the posh outpost, flooded it with water, glued the doors of its RVs shut, and cut its electrical lines. With no power, the refrigeration system shut down and the champagne lost its chill.
It was a class war in a classless commune and, as The Telegraph reported, symbolized a larger rift: “The big tensions that have been rubbing up against each other in the tech scene for decades erupted to the surface.”
On Facebook, White Ocean issued a plaintive message about the unpleasantness:
A very unfortunate and saddening event happened last night at White Ocean, something we thought would never be possible in OUR Burning Man utopia. A band of hooligans raided our camp, stole from us, pulled and sliced all of our electrical lines leaving us with no refrigeration and wasting our food and, glued our trailer doors shut, vandalized most of our camping infrastructure, dumped 200 gallons of potable water flooding our camp.
We immediately contacted authorities. Sheriffs came to our camp along with rangers to take our report.
Sad, yes, but it’s comforting to know that, even in utopia, authorities are on call.
Apple’s logo remains one of the great business marks: simple, eloquent, indelible. But, more and more, the stylized apple with the bite missing is looking like an anachronism.
The colors went long ago, of course, those happy, trippy rainbow stripes that connected the company with its flower-child origins. But the bite remained, signifying and celebrating the human: organic, flawed, sweet. The missing piece was our piece. It welcomed us in. It made us part of the company. It put the personal in personal computing. It familiarized the machine.
At some point, though, Apple lost patience with us, with our fiddling, our ineptitude. It began to see the bite as a wound, and ever since it has been seeking to heal the damage. Apple wants to be pristine, untouched by outside forces, entire unto itself.
Apple’s ideal now is the unbitten apple, the immaculate fruit.
One by one the portals go, the entrances and exits are sealed. The customer is locked out of the device — and locked into the “ecosystem.” Today, in a media ceremony, it was the headphone jack that was exorcised. That tiny analog orifice linking the iPhone back to the transistor radio, the Walkman, the iPod: gone. And why not? With the socket and its audio converter removed, the phone will become even slimmer, even lighter, even more elegant. It will be better insulated against the elements. It will be more totemic. It will be purer.
Removing the headphone jack was an act of “courage” on Apple’s part, explained the company’s marketing chief, Phil Schiller: “The courage to move on and do something new that betters all of us.”
With just the one port remaining, the proprietary, tightly guarded Lightning port, the iPhone is getting very close now to the ideal. Apple will soon be whole.
What will we make of ourselves? The question is no longer figurative; it’s literal. Biotechnology and genetic engineering are giving us new tools to reshape ourselves, at both the individual and the species level. My new book Utopia Is Creepy — out today — concludes with an essay, “The Daedalus Mission,” that tries to make sense of radical human enhancement, or “transhumanism.” Aeon is featuring excerpts from the essay (mainly the beginning and the ending). Here’s a little more, from the middle:
Transhumanism is “an extension of humanism,” argues Nick Bostrom, an Oxford philosophy professor who has been one of the foremost proponents of radical human enhancement. “Just as we use rational means to improve the human condition and the external world, we can also use such means to improve ourselves, the human organism. In doing so, we are not limited to traditional humanistic methods, such as education and cultural development. We can also use technological means that will eventually enable us to move beyond what some would think of as ‘human.’” The ultimate benefit of transhumanism, in Bostrom’s view, is that it expands “human potential,” giving individuals greater freedom “to shape themselves and their lives according to their informed wishes.” Transhumanism unchains us from our nature.
Other transhumanists take a subtly different tack in portraying their beliefs as part of the humanistic tradition. They suggest that the greatest benefit of radical enhancement is not that it allows us to transcend our deepest nature but rather to fulfill it. “Self-reconstruction” is “a distinctively human activity, something that helps define us,” writes Duke University bioethicist Allen Buchanan in his book Better Than Human. “We repeatedly alter our environment to suit our needs and preferences. In doing this we inevitably alter ourselves as well. The new environments we create alter our social practices, our cultures, our biology, and even our identity.” The only difference now, he says, “is that for the first time we can deliberately, and in a scientifically informed way, change our selves.” We can extend the Enlightenment into our cells.
Critics of radical human enhancement, often referred to as bioconservatives, take the opposite view, arguing that transhumanism is antithetical to humanism. Altering human nature in a fundamental way, they contend, is more likely to demean or even destroy the human race than elevate it. Some of their counterarguments are pragmatic. By tinkering with life, they warn, researchers risk opening a Pandora’s box, inadvertently unleashing a biological or environmental catastrophe. They also caution that access to expensive enhancement procedures and technologies is likely to be restricted to economic or political elites. Society may end up riven into two classes, with the merely normal masses under the bionic thumbs of an oligarchy of supermen. They worry, too, that as people gain prodigious intellectual and physical abilities, they’ll lose interest in the very activities that bring pleasure and satisfaction to their lives. They’ll suffer “self-alienation,” as New Zealand philosopher Nicholas Agar puts it.
But at the heart of the case against transhumanism lies a romantic belief in the dignity of life as it has been given to us. There is an essence to humankind, bioconservatives believe, from which springs both our strengths and our defects. Whether bestowed by divine design or evolutionary drift, the human essence should be cherished and protected as a singular gift, they argue. “There is something appealing, even intoxicating, about a vision of human freedom unfettered by the given,” writes Harvard professor Michael J. Sandel in The Case against Perfection. “But that vision of freedom is flawed. It threatens to banish our appreciation of life as a gift, and to leave us with nothing to affirm or behold outside our own will.” As a counter to what they see as misguided utilitarian utopianism, bioconservatives counsel humility.
The transhumanists and the bioconservatives are wrestling with the largest of questions: Who are we? What is our destiny? But their debate is a sideshow. Intellectual wrangling over the meaning of humanism and the fate of humanity is not going to have much influence over how people respond when offered new opportunities for self-expression, self-improvement, and self-transformation. The public is not going to approach transhumanism as a grand moral or political movement, a turning point in the history of the species, but rather as a set of distinct products and services, each offering its own possibilities. Whatever people sense is missing in themselves or their lives they will seek to acquire with whatever means available. And as standards of beauty, intelligence, talent, and status change, even those wary of human enhancement will find it hard to resist the general trend. Wherever they may lead us, our attempts to change human nature will be governed by human nature.
We are myth makers as well as tool makers. Biotechnology allows us to merge these two instincts, giving us the power to refashion the bodies we have and the lives we lead to more closely match those we imagine for ourselves. Transhumanism ends in a paradox. The rigorously logical work that scientists, doctors, engineers, and programmers are doing to enhance and extend our bodies and minds is unlikely to raise us onto a more rational plane. It promises, instead, to return us to a more mythical existence, as we deploy our new tools in an effort to bring our dream selves more fully into the world.
The following review of Alex Pentland’s book Social Physics appeared originally, in a slightly different form, in MIT Technology Review.
In 1969, Playboy published a long, freewheeling interview with Marshall McLuhan in which the media theorist and sixties icon sketched a portrait of the future that was at once seductive and repellent. Noting the ability of digital computers to analyze data and communicate messages, McLuhan predicted that the machines eventually would be deployed to fine-tune society’s workings. “The computer can be used to direct a network of global thermostats to pattern life in ways that will optimize human awareness,” he said. “Already, it’s technologically feasible to employ the computer to program societies in beneficial ways.” He acknowledged that such centralized control raised the specter of “brainwashing, or far worse,” but he stressed that “the programming of societies could actually be conducted quite constructively and humanistically.”
The interview appeared when computers were used mainly for arcane scientific and industrial number-crunching. To most readers at the time, McLuhan’s words must have sounded far-fetched, if not nutty. Now, they seem prophetic. With smartphones ubiquitous, Facebook inescapable, and wearable computers proliferating, society is gaining a digital sensing system. People’s location and behavior are being tracked as they go through their days, and the resulting information is being transmitted instantaneously to vast server farms. Once we write the algorithms needed to parse all that “big data,” many sociologists and statisticians believe, we’ll be rewarded with a much deeper understanding of what makes society tick.
One of big data’s keenest advocates is Alex “Sandy” Pentland, a data scientist who, as the director of MIT’s Human Dynamics Laboratory, has long used computers to probe the dynamics of businesses and other organizations. In his brief but ambitious book, Social Physics, Pentland argues that our greatly expanded ability to gather behavioral data will allow scientists to develop “a causal theory of social structure” and ultimately establish “a mathematical explanation for why society reacts as it does” in all manner of circumstances. As the book’s title makes clear, Pentland thinks that the social world, no less than the material world, operates according to rules. There are “statistical regularities within human movement and communication,” he writes, and once we fully understand those regularities, we’ll discover “the basic mechanisms of social interactions.”
What has prevented us from deciphering society’s mathematical underpinnings up to now, Pentland believes, is a lack of empirical rigor in the social sciences. Unlike physicists, who can measure the movements of objects with great precision, sociologists have had to make do with fuzzy observations. They’ve had to work with rough and incomplete data sets drawn from small samples of the population, and they’ve had to rely on people’s notoriously flawed recollections of what they did, when they did it, and whom they did it with. Computer networks promise to remedy those shortcomings. Tapping into the streams of data that flow through gadgets, search engines, social media, and credit-card payment systems, scientists will be able to collect precise, real-time information on the behavior of millions, if not billions, of individuals. And because computers neither forget nor fib, the information will be reliable.
To illustrate what lies in store, Pentland describes a series of experiments that he and his associates have been conducting in the private sector. They go into a business and give each employee an electronic ID card, called a “sociometric badge,” that hangs from the neck and communicates with the badges worn by colleagues. Incorporating microphones, location sensors, and accelerometers, the badges monitor where people go and whom they talk with, taking note of their tone of voice and even their body language. The devices are able to measure not only the chains of communication and influence within an organization but also “personal energy levels” and traits such as “extraversion and empathy.” In one such study of a bank’s call center, the researchers discovered that productivity could be increased simply by tweaking the coffee-break schedule.
Pentland dubs this data-processing technique “reality mining,” and he suggests that similar kinds of information can be collected on a much broader scale by smartphones outfitted with specialized sensors and apps. Fed into statistical modeling programs, the data could reveal “how things such as ideas, decisions, mood, or the seasonal flu are spread in the community.”
The mathematical modeling of society is made possible, according to Pentland, by the innate tractability of human beings. We may think of ourselves as rational actors, in conscious control of our choices, but in reality most of what we do is reflexive. Our behavior is determined by our subliminal reactions to the influence of other people, particularly those in the various peer groups we belong to. “The power of social physics,” he writes, “comes from the fact that almost all of our day-to-day actions are habitual, based mostly on what we have learned from observing the behavior of others.” Once you map and measure all of a person’s social influences, you can develop a statistical model that predicts that person’s behavior, just as you can model the path a billiard ball will take after it strikes other balls.
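The billiard-ball claim above can be reduced to a toy calculation: if behavior is mostly imitation, a person’s predicted behavior is something like a weighted average of what their peers do. The numbers, weights, and the “exercise minutes” scenario below are all invented for illustration; real social-physics models are, of course, far more elaborate than this sketch.

```python
# Toy influence model: predict an individual's behavior as a weighted
# average of observed peer behaviors, with weights standing in for the
# measured strength of each peer's influence. Purely illustrative.

def predict_behavior(peer_behaviors, influence_weights):
    """Weighted average of peer behaviors (here, daily exercise minutes)."""
    total = sum(influence_weights)
    return sum(b * w for b, w in zip(peer_behaviors, influence_weights)) / total

peers = [30, 60, 10]         # hypothetical peers' daily exercise minutes
weights = [0.5, 0.3, 0.2]    # hypothetical influence strengths

print(predict_behavior(peers, weights))  # 35.0
```

The sketch makes the essay’s later objection easy to see: the weights are taken as givens. Nothing in the model asks where those influence patterns came from, which is precisely the blind spot the review goes on to identify.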
Deciphering people’s behavior is only the first step. What really excites Pentland is the prospect of using digital media and related tools to change people’s behavior, to motivate groups and individuals to act in more productive and responsible ways. If people react predictably to social influences, then governments and businesses can use computers to develop and deliver carefully tailored incentives, such as messages of praise or small cash payments, to “tune” the flows of influence in a group and thereby modify the habits of its members. Beyond improving the efficiency of transit and health-care systems, Pentland suggests that group-based incentive programs can enhance the harmony and creativity of communities. “Our main insight,” he reports, “is that by targeting [an] individual’s peers, peer pressure can amplify the desired effect of a reward on the target individual.” Computers become, as McLuhan foresaw, civic thermostats. They not only register society’s state but bring it into line with some prescribed ideal. Both the tracking and the maintenance of the social order are automated.
Ultimately, Pentland argues, looking at people’s interactions through a mathematical lens will free us of time-worn notions about class and class struggle. Political and economic classes, he contends, are “oversimplified stereotypes of a fluid and overlapping matrix of peer groups.” Peer groups, unlike classes, are defined by “shared norms” rather than just “standard features such as income” or “their relationship to the means of production.” Armed with exhaustive information about individuals’ habits and associations, civic planners will be able to trace the full flow of influences that shape personal behavior. Abandoning general categories like “rich” and “poor” or “haves” and “have-nots,” we’ll be able to understand people as individuals—even if those individuals are no more than the sums of all the peer pressures and other social influences that affect them.
Replacing politics with programming might sound appealing, particularly given Washington’s paralysis. But there are good reasons to be nervous about this sort of social engineering. Most obvious are the privacy concerns raised by the collection of ever more intimate personal information. Pentland anticipates such criticism by arguing that public fears about privacy can be ameliorated through a “New Deal on Data” that gives people more control over the information collected about them. It’s hard, though, to imagine Internet companies agreeing to give up ownership of the behavioral information they hoard. The data are, after all, crucial to their competitive advantage and their profit.
Even if we assume that the privacy issues can be resolved, the idea of what Pentland calls “a data-driven society” remains problematical. Social physics is a variation on the theory of behavioralism that found favor in McLuhan’s day, and it suffers from the same limitations that doomed its predecessor. Defining social relations as a pattern of stimulus and response makes the math easier, but it ignores the deep, structural sources of social ills. Pentland may be right that people’s behavior is determined in large part by the social norms and influences exerted upon them by their peers, but what he fails to see is that those norms and influences are themselves shaped by history, politics, and economics, not to mention the ever-present forces of power and prejudice. People don’t have complete freedom in choosing their peer groups. Their choices are constrained by where they live, where they come from, how much money they have, and what they look like. A statistical model of society that ignores issues of class, that takes patterns of influence as givens rather than as historical contingencies, will tend to perpetuate existing social structures and dynamics. It will encourage us to optimize the status quo rather than challenge it.
Politics is messy because society is messy, not the other way around. Pentland does a commendable job in describing how better data can enhance social planning. But like other would-be social engineers he overreaches. Letting his enthusiasm get the better of him, he begins to take the metaphor of “social physics” literally, even as he acknowledges that influence-based mathematical models will always be reductive. “Because it does not try to capture internal cognitive processes,” he writes at one point, “social physics is inherently probabilistic, with an irreducible kernel of uncertainty caused by avoiding the generative nature of conscious human thought.” What big data can’t account for is what’s most unpredictable, and most interesting, about us.