“A Brutal Intelligence: AI, Chess, and the Human Mind,” my review of Garry Kasparov’s new book Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, appears today in the Los Angeles Review of Books. Here’s a bit:
The contingency of human intelligence, the way it shifts with health, mood, and circumstance, is at the center of Kasparov’s account of his historic duel with Deep Blue. Having beaten the machine in a celebrated match a year earlier, the champion enters the 1997 competition confident that he will again come out the victor. His confidence swells when he wins the first game decisively. But in the fateful second game, Deep Blue makes a series of strong moves, putting Kasparov on the defensive. Rattled, he makes a calamitous mental error. He resigns the game in frustration after the computer launches an aggressive and seemingly lethal attack on his queen. Only later does he realize that his position had not been hopeless; he could have forced the machine into a draw. The loss leaves Kasparov “confused and in agony,” unable to regain his emotional bearings. Though the next three games end in draws, Deep Blue crushes him in the sixth and final game to win the match.
One of Kasparov’s strengths as a champion had always been his ability to read the minds of his adversaries and hence anticipate their strategies. But with Deep Blue, there was no mind to read. The machine’s lack of personality, its implacable blankness, turned out to be one of its greatest advantages. It disoriented Kasparov, breeding doubts in his mind and eating away at his self-confidence. “I didn’t know my opponent at all,” he recalls. “This intense confusion left my mind to wander to darker places.” The irony is that the machine’s victory was as much a matter of psychology as of skill.
A little more than two years ago, I suggested in a post that “the killer business app for artificial intelligence may turn out to be the algorithmic CEO.” I was picking up on a point that Frank Pasquale had made in a review of The Second Machine Age:
[Thiel Fellow and Ethereum developer Vitalik Buterin] has stated that automation of the top management functions at firms like Uber and AirBnB would be “trivially easy.” Automating the automators may sound like a fantasy, but it is a natural outgrowth of mantras (e.g., “maximize shareholder value”) that are commonplaces among the corporate elite.
Now that Uber CEO Travis Kalanick has resigned, completing a meltdown of the company’s top management ranks, Uber and its investors have a perfect opportunity to disrupt the executive suite, and indeed the entire history of management, by using software to run the company. Let’s face it: Kalanick’s great failing was that he was not quite robotic enough. His flaws were not analytical but human. He was a victim of his own meat.
A fundamentally numerical company, constituted mainly of software, Uber is the perfect test bed for the robot CEO. And since its staff includes exceptionally talented programmers, it already has the skill needed to gin up the algorithms necessary to do the work Kalanick and his lieutenants did (without the attendant buffoonery).* A two-day hackathon should be more than sufficient to create a robot able to translate spreadsheet data into resource-allocation plans and use machine learning to compose forward-leaning messages that inspire staffers, drivers, and venture capitalists. And to have Uber’s robot CEO sit next to Cook, Nadella, Bezos, et al., at the next White House photo-op would be a huge PR coup.
Not only is Uber the right company for a robot CEO, but now is the right time for one. Just two months ago, Alibaba CEO Jack Ma predicted that “in thirty years, a robot will likely be on the cover of Time Magazine as the best CEO.”** As the financier Martin Hutchinson pointed out, there’s no reason to wait that long. “Human CEOs have amassed an especially dire track record in the last two decades,” he wrote. “Whereas their compensation has soared far faster than overall U.S. output, productivity growth in U.S. businesses has notably lagged, indicating their failure to invest optimally.” If there were ever a job to be automated, it’s that of the underperforming, overpaid modern CEO.
Even at this year’s World Economic Forum in Davos, the case for a robot CEO was laid out in compelling terms:
There are some distinct advantages to having a robot as your company’s CEO. Firstly, they might be able to make better, more responsible, decisions. … Robots don’t face the unpredictability we humans face, so their decisions are more likely to be consistent, based on facts. … Robots can work all day, every day. They don’t need sleep, weekends or holidays. No mere humans can say the same, however hard they may try to cultivate that impression. … And if you’ve created one CEO robot, why not create a few more? It’s not as if he or she has a unique personality. Technology allows them to interact wherever your customers are, further cutting down travel costs and helping the environment.
We may look back on Kalanick’s resignation as the most transformative act of his eventful career. He has opened the door for a robot CEO. The question now is whether the Uber board will welcome the future or resist it.
*On further thought, Uber’s coders probably have better things to do than write simple CEO algorithms. What’s really needed are cloud-based virtual CEOs. Yes: CEO-as-a-Service. Are you listening, Marc Benioff?
**Ma’s assumption that Time will still be around, with its cover intact, thirty years from now makes me question his futurist cred. But I’m going to assume he was speaking figuratively.
“You can see the computer age everywhere but in the productivity statistics,” remarked MIT economist Robert Solow in a 1987 book review. The quip became famous. It crystallized what had come to be called the productivity paradox — the mysterious softness in industrial productivity despite years of big corporate investments in putatively labor-saving information technology.
I think the time has come to start talking about the robot paradox. So let me offer a new twist on Solow’s words:
You can see the robot age everywhere but in the labor statistics.
In an echo of the hype surrounding IT in the 1970s and 1980s, we’ve heard over the last decade a stream of predictions about how robots, algorithms, and other automation technologies are about to unleash an unemployment crisis. Not only will most factory jobs be handed over to automatons, but the ranks of white-collar workers will be decimated by artificial intelligence programs powered by Big Data. The end of work is nigh.
In the wake of the Great Recession, when hiring stayed stagnant for years, such predictions seemed reasonable. But recent economic statistics flat-out belie the claims. As Greg Ip, the Wall Street Journal economics columnist, wrote last week, predictions of an impending job apocalypse “would be more plausible if the evidence weren’t moving in exactly the opposite direction.” Business employment has been going up for 86 straight months, pushing the U.S. unemployment rate down to just 4.4 percent, a level many economists see as representing full employment. It’s true that a lot of workers have dropped out of the labor force, but the sustained, robust job growth makes it awfully hard to argue that advances in computer automation, which have been accelerating for a long time, are poised to create an unemployment explosion.
Even more telling is the persistently weak growth in productivity. As Ip explained: “If automation were rapidly displacing workers, the productivity of the remaining workers ought to be growing rapidly. Instead, growth in productivity — worker output per hour — has been dismal in almost every sector, including manufacturing.” You can argue that our methods of measuring productivity are imperfect, but if computers were going to obliterate workers, you should by now be seeing a strong upswing in productivity. And it’s just not there.
I’m convinced that computer automation is changing the way people work, often in profound ways, and I think it’s likely that automation is playing an important role in restraining wage growth by, among other things, deskilling certain occupations, shifting employees to more contingent positions, and reducing the bargaining power of workers. But the argument that computers are going to bring extreme unemployment in coming decades — an argument that was also popular in the 1950s, the 1960s, and again in the 1990s, it’s worth remembering — sounds increasingly dubious. It runs counter to the facts. Anyone making the argument today needs to provide a lucid and rational explanation of why, despite years of rapid advances in robotics, computer power, network connectivity, and artificial intelligence techniques, we have yet to see any sign of a broad loss of jobs in the economy.
Exactly fifty years after the hippies gathered in San Francisco, another summer of love seems set to blossom. This time it’s not the flower children who are holding hands and sharing beds. It’s the titans of Big Internet.
Just this week, at its Build conference, Microsoft gave a hug to former adversaries Apple and Alphabet. “Windows PCs heart iOS and Android devices” was one of the big themes of the event — yes, the heart symbol was on display — and Microsoft announced that Apple’s iTunes app is coming to the Windows Store. Microsoft also formed a partnership with Facebook to incorporate an ad-tracking tool into Excel. Meanwhile, Apple and Amazon were engaged in their own public display of affection. They let word leak out that Amazon’s Prime Video app would soon be available on Apple TV. The once fierce rivals appear to have “reached a truce,” reported Recode.
Thanks to their technical and marketing prowess, combined with the winner-take-all dynamics of the internet, Alphabet, Amazon, Apple, Facebook, and Microsoft have emerged as the dominant companies of the consumer net (Farhad Manjoo dubs them the “frightful five”), with a combined market cap of a zillion dollars, give or take. Each now operates something of a perpetual-motion money-printing machine powered by the dollars and data that flow in such massive quantities through the net. The companies still face threats, of course, but, even as they sow disruption in other industries, their own market positions now look pretty stable and secure. They’re the winners.
While the boundaryless nature of online business means that each of the five companies competes with each of the others on many fronts, there is also now a symbiosis among them — and that symbiosis is getting stronger. Each of the five makes its profits in different ways, with Apple focusing on hardware, Google on web ads, Facebook on social-media ads, Amazon on retailing, and Microsoft on software sales and subscriptions. Their businesses overlap, but they are also complementary. And, as is often true with complementary products and services, gains by one company often help rather than hurt the businesses of the others. Each of the five is focused on expanding consumers’ dependency on the net, and as the net pie expands so does each of the five’s slices. At this point, being friends rather than enemies makes sense.
When it comes to business, in other words, the net is a centralizing force, not a decentralizing one as once assumed. The frightful five together form a digital-industrial complex, a nascent oligopoly set to skim the lion’s share of the profits from the consumer web for the foreseeable future. Five big pieces, loosely joined.
On Monday, the venture capitalist Jeremy Philips wrote a column intended as a rejoinder to Manjoo’s warnings about the power of the titans. Philips argued against the idea that, as he put it, “the five leading tech behemoths have turned into dangerous monopolies that stifle innovation and harm consumers.” Their businesses, he wrote, are “all converging — therefore competing — with one another.” His timing was unfortunate, as immediately after the column appeared we got the news of the new partnerships among the companies.
Philips’s argument would have sounded compelling just a few years ago. Back then, the five’s positions were not as well-established as they are now, and their relationships were defined by their skirmishes. That’s no longer the case. Yes, the businesses of the five have converged, but it’s now becoming clear that their interests have converged as well. For Big Internet, this is the dawning of the Age of Aquarius.
I have an essay in the Boston Globe‘s Ideas section that takes a hard look at the popular notion that communication networks make the world a better place.
Here’s a taste:
If our assumption that communications technology brings people together were true, we should today be seeing a planetary outbreak of peace, love, and understanding. Thanks to the Internet and cellular networks, humanity is more connected than ever. Of the world’s 7 billion people, 6 billion have access to a mobile phone. Nearly 2 billion are on Facebook, more than a billion upload and download YouTube videos, and billions more converse through messaging apps like WhatsApp and WeChat. With smartphone in hand, everyone becomes a media hub, transmitting and receiving ceaselessly.
Yet we live in a fractious time, defined not by concord but by conflict. Xenophobia is on the rise. Political and social fissures are widening. From the White House down, public discourse is characterized by vitriol and insult. We probably shouldn’t be surprised.
For years now, psychological and sociological studies have been casting doubt on the idea that communication dissolves differences. The research suggests that the opposite is true: free-flowing information makes personal and cultural differences more salient, turning people against one another instead of bringing them together. “Familiarity breeds contempt” is one of the gloomiest of proverbs. It is also, the evidence says, one of the truest.
Uber is not only a scofflaw, but, as Mike Isaac of the New York Times reported last week, the company has been running an elaborate program to deceive and evade cops and other local officials in cities where its car service has been banned or lacks authorization to operate. The centerpiece of the scheme is a piece of software called Greyball, which uses a variety of data, including credit-card records, to identify what Uber calls “opponents.” When an opponent hails a car using the Uber app, the app presents the opponent with a fake map, filled with “ghost cars” that don’t actually exist. The map overlays a fictional story, intended to mislead, on a representation of actual city streets. Beyond the ethical and legal questions it raises, Greyball sheds important light on the digital representations of reality that we increasingly rely on to live our lives. These representations do more than mediate reality; they manufacture reality.
Traditional cartographers knew that they were creating mere representations of the world, but their goal was to achieve representational accuracy. They strove to provide map users with an objectively true, if necessarily incomplete, rendering of reality. As the semanticist Alfred Korzybski wrote in his 1933 book Science and Sanity, “A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.” There were times when mapmakers were pulled into propaganda campaigns, made to produce distorted maps to trick people for political ends, but those episodes were exceptions to the rule. The cartographic ideal was always to produce “correct” representations of the world that people could rely on for navigational or educational purposes. The mapmaker served the interests of the map user.
The digital maps that we see on our phones are different. They are created primarily for marketing rather than cartographic purposes. The interests they ultimately serve are those of the companies that create them and incorporate them into broader products or services. While a digital map can be useful to the user, its usefulness no longer derives from its accuracy or correctness in representing territory. In a digital map, the traditional map becomes a substrate on which a new, and fictionalized, representation of the world is presented. The digital map that appears on phones and other screens is at least twice removed from reality. What it tells us is that we need to refine and extend Korzybski’s famous distinction. It is no longer enough to say that the map is not the territory. What we have to say now is this: the map is not the map.
Uber’s ghost map provides a particularly stark example of the way a digital representation of the actual world can be manipulated, surreptitiously, to create a digital representation of a fictional world. As Uber itself has admitted, Greyball has been used in many different circumstances in order “to hide the standard city app view for individual riders, enabling Uber to show that same rider a different version.” In addition to deceiving authorities, the software has been used, the company says, for such purposes as “the testing of new features by employees; marketing promotions; fraud prevention; to protect our partners from physical harm; and to deter riders using the app in violation of our terms of service.” That sounds like a pretty much unbounded portfolio of potential uses. Have you been greyballed? It’s impossible to say.
But even Uber’s “standard city app view” presents a fictionalized picture of the world, at once useful and seductive:
The Uber map is a media production. It presents a little, animated entertainment in which you, the user, play the starring role. You are placed at the very center of things, wherever you happen to be, and you are surrounded by a pantomime of oversized automobiles poised to fulfill your desires, to respond immediately to your beckoning. It’s hard not to feel flattered by the illusion of power that the Uber map grants you. Every time you open the app, you become a miniature superhero on a city street. You send out a bat signal, and the batmobile speeds your way. By comparison, taking a bus or a subway, or just hoofing it, feels almost insulting.
In a similar way, a Google map also sets you in a fictionalized story about a place, whether you use the map for navigation or for searching. You are given a prominent position on the map, usually, again, at its very center, and around you a city personalized to your desires takes shape. Certain business establishments and landmarks are highlighted, while other ones are not. Certain blocks are highlighted as “areas of interest”; others are not. Sometimes the highlights are paid for, as advertising; other times they reflect Google’s assessment of you and your preferences. You’re not allowed to know precisely why your map looks the way it does. The script is written in secret.
It’s not only maps. The news and message feeds presented to you by Facebook, or Apple or Google or Twitter, are also stories about the world, fictional representations manufactured both to appeal to your desires and biases and to provide a compelling context for advertising. Mark Zuckerberg may wring his hands over “fake news,” but fake news is to the usual Facebook feed what the Greyball map is to the usual Uber map: an extreme example of the norm.
When I talk about “you,” I don’t really mean you. The “you” around which the map or the news feed or any other digitized representation of the world coalesces is itself a representation. As John Cheney-Lippold explains in his forthcoming book We Are Data, companies like Facebook and Google create digital versions of their users derived through an algorithmic analysis of the data they collect about their users. The companies rely on these necessarily fictionalized representations for both technical reasons (human beings can’t be computed; to be rendered computable, you have to be turned into a digital representation) and commercial reasons (a digital representation of a person can be bought and sold). The “you” on the Uber map or in the Facebook feed is a fake — a character in a story — but it’s a useful and a flattering fake, so you accept it as an accurate portrayal of yourself: an “I” for an I.
Greyballing is not an aberration of the virtual world. Greyballing is the essence of virtuality.
The public restroom, never a pleasant place, has in recent years become a dystopia. It presents us with a preview, in microcosm, of our automated future. Motion detectors and other sensors register our presence, read our intentions, and, on our behalf, turn on the lights, flush the toilets, open the taps, squirt out the liquid soap, and dispense towelettes for drying. There is a weird tension between the primitiveness of the bodily functions being executed in the contemporary restroom and the sophistication of the technology facilitating the execution. The pee and the poop, if I may be indelicate, seem out of place in the very place designed to accommodate them. Nowhere so much as in a public restroom does one wish one were a robot.
Yet, as Ian Bogost reminds us, in an illuminating Atlantic piece, the inconvenient truth about the automated public restroom is that nothing works worth a crap. Whatever it is that has been automated here bears no resemblance to even the most rudimentary of human skills. The automated toilet flushes prematurely, often repeatedly, while we are still seated upon it, and then, once we’ve reassumed an erect posture and want nothing more than to exit the stall, it refuses to flush at all. The automated soap dispenser either doesn’t work or spits soap on our trousers. The automated faucet either doesn’t work or sprays out such a gusher that the water bounces off the sink and soaks our shirt. The automated towel dispenser hands us a strip of ugly brown paper that would be too small to dry the hands of a hamster.
We reassure ourselves, as we leave the restroom damp and shamefaced, that the entire experience, however miserable in raw human terms, has been carefully engineered to maximize efficiency and save precious resources. Our discomfort is simply the price we have to pay for advanced technology that is “green” and “smart.” But, as Bogost also reminds us, this is an illusion. Thanks to what’s called “phantom flushing,” sensor-flush toilets end up using nearly 50 percent more water than do manual-flush toilets, according to one real-world study. The reality is probably equally perverse with sensor-controlled faucets, soap dispensers, and paper-towel dispensers, which demand that the user activate them repeatedly in order to get the required amount in the required place.
Bogost argues that the automated restroom has been designed not to save resources but rather to reduce labor costs: “When a toilet flushes incessantly, or when a faucet shuts off on its own, or when a towel dispenser discharges only six inches of paper when a hand waves under it, it reduces the need for human workers to oversee, clean, and supply the restroom.” I would bet that even here the desired benefit is illusory. Automated restrooms, with their wasteful ways and wayward sprays, seem to me to be at least as filthy as manually operated restrooms, requiring at least as much janitorial labor. And the greater complexity of the fixtures means more breakdowns and more repairs, increasing maintenance labor. In short, the automated restroom fails on pretty much every measure. Yet we accept it as good and necessary because it fits the prevailing paradigm of progress, in which technological advances are viewed as social advances.
In seeing society through its bathrooms, Bogost is working in the tradition of the great Siegfried Giedion, who devoted a hundred-page chapter of Mechanization Takes Command (1948) to the industrialization and democratization of the bathroom.
The bath and its purposes have different meanings for different ages. The manner in which a civilization integrates bathing within its life, as well as the type of bathing it prefers, yields searching insight into the inner nature of the period.
Bogost sums up the broader meaning of the automated restroom this way: “Technology’s role has begun to shift, from serving human users to pushing them out of the way so that the technologized world can service its own ends. And so, with increasing frequency, technology will exist not to serve human goals, but to facilitate its own expansion.” This is the WALL-E effect. As we become more dependent on automation, we become less likely to develop the skills and common sense required to perform even the most basic of tasks in the world, and hence we become even more dependent on automation (and on the companies orchestrating the automation) and less able to judge whether the automation is even any good. In this fashion, the “self” migrates, along with its agency, from the person to the device.