Category Archives: Uncategorized

AI’s game

The following review of Garry Kasparov’s Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins appeared originally in the Los Angeles Review of Books.

¤ ¤ ¤

Chess is the game not just of kings but of geniuses. For hundreds of years, it has served as standard and symbol for the pinnacles of human intelligence. Staring at the pieces, lost to the world, the chess master seems a figure of pure thought: brain without body. It’s hardly a surprise, then, that when computer scientists began to contemplate the creation of an artificial intelligence in the middle years of the last century, they adopted the chessboard as their proving ground. To build a machine able to beat a skilled human player would be to fabricate a mind. It was a compelling idea, and to this day it shapes public perceptions of artificial intelligence. But, as the former world chess champion Garry Kasparov argues in his illuminating new memoir Deep Thinking, the theory was flawed from the start. It reflected a series of misperceptions — about chess, about computers, and about the mind.

At the dawn of the computer age, in 1950, the Bell Labs engineer and information scientist Claude Shannon published a paper in Philosophical Magazine called “Programming a Computer for Playing Chess.” The creation of a “tolerably good” computerized chess player, he argued, was not only possible but would have metaphysical consequences. It would force the human race “either to admit the possibility of a mechanized thinking or to further restrict [its] concept of ‘thinking.’” He went on to offer an insight that would prove essential both to the development of chess software and to the pursuit of artificial intelligence in general. A chess program, he wrote, would need to incorporate a search function able to identify possible moves and rank them according to how they would influence the course of the game. He laid out two very different approaches to programming the function. “Type A” would rely on brute force, calculating the relative value of all possible moves as far ahead in the game as the speed of the computer allowed. “Type B” would use intelligence rather than raw power, imbuing the computer with an understanding of the game that would allow it to focus on a small number of attractive moves while ignoring the rest. In essence, a Type B computer would demonstrate the intuition of an experienced human player.

When Shannon wrote his paper, he and everyone else assumed that the Type A method was a dead end. It seemed obvious that, under the time restrictions of a competitive chess game, a computer would never be fast enough to extend its analysis more than a few turns ahead. As Kasparov points out, there are “over 300 billion possible ways to play just the first four moves in a game of chess, and even if 95 percent of these variations are terrible, a Type A program would still have to check them all.” In 1950, and for many years afterward, no one could imagine a computer able to execute a successful brute-force strategy against a good player. “Unfortunately,” Shannon concluded, “a machine operating according to the Type A strategy would be both slow and a weak player.”
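Shannon's Type A strategy, checking every continuation rather than intuiting the promising ones, can be sketched as a plain minimax search. The toy game below (players alternately subtract 1 or 2 from a counter; whoever cannot move loses) is invented purely for illustration; a real chess program would add an evaluation function, depth limits, and pruning.

```python
# Illustrative sketch of Shannon's "Type A" brute-force search: plain
# minimax over a toy game. Players alternately subtract 1 or 2 from a
# counter; the player left at 0 with no move loses. The game and the
# scoring are invented for illustration only.

def minimax(counter, maximizing):
    """Exhaustively score a position: +1 if the maximizing side wins
    with best play, -1 if it loses."""
    moves = [counter - k for k in (1, 2) if counter - k >= 0]
    if not moves:
        # The side to move is stuck and loses.
        return -1 if maximizing else 1
    results = [minimax(m, not maximizing) for m in moves]
    return max(results) if maximizing else min(results)

# From 4, the first player wins by moving to 3; from 3, every move loses.
print(minimax(4, True), minimax(3, True))  # 1 -1
```

Even in this trivial game the search visits every line of play, which is exactly why Shannon expected the approach to collapse under chess's combinatorial explosion.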

Type B, the intelligence strategy, seemed far more feasible, not least because it fit the scientific zeitgeist. As the public’s fascination with digital computers intensified during the 1950s, the machines began to influence theories about the human mind. Many scientists and philosophers came to assume that the brain must work something like a computer, using its billions of networked neurons to calculate thoughts and perceptions. Through a curious kind of circular logic, this analogy in turn guided the early pursuit of artificial intelligence: if you could figure out the codes that the brain uses in carrying out cognitive tasks, you’d be able to program similar codes into a computer. Not only would the machine play chess like a master, but it would also be able to do pretty much anything else that a human brain can do. In a 1958 paper, the prominent AI researchers Herbert Simon and Allen Newell declared that computers are “machines that think” and, in the near future, “the range of problems they can handle will be coextensive with the range to which the human mind has been applied.” With the right programming, a computer would turn sapient.

¤ ¤ ¤

It took only a few decades after Shannon wrote his paper for engineers to build a computer that could play chess brilliantly. Its most famous victim: Garry Kasparov.

One of the greatest and most intimidating players in the history of the game, Kasparov was defeated in a six-game bout by the IBM supercomputer Deep Blue in 1997. Even though it was the first time a machine had beaten a world champion in a formal match, to computer scientists and chess masters alike the outcome wasn’t much of a surprise. Chess-playing computers had been making strong and steady gains for years, advancing inexorably up the ranks of the best human players. Kasparov just happened to be in the right place at the wrong time.

But the story of the computer’s victory comes with a twist. Shannon and his contemporaries, it turns out, had been wrong. It was the Type B approach — the intelligence strategy — that ended up being the dead end. Despite their early optimism, AI researchers failed to get computers to think as people do. Deep Blue beat Kasparov not by matching his insight and intuition but by overwhelming him with blind calculation. Thanks to years of exponential gains in processing speed, combined with steady improvements in the efficiency of search algorithms, the computer was able to comb through enough possible moves in a short enough time to outduel the champion. Brute force triumphed. “It turned out that making a great chess-playing computer was not the same as making a thinking machine on par with the human mind,” Kasparov reflects. “Deep Blue was intelligent the way your programmable alarm clock is intelligent.”

The history of computer chess is the history of artificial intelligence. After their disappointments in trying to reverse-engineer the brain, computer scientists narrowed their sights. Abandoning their pursuit of human-like intelligence, they began to concentrate on accomplishing sophisticated, but limited, analytical tasks by capitalizing on the inhuman speed of the modern computer’s calculations. This less ambitious but more pragmatic approach has paid off in areas ranging from medical diagnosis to self-driving cars. Computers are replicating the results of human thought without replicating thought itself. If in the 1950s and 1960s the emphasis in the phrase “artificial intelligence” fell heavily on the word “intelligence,” today it falls with even greater weight on the word “artificial.”

Particularly fruitful has been the deployment of search algorithms similar to those that powered Deep Blue. If a machine can search millions of options in a matter of milliseconds, ranking each according to how well it fulfills some specified goal, then it can outperform experts in a lot of problem-solving tasks without having to match their experience or insight. More recently, AI programmers have added another brute-force technique to their repertoire: machine learning. In simple terms, machine learning is a statistical method for discovering correlations in past events that can then be used to make predictions about future events. Rather than giving a computer a set of instructions to follow, a programmer feeds the computer many examples of a phenomenon and from those examples the machine deciphers relationships among variables. Whereas most software programs apply rules to data, machine-learning algorithms do the reverse: they distill rules from data, and then apply those rules to make judgments about new situations.
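The "distill rules from data" inversion can be shown in miniature: rather than hand-coding a classification rule, a few lines of code can infer a decision threshold from labeled examples. The feature (exclamation marks per message) and the data are invented for illustration; real systems fit millions of parameters, but the direction of the process, from examples to rule, is the same.

```python
# Toy machine learning: distill a rule from examples instead of coding
# it by hand. The feature and the training data are invented; only the
# data-to-rule inversion is the point.

def learn_threshold(examples):
    """examples: (feature_value, label) pairs, labels 0 or 1.
    Returns the midpoint between the two class means as a decision rule."""
    mean = lambda xs: sum(xs) / len(xs)
    lo = mean([x for x, y in examples if y == 0])
    hi = mean([x for x, y in examples if y == 1])
    return (lo + hi) / 2

# Feature: exclamation marks per message; label: 1 = spam.
training = [(0, 0), (1, 0), (2, 0), (6, 1), (8, 1), (9, 1)]
cutoff = learn_threshold(training)    # the rule, distilled from data
classify = lambda x: int(x > cutoff)

print(classify(7), classify(2))  # 1 0
```

No one ever wrote down "more than about four exclamation marks means spam"; the rule fell out of the examples, which is the reversal the paragraph above describes.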

In modern translation software, for example, a computer scans many millions of translated texts to learn associations between phrases in different languages. Using these correspondences, it can then piece together translations of new strings of text. The computer doesn’t require any understanding of grammar or meaning; it just regurgitates words in whatever combination it calculates has the highest odds of being accurate. The result lacks the style and nuance of a skilled translator’s work but has considerable utility nonetheless. Although machine-learning algorithms have been around a long time, they require a vast number of examples to work reliably, and gathering examples on that scale became possible only with the explosion of online data. Kasparov quotes an engineer from Google’s popular translation program: “When you go from 10,000 training examples to 10 billion training examples, it all starts to work. Data trumps everything.”
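The association-based approach can be caricatured in a few lines: counts of phrase pairs, here a tiny invented stand-in for what would be learned from millions of parallel texts, and a translator that simply picks the highest-count pairing, with no grammar and no meaning anywhere in the loop.

```python
# Sketch of statistical translation by association. The phrase-pair
# counts are invented; in a real system they would be learned from a
# huge parallel corpus. Translation just picks the most frequent
# pairing for each source word.
from collections import Counter

pair_counts = Counter({
    ("chat", "cat"): 90, ("chat", "chitchat"): 10,
    ("noir", "black"): 80, ("noir", "dark"): 20,
})

def translate_word(source):
    candidates = {t: n for (s, t), n in pair_counts.items() if s == source}
    return max(candidates, key=candidates.get)

print([translate_word(w) for w in ("chat", "noir")])  # ['cat', 'black']
```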

The pragmatic turn in AI research is producing many such breakthroughs, but the shift also highlights the limitations of artificial intelligence. Through brute-force data processing, computers can churn out answers to well-defined questions and forecast how complex events may play out, but they lack the understanding, imagination, and common sense to do what human minds do naturally: turn information into knowledge, think conceptually and metaphorically, and negotiate the world’s flux and uncertainty without a script. Machines remain machines.

That fact hasn’t blunted the public’s enthusiasm for AI fantasies. Along with TV shows and movies featuring scheming computers and bloody-minded robots, we’ve recently seen a slew of earnest nonfiction books with titles like Superintelligence, Smarter Than Us, and Our Final Invention, all suggesting that machines will soon be brainier than we are. The predictions echo those made in the 1950s and 1960s, and, as before, they’re founded on speculation, not fact. Despite monumental advances in hardware and software, computers give no sign of being any nearer to self-awareness, volition, or emotion. Their strength — what Kasparov describes as an “amnesiac’s objectivity” — is also their weakness.

¤ ¤ ¤

In addition to questioning the common wisdom about artificial intelligence, Kasparov challenges our preconceptions about chess. The game, particularly when played at its highest levels, is far more than a cerebral exercise in logic and calculation, and the expert player is anything but a stereotypical egghead. The connection between chess skill and the kind of intelligence measured by IQ scores, Kasparov observes, is weak at best. “There is no more truth to the thought that all chess players are geniuses than in saying that all geniuses play chess,” he writes. “One of the things that makes chess so interesting is that it’s still unclear exactly what separates good chess players from great ones.”

Chess is a grueling sport. It demands stamina, resilience, and an aptitude for psychological warfare. It also requires acute sensory perception. “Move generation seems to involve more visuospatial brain activity than the sort of calculation that goes into solving math problems,” writes Kasparov, drawing on recent neurological experiments. To the chess master, the board’s 64 squares define not just an abstract geometry but an actual terrain. Like figures on a landscape, the pieces form patterns that the master, drawing on years of experience, reads intuitively, often at a glance. Methodical analysis is important, too, but it is carried out as part of a multifaceted and still mysterious thought process involving the body and its senses as well as the brain’s neurons and synapses.

The contingency of human intelligence, the way it shifts with health, mood, and circumstance, is at the center of Kasparov’s account of his historic duel with Deep Blue. Having beaten the machine in a celebrated match a year earlier, the champion enters the 1997 competition confident that he will again come out the victor. His confidence swells when he wins the first game decisively. But in the fateful second game, Deep Blue makes a series of strong moves, putting Kasparov on the defensive. Rattled, he makes a calamitous mental error. He resigns the game in frustration after the computer launches an aggressive and seemingly lethal attack on his queen. Only later does he realize that his position had not been hopeless; he could have forced the machine into a draw. The loss leaves Kasparov “confused and in agony,” unable to regain his emotional bearings. Though the next three games end in draws, Deep Blue crushes him in the sixth and final game to win the match.

One of Kasparov’s strengths as a champion had always been his ability to read the minds of his adversaries and hence anticipate their strategies. But with Deep Blue, there was no mind to read. The machine’s lack of personality, its implacable blankness, turned out to be one of its greatest advantages. It disoriented Kasparov, breeding doubts in his mind and eating away at his self-confidence. “I didn’t know my opponent at all,” he recalls. “This intense confusion left my mind to wander to darker places.” The irony is that the machine’s victory was as much a matter of psychology as of skill.*

If Kasparov hadn’t become flustered, he might have won the 1997 match. But that would have just postponed the inevitable. By the turn of the century, the era of computer dominance in chess was well established. Today, not even the grandest of grandmasters would bother challenging a computer to a match. They know they wouldn’t stand a chance.

But if computers have become unbeatable at the board, they remain incapable of exhibiting what Kasparov calls “the ineffable nature of human chess.” To Kasparov, this is cause for optimism about the future of humanity. Unlike the eight-by-eight chessboard, the world is an unbounded place without a rigid set of rules, and making sense of it will always require more than mathematical or statistical calculations. The inherent rigidity of computer intelligence leaves plenty of room for humans to exercise their flexible and intuitive intelligence. If we remain vigilant in turning the power of our computers to our own purposes, concludes Kasparov, our machines will not replace us but instead propel us to ever-greater achievements.

One hopes he’s right. Still, as computers become more powerful and more adept at fulfilling our needs, there is a danger. The benefits of computer processing are easy to measure — in speed, in output, in dollars — while the benefits of human thought are often impossible to express in hard numbers. Given contemporary society’s worship of the measurable and suspicion of the ineffable, our own intelligence would seem to be at a disadvantage as we rush to computerize more and more aspects of our jobs and lives. The question isn’t whether the subtleties of human thought will continue to lie beyond the reach of computers. They almost certainly will. The question is whether we’ll continue to appreciate the value of those subtleties as we become more dependent on the mindless but brutally efficient calculations of our machines. In the face of the implacable, the contingent can seem inferior, its strengths appearing as weaknesses.

Near the end of his book, Kasparov notes, with some regret, that “humans today are starting to play chess more like computers.” Once again, the ancient game may be offering us an omen.

___________________________

*A bit of all-too-human deviousness was also involved in Deep Blue’s win. IBM’s coders, it was later revealed, programmed the computer to display erratic behavior — delaying certain moves, for instance, and rushing others — in an attempt to unsettle Kasparov. Computers may be innocents, but that doesn’t mean their programmers are.

Image: Google’s cat.

The robot paradox, continued

You can see the robot age everywhere but in the labor statistics, I wrote a few months ago, channeling Robert Solow. The popular and often alarming predictions of a looming unemployment crisis, one that would stem from rapid advances in robotics, artificial intelligence, and other computer automation technologies, have become increasingly hard to square with the economy’s rebound to near full employment. If computers were going to devastate jobs on a broad scale, one would think there’d be signs of it by now. We have, after all, been seeing remarkable gains in computing and software for many decades, while the broadband internet has been working its putative magic for more than twenty years. And it’s not like a shortage of corporate cash is curtailing investment in technology. Profits have been robust and capital cheap.

Still, even as jobs rebounded from the depths of the Great Recession, overall wage growth has appeared sluggish, at times stagnant. It has seemed possible that the weakness in wages might be the canary in the automated coal mine, an early indication of a coming surge in technological unemployment. If humans are increasingly competing for jobs against automatons, of both the hardware and software variety, that might explain workers’ inability to reap wage gains from a tightening labor market — and it might presage a broad shift of work from people to machines. At some point, if automation continued to put downward pressure on pay, workers would simply give up trying to compete with technology. The robots would win.

But even here, there’s growing reason to doubt the conventional wisdom. For one thing, earnings growth has been picking up, hitting an annualized 4.2 percent in July, its highest mark in a decade. Second, and more telling, the wage statistics may not have been giving us an accurate picture. The sluggishness in earnings growth may have been something of an illusion all along, a distortion resulting from a combination of demographic changes in the American work force and post-recession labor market dynamics. That’s the implication of a new study of wage growth in this century from the Federal Reserve Bank of San Francisco. The researchers found that average wages have been depressed by two unusual trends: (1) Baby boomers are retiring at a high rate, and they’re being replaced by younger and less experienced workers. The inexperienced workers are naturally being paid less than the veteran workers they’re replacing, which in the labor statistics appears as a drop in pay for those jobs. (2) A lot of the workers getting full-time jobs have either been unemployed for a while or are moving from part-time to full-time posts. These workers, too, will tend to earn below-average wages in their new positions, which also serves to pull down average wages. As the researchers explain: “Counterintuitively, this means that strong job growth can pull average wages in the economy down and slow the pace of wage growth.”
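The researchers' counterintuitive point can be checked with a stylized three-worker example: every individual who stays employed gets a raise, yet the average wage falls, because a high-paid retiree is replaced by a low-paid entrant. All the wage figures here are invented for illustration, not taken from the Fed study.

```python
# Stylized sketch of the composition effect the Fed researchers
# describe. Every continuing worker gets a 3% raise, but a retiring
# veteran is replaced by a cheaper new hire, so the average falls.
# All hourly wages are invented for illustration.

year1 = {"veteran": 40.0, "alice": 25.0, "bob": 22.0}
year2 = {"new_hire": 18.0,          # replaces the retired veteran
         "alice": 25.0 * 1.03,      # 3% raise
         "bob": 22.0 * 1.03}        # 3% raise

avg1 = sum(year1.values()) / len(year1)
avg2 = sum(year2.values()) / len(year2)
print(avg2 < avg1)  # True: average pay drops though no one's pay was cut
```

The same arithmetic works in reverse during layoffs, when the cheapest workers tend to be let go first and average wages look deceptively strong.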

When you adjust the numbers for these factors, the wage picture improves considerably. “Overall,” the researchers report, “these factors have combined to hold down growth in the median weekly earnings measure by a little under 2 percentage points, a sizable effect relative to the normal expected gains.” Here’s the money graph from the Fed report:

The black line in the middle tracks wage growth as reported in the labor statistics. The dotted red line shows the effect on the numbers of recent changes in the makeup of the workforce. The dotted blue line shows what wage growth looks like when you account for those demographic shifts — when you isolate, in other words, the actual changes in the wages of employed, full-time workers. What you’re left with, clearly, is a much brighter and much more typical picture. As the Fed’s economic research director Mary Daly told Bloomberg, “Wage growth, when cleaned up, looks consistent with other measures seen in the labor market.”

I’m sure this research won’t be the final word on the complex issue of jobs, wages, and technological unemployment. But the findings do provide further reason for skepticism when examining claims that a robot horde is about to eat the job market.

Postscript: In a new article in Wired, Andrew McAfee, coauthor with Erik Brynjolfsson of the influential book The Second Machine Age, says he now regrets the stress he placed on automation’s impact on overall employment: “If I had to do it over again, I would put more emphasis on the way technology leads to structural changes in the economy, and less on jobs, jobs, jobs. The central phenomenon is not net job loss. It’s the shift in the kinds of jobs that are available.” I think that’s right, but I’d add another concern that will become more pressing: the impact of automation on the structure of jobs themselves. Human beings and computers are going to be working together, more closely than ever, and we need to get the division of labor right. The “robots are taking over” rhetoric is a distraction from what’s most important about the second machine age.

Image: still from Lost in Space.

The virtual postman never stops ringing

In the latest issue of New Philosopher, I have an essay, “Speaking Through Computers,” that looks at how the form and content of our speech have been shaped by communications networks, from the postal system to social media. It begins:

Much modern technology has its origins in war, or the anticipation of war, and that’s the case with Google, Facebook, Snapchat, and all the other networks that stream data through our phones and lives. The Big Bang of digital communication came on the morning of August 29, 1949, when the Soviet Union carried out its first test of an atomic bomb. The explosion jolted the U.S. government, and the American military soon began work on a vast air-defense system, known as Semi-Automatic Ground Environment, or SAGE, to provide early warnings of air attacks on North America.

The system required a fast computer network. Readings from radar stations would be collected in digital form by mainframes stationed around the continent, and the data would be sent in real time to other computers at command centers and air bases. The output would be a complete picture of the sky at every moment. There was just one hitch: computers at the time worked in solitude; they didn’t know how to talk to each other. The Air Force called in the crack engineers at Bell Labs, and they solved the problem by devising a digital modem able to turn the ones and zeroes of computer code into electrical pulses that could be sent over wires. The telephone lines that for decades had carried the conversations of human beings now carried computer chatter as well. The melding of personal and machine communication had begun. . . .

New Philosopher is available at newsstands and bookstores or by subscription. More info.

The soft tyranny of the rating system

In his darkly comic 2010 novel Super Sad True Love Story, Gary Shteyngart imagines a Yelpified America in which people are judged not by the content of their character but by their streamed credit scores and crowdsourced “hotness” points. Social relations of even the most intimate variety are governed by online rating systems.

A sanitized if more insidious version of Shteyngart’s big-data dystopia is taking shape in China today. At its core is the government’s “Social Credit System,” a centrally managed data-analysis program that, using facial-recognition software, mobile apps, and other digital tools, collects exhaustive information on people’s behavior and, running the data through an evaluative algorithm, assigns each person a “social trustworthiness” score. If you run a red light or fail to pick up your dog’s poop, your score goes down. If you shovel snow off a sidewalk or exhibit good posture in riding your bicycle, your score goes up. People with high scores get a variety of benefits, from better seats on trains to easier credit at banks. People with low scores suffer various financial and social penalties.

As Kai Strittmatter reports in a Süddeutsche Zeitung article, the Social Credit System is already operating in three dozen test cities in China, including Shanghai, and the government’s goal is to have everyone in the country enrolled by 2020:

Each company and person in China is to take part in it. Everyone will be continuously assessed at all times and accorded a rating. In [the test cities], each participant starts with 1000 points, and then their score either improves or worsens. You can be a triple-A citizen (“Role Model of Honesty,” with more than 1050 points), or a double-A (“Outstanding Honesty”). But if you’ve messed up often enough, you can drop down to a C, with fewer than 849 points (“Warning Level”), or even a D (“Dishonest”) with 599 points or less. In the latter case, your name is added to a black list, the general public is informed, and you become an “object of significant surveillance.”
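Read as a data structure, the tier scheme in the quoted passage is just a banded lookup. The quote specifies only some of the edges (triple-A above 1050, C below 849, D at 599 or less), so the middle boundary below is a guess added purely to make the sketch runnable; it is not taken from the system's actual guidelines.

```python
# The rating bands from the quoted passage as a simple lookup. Edges
# not given in the quote (notably the AA band's limits) are assumed
# here for illustration only.

def tier(score):
    if score > 1050:
        return "AAA"  # "Role Model of Honesty"
    if score >= 849:
        return "AA"   # "Outstanding Honesty" (band edges assumed)
    if score > 599:
        return "C"    # "Warning Level"
    return "D"        # "Dishonest": blacklisted, publicly named

print(tier(1000))  # the score every participant starts with
```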

As Strittmatter points out, the Chinese government has long monitored its citizenry. But while the internet-based Social Credit System may be nothing new from a policy standpoint, it allows a depth and immediacy of behavioral monitoring and correction that go far beyond anything that was possible before:

The Social Credit System’s heart and soul is the algorithm that gathers information without pause, and then processes, structures and evaluates it. The “Accelerate Punishment Software” section of the system guidelines describes the aim: “automatic verification, automatic interception, automatic supervision, and automatic punishment” of each breach of trust, in real time, everywhere. If all goes as planned, there will no longer be any loopholes anywhere.

The government officials that Strittmatter talked to were eager to discuss the program and to emphasize how it would encourage citizens to act more responsibly, leading to a happier, more harmonious society. As one planning document puts it, “the system will stamp out ‘lies and deception’ [and] increase ‘the nation’s honesty and quality.’” Those sound like worthy goals, and the rhetoric is not so different from that used in the U.S. and U.K. to promote governmental and commercial programs that employ online data collection and automated “nudge” systems to encourage good behavior and social harmony. I recall something Mark Zuckerberg wrote in his recent “Building Global Community” manifesto: “Looking ahead, one of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community.” I’m not suggesting any equivalence. I am suggesting that when it comes to using automated behavioral monitoring and control systems for “beneficial” ends, the boundaries can get fuzzy fast. “Of all tyrannies,” wrote C. S. Lewis in God in the Dock, “a tyranny sincerely exercised for the good of its victims may be the most oppressive.”

What’s particularly worrisome about behavior-modification systems that employ publicly posted numerical ratings is that they encourage citizens to serve as their own tyrants. Using peer pressure, competition, and status-establishing prizes to shape behavior, the systems raise the specter of a “gamification” of tyranny. Nobody wants the stigma of a low score, particularly when it’s out there on the net for everyone to see. We’ll strive for Status Credits just as we strive for Likes or, to return to Shteyngart’s world, Hotness Points. “Our aim is to standardize people’s behavior,” a Communist Party Secretary tells Strittmatter. “If everyone behaves according to standard, society is automatically stable and harmonious. This makes my work much easier.”

The master and the machine: on AI and chess

“A Brutal Intelligence: AI, Chess, and the Human Mind,” my review of Garry Kasparov’s new book Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, appears today in the Los Angeles Review of Books. Here’s a bit:

The contingency of human intelligence, the way it shifts with health, mood, and circumstance, is at the center of Kasparov’s account of his historic duel with Deep Blue. Having beaten the machine in a celebrated match a year earlier, the champion enters the 1997 competition confident that he will again come out the victor. His confidence swells when he wins the first game decisively. But in the fateful second game, Deep Blue makes a series of strong moves, putting Kasparov on the defensive. Rattled, he makes a calamitous mental error. He resigns the game in frustration after the computer launches an aggressive and seemingly lethal attack on his queen. Only later does he realize that his position had not been hopeless; he could have forced the machine into a draw. The loss leaves Kasparov “confused and in agony,” unable to regain his emotional bearings. Though the next three games end in draws, Deep Blue crushes him in the sixth and final game to win the match.

One of Kasparov’s strengths as a champion had always been his ability to read the minds of his adversaries and hence anticipate their strategies. But with Deep Blue, there was no mind to read. The machine’s lack of personality, its implacable blankness, turned out to be one of its greatest advantages. It disoriented Kasparov, breeding doubts in his mind and eating away at his self-confidence. “I didn’t know my opponent at all,” he recalls. “This intense confusion left my mind to wander to darker places.” The irony is that the machine’s victory was as much a matter of psychology as of skill.

Read on.

Photo: Elyktra.

Should Uber’s next CEO be a robot?

A little more than two years ago, I suggested in a post that “the killer business app for artificial intelligence may turn out to be the algorithmic CEO.” I was picking up on a point that Frank Pasquale had made in a review of The Second Machine Age:

[Thiel Fellow and Ethereum developer Vitalik Buterin] has stated that automation of the top management functions at firms like Uber and AirBnB would be “trivially easy.” Automating the automators may sound like a fantasy, but it is a natural outgrowth of mantras (e.g., “maximize shareholder value”) that are commonplaces among the corporate elite.

Now that Uber CEO Travis Kalanick has resigned, completing a meltdown of the company’s top management ranks, Uber and its investors have a perfect opportunity to disrupt the executive suite, and indeed the entire history of management, by using software to run the company. Let’s face it: Kalanick’s great failing was that he was not quite robotic enough. His flaws were not analytical but human. He was a victim of his own meat.

A fundamentally numerical company, constituted mainly of software, Uber is the perfect test bed for the robot CEO. And since its staff includes exceptionally talented programmers, it already has the skill needed to gin up the algorithms necessary to do the work Kalanick and his lieutenants did (without the attendant buffoonery).* A two-day hackathon should be more than sufficient to create a robot able to translate spreadsheet data into resource-allocation plans and use machine learning to compose forward-leaning messages that inspire staffers, drivers, and venture capitalists. And to have Uber’s robot CEO sit next to Cook, Nadella, Bezos, et al., at the next White House photo-op would be a huge PR coup.

Not only is Uber the right company for a robot CEO, but now is the right time for one. Just two months ago, Alibaba CEO Jack Ma predicted that “in thirty years, a robot will likely be on the cover of Time Magazine as the best CEO.”** As the financier Martin Hutchinson pointed out, there’s no reason to wait that long. “Human CEOs have amassed an especially dire track record in the last two decades,” he wrote. “Whereas their compensation has soared far faster than overall U.S. output, productivity growth in U.S. businesses has notably lagged, indicating their failure to invest optimally.” If there were ever a job to be automated, it’s that of the underperforming, overpaid modern CEO.

Even at this year’s World Economic Forum in Davos, the case for a robot CEO was laid out in compelling terms:

There are some distinct advantages to having a robot as your company’s CEO. Firstly, they might be able to make better, more responsible, decisions. … Robots don’t face the unpredictability we humans face, so their decisions are more likely to be consistent, based on facts. … Robots can work all day, every day. They don’t need sleep, weekends or holidays. No mere humans can say the same, however hard they may try to cultivate that impression. … And if you’ve created one CEO robot, why not create a few more? It’s not as if he or she has a unique personality. Technology allows them to interact wherever your customers are, further cutting down travel costs and helping the environment.

We may look back on Kalanick’s resignation as the most transformative act of his eventful career. He has opened the door for a robot CEO. The question now is whether the Uber board will welcome the future or resist it.

_________

*On further thought, Uber’s coders probably have better things to do than write simple CEO algorithms. What’s really needed are cloud-based virtual CEOs. Yes: CEO-as-a-Service. Are you listening, Marc Benioff?

**Ma’s assumption that Time will still be around, with its cover intact, thirty years from now makes me question his futurist cred. But I’m going to assume he was speaking figuratively.