When television emerged as a fledgling medium in the middle years of the last century, it already had, in the form of the Federal Communications Commission, the Communications Act of 1934, and various other laws and precedents, a framework for regulating its content. The formal restrictions on the broadcasting of obscene, indecent, profane, prurient, and violent material, combined with the sensitivities of mainstream advertisers, defined the boundaries of Prime Time television through the fifties, sixties, and much of the seventies — until the spread of cable programming changed everything.
When the internet emerged as a medium in the 1990s, it was free of any such regulatory framework restricting its content. Indeed, an anything-goes ethos was as essential to the nature and ideals of the net as the family-friendly ethos was to the nature and ideals of TV during its formative decades. The net, in other words, escaped the sanitized Prime Time phase.
Or did it?
Today, Facebook released a set of “content guidelines for monetization” that might have been written by FCC bureaucrats in the 1950s. Among other things, the Facebook rules prohibit or restrict:
“Content that depicts family entertainment characters engaging in violent, sexualized, or otherwise inappropriate behavior, including videos positioned in a comedic or satirical manner.”
“Content that focuses on real world tragedies, including but not limited to depictions of death, casualties, physical injuries, even if the intention is to promote awareness or education.”
“Content that is incendiary, inflammatory, demeaning or disparages people, groups, or causes.”
“Content that is depicting threats or acts of violence against people or animals, [including] excessively graphic violence in the course of video gameplay.”
“Content where the focal point is nudity or adult content, including depictions of people in explicit or suggestive positions, or activities that are overly suggestive or sexually provocative.”
“Content that features coordinated criminal activity, drug use, or vandalism.”
“Content that depicts overly graphic images, blood, open wounds, bodily fluids, surgeries, medical procedures, or gore that is intended to shock or scare.”
“Content depicting or promoting the excessive consumption of alcohol, smoking, or drug use.”
I’m not sure Petticoat Junction would have made it through that gauntlet.
These restrictions are not being imposed by government fiat. Facebook is imposing them on itself, in response to growing public concern about the net’s anything-goes ethos and, in particular, to advertisers’ worries about what Facebook VP Carolyn Everson terms “brand safety.” The fact that the rules allow little or no room for editorial judgment — is this image exploitative or journalistic? — reveals what happens when a tech firm becomes a media hub.
Some will welcome the sweeping new restrictions on content. Others will be appalled. What they make clear, though, is that the internet, as most experience it, has entered a new era, spurred by the consolidation of traffic into a handful of sites and apps run by companies whose fortunes hinge on their ability to keep advertisers happy. The internet is reliving the history of television, but in reverse. First came Anything Goes. Now comes Prime Time.
The paperback edition of Utopia Is Creepy is out today, September 12, from W. W. Norton & Company. Collecting seventy-nine of the best posts from Rough Type as well as sixteen essays and reviews I published between 2008 and 2016, the book, says Time, “punches a hole in Silicon Valley cultural hubris.”
Here’s an excerpt from the introduction:
“The most unfree souls go west, and shout of freedom.” –D. H. Lawrence, Studies in Classic American Literature
The greatest of America’s homegrown religions — greater than Jehovah’s Witnesses, greater than the Church of Jesus Christ of Latter-Day Saints, greater even than Scientology — is the religion of technology. John Adolphus Etzler, a Pittsburgher, sounded the trumpet in his 1833 testament The Paradise within the Reach of All Men. By fulfilling its “mechanical purposes,” he wrote, the United States would turn itself into a new Eden, a “state of superabundance” where “there will be a continual feast, parties of pleasures, novelties, delights and instructive occupations,” not to mention “vegetables of infinite variety and appearance.”
Similar predictions proliferated throughout the nineteenth and twentieth centuries, and in their visions of “technological majesty,” as the critic and historian Perry Miller wrote, we find the true American sublime. We may blow kisses to agrarians like Jefferson and tree-huggers like Thoreau, but we put our faith in Edison and Ford, Gates and Zuckerberg. It is the technologists who shall lead us.
The internet, with its disembodied voices and ethereal avatars, seemed mystical from the start, its unearthly vastness a receptacle for America’s spiritual yearnings and tropes. “What better way,” wrote Cal State philosopher Michael Heim in 1991, “to emulate God’s knowledge than to generate a virtual world constituted by bits of information?” In 1999, the year Google moved from a Menlo Park garage to a Palo Alto office, the Yale computer scientist David Gelernter wrote a manifesto predicting “the second coming of the computer,” replete with gauzy images of “cyberbodies drift[ing] in the computational cosmos” and “beautifully-laid-out collections of information, like immaculate giant gardens.” The millenarian rhetoric swelled with the arrival of Web 2.0. “Behold,” proclaimed Kevin Kelly in an August 2005 Wired cover story: We are entering a “new world,” powered not by God’s grace but by the web’s “electricity of participation.” It would be a paradise of our own making, “manufactured by users.” History’s databases would be erased, humankind rebooted. “You and I are alive at this moment.”
The revelation continues to this day, the technological paradise forever glittering on the horizon. Even money men have taken up sidelines in starry-eyed futurism. In 2014, venture capitalist Marc Andreessen sent out a rhapsodic series of tweets — he called it a “tweetstorm” — announcing that computers and robots were about to liberate us all from “physical need constraints.” Echoing John Adolphus Etzler (and also Karl Marx), he declared that “for the first time in history” humankind would be able to express its full and true nature: “We will be whoever we want to be. The main fields of human endeavor will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure.” The only thing he left out was the vegetables.
Such prophecies might be dismissed as the prattle of overindulged rich guys, but for one thing: They’ve shaped public opinion. By spreading a utopian view of technology, a view that defines progress as essentially technological, they’ve encouraged people to switch off their critical faculties and give Silicon Valley entrepreneurs and financiers free rein in remaking culture to fit their commercial interests. If, after all, the technologists are creating a world of superabundance, a world without work or want, their interests must be indistinguishable from society’s. To stand in their way, or even to question their motives and tactics, would be self-defeating. It would serve only to delay the wonderful inevitable.
The Silicon Valley line has been given an academic imprimatur by theorists from universities and think tanks. Intellectuals spanning the political spectrum, from Randian right to Marxian left, have portrayed the computer network as a technology of emancipation. The virtual world, they argue, provides an escape from repressive social, corporate, and governmental constraints; it frees people to exercise their volition and creativity unfettered, whether as entrepreneurs seeking riches in the marketplace or as volunteers engaged in “social production” outside the marketplace. “This new freedom,” wrote law professor Yochai Benkler in his influential 2006 book The Wealth of Networks, “holds great practical promise: as a dimension of individual freedom; as a platform for better democratic participation; as a medium to foster a more critical and self-reflective culture; and, in an increasingly information-dependent global economy, as a mechanism to achieve improvements in human development everywhere.” Calling it a revolution, he went on, is no exaggeration.
Benkler and his cohorts had good intentions, but their assumptions were bad. They put too much stock in the early history of the web, when its commercial and social structures were inchoate, its users a skewed sample of the population. They failed to appreciate how the network would funnel the energies of the people into a centrally administered, tightly monitored information system organized to enrich a small group of businesses and their owners.
The network would indeed generate a lot of wealth, but it would be wealth of the Adam Smith sort—and it would be concentrated in a few hands, not widely spread. The culture that emerged on the network, and that now extends deep into our lives and psyches, is characterized by frenetic production and consumption — smartphones have made media machines of us all — but little real empowerment and even less reflectiveness. It’s a culture of distraction and dependency. That’s not to deny the benefits of having easy access to an efficient, universal system of information exchange. It is to deny the mythology that has come to shroud the system. And it is to deny the assumption that the system, in order to provide its benefits, had to take its present form.
Late in his life, the economist John Kenneth Galbraith coined the term “innocent fraud.” He used it to describe a lie or a half-truth that, because it suits the needs or views of those in power, is presented as fact. After much repetition, the fiction becomes common wisdom. “It is innocent because most who employ it are without conscious guilt,” Galbraith wrote. “It is fraud because it is quietly in the service of special interest.” The idea of the computer network as an engine of liberation is an innocent fraud.
I have an op-ed in today’s New York Times about how domestic robots, which we always assumed would resemble ourselves when they entered our homes, have instead arrived in the form of chatbot-powered smart speakers. The shift from the Jetsons’ embodied Rosie to Amazon’s disembodied Alexa says something important about our times, I suggest. The piece begins:
From the moment we humans first imagined having mechanical servants at our beck and call, we’ve assumed they would be constructed in our own image. Outfitted with arms and legs, heads and torsos, they would perform everyday tasks that we’d otherwise have to do ourselves. Like the indefatigable maid Rosie on The Jetsons, the officious droid C-3PO in Star Wars and the tortured “host” Dolores Abernathy in Westworld, the robotic helpmates of popular culture have been humanoid in form and function.
It’s time to rethink our assumptions. A robot invasion of our homes is underway, but the machines — so-called smart speakers like Amazon Echo, Google Home and the forthcoming Apple HomePod — look nothing like what we expected. Small, squat and stationary, they resemble vases or cat food tins more than they do people.
Echo and its ilk do, however, share a crucial trait with their imaginary forebears: They illuminate the times. Whatever their shape, robots tell us something important about our technologies and ourselves. …
By leaps, steps, and stumbles, science progresses. Its seemingly inexorable advance promotes a sense that everything can be known and will be known. Through observation and experiment, and lots of hard thinking, we will come to explain even the murkiest and most complicated of nature’s secrets: consciousness, dark matter, time, the origin and fate of the universe.
But what if our faith in nature’s knowability is just an illusion, a trick of the overconfident human mind? That’s the working assumption behind a school of thought known as mysterianism. Situated at the fruitful if sometimes fraught intersection of scientific and philosophic inquiry, the mysterianist view has been promulgated, in different ways, by many respected thinkers, from the philosopher Colin McGinn to the cognitive scientist Steven Pinker. The mysterians propose that human intellect has boundaries and that some of nature’s mysteries may forever lie beyond our comprehension.
Mysterianism is most closely associated with the so-called hard problem of consciousness: How can the inanimate matter of the brain produce subjective feelings? The mysterians argue that the human mind may be incapable of understanding itself, that we will never know how consciousness works. But if mysterianism applies to the workings of the mind, there’s no reason it shouldn’t also apply to the workings of nature in general. As McGinn has suggested, “It may be that nothing in nature is fully intelligible to us.”
The simplest and best argument for mysterianism is founded on evolutionary evidence. When we examine any other living creature, we understand immediately that its intellect is limited. Even the brightest, most curious dog is not going to master arithmetic. Even the wisest of owls knows nothing of the physiology of the field mouse it devours. If all the minds that evolution has produced have bounded comprehension, then it’s only logical that our own minds, also products of evolution, would have limits as well. As Pinker has put it, “The brain is a product of evolution, and just as animal brains have their limitations, we have ours.” To assume that there are no limits to human understanding is to believe in a level of human exceptionalism that seems miraculous, if not mystical.
Mysterianism, it’s important to emphasize, is not inconsistent with materialism. The mysterians don’t suggest that what’s unknowable must be spiritual or otherwise otherworldly. They posit that matter itself has complexities that lie beyond our ken. Like every other animal on earth, we humans are just not smart enough to understand all of nature’s laws and workings.
What’s truly disconcerting about mysterianism is that, if our intellect is bounded, we can never know how much of existence lies beyond our grasp. What we know or may in the future know may be trifling compared with the unknowable unknowns. “As to myself,” remarked Isaac Newton in his old age, “I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.” It may be that we are all like that child on the strand, playing with the odd pebble or shell — and fated to remain so.
Mysterianism teaches us humility. Through science, we have come to understand much about nature, but much more may remain outside the scope of our perception and comprehension. If the mysterians are right, science’s ultimate achievement may be to reveal to us its own limits.
This post originally appeared in Edge, as an answer to the question “What scientific term or concept ought to be more widely known?”
I’ve always seen reality as mixed, so when I heard today that Microsoft is about to launch a line of Windows Mixed Reality Headsets, I was chuffed. Everyone who dons the eyewear, I assumed, would see the world exactly as I do. It was a dream come true. Subjectivity would finally be resolved, and in my favor. Here at last was a gizmo — from Microsoft, no less — that I could get behind.
Then I read that Microsoft “defines Mixed Reality as anything that includes or falls between Virtual Reality and Augmented Reality.” My chuffiness evaporated like dew on a summer morn. Whatever these headsets are going to reveal, it’s not going to be my reality. It’s not even going to be a reality. It’s going to be a bunch of realities subsumed into a meta reality.
We seem to have an abundance of realities all of a sudden. It’s like the Yippies finally went through with their plan to put acid in the water supply. I’m feeling overwhelmed. I hadn’t even realized that there was a gap between Augmented Reality and Virtual Reality that other realities could squeeze into. I had taken it as a given that VR begins right where AR ends — that they share a border. I was mistaken. MR encompasses AR and VR, but it also includes many other, as yet unbranded Rs. Conceptually, it’s something like the multiverse. Once you admit the possibility of two realities, you get a multitude of realities, all higgledy-piggledy.
“Humankind cannot bear very much reality,” T. S. Eliot wrote. But he wasn’t wearing a Windows Mixed Reality Headset.
I see where this is heading. Reality is about to be platformed. Reality, in fact, is going to be the ultimate platform: the Superplatform. You’re going to have Apple Reality, Facebook Reality, Google Reality, Amazon Reality, and Microsoft Reality, and each of them is going to be enclosed in a Trump-sized wall. The Reality War will be the war to end all Platform Wars.
Who am I kidding? There’s not going to be any war. Competition is for losers, particularly when it comes to reality-building. What we’re actually going to see is the rise of a Reality Oligopoly: five great and profitable Reality Monopolies feigning rivalry but existing comfortably side by side. I expect that the walls between them will end up being slightly porous — just enough to allow a bit of reality-hopping. You may be a member of Facebook Reality, but you’ll be able to vacation in Amazon Reality. Some sort of cross-payments system will be arranged.
It seems weird to think of reality as being ad-based, but I suppose a cynic would say it’s been that way for a while. Still, I can’t help but see a protest movement emerging: a small band of lefties and libertarians marching together under the banner of Reality Neutrality. “Reality wants to be free!” they’ll declare in a manifesto. To which the Reality Monopolists will quietly reply: “You’re entitled to your own facts, but you’re not entitled to your own reality.”
The following review of Garry Kasparov’s Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins appeared originally in the Los Angeles Review of Books.
¤ ¤ ¤
Chess is the game not just of kings but of geniuses. For hundreds of years, it has served as standard and symbol for the pinnacles of human intelligence. Staring at the pieces, lost to the world, the chess master seems a figure of pure thought: brain without body. It’s hardly a surprise, then, that when computer scientists began to contemplate the creation of an artificial intelligence in the middle years of the last century, they adopted the chessboard as their proving ground. To build a machine able to beat a skilled human player would be to fabricate a mind. It was a compelling idea, and to this day it shapes public perceptions of artificial intelligence. But, as the former world chess champion Garry Kasparov argues in his illuminating new memoir Deep Thinking, the theory was flawed from the start. It reflected a series of misperceptions — about chess, about computers, and about the mind.
At the dawn of the computer age, in 1950, the Bell Labs engineer and information scientist Claude Shannon published a paper in Philosophical Magazine called “Programming a Computer for Playing Chess.” The creation of a “tolerably good” computerized chess player, he argued, was not only possible but would have metaphysical consequences. It would force the human race “either to admit the possibility of a mechanized thinking or to further restrict [its] concept of ‘thinking.’” He went on to offer an insight that would prove essential both to the development of chess software and to the pursuit of artificial intelligence in general. A chess program, he wrote, would need to incorporate a search function able to identify possible moves and rank them according to how they would influence the course of the game. He laid out two very different approaches to programming the function. “Type A” would rely on brute force, calculating the relative value of all possible moves as far ahead in the game as the speed of the computer allowed. “Type B” would use intelligence rather than raw power, imbuing the computer with an understanding of the game that would allow it to focus on a small number of attractive moves while ignoring the rest. In essence, a Type B computer would demonstrate the intuition of an experienced human player.
When Shannon wrote his paper, he and everyone else assumed that the Type A method was a dead end. It seemed obvious that, under the time restrictions of a competitive chess game, a computer would never be fast enough to extend its analysis more than a few turns ahead. As Kasparov points out, there are “over 300 billion possible ways to play just the first four moves in a game of chess, and even if 95 percent of these variations are terrible, a Type A program would still have to check them all.” In 1950, and for many years afterward, no one could imagine a computer able to execute a successful brute-force strategy against a good player. “Unfortunately,” Shannon concluded, “a machine operating according to the Type A strategy would be both slow and a weak player.”
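In modern terms, Shannon’s Type A strategy is fixed-depth minimax search: enumerate every legal move, recurse on every reply, and score the leaf positions with a crude evaluation function. Here is a minimal sketch in Python, assuming a hypothetical position object whose legal_moves, apply, evaluate, and is_terminal methods are invented for illustration; they come from neither Shannon’s paper nor any real chess library.

```python
# A rough sketch of Shannon's "Type A" strategy: exhaustive, fixed-depth
# minimax with a simple static score at the leaves. The `position`
# interface is hypothetical, invented purely for illustration.

def minimax(position, depth, maximizing):
    """Score a position by brute-force lookahead to a fixed depth."""
    if depth == 0 or position.is_terminal():
        return position.evaluate()  # static score, higher is better for the maximizing side
    scores = [
        minimax(position.apply(move), depth - 1, not maximizing)
        for move in position.legal_moves()  # Type A: every legal move is examined
    ]
    return max(scores) if maximizing else min(scores)

def best_move(position, depth=4):
    """Choose the move whose subtree scores best for the player to move."""
    return max(
        position.legal_moves(),
        key=lambda move: minimax(position.apply(move), depth - 1, maximizing=False),
    )
```

Nothing in the sketch is clever; every branch is visited. At the branching factor Kasparov cites, that is exactly why the approach looked hopeless on the hardware of 1950.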
Type B, the intelligence strategy, seemed far more feasible, not least because it fit the scientific zeitgeist. As the public’s fascination with digital computers intensified during the 1950s, the machines began to influence theories about the human mind. Many scientists and philosophers came to assume that the brain must work something like a computer, using its billions of networked neurons to calculate thoughts and perceptions. Through a curious kind of circular logic, this analogy in turn guided the early pursuit of artificial intelligence: if you could figure out the codes that the brain uses in carrying out cognitive tasks, you’d be able to program similar codes into a computer. Not only would the machine play chess like a master, but it would also be able to do pretty much anything else that a human brain can do. In a 1958 paper, the prominent AI researchers Herbert Simon and Allen Newell declared that computers are “machines that think” and, in the near future, “the range of problems they can handle will be coextensive with the range to which the human mind has been applied.” With the right programming, a computer would turn sapient.
¤ ¤ ¤
It took only a few decades after Shannon wrote his paper for engineers to build a computer that could play chess brilliantly. Its most famous victim: Garry Kasparov.
One of the greatest and most intimidating players in the history of the game, Kasparov was defeated in a six-game bout by the IBM supercomputer Deep Blue in 1997. Even though it was the first time a machine had beaten a world champion in a formal match, to computer scientists and chess masters alike the outcome wasn’t much of a surprise. Chess-playing computers had been making strong and steady gains for years, advancing inexorably up the ranks of the best human players. Kasparov just happened to be in the right place at the wrong time.
But the story of the computer’s victory comes with a twist. Shannon and his contemporaries, it turns out, had been wrong. It was the Type B approach — the intelligence strategy — that ended up being the dead end. Despite their early optimism, AI researchers failed to get computers to think as people do. Deep Blue beat Kasparov not by matching his insight and intuition but by overwhelming him with blind calculation. Thanks to years of exponential gains in processing speed, combined with steady improvements in the efficiency of search algorithms, the computer was able to comb through enough possible moves in a short enough time to outduel the champion. Brute force triumphed. “It turned out that making a great chess-playing computer was not the same as making a thinking machine on par with the human mind,” Kasparov reflects. “Deep Blue was intelligent the way your programmable alarm clock is intelligent.”
The history of computer chess is the history of artificial intelligence. After their disappointments in trying to reverse-engineer the brain, computer scientists narrowed their sights. Abandoning their pursuit of human-like intelligence, they began to concentrate on accomplishing sophisticated, but limited, analytical tasks by capitalizing on the inhuman speed of the modern computer’s calculations. This less ambitious but more pragmatic approach has paid off in areas ranging from medical diagnosis to self-driving cars. Computers are replicating the results of human thought without replicating thought itself. If in the 1950s and 1960s the emphasis in the phrase “artificial intelligence” fell heavily on the word “intelligence,” today it falls with even greater weight on the word “artificial.”
Particularly fruitful has been the deployment of search algorithms similar to those that powered Deep Blue. If a machine can search millions of options in a matter of milliseconds, ranking each according to how well it fulfills some specified goal, then it can outperform experts in a lot of problem-solving tasks without having to match their experience or insight. More recently, AI programmers have added another brute-force technique to their repertoire: machine learning. In simple terms, machine learning is a statistical method for discovering correlations in past events that can then be used to make predictions about future events. Rather than giving a computer a set of instructions to follow, a programmer feeds the computer many examples of a phenomenon and from those examples the machine deciphers relationships among variables. Whereas most software programs apply rules to data, machine-learning algorithms do the reverse: they distill rules from data, and then apply those rules to make judgments about new situations.
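As a toy illustration of that reversal, consider a program that never encodes a spam rule but derives one from a handful of labeled messages and then applies it to new text. The data, and the word-counting “rule” itself, are invented for illustration; no production system works from anything this crude.

```python
# Toy illustration of "distilling rules from data": count how often each
# word appears in labeled example messages, then score new messages by
# those learned counts. The example data is invented.
from collections import Counter

examples = [
    ("win cash prize now", "spam"),
    ("cheap prize offer win", "spam"),
    ("lunch meeting moved to noon", "ham"),
    ("project update and meeting notes", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
for text, label in examples:
    counts[label].update(text.split())  # the "rule" is just these accumulated counts

def classify(message):
    """Label a new message with whichever class's learned word counts fit it better."""
    words = message.split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

print(classify("win a cash prize"))        # -> spam
print(classify("notes from the meeting"))  # -> ham
```

The “rule” here is nothing more than the table of word counts the program accumulated from its examples; change the examples and the rule changes with them.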
In modern translation software, for example, a computer scans many millions of translated texts to learn associations between phrases in different languages. Using these correspondences, it can then piece together translations of new strings of text. The computer doesn’t require any understanding of grammar or meaning; it just regurgitates words in whatever combination it calculates has the highest odds of being accurate. The result lacks the style and nuance of a skilled translator’s work but has considerable utility nonetheless. Although machine-learning algorithms have been around a long time, they require a vast number of examples to work reliably, something that became possible only with the explosion of online data. Kasparov quotes an engineer who worked on Google’s popular translation program: “When you go from 10,000 training examples to 10 billion training examples, it all starts to work. Data trumps everything.”
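Stripped to a cartoon, the mechanism looks something like the sketch below, which counts word pairings in a few invented sentence pairs and then emits, for each new word, the pairing it has seen most often. The word-by-word alignment is a deliberate simplification; real systems infer phrase alignments statistically from millions of documents.

```python
# Cartoon version of corpus-based translation: learn word pairings from
# aligned sentence pairs, then translate new text by emitting the pairing
# seen most often. The tiny corpus and the positional alignment are
# invented simplifications for illustration.
from collections import defaultdict, Counter

parallel_corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("a cat eats", "un chat mange"),
]

pair_counts = defaultdict(Counter)
for english, french in parallel_corpus:
    for en_word, fr_word in zip(english.split(), french.split()):
        pair_counts[en_word][fr_word] += 1  # tally co-occurring word pairs

def translate(sentence):
    """For each word, emit the pairing observed most often in the corpus."""
    return " ".join(
        pair_counts[w].most_common(1)[0][0] if pair_counts[w] else w
        for w in sentence.split()
    )

print(translate("the cat eats"))  # -> "le chat mange"
```

The program “knows” no French; it only knows which strings have tended to appear together, and, as the Google engineer’s remark suggests, it improves mainly by seeing more pairs.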
The pragmatic turn in AI research is producing many such breakthroughs, but the shift also highlights the limitations of artificial intelligence. Through brute-force data processing, computers can churn out answers to well-defined questions and forecast how complex events may play out, but they lack the understanding, imagination, and common sense to do what human minds do naturally: turn information into knowledge, think conceptually and metaphorically, and negotiate the world’s flux and uncertainty without a script. Machines remain machines.
That fact hasn’t blunted the public’s enthusiasm for AI fantasies. Along with TV shows and movies featuring scheming computers and bloody-minded robots, we’ve recently seen a slew of earnest nonfiction books with titles like Superintelligence, Smarter Than Us, and Our Final Invention, all suggesting that machines will soon be brainier than we are. The predictions echo those made in the 1950s and 1960s, and, as before, they’re founded on speculation, not fact. Despite monumental advances in hardware and software, computers give no sign of being any nearer to self-awareness, volition, or emotion. Their strength — what Kasparov describes as an “amnesiac’s objectivity” — is also their weakness.
¤ ¤ ¤
In addition to questioning the common wisdom about artificial intelligence, Kasparov challenges our preconceptions about chess. The game, particularly when played at its highest levels, is far more than a cerebral exercise in logic and calculation, and the expert player is anything but a stereotypical egghead. The connection between chess skill and the kind of intelligence measured by IQ scores, Kasparov observes, is weak at best. “There is no more truth to the thought that all chess players are geniuses than in saying that all geniuses play chess,” he writes. “One of the things that makes chess so interesting is that it’s still unclear exactly what separates good chess players from great ones.”
Chess is a grueling sport. It demands stamina, resilience, and an aptitude for psychological warfare. It also requires acute sensory perception. “Move generation seems to involve more visuospatial brain activity than the sort of calculation that goes into solving math problems,” writes Kasparov, drawing on recent neurological experiments. To the chess master, the board’s 64 squares define not just an abstract geometry but an actual terrain. Like figures on a landscape, the pieces form patterns that the master, drawing on years of experience, reads intuitively, often at a glance. Methodical analysis is important, too, but it is carried out as part of a multifaceted and still mysterious thought process involving the body and its senses as well as the brain’s neurons and synapses.
The contingency of human intelligence, the way it shifts with health, mood, and circumstance, is at the center of Kasparov’s account of his historic duel with Deep Blue. Having beaten the machine in a celebrated match a year earlier, the champion enters the 1997 competition confident that he will again come out the victor. His confidence swells when he wins the first game decisively. But in the fateful second game, Deep Blue makes a series of strong moves, putting Kasparov on the defensive. Rattled, he makes a calamitous mental error. He resigns the game in frustration after the computer launches an aggressive and seemingly lethal attack on his queen. Only later does he realize that his position had not been hopeless; he could have forced the machine into a draw. The loss leaves Kasparov “confused and in agony,” unable to regain his emotional bearings. Though the next three games end in draws, Deep Blue crushes him in the sixth and final game to win the match.
One of Kasparov’s strengths as a champion had always been his ability to read the minds of his adversaries and hence anticipate their strategies. But with Deep Blue, there was no mind to read. The machine’s lack of personality, its implacable blankness, turned out to be one of its greatest advantages. It disoriented Kasparov, breeding doubts in his mind and eating away at his self-confidence. “I didn’t know my opponent at all,” he recalls. “This intense confusion left my mind to wander to darker places.” The irony is that the machine’s victory was as much a matter of psychology as of skill.*
If Kasparov hadn’t become flustered, he might have won the 1997 match. But that would have just postponed the inevitable. By the turn of the century, the era of computer dominance in chess was well established. Today, not even the grandest of grandmasters would bother challenging a computer to a match. They know they wouldn’t stand a chance.
But if computers have become unbeatable at the board, they remain incapable of exhibiting what Kasparov calls “the ineffable nature of human chess.” To Kasparov, this is cause for optimism about the future of humanity. Unlike the eight-by-eight chessboard, the world is an unbounded place without a rigid set of rules, and making sense of it will always require more than mathematical or statistical calculations. The inherent rigidity of computer intelligence leaves plenty of room for humans to exercise their flexible and intuitive intelligence. If we remain vigilant in turning the power of our computers to our own purposes, concludes Kasparov, our machines will not replace us but instead propel us to ever-greater achievements.
One hopes he’s right. Still, as computers become more powerful and more adept at fulfilling our needs, there is a danger. The benefits of computer processing are easy to measure — in speed, in output, in dollars — while the benefits of human thought are often impossible to express in hard numbers. Given contemporary society’s worship of the measurable and suspicion of the ineffable, our own intelligence would seem to be at a disadvantage as we rush to computerize more and more aspects of our jobs and lives. The question isn’t whether the subtleties of human thought will continue to lie beyond the reach of computers. They almost certainly will. The question is whether we’ll continue to appreciate the value of those subtleties as we become more dependent on the mindless but brutally efficient calculations of our machines. In the face of the implacable, the contingent can seem inferior, its strengths appearing as weaknesses.
Near the end of his book, Kasparov notes, with some regret, that “humans today are starting to play chess more like computers.” Once again, the ancient game may be offering us an omen.
*A bit of all-too-human deviousness was also involved in Deep Blue’s win. IBM’s coders, it was later revealed, programmed the computer to display erratic behavior — delaying certain moves, for instance, and rushing others — in an attempt to unsettle Kasparov. Computers may be innocents, but that doesn’t mean their programmers are.