Question marks of the mysterians

By leaps, steps, and stumbles, science progresses. Its seemingly inexorable advance promotes a sense that everything can be known and will be known. Through observation and experiment, and lots of hard thinking, we will come to explain even the murkiest and most complicated of nature’s secrets: consciousness, dark matter, time, the origin and fate of the universe.

But what if our faith in nature’s knowability is just an illusion, a trick of the overconfident human mind? That’s the working assumption behind a school of thought known as mysterianism. Situated at the fruitful if sometimes fraught intersection of scientific and philosophical inquiry, the mysterian view has been promulgated, in different ways, by many respected thinkers, from the philosopher Colin McGinn to the cognitive scientist Steven Pinker. The mysterians propose that human intellect has boundaries and that some of nature’s mysteries may forever lie beyond our comprehension.

Mysterianism is most closely associated with the so-called hard problem of consciousness: How can the inanimate matter of the brain produce subjective feelings? The mysterians argue that the human mind may be incapable of understanding itself, that we will never know how consciousness works. But if mysterianism applies to the workings of the mind, there’s no reason it shouldn’t also apply to the workings of nature in general. As McGinn has suggested, “It may be that nothing in nature is fully intelligible to us.”

The simplest and best argument for mysterianism is founded on evolutionary evidence. When we examine any other living creature, we understand immediately that its intellect is limited. Even the brightest, most curious dog is not going to master arithmetic. Even the wisest of owls knows nothing of the physiology of the field mouse it devours. If all the minds that evolution has produced have bounded comprehension, then it’s only logical that our own minds, also products of evolution, would have limits as well. As Pinker has put it, “The brain is a product of evolution, and just as animal brains have their limitations, we have ours.” To assume that there are no limits to human understanding is to believe in a level of human exceptionalism that seems miraculous, if not mystical.

Mysterianism, it’s important to emphasize, is not inconsistent with materialism. The mysterians don’t suggest that what’s unknowable must be spiritual or otherwise otherworldly. They posit that matter itself has complexities that lie beyond our ken. Like every other animal on earth, we humans are just not smart enough to understand all of nature’s laws and workings.

What’s truly disconcerting about mysterianism is that, if our intellect is bounded, we can never know how much of existence lies beyond our grasp. What we know or may in the future know may be trifling compared with the unknowable unknowns. “As to myself,” remarked Isaac Newton in his old age, “I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.” It may be that we are all like that child on the strand, playing with the odd pebble or shell — and fated to remain so.

Mysterianism teaches us humility. Through science, we have come to understand much about nature, but much more may remain outside the scope of our perception and comprehension. If the mysterians are right, science’s ultimate achievement may be to reveal to us its own limits.

This post originally appeared in Edge, as an answer to the question “What scientific term or concept ought to be more widely known?”

Reality as a service

I’ve always seen reality as mixed, so when I heard today that Microsoft is about to launch a line of Windows Mixed Reality Headsets, I was chuffed. Everyone who dons the eyewear, I assumed, would see the world exactly as I do. It was a dream come true. Subjectivity would finally be resolved, and in my favor. Here at last was a gizmo — from Microsoft, no less — that I could get behind.

Then I read that Microsoft “defines Mixed Reality as anything that includes or falls between Virtual Reality and Augmented Reality.” My chuffiness evaporated like dew on a summer morn. Whatever these headsets are going to reveal, it’s not going to be my reality. It’s not even going to be a reality. It’s going to be a bunch of realities subsumed into a meta reality.

We seem to have an abundance of realities all of a sudden. It’s like the Yippies finally went through with their plan to put acid in the water supply. I’m feeling overwhelmed. I hadn’t even realized that there was a gap between Augmented Reality and Virtual Reality that other realities could squeeze into. I had taken it as a given that VR begins right where AR ends — that they share a border. I was mistaken. MR encompasses AR and VR, but it also includes many other, as yet unbranded Rs. Conceptually, it’s something like the multiverse. Once you admit the possibility of two realities, you get a multitude of realities, all higgledy-piggledy.

“Humankind cannot bear very much reality,” T. S. Eliot wrote. But he wasn’t wearing a Windows Mixed Reality Headset.

I see where this is heading. Reality is about to be platformed. Reality, in fact, is going to be the ultimate platform: the Superplatform. You’re going to have Apple Reality, Facebook Reality, Google Reality, Amazon Reality, and Microsoft Reality, and each of them is going to be enclosed in a Trump-sized wall. The Reality War will be the war to end all Platform Wars.

Who am I kidding? There’s not going to be any war. Competition is for losers, particularly when it comes to reality-building. What we’re actually going to see is the rise of a Reality Oligopoly: five great and profitable Reality Monopolies feigning rivalry but existing comfortably side by side. I expect that the walls between them will end up being slightly porous — just enough to allow a bit of reality-hopping. You may be a member of Facebook Reality, but you’ll be able to vacation in Amazon Reality. Some sort of cross-payments system will be arranged.

It seems weird to think of reality as being ad-based, but I suppose a cynic would say it’s been that way for a while. Still, I can’t help but see a protest movement emerging: a small band of lefties and libertarians marching together under the banner of Reality Neutrality. “Reality wants to be free!” they’ll declare in a manifesto. To which the Reality Monopolists will quietly reply: “You’re entitled to your own facts, but you’re not entitled to your own reality.”

Image: Microsoft.

AI’s game

The following review of Garry Kasparov’s Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins appeared originally in the Los Angeles Review of Books.

¤ ¤ ¤

Chess is the game not just of kings but of geniuses. For hundreds of years, it has served as standard and symbol for the pinnacles of human intelligence. Staring at the pieces, lost to the world, the chess master seems a figure of pure thought: brain without body. It’s hardly a surprise, then, that when computer scientists began to contemplate the creation of an artificial intelligence in the middle years of the last century, they adopted the chessboard as their proving ground. To build a machine able to beat a skilled human player would be to fabricate a mind. It was a compelling idea, and to this day it shapes public perceptions of artificial intelligence. But, as the former world chess champion Garry Kasparov argues in his illuminating new memoir Deep Thinking, the theory was flawed from the start. It reflected a series of misperceptions — about chess, about computers, and about the mind.

At the dawn of the computer age, in 1950, the Bell Labs engineer and information scientist Claude Shannon published a paper in Philosophical Magazine called “Programming a Computer for Playing Chess.” The creation of a “tolerably good” computerized chess player, he argued, was not only possible but would have metaphysical consequences. It would force the human race “either to admit the possibility of a mechanized thinking or to further restrict [its] concept of ‘thinking.’” He went on to offer an insight that would prove essential both to the development of chess software and to the pursuit of artificial intelligence in general. A chess program, he wrote, would need to incorporate a search function able to identify possible moves and rank them according to how they would influence the course of the game. He laid out two very different approaches to programming the function. “Type A” would rely on brute force, calculating the relative value of all possible moves as far ahead in the game as the speed of the computer allowed. “Type B” would use intelligence rather than raw power, imbuing the computer with an understanding of the game that would allow it to focus on a small number of attractive moves while ignoring the rest. In essence, a Type B computer would demonstrate the intuition of an experienced human player.

When Shannon wrote his paper, he and everyone else assumed that the Type A method was a dead end. It seemed obvious that, under the time restrictions of a competitive chess game, a computer would never be fast enough to extend its analysis more than a few turns ahead. As Kasparov points out, there are “over 300 billion possible ways to play just the first four moves in a game of chess, and even if 95 percent of these variations are terrible, a Type A program would still have to check them all.” In 1950, and for many years afterward, no one could imagine a computer able to execute a successful brute-force strategy against a good player. “Unfortunately,” Shannon concluded, “a machine operating according to the Type A strategy would be both slow and a weak player.”
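To see what a Type A search involves, here is a minimal sketch in Python. Chess is swapped out for an invented toy game (players alternately take one or two stones, and whoever takes the last stone wins) so that the code is self-contained and can search to the end of the game; in chess, the same procedure would have to stop at a fixed depth and fall back on a static evaluation of the position, as Shannon proposed.

```python
# A toy rendering of Shannon's "Type A" strategy: check every line of
# play, however unpromising. The subtraction game here is an invented
# stand-in for chess; a real engine would need a depth cutoff and a
# static evaluation function instead of searching to the end.

def minimax(stones, maximizing):
    """Exhaustively search all moves and return the game's value:
    +1 if the maximizing player wins with perfect play, -1 if not."""
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2) if take <= stones]
    return max(scores) if maximizing else min(scores)

for stones in range(1, 7):
    print(stones, minimax(stones, True))
# Prints 1 1, 2 1, 3 -1, 4 1, 5 1, 6 -1: brute force "discovers" that
# multiples of 3 are lost for the player to move, with no insight needed.
```

The combinatorics Kasparov describes fall straight out of this procedure: with roughly 30 legal moves in a typical chess position, a search of d half-moves must visit on the order of 30^d positions.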

Type B, the intelligence strategy, seemed far more feasible, not least because it fit the scientific zeitgeist. As the public’s fascination with digital computers intensified during the 1950s, the machines began to influence theories about the human mind. Many scientists and philosophers came to assume that the brain must work something like a computer, using its billions of networked neurons to calculate thoughts and perceptions. Through a curious kind of circular logic, this analogy in turn guided the early pursuit of artificial intelligence: if you could figure out the codes that the brain uses in carrying out cognitive tasks, you’d be able to program similar codes into a computer. Not only would the machine play chess like a master, but it would also be able to do pretty much anything else that a human brain can do. In a 1958 paper, the prominent AI researchers Herbert Simon and Allen Newell declared that computers are “machines that think” and, in the near future, “the range of problems they can handle will be coextensive with the range to which the human mind has been applied.” With the right programming, a computer would turn sapient.

¤ ¤ ¤

It took only a few decades after Shannon wrote his paper for engineers to build a computer that could play chess brilliantly. Its most famous victim: Garry Kasparov.

One of the greatest and most intimidating players in the history of the game, Kasparov was defeated in a six-game bout by the IBM supercomputer Deep Blue in 1997. Even though it was the first time a machine had beaten a world champion in a formal match, to computer scientists and chess masters alike the outcome wasn’t much of a surprise. Chess-playing computers had been making strong and steady gains for years, advancing inexorably up the ranks of the best human players. Kasparov just happened to be in the right place at the wrong time.

But the story of the computer’s victory comes with a twist. Shannon and his contemporaries, it turns out, had been wrong. It was the Type B approach — the intelligence strategy — that ended up being the dead end. Despite their early optimism, AI researchers failed to get computers to think the way people do. Deep Blue beat Kasparov not by matching his insight and intuition but by overwhelming him with blind calculation. Thanks to years of exponential gains in processing speed, combined with steady improvements in the efficiency of search algorithms, the computer was able to comb through enough possible moves in a short enough time to outduel the champion. Brute force triumphed. “It turned out that making a great chess-playing computer was not the same as making a thinking machine on par with the human mind,” Kasparov reflects. “Deep Blue was intelligent the way your programmable alarm clock is intelligent.”

The history of computer chess is the history of artificial intelligence. After their disappointments in trying to reverse-engineer the brain, computer scientists narrowed their sights. Abandoning their pursuit of human-like intelligence, they began to concentrate on accomplishing sophisticated, but limited, analytical tasks by capitalizing on the inhuman speed of the modern computer’s calculations. This less ambitious but more pragmatic approach has paid off in areas ranging from medical diagnosis to self-driving cars. Computers are replicating the results of human thought without replicating thought itself. If in the 1950s and 1960s the emphasis in the phrase “artificial intelligence” fell heavily on the word “intelligence,” today it falls with even greater weight on the word “artificial.”

Particularly fruitful has been the deployment of search algorithms similar to those that powered Deep Blue. If a machine can search millions of options in a matter of milliseconds, ranking each according to how well it fulfills some specified goal, then it can outperform experts in a lot of problem-solving tasks without having to match their experience or insight. More recently, AI programmers have added another brute-force technique to their repertoire: machine learning. In simple terms, machine learning is a statistical method for discovering correlations in past events that can then be used to make predictions about future events. Rather than giving a computer a set of instructions to follow, a programmer feeds the computer many examples of a phenomenon and from those examples the machine deciphers relationships among variables. Whereas most software programs apply rules to data, machine-learning algorithms do the reverse: they distill rules from data, and then apply those rules to make judgments about new situations.
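To make that reversal concrete, here is a toy sketch in Python. The spam-filtering task, the hand-coded rule, and the four training examples are all invented for illustration; real machine-learning systems use far richer models and vastly more data.

```python
# A toy contrast between rule-driven software and machine learning.
# The task, rule, and training data are invented for illustration.

# Traditional programming: a hand-written rule is applied to data.
def is_spam_by_rule(message):
    return "free money" in message.lower()

# Machine learning: the "rule" is distilled from labeled examples.
def train(examples):
    """Count how often each word appears in spam vs. ham messages."""
    counts = {}
    for text, label in examples:
        for word in text.lower().split():
            spam, ham = counts.get(word, (0, 0))
            counts[word] = (spam + 1, ham) if label == "spam" else (spam, ham + 1)
    return counts

def is_spam_learned(message, counts):
    """Score a message by how spam-leaning its words were in training."""
    score = 0
    for word in message.lower().split():
        spam, ham = counts.get(word, (0, 0))
        score += spam - ham
    return score > 0

model = train([
    ("claim your free money now", "spam"),
    ("free money waiting for you", "spam"),
    ("lunch meeting moved to noon", "ham"),
    ("notes from the budget meeting", "ham"),
])
print(is_spam_by_rule("money waiting for you"))         # False
print(is_spam_learned("money waiting for you", model))  # True
```

The learned “rule” is nothing but word counts, yet it flags a message the hand-written rule misses, which is the whole appeal of distilling rules from data.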

In modern translation software, for example, a computer scans many millions of translated texts to learn associations between phrases in different languages. Using these correspondences, it can then piece together translations of new strings of text. The computer doesn’t require any understanding of grammar or meaning; it just regurgitates words in whatever combination it calculates has the highest odds of being accurate. The result lacks the style and nuance of a skilled translator’s work but has considerable utility nonetheless. Although machine-learning algorithms have been around a long time, they require vast numbers of examples to work reliably, and such troves became available only with the explosion of online data. Kasparov quotes an engineer who worked on Google’s popular translation program: “When you go from 10,000 training examples to 10 billion training examples, it all starts to work. Data trumps everything.”
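A drastically simplified sketch of that approach might look like the following, with a hand-written, two-entry phrase table standing in for the millions of weighted phrase pairs a real system learns from parallel texts:

```python
# A toy sketch of statistical translation. The phrase table below is
# written by hand; a real system learns millions of weighted phrase
# pairs, with probabilities, from huge parallel corpora.

PHRASE_TABLE = {
    "guten morgen": [("good morning", 0.9), ("good day", 0.1)],
    "vielen dank": [("thank you very much", 0.7), ("many thanks", 0.3)],
}

def translate(sentence):
    """Replace each known source phrase with its highest-probability
    rendering; the program has no notion of grammar or meaning."""
    output = sentence.lower()
    for phrase, candidates in PHRASE_TABLE.items():
        best, _probability = max(candidates, key=lambda c: c[1])
        output = output.replace(phrase, best)
    return output

print(translate("Guten Morgen und vielen Dank"))
# -> "good morning und thank you very much"
```

Note that the stray “und” passes through untranslated: the program knows only what its table contains, which is exactly the regurgitation Kasparov describes.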

The pragmatic turn in AI research is producing many such breakthroughs, but the shift also highlights the limitations of artificial intelligence. Through brute-force data processing, computers can churn out answers to well-defined questions and forecast how complex events may play out, but they lack the understanding, imagination, and common sense to do what human minds do naturally: turn information into knowledge, think conceptually and metaphorically, and negotiate the world’s flux and uncertainty without a script. Machines remain machines.

That fact hasn’t blunted the public’s enthusiasm for AI fantasies. Along with TV shows and movies featuring scheming computers and bloody-minded robots, we’ve recently seen a slew of earnest nonfiction books with titles like Superintelligence, Smarter Than Us, and Our Final Invention, all suggesting that machines will soon be brainier than we are. The predictions echo those made in the 1950s and 1960s, and, as before, they’re founded on speculation, not fact. Despite monumental advances in hardware and software, computers give no sign of being any nearer to self-awareness, volition, or emotion. Their strength — what Kasparov describes as an “amnesiac’s objectivity” — is also their weakness.

¤ ¤ ¤

In addition to questioning the common wisdom about artificial intelligence, Kasparov challenges our preconceptions about chess. The game, particularly when played at its highest levels, is far more than a cerebral exercise in logic and calculation, and the expert player is anything but a stereotypical egghead. The connection between chess skill and the kind of intelligence measured by IQ scores, Kasparov observes, is weak at best. “There is no more truth to the thought that all chess players are geniuses than in saying that all geniuses play chess,” he writes. “One of the things that makes chess so interesting is that it’s still unclear exactly what separates good chess players from great ones.”

Chess is a grueling sport. It demands stamina, resilience, and an aptitude for psychological warfare. It also requires acute sensory perception. “Move generation seems to involve more visuospatial brain activity than the sort of calculation that goes into solving math problems,” writes Kasparov, drawing on recent neurological experiments. To the chess master, the board’s 64 squares define not just an abstract geometry but an actual terrain. Like figures on a landscape, the pieces form patterns that the master, drawing on years of experience, reads intuitively, often at a glance. Methodical analysis is important, too, but it is carried out as part of a multifaceted and still mysterious thought process involving the body and its senses as well as the brain’s neurons and synapses.

The contingency of human intelligence, the way it shifts with health, mood, and circumstance, is at the center of Kasparov’s account of his historic duel with Deep Blue. Having beaten the machine in a celebrated match a year earlier, the champion enters the 1997 competition confident that he will again come out the victor. His confidence swells when he wins the first game decisively. But in the fateful second game, Deep Blue makes a series of strong moves, putting Kasparov on the defensive. Rattled, he makes a calamitous mental error. He resigns the game in frustration after the computer launches an aggressive and seemingly lethal attack on his queen. Only later does he realize that his position had not been hopeless; he could have forced the machine into a draw. The loss leaves Kasparov “confused and in agony,” unable to regain his emotional bearings. Though the next three games end in draws, Deep Blue crushes him in the sixth and final game to win the match.

One of Kasparov’s strengths as a champion had always been his ability to read the minds of his adversaries and hence anticipate their strategies. But with Deep Blue, there was no mind to read. The machine’s lack of personality, its implacable blankness, turned out to be one of its greatest advantages. It disoriented Kasparov, breeding doubts in his mind and eating away at his self-confidence. “I didn’t know my opponent at all,” he recalls. “This intense confusion left my mind to wander to darker places.” The irony is that the machine’s victory was as much a matter of psychology as of skill.*

If Kasparov hadn’t become flustered, he might have won the 1997 match. But that would have just postponed the inevitable. By the turn of the century, the era of computer dominance in chess was well established. Today, not even the grandest of grandmasters would bother challenging a computer to a match. They know they wouldn’t stand a chance.

But if computers have become unbeatable at the board, they remain incapable of exhibiting what Kasparov calls “the ineffable nature of human chess.” To Kasparov, this is cause for optimism about the future of humanity. Unlike the eight-by-eight chessboard, the world is an unbounded place without a rigid set of rules, and making sense of it will always require more than mathematical or statistical calculations. The inherent rigidity of computer intelligence leaves plenty of room for humans to exercise their flexible and intuitive intelligence. If we remain vigilant in turning the power of our computers to our own purposes, concludes Kasparov, our machines will not replace us but instead propel us to ever-greater achievements.

One hopes he’s right. Still, as computers become more powerful and more adept at fulfilling our needs, there is a danger. The benefits of computer processing are easy to measure — in speed, in output, in dollars — while the benefits of human thought are often impossible to express in hard numbers. Given contemporary society’s worship of the measurable and suspicion of the ineffable, our own intelligence would seem to be at a disadvantage as we rush to computerize more and more aspects of our jobs and lives. The question isn’t whether the subtleties of human thought will continue to lie beyond the reach of computers. They almost certainly will. The question is whether we’ll continue to appreciate the value of those subtleties as we become more dependent on the mindless but brutally efficient calculations of our machines. In the face of the implacable, the contingent can seem inferior, its strengths appearing as weaknesses.

Near the end of his book, Kasparov notes, with some regret, that “humans today are starting to play chess more like computers.” Once again, the ancient game may be offering us an omen.

___________________________

*A bit of all-too-human deviousness was also involved in Deep Blue’s win. IBM’s coders, it was later revealed, programmed the computer to display erratic behavior — delaying certain moves, for instance, and rushing others — in an attempt to unsettle Kasparov. Computers may be innocents, but that doesn’t mean their programmers are.

Image: Google’s cat.

The robot paradox, continued

You can see the robot age everywhere but in the labor statistics, I wrote a few months ago, channeling Robert Solow. The popular and often alarming predictions of a looming unemployment crisis, one that would stem from rapid advances in robotics, artificial intelligence, and other computer automation technologies, have become increasingly hard to square with the economy’s rebound to near full employment. If computers were going to devastate jobs on a broad scale, one would think there’d be signs of it by now. We have, after all, been seeing remarkable gains in computing and software for many decades, while the internet has been working its putative magic for more than twenty years. And it’s not like a shortage of corporate cash is curtailing investment in technology. Profits have been robust and capital cheap.

Still, even as jobs rebounded from the depths of the Great Recession, overall wage growth has appeared sluggish, at times stagnant. It has seemed possible that the weakness in wages might be the canary in the automated coal mine, an early indication of a coming surge in technological unemployment. If humans are increasingly competing for jobs against automatons, of both the hardware and software variety, that might explain workers’ inability to reap wage gains from a tightening labor market — and it might presage a broad shift of work from people to machines. At some point, if automation continued to put downward pressure on pay, workers would simply give up trying to compete with technology. The robots would win.

But even here, there’s growing reason to doubt the conventional wisdom. For one thing, earnings growth has been picking up, hitting an annualized 4.2 percent in July, its highest mark in a decade. For another, and more telling, the wage statistics may not have been giving us an accurate picture. The sluggishness in earnings growth may have been something of an illusion all along, a distortion resulting from a combination of demographic changes in the American work force and post-recession labor market dynamics. That’s the implication of a new study of wage growth in this century from the Federal Reserve Bank of San Francisco. The researchers found that average wages have been depressed by two unusual trends: (1) Baby boomers are retiring at a high rate, and they’re being replaced by younger, less experienced workers. The newcomers are naturally paid less than the veterans they replace, which shows up in the labor statistics as a drop in pay for those jobs. (2) Many of the workers now landing full-time jobs have either been unemployed for a while or are moving from part-time to full-time posts. These workers, too, tend to earn below-average wages in their new positions, which also pulls down average wages. As the researchers explain: “Counterintuitively, this means that strong job growth can pull average wages in the economy down and slow the pace of wage growth.”
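The arithmetic behind that counterintuitive claim is easy to check with a toy example (the wages below are invented, not the Fed’s data):

```python
# A toy illustration of the composition effect: every individual
# worker gets a raise, yet the measured average wage falls.

veterans = [1000.0] * 10                   # weekly wages, year one
avg_year1 = sum(veterans) / len(veterans)  # $1000

# Year two: the nine continuing workers get a 3% raise, but one
# veteran retires and is replaced by an entrant earning $700.
year2 = [w * 1.03 for w in veterans[:9]] + [700.0]
avg_year2 = sum(year2) / len(year2)

print(f"${avg_year1:.0f} -> ${avg_year2:.0f}")  # $1000 -> $997
```

Adding still more entrants (that is, strong job growth) only strengthens the effect, which is why the raw average can fall even as every individual paycheck rises.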

When you adjust the numbers for these factors, the wage picture improves considerably. “Overall,” the researchers report, “these factors have combined to hold down growth in the median weekly earnings measure by a little under 2 percentage points, a sizable effect relative to the normal expected gains.” Here’s the money graph from the Fed report:

The black line in the middle tracks wage growth as reported in the labor statistics. The dotted red line shows the effect on the numbers of recent changes in the makeup of the workforce. The dotted blue line shows what wage growth looks like when you account for those demographic shifts — when you isolate, in other words, the actual changes in the wages of employed, full-time workers. What you’re left with, clearly, is a much brighter and much more typical picture. As the Fed’s economic research director Mary Daly told Bloomberg, “Wage growth, when cleaned up, looks consistent with other measures seen in the labor market.”

I’m sure this research won’t be the final word on the complex issue of jobs, wages, and technological unemployment. But the findings do provide further reason for skepticism when examining claims that a robot horde is about to eat the job market.

Postscript: In a new article in Wired, Andrew McAfee, coauthor with Erik Brynjolfsson of the influential book The Second Machine Age, says he now regrets the stress he placed on automation’s impact on overall employment: “If I had to do it over again, I would put more emphasis on the way technology leads to structural changes in the economy, and less on jobs, jobs, jobs. The central phenomenon is not net job loss. It’s the shift in the kinds of jobs that are available.” I think that’s right, but I’d add another concern that will become more pressing: the impact of automation on the structure of jobs themselves. Human beings and computers are going to be working together, more closely than ever, and we need to get the division of labor right. The “robots are taking over” rhetoric is a distraction from what’s most important about the second machine age.

Image: still from Lost in Space.

The virtual postman never stops ringing

In the latest issue of New Philosopher, I have an essay, “Speaking Through Computers,” that looks at how the form and content of our speech have been shaped by communications networks, from the postal system to social media. It begins:

Much modern technology has its origins in war, or the anticipation of war, and that’s the case with Google, Facebook, Snapchat, and all the other networks that stream data through our phones and lives. The Big Bang of digital communication came on the morning of August 29, 1949, when the Soviet Union carried out its first test of an atomic bomb. The explosion jolted the U.S. government, and the American military soon began work on a vast air-defense system, known as Semi-Automatic Ground Environment, or SAGE, to provide early warnings of air attacks on North America.

The system required a fast computer network. Readings from radar stations would be collected in digital form by mainframes stationed around the continent, and the data would be sent in real time to other computers at command centers and air bases. The output would be a complete picture of the sky at every moment. There was just one hitch: computers at the time worked in solitude; they didn’t know how to talk to each other. The Air Force called in the crack engineers at Bell Labs, and they solved the problem by devising a digital modem able to turn the ones and zeroes of computer code into electrical pulses that could be sent over wires. The telephone lines that for decades had carried the conversations of human beings now carried computer chatter as well. The melding of personal and machine communication had begun. . . .

New Philosopher is available at newsstands and bookstores or by subscription. More info.

The soft tyranny of the rating system

In his darkly comic 2010 novel Super Sad True Love Story, Gary Shteyngart imagines a Yelpified America in which people are judged not by the content of their character but by their streamed credit scores and crowdsourced “hotness” points. Social relations of even the most intimate variety are governed by online rating systems.

A sanitized if more insidious version of Shteyngart’s big-data dystopia is taking shape in China today. At its core is the government’s “Social Credit System,” a centrally managed data-analysis program that, using facial-recognition software, mobile apps, and other digital tools, collects exhaustive information on people’s behavior and, running the data through an evaluative algorithm, assigns each person a “social trustworthiness” score. If you run a red light or fail to pick up your dog’s poop, your score goes down. If you shovel snow off a sidewalk or exhibit good posture in riding your bicycle, your score goes up. People with high scores get a variety of benefits, from better seats on trains to easier credit at banks. People with low scores suffer various financial and social penalties.

As Kai Strittmatter reports in a Süddeutsche Zeitung article, the Social Credit System is already operating in three dozen test cities in China, including Shanghai, and the government’s goal is to have everyone in the country enrolled by 2020:

Each company and person in China is to take part in it. Everyone will be continuously assessed at all times and accorded a rating. In [the test cities], each participant starts with 1000 points, and then their score either improves or worsens. You can be a triple-A citizen (“Role Model of Honesty,” with more than 1050 points), or a double-A (“Outstanding Honesty”). But if you’ve messed up often enough, you can drop down to a C, with fewer than 849 points (“Warning Level”), or even a D (“Dishonest”) with 599 points or less. In the latter case, your name is added to a black list, the general public is informed, and you become an “object of significant surveillance.”

As Strittmatter points out, the Chinese government has long monitored its citizenry. But while the internet-based Social Credit System may be nothing new from a policy standpoint, it allows a depth and immediacy of behavioral monitoring and correction that go far beyond anything that was possible before:

The Social Credit System’s heart and soul is the algorithm that gathers information without pause, and then processes, structures and evaluates it. The “Accelerate Punishment Software” section of the system guidelines describes the aim: “automatic verification, automatic interception, automatic supervision, and automatic punishment” of each breach of trust, in real time, everywhere. If all goes as planned, there will no longer be any loopholes anywhere.

The government officials that Strittmatter talked to were eager to discuss the program and to emphasize how it would encourage citizens to act more responsibly, leading to a happier, more harmonious society. As one planning document puts it, “the system will stamp out ‘lies and deception’ [and] increase ‘the nation’s honesty and quality.'” Those sound like worthy goals, and the rhetoric is not so different from that used in the U.S. and U.K. to promote governmental and commercial programs that employ online data collection and automated “nudge” systems to encourage good behavior and social harmony. I recall something Mark Zuckerberg wrote in his recent “Building Global Community” manifesto: “Looking ahead, one of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community.” I’m not suggesting any equivalence. I am suggesting that when it comes to using automated behavioral monitoring and control systems for “beneficial” ends, the boundaries can get fuzzy fast. “Of all tyrannies,” wrote C. S. Lewis in God in the Dock, “a tyranny sincerely exercised for the good of its victims may be the most oppressive.”

What’s particularly worrisome about behavior-modification systems that employ publicly posted numerical ratings is that they encourage citizens to serve as their own tyrants. Using peer pressure, competition, and status-establishing prizes to shape behavior, the systems raise the specter of a “gamification” of tyranny. Nobody wants the stigma of a low score, particularly when it’s out there on the net for everyone to see. We’ll strive for Status Credits just as we strive for Likes or, to return to Shteyngart’s world, Hotness Points. “Our aim is to standardize people’s behavior,” a Communist Party Secretary tells Strittmatter. “If everyone behaves according to standard, society is automatically stable and harmonious. This makes my work much easier.”

On Robert Pollard: “Chicken Blows”

[No. 04 in a Series]

Take one of those short Beatles songs from the medley that closes Abbey Road, turn it inside out, fill it with nitrous oxide, and let a kindergarten class use it as a ball during recess. That’s “Chicken Blows.” A seeming throwaway that arrives near the end of the nearly endless Alien Lanes, the song reveals itself as a miniature pop masterpiece only after many listens: the exquisitely frayed melody, the trembling vocal, the aching background harmonies, all washing across the tidal pull of a hazy, hypnotic guitar line. Everything feels exhausted, out of focus, dreamlike. “Chicken Blows” is the last song you hear before you fall asleep after a night that’s gone on much too long.

Like so many Guided by Voices songs, “Chicken Blows” has a warped backstory. It was originally released in 1994, a year before Alien Lanes came out, as a track on an exceedingly obscure compilation EP called The Polite Cream Tea Corps, which was included in an issue of Ptolemaic Terrascope, an occasional British psychedelic-music magazine. But the song seems to have been written and recorded much earlier than that, perhaps even in the 1980s. It was slated to be on the aborted 1991 Guided by Voices album Back to Saturn X, which Robert Pollard “shitcanned” just as it was going into production. It was then held in suspended animation for a few years, as three seminal GBV records appeared (1992’s Propeller, 1993’s Vampire on Titus, and 1994’s Bee Thousand), before Pollard decided the time was right to release it.

What’s remarkable about “Chicken Blows” is that it sounds much more contemporary today than it did when it came out more than twenty years ago. Sonically, it anticipates the entropic, Auto-Tune experiments of Bon Iver, Kanye West, and others. It’s fitting that Frank Ocean included the number on one of his Beats 1 playlists earlier this year. Sometimes seeds spend a lot of time underground before they sprout.

“Chicken blows”? The lyrics are funny, but as usual they’re hiding something sad.

I’m not here to drink all the beer
in the fridge,
in the room,
in the house,
in the place
that we both so love.

“The intellect of man is forced to choose,” wrote W. B. Yeats, “perfection of the life, or of the work.” In the single-minded pursuit of his art, Pollard has had to live something of a broken life, at least when it comes to playing the domestic roles of son, husband, and father — those tireless consumers of poultry meals — and it’s this tension that gives so much of his work its heartbreaking quality. “Chicken Blows” is, among other things, a confession and an apology.

Can you sink
to the depths?
I don’t know,
I don’t even care,
and our lives
slip away.
In the end
we will probably reach
all the way
to the walls
over there.
Have you flown?

The walls of the home are the bonds of love, and it’s the sound of them slowly collapsing that gives “Chicken Blows” its poignancy.

Image: Detail of “His Beautiful Women Crying” by Robert Pollard.