Beware the bored computer

Last month a controversy broke out over reports that “Eugene Goostman,” a computer simulation of a 13-year-old boy, had passed the Turing Test and ushered in an era of true Artificial Intelligence. The discussions soon turned to the question of whether Alan Turing’s famous test is actually the best means of identifying AI. One group of researchers suggested that the Turing Test should be replaced by the Lovelace Test (named for Ada, not Linda), which requires a computer to create something original. That seems reasonable, but we still prefer the Rough Type Test, which was introduced in a post on this blog two years ago. We republish that post, with a couple of tweaks, below.

“Heaven is a place where nothing ever happens.” —Talking Heads

Let’s say, for the sake of argument, that David Byrne was correct and that the distinguishing characteristic of paradise is the absence of event, the total nonexistence of the new. Everything is beautifully, perfectly unflummoxed. If we further assume that hell is the opposite of heaven, then the distinguishing characteristic of hell is unrelenting eventfulness, the constant, unceasing arrival of the new. Hell is a place where something always happens. One would have to conclude, on that basis, that the great enterprise of our time is the creation of hell on earth. Every new smartphone should have, affixed to its screen, one of those transparent, peel-off stickers on which is written, “Abandon hope, ye who enter here.”

Maybe I’m making too many assumptions. But I was intrigued by Tom Simonite’s report today on the strides Google is making in the creation of neural nets that can actually learn useful things. The technology, it’s true, remains in its early infancy, but it appears at least to be post-fetal. It’s not at the level of, say, a one-and-a-half-year-old child who points at an image of a cat in a book and says “cat,” but it’s sort of in that general neighborhood. “Google’s engineers have found ways to put more computing power behind [machine learning] than was previously possible,” writes Simonite, “creating neural networks that can learn without human assistance and are robust enough to be used commercially, not just as research demonstrations. The company’s neural networks decide for themselves which features of data to pay attention to, and which patterns matter, rather than having humans decide that, say, colors and particular shapes are of interest to software trying to identify objects.”

The company has begun applying its neural nets to speech-recognition and image-recognition tasks. And, according to Google engineer Jeff Dean, the technology can already outperform people at some jobs:

“We are seeing better than human-level performance in some visual tasks,” he says, giving the example of labeling where house numbers appear in photos taken by Google’s Street View car, a job that used to be farmed out to many humans. “They’re starting to use neural nets to decide whether a patch [in an image] is a house number or not,” says Dean, and they turn out to perform better than humans.

But the real advantage of a neural net in such work, Dean goes on to say, probably has less to do with any real “intelligence” than with the machine’s utter inability to experience boredom. “It’s probably that [the task is] not very exciting, and a computer never gets tired,” he says. Comments Simonite, sagely: “It takes real intelligence to get bored.”
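For concreteness, here is a minimal sketch, in Python with PyTorch, of the kind of patch classifier Dean describes. It is an illustrative assumption, not Google's actual system: a tiny convolutional network that is handed only raw pixels and a yes/no label, and works out for itself which colors and shapes matter. All layer sizes and names here are made up for the example.

```python
# Illustrative sketch only -- not Google's system. A small convolutional
# network that learns, from raw pixels alone, whether a 32x32 image patch
# contains a house number. No hand-chosen features: the filters below are
# learned from labeled examples during training.
import torch
import torch.nn as nn

patch_classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, padding=2),   # learned edge/color filters
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=5, padding=2),  # learned digit-part filters
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 2),                     # scores: not-a-number / number
)

optimizer = torch.optim.Adam(patch_classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(patches, labels):
    """One update. patches: (N, 3, 32, 32) float tensor;
    labels: (N,) long tensor of 0s and 1s."""
    optimizer.zero_grad()
    loss = loss_fn(patch_classifier(patches), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point of the sketch is what isn't in it: no rule about colors or shapes, no flicker of restlessness. The filter weights are inferred from the labeled examples, and the training loop will grind through its millionth patch with exactly the enthusiasm it brought to the first.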

Forget the Turing Test. We'll know that computers are really smart when they start getting bored. If you assign a computer a profoundly tedious task like spotting house numbers in video images, and then you come back a couple of hours later and find that the computer is scrolling through its Facebook feed or surfing porn, then you'll know that artificial intelligence has truly arrived. The Singularity begins with an ennui-stricken microchip.

There’s another angle here, though. As many have pointed out, one thing that networked computers are supremely good at is preventing their users from experiencing boredom. A smartphone is the most perfect boredom-eradication device ever created. (Some might argue that smartphones don’t so much eradicate boredom as lend to boredom an illusion of excitement, but that’s probably just semantics.) To put it another way, what networked computers are doing is stealing from humans one of the essential markers of human intelligence: the capacity to experience boredom.

And that brings us back to the Talking Heads. For the non-artificially intelligent, boredom is not an end-state; it’s a portal to elsewhere — a way out of quotidian eventfulness and into some other, perhaps higher state of consciousness. Heaven is a place where nothing ever happens, but that’s a place that the computer, and, as it turns out, the computer-enabled human, can never visit. In hell, the house numbers, or their equivalents, never stop coming, and we never stop being amused by them.

Image: Petr Dosek.
