Can computers improvise?

[Image: Eminem]

Bust this:

Girl I’m down for whatever cause my love is true
This one goes to my man old dirty one love we be swigging brew
My brother I love you Be encouraged man And just know
When you done let me know cause my love make you be like WHOA

These rap lyrics, cobbled together by a computer from a database of lines from actual rap songs, “rival those of Eminem,” wrote Esquire’s Jill Krasny last week. I have to think that’s the biggest dis ever thrown Eminem’s way. But Krasny was not the only one gushing over the witless mashup. A Mashable headline said the program, dubbed DeepBeat by its Finnish creators, “produced rap lyrics that rival human-generated rhymes.” Quartz’s Adam Epstein suggested that robots can now be considered “lyrical wordsmiths.” Reported UPI: “Even rappers might soon lose their jobs to robots.”

I guess it must have been a slow news day.

Silly as it is, the story is not atypical, and it illuminates something important about our sense of the possibilities and threats presented by computers. Our expectations about artificial intelligence have raced ahead of the reality, and that’s skewing our view not only of the future but of the very real accomplishments being made in the AI and robotics fields. We take a modest but meaningful advance in natural-language processing — DeepBeat fits lines together through a statistical analysis of rhyme, line length, and wording, its choices constrained by a requirement that a specified keyword (“love” in the example above) appear in every line* — and we leap to the conclusion that computers are mastering wordplay and, by implication, encroaching on the human facility for creativity and improvisation. In the process, we denigrate the accomplishments of talented people — just to make the case for the computer seem a little more compelling.
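
To see just how mechanical the trick is, here is a rough sketch of the line-assembly idea in code. It is not DeepBeat’s actual model, which, as noted later, relies on a neural network; the corpus, scoring weights, and crude end-rhyme heuristic below are my own assumptions. But it captures the flavor: pick each next line by scoring candidates for rhyme with the previous line, similar length, and the presence of a required keyword.

```python
# Toy illustration only: assemble a verse by greedily picking, from a small
# corpus, the line that best rhymes with the previous one, has a similar
# length, and contains a required keyword. The corpus, weights, and end-rhyme
# heuristic are illustrative assumptions, not DeepBeat's actual model.

def rhyme_score(a: str, b: str) -> int:
    """Crude end-rhyme measure: length of the shared suffix of the last words."""
    last_a, last_b = a.split()[-1].lower(), b.split()[-1].lower()
    n = 0
    while n < min(len(last_a), len(last_b)) and last_a[-1 - n] == last_b[-1 - n]:
        n += 1
    return n

def line_score(prev: str, candidate: str, keyword: str) -> float:
    if keyword not in candidate.lower():
        return float("-inf")              # hard constraint: keyword must appear
    length_gap = abs(len(prev.split()) - len(candidate.split()))
    return 2.0 * rhyme_score(prev, candidate) - 0.5 * length_gap

def assemble(corpus, seed, keyword, n_lines):
    verse, pool = [seed], [line for line in corpus if line != seed]
    for _ in range(n_lines - 1):
        best = max(pool, key=lambda line: line_score(verse[-1], line, keyword))
        verse.append(best)
        pool.remove(best)
    return verse

corpus = [
    "Girl I'm down for whatever cause my love is true",
    "This one goes to my man old dirty one love we be swigging brew",
    "My brother I love you be encouraged man and just know",
    "When you done let me know cause my love make you be like WHOA",
]
print("\n".join(assemble(corpus, corpus[0], "love", 4)))
```

Every choice such a program makes traces straight back to a scoring rule someone wrote down; swap the corpus or the weights and the “creativity” changes accordingly.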

We humans have a well-documented tendency to perceive human characteristics in, and attribute human agency to, inanimate objects. That’s a side effect, scientists believe, of the human mind’s exquisite sensitivity to social signals. A hint of human-like cognition or behavior triggers a sense that we’re in the presence of a human-like being. The bias becomes particularly strong when we observe computers and automatons performing manual or analytical tasks similar to those we do ourselves. Joseph Weizenbaum, the MIT computer scientist who wrote the program for the early chatbot ELIZA, limned the phenomenon in a 1966 paper:

It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence. For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible.

The procedures have grown more complex and impressive since then, and their practical applications have widened enormously, but Weizenbaum’s point still holds. We’re quick to mistake clever programming for actual talent.

Last week, in a New York Times op-ed examining human and machine error, I wrote a couple of sentences that I suspected would raise the odd hackle: “Computers are wonderful at following instructions, but they’re terrible at improvisation. Their talents end at the limits of their programming.” And hackles were raised. Krasny hooked her Esquire piece on DeepBeat to those lines, arguing that the program’s ability to spot correlations in spoken language is an example of machine improvisation that proves me wrong. On Twitter, the sociologist and technology writer Zeynep Tufekci suggested I was “denying the true state of advances in artificial intelligence.” She wrote: “I don’t agree that [computers] are unable to ‘improvise’ in the most practical way.”

The quotation marks in Tufekci’s statement are revealing. If we redefine what we mean by improvisation to encompass a computer’s ability to respond programmatically to events within a constrained field of activity, then, sure, we can say that computers “improvise.” But that’s not what we really mean by improvisation. To improvise — the word derives from a Latin term meaning “without preparation” — is to act without instruction, without programming, in novel and unforeseen situations. To improvise is to go off script. Our talent for improvisation, a talent we share with other animals, stems from the mind’s ability to translate particular experiences into a store of general know-how, or common sense, which then can be deployed, fluidly and often without conscious deliberation, to meet new challenges in new circumstances.

No computer has demonstrated an act of true improvisation, an act that can’t be explained by the instructions written by its programmers. Great strides are being made in machine learning and other AI techniques,** but the programming of common sense remains out of reach. The cognitive scientist Gary Marcus, in a recent New Yorker essay, “Hyping Artificial Intelligence, Yet Again,” explains:

Trendy new techniques like deep learning and neuromorphic engineering give A.I. programmers purchase on a particular kind of problem that involves categorizing familiar stimuli, but say little about how to cope with things we haven’t seen before. As machines get better at categorizing things they can recognize, some tasks, like speech recognition, improve markedly, but others, like comprehending what a speaker actually means, advance more slowly.

Marcus is hardly blasé about advances in artificial intelligence. He thinks it likely that “machines will be smarter than us before the end of the century — not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine.” But he stresses that we’re still a long way from building machines with common sense, much less an ability to program themselves, and he’s skeptical that existing AI techniques will get us there.

Since we don’t know how the minds of human beings and other animals develop common sense, or gain self-awareness, or learn to improvise in novel situations, we have no template to follow in designing machines with such abilities. We’re working in the dark. As University of California, Berkeley, professor Michael Jordan, a leading expert in machine intelligence, said in an IEEE Spectrum interview with Lee Gomes last year, when it comes to “issues of higher cognition — how we perceive, how we remember, how we act — we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.”

And yet the assumption that computers are replicating our own thought processes persists. In reporting on DeepBeat’s use of a neural network, two Wall Street Journal bloggers wrote that the software “is based on a field of artificial intelligence that mimics the way the human brain works.” Writing about neural nets in general, Wired’s Cade Metz says that “these systems approximate the networks of neurons inside the human brain.” As Jordan cautions, “people continue to infer that something involving neuroscience is behind [neural nets], and that deep learning is taking advantage of an understanding of how the brain processes information, learns, makes decisions, or copes with large amounts of data. And that is just patently false.”

Even Marcus’s expectation of the arrival of human-level artificial intelligence in the next eighty or so years is based on a faith that we will find a way, without a map in hand or in the offing, to cross the undiscovered country that lies between where we are today and the promised land of human-level AI. That’s not to say it can’t happen. Our own minds would seem to be proof that common sense and improvisational skill can come from an assemblage of inanimate components. But it is to say that predictions that it will happen — in twenty years or fifty years or a hundred years — are speculations, not guarantees. They assume a lot of things that haven’t happened yet.

If in interpreting the abilities of machines we fall victim to our anthropomorphizing instinct, in forecasting the progress of machine abilities we’re often misled by our tendency to place unwarranted faith in our prediction techniques. Many of the predictions for the rapid arrival of human-like artificial intelligence, or “superintelligence” that exceeds human intelligence, begin with reference to “the exponential advance of computer power.” But even if we assume a doubling of available computer-processing power every year or two indefinitely into the future, that doesn’t tell us much about how techniques for programming AI will unfold. In warning against AI hype in another IEEE Spectrum interview, published earlier this year, Facebook’s director of AI research, Yann LeCun, described how easy it is to be led astray in anticipating ongoing exponential advances in AI programming:

As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit — physical, economical, societal — then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.

Here’s a sketch of the sigmoidal pattern of progress that LeCun is talking about:

[Figure: sigmoid curve, showing an early exponential-looking rise, an inflection point, and a plateau]
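
The arithmetic behind LeCun’s point is easy to check: in its early stretch a logistic curve is numerically almost indistinguishable from a pure exponential, and only the ceiling, whose height you cannot see from the early data, eventually bends it flat. A quick illustration with arbitrary parameters:

```python
# In its early stretch, a logistic (S-shaped) curve is numerically almost
# identical to an exponential; only the ceiling eventually bends it flat.
# The ceiling, growth rate, and inflection point below are arbitrary.
import math

L, k, t0 = 100.0, 1.0, 10.0            # ceiling, growth rate, inflection point

def logistic(t):
    return L / (1.0 + math.exp(-k * (t - t0)))

def exponential(t):
    return L * math.exp(k * (t - t0))  # what the early data alone suggests

for t in [1, 3, 5, 10, 15, 20]:
    print(f"t={t:2d}  logistic={logistic(t):10.3f}  exponential={exponential(t):14.3f}")
```

Run it and the two columns are nearly identical at first, then diverge sharply as the curve approaches and passes its inflection point. From the early numbers alone there is no way to tell which curve you are riding.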

It’s very easy to get carried away during that exponential phase (“computers will soon do everything that people can do!”; “superintelligence will arrive in x years!”), and the more carried away you get, the more disheartened you’ll be when the plateau phase arrives. We’ve seen this cycle of hype and disappointment before in the progress of artificial intelligence, and it hasn’t just distorted our sense of the future of AI; it has also had a debilitating effect on the research itself. Coming after a period of hype, the plateau stage tends to provoke a sharp drop in both interest and investment, which ends up extending the plateau. Notes LeCun, “AI has gone through a number of AI winters because people claimed things they couldn’t deliver.”

To be skeptical about the promises being made for AI is not to denigrate the ingenuity of the people who are actually doing the work. Appreciating the difficulties, uncertainties, and limits in pushing AI forward makes the achievements of computer scientists and programmers in the field seem more impressive, even heroic. If computers were actually capable of improvisation, a lot of the hardest programming challenges would evaporate. It’s useful, in this regard, to look at how Google has gone about programming its “autonomous” car to deal with unusual traffic situations. If you watch the car perform tricky maneuvers, you might be tempted to think that it has common sense and a talent for improvisation. But what it really has is very good and diligent programmers. Here is how Astro Teller, the head of Google X, the R&D unit developing the vehicle, describes the process:

When we started, we couldn’t make a list of the 10,000 things we’d have to do to make a car drive itself. We knew the top 100 things, of course. But pretty good, pretty safe, most of the time isn’t good enough. We had to go out and just find a way to learn what should be on that list of 10,000 things. We had to see what all of the unusual real world situations our cars would face were. There is a real sense in which the making of that list, the gathering of that data, is fully half of what is hard about solving the self driving car problem.

The Google team, Teller says, “drives a thousand miles of city streets every single day, in pursuit of moments that stump the car.” As the team methodically uncovers novel driving challenges — challenges that human drivers routinely handle with ease, without requiring new instructions — it updates the car’s software to give the vehicle the ability to handle new categories of situations:

When we produce a new version of our software, before that software ends up on our actual cars, it has to prove itself in tens of thousands of [possible situations] in our simulator, but using real world data. We show the new software moments like this and say “and what would you do now?” Then, if the software fails to make a good choice, we can fail in simulation rather than in the physical world. In this way, what one car learns or is challenged by in the real world can be transferred to all the other cars and to all future versions of the software we’ll make so we only have to learn each lesson once and every rider we have forever after can get the benefit from that one learning moment.
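
In software terms, what Teller is describing is a regression suite: every logged moment that stumped a car becomes a test case that each new build must pass in simulation before it reaches a vehicle. Here is a minimal sketch of that idea; the scenario format, planner interface, and names are hypothetical, not Google’s actual system.

```python
# Toy version of the workflow Teller describes: every logged moment that
# stumped a car becomes a test case replayed against each new software build
# in simulation, so failures happen there rather than on the road.
# The Scenario fields, planner interface, and names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    name: str          # e.g. "cyclist runs a red light"
    sensor_log: dict   # recorded real-world sensor data for the moment
    acceptable: set    # actions judged safe in this situation

def evaluate_build(plan: Callable[[dict], str], scenarios: list) -> list:
    """Replay every logged scenario; return the names of the ones the build fails."""
    failures = []
    for s in scenarios:
        action = plan(s.sensor_log)    # "and what would you do now?"
        if action not in s.acceptable:
            failures.append(s.name)    # fail in simulation, not in the world
    return failures

# Hypothetical usage: a new planner ships only if it clears the whole library.
scenarios = [
    Scenario("cyclist runs a red light", {"object": "cyclist", "signal": "red"}, {"brake"}),
    Scenario("ball rolls into the street", {"object": "ball", "signal": "green"}, {"slow", "brake"}),
]

def new_planner(log):
    return "brake" if log["object"] in {"cyclist", "ball"} else "proceed"

failures = evaluate_build(new_planner, scenarios)
print("release build" if not failures else f"blocked by: {failures}")
```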

Behind the illusion of machine improvisation lies a whole lot of painstaking effort. In an article this week about the Darpa Robotics Challenge, an annual event that tests the limits of robots, New York Times science reporter John Markoff emphasized this point:

Pattern recognition hardware and software has made it possible for computers to make dramatic progress in computer vision and speech understanding. In contrast, [Darpa program manager Gill] Pratt said, little headway has been made in “cognition,” the higher-level humanlike processes required for robot planning and true autonomy. As a result, both in the Darpa contest and in the field of robotics more broadly, there has been a re-emphasis on the idea of human-machine partnerships. “It is extremely important to remember that the Darpa Robotics Challenge is about a team of humans and machines working together,” he said. “Without the person, these machines could hardly do anything at all.”

If you’re worried about a robot or an AI algorithm taking your job, you can take a little comfort in what I’ve written here. But you shouldn’t take a lot of comfort. As the Google car shows, computers can take over a whole lot of sophisticated manual and intellectual work without demonstrating any common sense or improvisational skill. Many years ago, Alan Turing observed that, as computers sped up and databases swelled, the “ingenuity” of programmers could be substituted for the “intuition” of skilled professionals in many fields. We’re seeing that today on a broad scale, and we’re going to be seeing even more of it tomorrow. But Turing also concluded that there would always be limits to the use of ingenuity. There would always be an important place for intuition — for “spontaneous judgments which are not the result of conscious trains of reasoning.” Computers would not be able to substitute for talented, experienced people in all situations. And we’re seeing plenty of evidence of that today, too. If you’re a rapper, you may need to worry about shifts in fashion — capriciousness is another human quality that computers can’t match — but you can rest assured that robot rappers pose no threat whatsoever to your livelihood. Computers in the future will be able to do more than we assume but less than we fear.

Ever since our ancestors made the first tools, we have been dividing labor between ourselves and our technologies. And the line between human effort and machine effort is always changing, for better and for worse. It would probably be a good idea to spend a little less time worrying about, or yearning for, a future in which robots take all our jobs and a little more time thinking about how to divide labor between people and computers in the wisest possible way.

______

*The Finnish researchers admit that when they apply their statistical model of rap lyrics to Eminem’s work, it scores poorly. The reason? Eminem is a master at “bending” — altering the pronunciation of words to create assonance and other rhymes where the rules say they shouldn’t exist. Eminem, in other words, improvises.

**Tomorrow, for example, Berkeley researchers will present a paper on a machine-learning technique that appears to enable a robot to master simple new tasks through a process of trial and error.

Image: Marvel.

2 thoughts on “Can computers improvise?”

  1. Trevor Miles

    Loved the article, Nick.
    It reminds me so much of a conference I attended in the mid-1980s, when AI was a nascent field. There were tons of AI presentations, and then there was Lotfi Zadeh presenting on Fuzzy Logic. It was also the time of a major herpes epidemic in the US, and before political correctness took hold. Lotfi skewered AI by describing how he boarded the plane to travel to the conference. First of all, when he arrived at the plane he asked the cabin attendant to tell him where he was seated. She replied that the plane was quite empty, so he could just choose a seat. First condition-response pair. He looked around the plane and found the prettiest woman with an empty seat next to her. Second condition-response pair. When he sat down, he gave the woman a big smile and introduced himself. She had a big herpes sore on her lip. Third condition-response pair. So he stood up and looked for the second best-looking woman with an empty seat next to her, who turned out to have two herpes sores. You get the picture. He said that the list of precise condition-response pairs required by any such decision tree is inexhaustible.
    Zadeh went on to say that he is much more comfortable in countries such as India, where if the train departure time is listed as 10:00am the train can leave any time between 9:00am and 4:00pm, with the most likely time being 10:00am and the most probable time being 1:00pm. While his talk was entertaining, it has stuck with me because of its insightful content.
    Warren Buffett supposedly said, “It is better to be approximately right than precisely wrong.” Perhaps this is his way of capturing the same concept.
    What I find interesting is the section in which you discuss the Google car. Which is worse, to be approximately right or precisely wrong?

  2. yvesT

    The key issue with AI is that instead of talking about “research in Artificial Intelligence”, we should talk about “research of Artificial Intelligence”.
    There are absolutely no “discoveries” in “AI”.
