Moral code

So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?

This is the type of scenario that NYU psychology professor Gary Marcus considers as he ponders the rapid approach of a time when “it will no longer be optional for machines to have ethical systems.” As we begin to have computer-controlled cars, robots, and other machines operating autonomously out in the chaotic human world, situations will inevitably arise in which the software has to choose between a set of bad, even horrible, alternatives. How do you program a computer to choose the lesser of two evils? What are the criteria, and how do you weigh them? Since we humans aren’t very good at codifying responses to moral dilemmas ourselves, particularly when the precise contours of a dilemma can’t be predicted ahead of its occurrence, programmers will find themselves in an extraordinarily difficult situation. And one assumes that they will carry a moral, not to mention a legal, burden for the code they write.

The military, which already operates automated killing machines, will likely be the first to struggle in earnest with the problem. Indeed, as Spencer Ackerman noted yesterday, the U.S. Department of Defense has just issued a directive that establishes rules “designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.” One thing the Pentagon hopes to ensure is that, when autonomous weapons use force, “appropriate levels of human judgment” are incorporated into the decisions. But nowhere is the world more chaotic than in a war zone, and as fighting machines gain more sophistication and autonomy and are given more responsibility, “unintended engagements” will happen. Barring some major shift in strategy, a military robot or drone will eventually be in an ambiguous situation and have to make a split-second decision with lethal consequences. Shoot, or hold fire?

We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.

54 thoughts on “Moral code”

  1. Bill

    Until the computer can tell that what’s in the path is a person, there will have to be provision for the human to override the computer. Current considerations about whether there was time to react will continue to be in effect. (Your example specifies that it’s a child who fell or was pushed into your path, but really the age of the person is irrelevant, it seems to me.)

    When the computer can detect that it’s a person, the decision will not be left up to a programmer. Lawmakers, ethicists, and others whose expertise is deciding about moral and legal issues will wind up making the determination. My guess is that it will come down to “do no harm and if that isn’t possible, do the least amount of harm or injury.”

  2. Ted Seeber

    WWI taught us how to make an orderly battlefield less chaotic; it is a shame we never followed up on the research. Modern fully automatic weapons could easily make the No Man’s Zone a completely foolproof reality (pun intended), thus providing an impenetrable three-dimensional barrier between tribes of humans that can’t get along.

    And using this strategy, EVERY engagement is intentional, even friendly fire on a unit that was too stupid to follow instructions.

  3. Bill

    I’m guessing that some of you have never served in a war. It’s not as cut and dried as you seem to think. There’s an old bit of military wisdom: no battle plan ever survives the first shot. In addition to which, the classic battlefield no longer exists. Hasn’t existed since the communist insurrection in Cuba. Guerrillas and terrorists do not play by rules of engagement, and they don’t fight pitched battles. Drones don’t concern themselves with collateral damage. Neither do governments.

  4. LeRoi

    Some have questioned whether this scenario falls within the bounds of realistic engineering. I think it does.

    (1) True, human reaction time varies from 0.15 to 1.5 seconds.

    (2) And check out these little guys, just for giggles. I’d be willing to bet they react faster than humans, or will soon.

    (3) Google cars haven’t yet had an accident (while being driven by the computer, anyway). But Sebastian Thrun says they can’t yet tell the difference between a mattress and a rock.

    (4) Let’s assume the car reacts faster than a human. Chet: “But to the decision engine driving the car, there’s all the time in the world to start applying the brakes literally just as soon as a collision with a pedestrian becomes even remotely plausible.”

    (5) Chet’s using an idealized machine as a deus ex machina (ironic, right?). Even giving the machines much better reaction times, the car is still in the physical world, where brakes can require 88-150 feet to stop a car moving at 40 m.p.h.

    (6) So it’s reasonable to ask what will happen if a car is driving in a neighborhood or on a bridge and kids jump in the way – the car must choose which direction to steer.

    (7) This is a tough question: what if there are kids inside the car, too? I find seth y’s comments above particularly insightful, and I’d bet Seth Finkelstein is correct in saying that the public debate will change as the technology updates.

    (8) One could imagine the G-cars might have some degree of individual preference built in: since I’m single, I’d tell it to err on the side of caution. But if I had kids likely to ride in the car, I’d tell the car not to sacrifice us. Which opens up further possibilities for debate on liability.
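    LeRoi’s figures in points (1) and (5) can be sanity-checked with a quick stopping-distance calculation. This sketch assumes a braking deceleration of about 7 m/s² (a typical value for dry pavement, not a figure from the post) and compares the fast end of the human reaction range (0.15 s) with the slow end (1.0 s):

    ```python
    # Sanity check: stopping distance for a car at 40 m.p.h.
    # Assumed (not from the post): ~7 m/s^2 braking deceleration on dry pavement.

    MPH_TO_MPS = 0.44704
    FT_PER_M = 3.28084

    def stopping_distance_ft(speed_mph, reaction_s, decel_mps2=7.0):
        """Reaction distance plus braking distance, in feet."""
        v = speed_mph * MPH_TO_MPS              # speed in m/s
        reaction_m = v * reaction_s             # ground covered before braking starts
        braking_m = v ** 2 / (2 * decel_mps2)   # v^2 / (2a)
        return (reaction_m + braking_m) * FT_PER_M

    for label, t in [("fast reaction, 0.15 s", 0.15), ("slow reaction, 1.0 s", 1.0)]:
        print(f"{label}: {stopping_distance_ft(40, t):.0f} ft")
    ```

    Both results land inside the 88-150 foot envelope quoted in (5), which is the point: even a machine that perceives the hazard instantly still needs tens of feet of physical travel to stop, so the choice-of-direction question in (6) doesn’t go away.
    
    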
