So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?
This is the type of scenario that NYU psychology professor Gary Marcus considers as he ponders the rapid approach of a time when “it will no longer be optional for machines to have ethical systems.” As we begin to have computer-controlled cars, robots, and other machines operating autonomously out in the chaotic human world, situations will inevitably arise in which the software has to choose between a set of bad, even horrible, alternatives. How do you program a computer to choose the lesser of two evils? What are the criteria, and how do you weigh them? Since we humans aren’t very good at codifying responses to moral dilemmas ourselves, particularly when the precise contours of a dilemma can’t be predicted ahead of its occurrence, programmers will find themselves in an extraordinarily difficult situation. And one assumes that they will carry a moral, not to mention a legal, burden for the code they write.
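To see how crude any such codification would be, consider a toy sketch of the kind of utilitarian weighting a programmer might be forced to hard-code. Everything here is a hypothetical assumption for illustration — the function names, the probabilities, the harm scores — and no real autonomous system is claimed to work this way. The point is that the arithmetic is trivial; it's the weights that carry the entire moral burden.

```python
# Purely illustrative: a minimal "lesser of two evils" chooser.
# All probabilities and harm scores below are invented assumptions.

def expected_harm(option):
    """Sum over possible outcomes of P(outcome) * harm score."""
    return sum(p * harm for p, harm in option["outcomes"])

def choose_action(options):
    """Pick the option whose expected harm is lowest."""
    return min(options, key=expected_harm)

options = [
    {"name": "swerve off bridge",
     "outcomes": [(0.5, 1.0)]},   # say, 50% chance one occupant dies
    {"name": "continue straight",
     "outcomes": [(0.9, 3.0)]},   # say, 90% chance three pedestrians die
]

print(choose_action(options)["name"])
```

Written out this way, the hard questions become visible: who assigns the harm score of an occupant versus a pedestrian, where do the probabilities come from in a fraction of a second, and who is accountable for the numbers? The code is easy; the ethics hide in the constants.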
The military, which already operates automated killing machines, will likely be the first to struggle in earnest with the problem. Indeed, as Spencer Ackerman noted yesterday, the U.S. Department of Defense has just issued a directive that establishes rules “designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.” One thing the Pentagon hopes to ensure is that, when autonomous weapons use force, “appropriate levels of human judgment” are incorporated into the decisions. But nowhere is the world more chaotic than in a war zone, and as fighting machines gain more sophistication and autonomy and are given more responsibility, “unintended engagements” will happen. Barring some major shift in strategy, a military robot or drone will eventually be in an ambiguous situation and have to make a split-second decision with lethal consequences. Shoot, or hold fire?
We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.