Moral code

So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?

This is the type of scenario that NYU psychology professor Gary Marcus considers as he ponders the rapid approach of a time when “it will no longer be optional for machines to have ethical systems.” As we begin to have computer-controlled cars, robots, and other machines operating autonomously out in the chaotic human world, situations will inevitably arise in which the software has to choose between a set of bad, even horrible, alternatives. How do you program a computer to choose the lesser of two evils? What are the criteria, and how do you weigh them? Since we humans aren’t very good at codifying responses to moral dilemmas ourselves, particularly when the precise contours of a dilemma can’t be predicted ahead of its occurrence, programmers will find themselves in an extraordinarily difficult situation. And one assumes that they will carry a moral, not to mention a legal, burden for the code they write.

The military, which already operates automated killing machines, will likely be the first to struggle in earnest with the problem. Indeed, as Spencer Ackerman noted yesterday, the U.S. Department of Defense has just issued a directive that establishes rules “designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.” One thing the Pentagon hopes to ensure is that, when autonomous weapons use force, “appropriate levels of human judgment” are incorporated into the decisions. But nowhere is the world more chaotic than in a war zone, and as fighting machines gain more sophistication and autonomy and are given more responsibility, “unintended engagements” will happen. Barring some major shift in strategy, a military robot or drone will eventually be in an ambiguous situation and have to make a split-second decision with lethal consequences. Shoot, or hold fire?

We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.

54 thoughts on “Moral code”

  1. Dan Miller

    I’m not sure this is as big a problem as the author makes it out to be. The scenario where you explicitly endanger the driver to save innocents is a rare subset of accidents; in most cases, “protect the driver at all costs” will be functionally identical to “protect pedestrians at all costs”, with the action being “come to a stop as fast as possible”. And that’s leaving aside the utilitarian benefits, assuming that automated cars become better at driving than people (which I think is pretty reasonable).

  2. Nick Post author

    Saying a dilemma is rare does not exempt you, or a robot, from dealing with it when it arises.

  3. Scott Wilson

    But these dilemmas already exist and are dealt with, frequently imperfectly, by humans now. If you’re going to try to argue that a snap judgment is somehow more moral than one that has been programmed in after due thought and consideration, you’re going to have to do more than point, quailing, into the future, saying “But it’s a MACHINE!”

    I’m not going to say that programmers will always exercise due thought and consideration; however, it seems obvious they will have more of an opportunity to do so than the humans who are already making these instantaneous calls in the heat of the action. Professor Marcus’ work, among others, seems to show that technologists are already delving into these considerations. Their efforts will almost certainly continue to deliver imperfect results, but there is at least some chance that the consideration will in fact improve on the responses of stressed-out humans who are already making decisions while driving and fighting.

  4. Mark

    It won’t be the programmers making the decisions, nor will they find themselves in an extraordinarily difficult situation for the reason you think. The CEOs will be making the decisions, based on profit-driven market forces. The programmers will be told what to program; they won’t get to decide issues that could cost the company its future. The difficult situation some programmers may face is when they don’t agree with the moral programming they’re being asked to build into a machine: do they quit their jobs, perhaps bringing suffering to their families?

    Great article. I love this kind of stuff.

  5. Nick Post author

    Scott: The programmer as deus ex machina, eh? Arriving to deliver us from our age-old moral struggles. OK, so tell me: Does the car hit the kids or go off the bridge? And if you’re not willing to do the programming, point me to your nominee, or specify the process that, with due thought and consideration, will answer such questions in the optimum ethical manner.

    Mark: So the question is: Who programs the programmers? I think you’re onto something there.

  6. Chet

    I’ll be honest – I wish philosophers like you would bother to learn something about the fields they philosophize about. There’s literally nothing in this post that suggests you have any knowledge whatsoever about the capabilities of programming, software, robotics, or decision systems.

  7. Mark

    Chet, this post isn’t about what software or hardware is capable of. It’s about what the people in control of those systems decide to make those systems do.

  8. Nick Post author

    I’m no philosopher, Chet, but if you’re willing to divulge your knowledge, I’m happy to be educated.

  9. Seth Finkelstein

    Nick, this is the sort of stuff that constantly reminds me that topics often aren’t about the nominal issue, but about the underlying anxieties and concerns of the writers. This is about “OMG, *machines*, scary *technology*, whatever is becoming of the world, it’s so frightening with all these newfangled inventions …”.

    The writer mentions Asimov’s Laws of Robotics. He doesn’t mention that Asimov’s stories are all about meditating on the complexity and ambiguities in those laws, from practical interpretation to philosophical implications. I think that’s because Asimov treats the problem as something to be engaged, not as a source of fear of technology (he said as much in one of his introductions).

    I suspect this is what Chet is getting at more harshly. It’s not about the difficulties inherent in automatic decision-making. There are real-world examples – air-bag deployment comes to mind. When should a car deploy an air-bag, given that a deployment can actually cause injury or even death in some cases? Rather, it’s about what’s been called the Frankenstein Complex, muttering that our creations will do evil, immoral things because they lack a human soul.

  10. Bill

    I can see where you’re going with this and why you chose the example. However, I doubt you’re ever really going to find such a black-and-white situation in reality. Kill the child or kill the driver. What about other traffic, on both sides of the road? What about the grey area of injury to both parties, but not death, and the extent of those injuries? (It’s not inconceivable that the car could determine that.) Also in a robotised world, cars are likely to be more closely packed, so what of the concertina effect of sudden braking (kill the child or kill dozens of motorists)?

    Suddenly we’re in the realm of having to make a number of moral decisions in a split second. No human could do this, and whatever choice is made in that split second is unlikely to be the perfect one. Can the machine do this more quickly? Perhaps. Maybe it can calculate likely outcomes and choose the most favourable. And in that situation, in that spectrum of situations, one is likely to be the best (or least worst).

    As Seth suggests, the tension isn’t so much the decision the machine makes, more that we are no longer in the loop. And then, in this type of situation, ultimately we’re at Asimov’s Laws of Robotics as applied to society (“a robot may not injure society, or through inaction, allow society to come to harm”). From what I remember of the books, individual humans didn’t necessarily do so well out of that though.

  11. Chet

    I’m no philosopher, Chet, but if you’re willing to divulge your knowledge, I’m happy to be educated.

    Well, for instance, the notion of a “split-second” decision really makes no sense here. You have a split second because you’re a human being, thinking on a human time scale.

    But to a system-on-a-chip, like a cheap ARM processor – twelve dollars, or less in bulk – running at a paltry 60 MHz, the wheels of a car at 50 mph won’t have rotated even a single milliradian from one clock cycle to the next. To a philosopher, the notion of a “split-second” decision is a way of saying “what do our moral and ethical intuitions say?” But to the decision engine driving the car, there’s all the time in the world to start applying the brakes literally just as soon as a collision with a pedestrian becomes even remotely plausible. From the perspective of a software platform, the situation as you describe it is a lot less like “split-second decision to choose between your life and the children’s” and a lot more like that of a train engineer who gets a call over the radio that a family station wagon is trapped on the crossing three miles up ahead. It would be deeply, deeply absurd to portray his only options as 1) steaming ahead and obliterating the family or 2) derailing the train.
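
    To put rough numbers on that (a quick back-of-the-envelope in Python; the 0.3 m wheel radius is an assumption, not a spec):

        # How far does a wheel turn in a single clock cycle?
        # Assumes: 50 mph road speed, 0.3 m wheel radius, 60 MHz clock.
        MPH_TO_MS = 1609.344 / 3600.0

        speed = 50 * MPH_TO_MS          # ~22.4 m/s
        wheel_radius = 0.3              # metres (assumed)
        omega = speed / wheel_radius    # ~74.5 rad/s of wheel rotation
        clock_period = 1.0 / 60e6       # seconds per cycle at 60 MHz

        print(omega * clock_period)     # ~1.2e-06 rad, about a microradian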

    Look, I mean, Microsoft does full-on kinesthetic skeleton detection with a consumer device that costs $100. It’s not unreasonable to assume that the car starts reacting as soon as the children start to fall into the street. I just don’t see where morality comes into it. It’s like asking about the morality of a drone operation station when the operator clicks the button that launches the Hellfire missile, and expecting the station to sometimes say “no.” It doesn’t work like that. We don’t impute morality to firearms or tanks; we impute it to the designers and operators of those technologies. If you have some reason to believe that the autonomous-car designers at Google aren’t moral individuals, then maybe there’s something to be worried about. But wondering about the morality of their software is just bizarre.

  12. Chet

    Also in a robotised world, cars are likely to be more closely packed, so what of the concertina effect of sudden braking (kill the child or kill dozens of motorists)?

    The concertina effect only exists because of signal propagation delay and how long it takes for a human to respond to an unexpected stimulus. Robotic cars would communicate their intention to brake as they did it; there’d be no concertina effect at all because all the cars would brake at once.
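
    A toy comparison, if it helps (illustrative numbers only; real vehicle-to-vehicle braking control is far more involved than this):

        # Toy model of the concertina effect: if each driver reacts only to the
        # brake lights directly ahead, the delay accumulates down the line; with
        # a broadcast "braking now" message, every car slows at the same instant.
        speed = 22.4          # m/s, roughly 50 mph
        human_lag = 1.0       # seconds of reaction time per driver (assumed)
        broadcast_lag = 0.0   # every car receives the signal together

        def gap_eaten(car_position, lag_per_car):
            # Distance travelled at full speed before this car starts braking.
            return speed * lag_per_car * car_position

        for n in (1, 2, 3, 4):
            print(n, gap_eaten(n, human_lag), gap_eaten(n, broadcast_lag))
        # The fourth car back has closed roughly 90 m of gap before it even
        # starts braking with human drivers; with broadcast braking, none.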

  13. Scott Wilson

    The programmer as deus ex machina, eh? Arriving to deliver us from our age-old moral struggles. OK, so tell me: Does the car hit the kids or go off the bridge? And if you’re not willing to do the programming, point me to your nominee, or specify the process that, with due thought and consideration, will answer such questions in the optimum ethical manner.

    I’m not sure if you are deliberately misunderstanding me or trying to get at a different point that I am somehow missing. But, to answer your questions: I don’t know what the car does when it is programmed, any more than you know what it does when there is a driver at the wheel. I’ll have to brush up on my BASIC a little first, but I’ll be happy to do the programming, and in doing so I’d love to take into consideration the thoughts of Asimov, Marcus, and other ethicists and roboticists who have pondered, and continue to ponder, the morality of coding robotic behavior.

    It’s not as if people making these decisions now generally sit around debating the ethics and preparing themselves for what to do if a gaggle of school kids shows up in front of their bumper. As far as I am concerned, you still haven’t made even the most basic argument as to why their snap decision is somehow more moral than that of someone who has had a chance to consider it first. Is it Zen? God’s will? A preference for the honesty of undeliberated human expression over science and rationality?

    I’m not arguing the programmer will achieve perfection in this… in fact, the point of these questions is that there is no perfect answer. But, aside from Mark’s excellent point, I just don’t see how giving whoever is making the decision more time and tools to make it is going to result in more immoral decisions. My guess is, if, god forbid, you actually found and talked to someone who had run over a bunch of school kids, they’d give anything in the world for an extra few seconds to have considered their options.

  14. Nick Post author

    Thanks for all the thoughtful comments. A brief point (I’ll post a longer reply later, when I have some time):

    The problem isn’t whether the programmer or designer is moral or not. Nor is the problem whether the software, working at computer speeds, will be able to be “more moral” (i.e., bring more data to bear) than a human working at human speeds. The problem is that situations are ambiguous, and what constitutes the moral choice ultimately rests on a subjective judgment. Two moral people may make different choices, and defend their choices as ethical. So whose morality is programmed into the automaton, and how is that decided and how is it effected, in software? We can debate hypothetical scenarios all we want, but the fact is that self-driving cars, lethal military machines, and other autonomous robots, when placed into the human world, will face ambiguous situations in which there is no time for human judgment to be introduced, and they will have to make decisions. And all the number-crunching in the world will not resolve the moral ambiguity.

  15. Nick Post author

    And one more future hypothetical situation to mull over:

    You have a computer-controlled “rifleman robot” stationed on a street corner in a city your forces are defending. The robot’s camera sees a man in civilian clothes acting in what past patterns of behavior suggest is a suspicious manner. The robot, drawing on a thorough analysis of the immediate situation and a rich database of past experiences, swiftly calculates that there’s a 68 percent chance that the person is an insurgent who is preparing to detonate a bomb and a 32 percent chance that he’s just an innocent civilian. At that moment, a personnel carrier is coming down the street with a dozen soldiers on board. If there is a bomb, it could be detonated any second. There’s no time to bring “human judgment” to bear. What does the robot’s software tell it to do: shoot or hold fire?

    Note that if you establish a rule that says no robot can take lethal action without human review and authorization (following Asimov’s ethics), that rule in itself reflects a moral choice (which in this case may leave 12 people dead).
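
    Spell the rule out as a sketch (the names and numbers here are invented for illustration) and the point stands: it is still just code that somebody chose to write.

        # Hypothetical sketch: "no lethal action without human review" is
        # itself a programmed policy, with consequences of its own.
        REQUIRE_HUMAN_AUTHORIZATION = True   # a moral choice, made in advance

        def engage(threat_probability, human_approved=False):
            if REQUIRE_HUMAN_AUTHORIZATION and not human_approved:
                return "hold fire"           # may leave the carrier exposed
            if threat_probability >= 0.68:   # this threshold is a choice too
                return "shoot"
            return "hold fire"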

  16. Seth Finkelstein

    The answer to “So whose morality is programmed into the automaton, and how is that decided and how is it effected, in software?” is exactly how it’s done RIGHT NOW: through an elaborate political fight involving corporate profit motives, tort law, regulatory agencies, public scandals, etc. Programmers are really the least significant people here 1/2 :-). That political fight is interesting. Framing it as about the soulless machines (“We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.”) is just tickling the lizard-brain of technophobia.

    Again – at what velocity does a car’s airbag deploy, and with how much force? These parameters aren’t difficult to program. What the parameters should be is the moral choice. And there’s a massive policy argument over what they should be, which has plenty written about it. The irritating thing, which several people have now objected to in various ways, is that the (my phrasing) Frankenstein Complex articles are written as if we don’t already deal with these general types of issues, and in a context that’s less thoughtful than decades-old science fiction stories.
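
    To make the airbag point concrete (the values below are placeholders, not anyone’s real calibration):

        # Illustrative only: the "morality" of an airbag lives in a few
        # calibration constants, not in the branch logic wrapped around them.
        DEPLOY_DELTA_V_KMH = 25.0        # crash-severity threshold (placeholder)
        SUPPRESS_FOR_CHILD_SEAT = True   # deployment itself can injure

        def should_deploy(delta_v_kmh, child_seat_detected):
            if child_seat_detected and SUPPRESS_FOR_CHILD_SEAT:
                return False
            return delta_v_kmh >= DEPLOY_DELTA_V_KMH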

    Note, it’s an SF cliché that the “good guys” set their defense system not to use lethal force without operator authorization (if at all), and the “bad guys” set their defense system to try to kill enemies automatically. Having the chief “bad guy” get killed by their own defense system’s actions is a stock ironic ending.

  17. Scott Holloway

    This discussion reminds me of two things: the investigation into the 1986 Challenger explosion, and runaway cars with an inadvertently “locked” accelerator.

    I see that the question ultimately is “In what way should the autonomous cars/guns/machines be programmed to save lives, when the taking of a life is unavoidable?” and “Who decides how the cars are programmed?” To me, the answer to these questions is the same as the answer to the question, “Who decided to launch Challenger, despite the engineers’ warnings about the increased danger from freezing temperatures?” That is scary.

    Just as the out-of-control acceleration problem in cars could have been a bug in firmware, imagine a Google car programmed to avoid hitting the three children (or programmed to “choose” the smallest loss of human life) that, because of a bug, ends up killing the children anyway. That is scary.

  18. Nick Post author

    Seth,

    I think your technophobia-phobia may be skewing your perception of this discussion, at least a bit. But you make a very good point in saying that the design and regulation of technologies have always entailed moral calculations, often requiring judgments about, say, the proper balance between safety and cost minimization. But I think there’s a crucial difference. In the past, the ethical judgments were generalizations, made based on averages. What we’re dealing with here are real-time evaluations of particular situations, leading to decisions about what actions to take in those particular situations, actions that lead to a set of particular consequences.

  19. Chet

    What does the robot’s software tell it to do: shoot or hold fire?

    Doesn’t it do whatever we told it to do in that situation? Code is deterministic like that.

    But my more realistic prediction is that the robot responds with a signal to the personnel carrier to retreat to a safe distance pending resolution of the situation, and then jams every radio and IR frequency used by any non-military device in a 20-block radius. If there’s not enough time before the bomb goes off to get the PC out of harm’s way, then there’s not enough time to get the PC out of harm’s way. If worse comes to worst and the PC comes to harm because it really was an insurgent with a bomb, then they push Robot Rifleman v1.1, which now sends “retreat to safe distance” signals at a threshold of 55% “chance it’s a bomb.” Firing on a human just never comes into it, and it’s difficult to imagine a situation where we’ve programmed robots to fire on targets they designate themselves, rather than targets we designate for them. It’s difficult to imagine how software could ever know the difference, absent full general AI, at which point the answer to the question of whose moral decision-making they’re using is “their own.”
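
    To make that concrete (a hypothetical sketch, with invented names and numbers), the v1.1 “fix” amounts to retuning a constant, and choosing that constant is where the morality sits:

        # Hypothetical: the v1.0 -> v1.1 update is just a retuned constant.
        RETREAT_THRESHOLD = 0.55   # was higher in v1.0; lowering it is a choice

        def respond(bomb_probability):
            if bomb_probability >= RETREAT_THRESHOLD:
                return ["signal carrier to retreat", "jam civilian RF bands"]
            return ["keep observing"]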

  20. Nick Post author

    Doesn’t it do whatever we told it to do in that situation? Code is deterministic like that.

    I believe we’re in agreement there.

  21. Seth Finkelstein

    Nick, I admit, I tend to get peevish on certain topics. But even taking that into account, would you consider, from the several programmers posting here, that there really have been decades of practical engineering thought about the dangers of automated systems, and that this subject is not uncharted territory that’s just been discovered and brought to attention by philosophers? That we (not only programmers, but society overall) actually do know something about it that’s deeper than RoboCop? Specifically, “real-time evaluations of particular situations leading to decisions about what actions to take in those particular situations” is not new. Increasing in complexity, certainly. But not so disconnected from current issues as to be unprecedented.

    If you don’t like airbag deployment, consider the popular anti-spam program SpamAssassin, or similar. It’s a software agent that tries to make an algorithmic determination as to whether a piece of mail is spam or not. Should it delete the mail? Mark it as spam for possible review? Those are configurable options. What should the defaults be? What if important mail is lost (liability!)? That’s all a problem right now.
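
    In miniature, it looks something like this (a generic sketch, not SpamAssassin’s actual configuration syntax):

        # Generic spam-filter sketch: the consequential choices are the default
        # score threshold and the default action taken once it is crossed.
        SPAM_THRESHOLD = 5.0                      # how suspicious before we act
        DEFAULT_ACTION = "quarantine for review"  # or "delete", or "tag and deliver"

        def handle(message_score):
            if message_score < SPAM_THRESHOLD:
                return "deliver"
            return DEFAULT_ACTION  # a lost legitimate mail comes from this line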

    Just because writers are unfamiliar with something doesn’t mean it doesn’t exist.

  22. Scott Wilson

    I think you are making a distinction without a difference here, Nick. If this is genuinely the crux of your position:

    The problem is that situations are ambiguous, and what constitutes the moral choice ultimately rests on a subjective judgment. Two moral people may make different choices, and defend their choices as ethical. So whose morality is programmed into the automaton, and how is that decided and how is it effected, in software?

    …then how exactly is it any different from the non-automated now? We no more (in fact, probably less) get to decide whose subjective judgement is exercised in these situations today than in the world of tomorrow. Throw out my contention that the ability to select the programmers avails us an opportunity to pick a more ethical code; you’re still left with the same scenario we have right now: maybe the guy behind the wheel is a saint, maybe he’s a sadist.

    To Seth, you say,

    In the past, the ethical judgments were generalizations, made based on averages. What we’re dealing with here are real-time evaluations of particular situations, leading to decisions about what actions to take in those particular situations, actions that lead to a set of particular consequences.

    But we’re not dealing with a genuine intelligence here. In the future, as in the past, coded ethical judgements are equally likely to be made based on averages. Casting this as “real-time evaluations” both over-emphasizes the role of the computer and under-appreciates the extent to which this is already the case… falling back to someone else’s airbag scenario, the accelerometers are evaluated in real time, leading to particular actions with particular consequences. That’s not much different from your rifleman robot scenario. Someone programmed those odds based on number-crunching in both cases. In both cases, they may or may not make a more ethical judgement than a real rifleman, or real driver, both of whom would be in the same position with respect to other real riflemen or drivers.

    The complexity of the coding will surely increase and the many branching consequences will certainly grow, but the mechanics of the process are no different. Nor is the fact that this is all equally the case with humans who are currently in these situations. The ambiguity and subjectivity are unchanged.

    If you want to argue that morality by committee is folly or that corporate malfeasance is likely or that a Cylon apocalypse is surely coming, I think those are reasonable topics for debate as we increasingly automate potentially lethal systems. Otherwise, I still haven’t heard anything suggesting that doing so in any way negatively impacts the morality of the decision-making over what we already have today.

  23. Nick Post author

    Bill,

    You find my opening example extreme, which it is. I don’t think it is so extreme as to be impossible – in fact, some version of the scenario probably happens fairly often – and hence such situations would have to be anticipated and accounted for, somehow, in the programming of the driver-less car (should the driver-less car progress beyond its current attentive-backup-human-driver-monitoring-the-situation phase).

    But let me go ahead and sketch out a much less extreme and much more common scenario. Your driver-less car is driving you home from work in the evening. It’s dark, and you live in a wooded area, with lots of bushes along the road. You’re drinking a beer and listening to some tunes on Spotify – hey, why not? – but then, suddenly, a dog rushes out of the bushes into the road, and it freezes directly in front of your car. The dog is so close that it would be impossible for the brakes to stop the car quickly enough to avoid a grisly collision. But you’re going slowly enough that if the car swerves off the road to avoid the dog, the car will suffer fairly severe damage – in the thousands of dollars, certainly, and possibly a total loss should the airbags deploy – but you will almost certainly suffer no serious injuries beyond, say, a broken nose. So what does the car do: swerve off the road or kill Rover?

    I think you’d agree that different people would react in different ways in this situation, even though they have only a split second to react. To some, the idea of hitting a living thing would be so repellent that they’d immediately swerve and take their chances with a crash. Others would be averse to damaging their car just to save a dumb animal and would run the beast over. Still others would just freeze, so startled that no ethical consideration even enters their mind. None of them are acting in an immoral fashion, by their own lights.

    Surely, your driver-less car knows that this dog is a living thing and not some inanimate object. So what does its program tell it to do? What’s a dog’s life worth, exactly, to a driver-less car – or, to be more precise, to the car’s designers?
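
    However the software is structured, somewhere that comparison has to be made explicit, something like this (a deliberately crude sketch; every weight is an invented placeholder):

        # Crude placeholder sketch: before the car can "decide," its designers
        # have to put numbers on the dog, the bodywork, and your broken nose.
        COST_OF_DOG = 5_000            # the dog's life, in the designers' units
        COST_OF_CAR_DAMAGE = 8_000     # likely repair bill for swerving
        COST_OF_MINOR_INJURY = 2_000   # the driver's broken nose

        def choose_action():
            swerve = COST_OF_CAR_DAMAGE + COST_OF_MINOR_INJURY
            stay_in_lane = COST_OF_DOG
            return "swerve" if swerve < stay_in_lane else "brake and hit the dog"

    Whatever values actually ship in the production car, somebody has to type them in.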
