Moral code

So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?

This is the type of scenario that NYU psychology professor Gary Marcus considers as he ponders the rapid approach of a time when “it will no longer be optional for machines to have ethical systems.” As we begin to have computer-controlled cars, robots, and other machines operating autonomously out in the chaotic human world, situations will inevitably arise in which the software has to choose between a set of bad, even horrible, alternatives. How do you program a computer to choose the lesser of two evils? What are the criteria, and how do you weigh them? Since we humans aren’t very good at codifying responses to moral dilemmas ourselves, particularly when the precise contours of a dilemma can’t be predicted ahead of its occurrence, programmers will find themselves in an extraordinarily difficult situation. And one assumes that they will carry a moral, not to mention a legal, burden for the code they write.

The military, which already operates automated killing machines, will likely be the first to struggle in earnest with the problem. Indeed, as Spencer Ackerman noted yesterday, the U.S. Department of Defense has just issued a directive that establishes rules “designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.” One thing the Pentagon hopes to ensure is that, when autonomous weapons use force, “appropriate levels of human judgment” are incorporated into the decisions. But nowhere is the world more chaotic than in a war zone, and as fighting machines gain more sophistication and autonomy and are given more responsibility, “unintended engagements” will happen. Barring some major shift in strategy, a military robot or drone will eventually be in an ambiguous situation and have to make a split-second decision with lethal consequences. Shoot, or hold fire?

We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.


54 Responses to Moral code

  1. Dan Miller

    I’m not sure this is as big a problem as the author makes it out to be. The scenario where you explicitly endanger the driver to save innocents is a rare subset of accidents; in most cases, “protect the driver at all costs” will be functionally identical to “protect pedestrians at all costs”, with the action being “come to a stop as fast as possible”. And that’s leaving aside the utilitarian benefits, assuming that automated cars become better at driving than people (which I think is pretty reasonable).

  2. Nick

    Saying a dilemma is rare does not exempt you, or a robot, from dealing with it when it arises.

  3. Scott Wilson

    But these dilemmas already exist and are dealt with, frequently imperfectly, by humans now. If you’re going to try to argue that a snap judgment is somehow more moral than one that has been programmed in after due thought and consideration, you’re going to have to do more than point quailing into the future saying “But it’s a MACHINE!”

    I’m not going to say that programmers will always exercise due thought and consideration; however, it seems obvious they will have more of an opportunity to do so than the humans that are already making these instantaneous calls in the heat of the action. Professor Marcus’ work, among others, seems to show that technologists are already delving into these considerations. Their efforts will almost certainly continue to deliver imperfect results, but there is at least some chance that the consideration will in fact improve on the responses of stressed-out humans who are already making decisions while driving and fighting.

  4. Mark

    It won’t be the programmers making the decisions, nor will they find themselves in an extraordinarily difficult situation for the reasons you think. The CEOs will be making the decisions, based on profit-driven market forces. The programmers will be told what to program; they won’t get to decide issues that could cost the company its future. The difficult situation some programmers may face is this: if they don’t agree with the moral code they’re being asked to program into a machine, do they quit their jobs, perhaps bringing suffering to their families?

    Great article. I love this kind of stuff.

  5. Nick

    Scott: The programmer as deus ex machina, eh? Arriving to deliver us from our age-old moral struggles. OK, so tell me: Does the car hit the kids or go off the bridge? And if you’re not willing to do the programming, point me to your nominee, or specify the process that, with due thought and consideration, will answer such questions in the optimum ethical manner.

    Mark: So the question is: Who programs the programmers? I think you’re onto something there.

  6. Chet

    I’ll be honest – I wish philosophers like you would bother to learn something about the fields they philosophize about. There’s literally nothing in this post that suggests you have any knowledge whatsoever about the capabilities of programming, software, robotics, or decision systems.

  7. Mark

    Chet, this post isn’t about what software or hardware is capable of. It’s about what the people in control of those systems decide to make those systems do.

  8. Nick

    I’m no philosopher, Chet, but if you’re willing to divulge your knowledge, I’m happy to be educated.

  9. Seth Finkelstein

    Nick, this is the sort of stuff which constantly reminds me that these topics often aren’t about the nominal issue, but about the underlying anxieties and concerns of the writers. This is about “OMG, *machines*, scary *technology*, whatever is becoming of the world, it’s so frightening with all these newfangled inventions …”.

    The writer mentions Asimov’s Laws Of Robotics. He doesn’t mention that Asimov’s stories are all about meditating on the complexity and ambiguities in them, from practical interpretation to philosophical implications. I think that’s because Asimov treats it as a problem to be engaged, not as a source of fear of technology (he wrote as much in one introduction).

    I suspect this is what Chet is getting at more harshly. It’s not about the difficulties inherent in automatic decision-making. There are real-world examples – air-bag deployment comes to mind. When should a car deploy an air-bag, given that a deployment can actually cause injury or even death in some cases? Rather, it’s about what’s been called the Frankenstein Complex, muttering that our creations will do evil immoral things because they lack a human soul.

  10. Bill

    I can see where you’re going with this and why you chose the example. However, I doubt you’re ever really going to find such a black and white situation in reality. Kill the child or kill the driver. What about other traffic, on both sides of the road? What about the grey area where both parties are injured, but not killed, and the extent of those injuries? (It’s not inconceivable the car could determine that.) Also, in a robotised world, cars are likely to be more closely packed, so what of the concertina effect of sudden braking (kill the child or kill dozens of motorists)?

    Suddenly we’re in the realm of having to make a number of moral decisions in a split second. No human could do this and whatever choice is made in that split second is unlikely to be the perfect one. Can the machine do this more quickly? Perhaps. Maybe it can calculate likely outcomes and choose the most favourable. And in that situation, in that spectrum of situations, one is likely to be the best (or least worse).

    As Seth suggests, the tension isn’t so much the decision the machine makes, more that we are no longer in the loop. And then, in this type of situation, ultimately we’re at Asimov’s Laws of Robotics as applied to society (“a robot may not injure society, or through inaction, allow society to come to harm”). From what I remember of the books, individual humans didn’t necessarily do so well out of that though.

  11. Chet

    I’m no philosopher, Chet, but if you’re willing to divulge your knowledge, I’m happy to be educated.

    Well, for instance the notion of a “split-second” decision really makes no sense, here. You have a split-second because you’re a human being, thinking in a human time scale.

    But to a system-on-a-chip, like a cheap ARM processor – twelve dollars, or less in bulk – running at a paltry 60 MHz, the wheels of a car at 50 mph won’t have rotated even a single milliradian from one clock cycle to the next. To a philosopher, the notion of a “split-second” decision is a way of saying “what do our moral and ethical intuitions say?” But to the decision engine driving the car, there’s all the time in the world to start applying the brakes literally just as soon as a collision with a pedestrian becomes even remotely plausible. From the perspective of a software platform, the situation you describe is a lot less like “split-second decision to choose between your life and the children’s” and a lot more like that of a train engineer who gets a call over the radio that a family station wagon is trapped on the crossing three miles up ahead. It would be deeply, deeply absurd to portray his only options as 1) steaming ahead and obliterating the family or 2) derailing the train.

    Look, I mean, Microsoft does full-on kinesthetic skeleton detection with a consumer device that costs $100. It’s not unreasonable to assume that the car starts reacting as soon as the children start to fall into the street. I just don’t see where morality comes into it. It’s like asking about the morality of a drone operation station when the operator clicks the button that launches the Hellfire missile, and expecting the station to sometimes say “no.” It doesn’t work like that. We don’t impute morality to firearms or tanks; we impute it to the designers and operators of those technologies. If you have some reason to believe that the autonomous-car designers at Google aren’t moral individuals, then maybe there’s something to be worried about. But wondering about the morality of their software is just bizarre.
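[Ed.: Chet’s back-of-envelope timing claim checks out. A quick sketch, assuming a typical 0.3 m wheel radius, which the comment doesn’t specify:]

```python
# Sanity check: how far does a wheel rotate in one clock cycle of a
# 60 MHz processor, with the car travelling at 50 mph?
speed_mps = 50 * 1609.344 / 3600        # 50 mph in metres per second
wheel_radius_m = 0.3                    # assumed typical wheel radius
omega = speed_mps / wheel_radius_m      # angular velocity in rad/s
cycle_s = 1 / 60e6                      # one clock cycle at 60 MHz
rotation_per_cycle = omega * cycle_s    # radians turned per cycle

# The wheel turns on the order of a microradian per clock cycle,
# roughly three orders of magnitude below a milliradian.
assert rotation_per_cycle < 1e-3
print(f"{rotation_per_cycle:.2e} rad per cycle")
```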

  12. Chet

    Also in a robotised world, cars are likely to be more closely packed, so what of the concertina effect of sudden braking (kill the child or kill dozens of motorists)?

    The concertina effect only exists because of signal propagation delay and how long it takes for a human to respond to an unexpected stimulus. Robotic cars would communicate their intention to brake as they did it; there’d be no concertina effect at all because all the cars would brake at once.

  13. Scott Wilson

    The programmer as deus ex machina, eh? Arriving to deliver us from our age-old moral struggles. OK, so tell me: Does the car hit the kids or go off the bridge? And if you’re not willing to do the programming, point me to your nominee, or specify the process that, with due thought and consideration, will answer such questions in the optimum ethical manner.

    I’m not sure if you are deliberately misunderstanding me or trying to get at a different point that I am somehow missing. But, to answer your questions: I don’t know what the car does when it is programmed, any more than you know what it does when there is a driver at the wheel. I’ll have to brush up on my BASIC a little first, but I’ll be happy to do the programming, and in doing so I’d love to take into consideration the thoughts of Asimov, Marcus, and other ethicists and roboticists who have and continue to ponder the morality of coding robotic behavior.

    It’s not as if people making these decisions now generally sit around debating and considering the ethics and preparing themselves for what to do if a gaggle of school kids show up in front of their bumper. As far as I am concerned, you still haven’t made even the most basic argument as to why their snap-decision is somehow more moral than that of someone who has had a chance to consider it first. Is it Zen? God’s will? A preference for the honesty of un-deliberated human expression over science and rationality?

    I’m not arguing the programmer will achieve perfection in this… in fact, the point of these questions is that there is no perfect answer. But, aside from Mark’s excellent point, I just don’t see how giving whoever is making the decision more time and tools to make it is going to result in more immoral decisions. My guess is, if, god forbid, you actually found and talked to someone who had run over a bunch of school kids, they’d give anything in the world for an extra few seconds to have considered their options.

  14. Nick

    Thanks for all the thoughtful comments. A brief point (I’ll post a longer reply later, when I have some time):

    The problem isn’t whether the programmer or designer is moral or not. Nor is the problem whether the software, working at computer speeds, will be able to be “more moral” (ie, bring more data to bear) than a human working at human speeds. The problem is that situations are ambiguous, and what constitutes the moral choice ultimately rests on a subjective judgment. Two moral people may make different choices, and defend their choices as ethical. So whose morality is programmed into the automaton, and how is that decided and how is it effected, in software? We can debate hypothetical scenarios all we want, but the fact is that self-driving cars, lethal military machines, and other autonomous robots, when placed into the human world, will face ambiguous situations in which there is no time for human judgment to be introduced, and they will have to make decisions. And all the number-crunching in the world will not resolve the moral ambiguity.

  15. Nick

    And one more future hypothetical situation to mull over:

    You have a computer-controlled “rifleman robot” stationed on a street corner in a city your forces are defending. The robot’s camera sees a man in civilian clothes acting in what past patterns of behavior suggest is a suspicious manner. The robot, drawing on a thorough analysis of the immediate situation and a rich database of past experiences, swiftly calculates that there’s a 68 percent chance that the person is an insurgent who is preparing to detonate a bomb and a 32 percent chance that he’s just an innocent civilian. At that moment, a personnel carrier is coming down the street with a dozen soldiers on board. If there is a bomb, it could be detonated any second. There’s no time to bring “human judgment” to bear. What does the robot’s software tell it to do: shoot or hold fire?

    Note that if you establish a rule that says no robot can take lethal action without human review and authorization (following Asimov’s ethics), that rule in itself reflects a moral choice (which in this case may leave 12 people dead).
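[Ed.: in software terms, the rule Nick describes collapses to a single numeric threshold, and choosing that number is itself the moral decision. A minimal illustration, with purely hypothetical names and values:]

```python
# Hypothetical sketch: the rifleman robot's entire "ethics" reduces to
# one parameter. The code is trivial; picking the threshold is not.
def rifleman_policy(p_insurgent: float, threshold: float) -> str:
    """Return the robot's action given its estimated probability
    that the person is an insurgent about to detonate a bomb."""
    return "shoot" if p_insurgent >= threshold else "hold fire"

# The same 68 percent estimate yields opposite actions under two
# defensible rules: a "protect the convoy" rule and a
# "near-certainty before lethal force" rule.
print(rifleman_policy(0.68, threshold=0.50))  # prints "shoot"
print(rifleman_policy(0.68, threshold=0.95))  # prints "hold fire"
```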

  16. Seth Finkelstein

    The answer to “So whose morality is programmed into the automaton, and how is that decided and how is it effected, in software?” is: exactly how it’s done RIGHT NOW, through an elaborate political fight involving corporate profit-motives, tort law, regulatory agencies, public scandals, etc. Programmers are really the least significant people here 1/2 :-). That political fight is interesting. Framing it as being about soulless machines (“We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.”) is just tickling the lizard-brain of technophobia.

    Again – at what velocity does a car’s airbag deploy, and with how much force? These parameters aren’t difficult to program. What the parameters should be is the moral choice. And there’s a massive policy argument over what they should be, which has plenty written about it. The irritating thing, which several people have now objected to in various ways, is that the (my phrasing) Frankenstein Complex articles are written as if we don’t already deal with these general types of issues, and in a context that’s less thoughtful than decades-old science fiction stories.

    Note, it’s an SF cliche that the “good guys” set their defense system not to use lethal force without operator authorization (if at all), and the “bad guys” set their defense system to try to kill enemies automatically. Having the chief “bad guy” get killed by their own defense system’s actions is a stock ironic ending.

  17. The truth is that, in a world of self-driving cars, pedestrians will no longer be permitted.

  18. Scott Holloway

    This discussion reminds me of two things: the investigation behind the 1986 Challenger explosion, and runaway cars with an inadvertently “locked” accelerator.

    I see that the questions ultimately are “In what way should autonomous cars/guns/machines be programmed to save lives, when the taking of a life is unavoidable?” and “Who decides how the cars are programmed?” To me, the answer to these questions is the same as the answer to the question, “Who decided to launch Challenger, despite the engineers’ warnings about increased danger because of freezing temperatures?” That is scary.

    Just as the out-of-control accelerating problem in cars could have been a bug in firmware, imagine a Google car programmed to avoid hitting the three children (or programmed to “choose” the least amount of human life lost), but because of a bug, it ends up killing the children. That is scary.

  19. jimbo in limbo

    People will have to be assigned values. Don’t insurers already do this?

  20. Nick

    Seth,

    I think your technophobia-phobia may be skewing your perception of this discussion, at least a bit. But you make a very good point in saying that the design and regulation of technologies have always entailed moral calculations, often requiring judgments about, say, the proper balance between safety and cost minimization. But I think there’s a crucial difference. In the past, the ethical judgments were generalizations, made based on averages. What we’re dealing with here are real-time evaluations of particular situations leading to decisions about what actions to take in those particular situations, actions leading to a set of particular consequences.

  21. Chet

    What does the robot’s software tell it to do: shoot or hold fire?

    Doesn’t it do whatever we told it to do in that situation? Code is deterministic like that.

    But my more realistic prediction is that the robot responds with a signal to the personnel carrier to retreat to a safe distance pending resolution of the situation, and then jams every radio and IR frequency used by any non-military device in a 20 block radius. If there’s not enough time before the bomb goes off to get the PC out of harm’s way, then there’s not enough time to get the PC out of harm’s way. If worse comes to worst and the PC comes to harm because it really was an insurgent with a bomb, then they push Robot Rifleman v1.1, which now sends “retreat to safe distance” signals at a threshold of 55% “chance it’s a bomb.” Firing on a human just never comes into it, and it’s difficult to imagine a situation where we’ve programmed robots to fire on targets they designate themselves, rather than targets we designate for them. It’s difficult to imagine how software could ever know the difference, absent full general AI, at which point the answer to the question of whose moral decision-making they’re using is “their own.”

  22. Nick

    Doesn’t it do whatever we told it to do in that situation? Code is deterministic like that.

    I believe we’re in agreement there.

  23. Seth Finkelstein

    Nick, I admit, I tend to get peevish on certain topics. But even taking that into account, would you consider, from the several programmers posting here, that there really have been decades of practical engineering thought about the dangers of automated systems, and that this subject is not uncharted territory that’s just been discovered and brought to attention by philosophers? That we (not only programmers, but society overall) actually do know something about it that’s deeper than RoboCop? Specifically, “real-time evaluations of particular situations leading to decisions about what actions to take in those particular situations” is not new. Increasing in complexity, certainly. But not so disconnected from current issues as to be unprecedented.

    If you don’t like airbag deployment, consider the popular anti-spam program SpamAssassin, or similar. It’s a software agent that tries to make an algorithmic determination as to whether a piece of mail is spam or not. Should it delete the mail? Mark it as spam for possible review? Those are configurable options. What should the defaults be? What if important mail is lost (liability!)? That’s all a problem right now.

    Just because writers are unfamiliar with something doesn’t mean it doesn’t exist.
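[Ed.: Seth’s spam-filter analogy really is the same shape of problem in miniature: the algorithm produces a score, and a human-chosen configuration decides what happens to borderline mail. A simplified sketch of that pattern; the function name and thresholds are illustrative, not SpamAssassin’s actual API:]

```python
# Simplified sketch of a SpamAssassin-style policy: the classifier
# produces a score; what to *do* at a given score is a configurable,
# human-chosen default, not something the algorithm decides.
def handle_mail(spam_score: float,
                delete_threshold: float = 10.0,
                flag_threshold: float = 5.0) -> str:
    if spam_score >= delete_threshold:
        return "delete"            # risks silently losing real mail
    if spam_score >= flag_threshold:
        return "flag for review"   # risks burying the user in junk
    return "deliver"

print(handle_mail(12.3))  # prints "delete"
```

The moral content lives entirely in the default threshold values, exactly as Seth says: the code is easy, the defaults are the fight.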

  24. Scott Wilson

    I think you are making a distinction without a difference here, Nick. If this is genuinely the crux of your position:

    The problem is that situations are ambiguous, and what constitutes the moral choice ultimately rests on a subjective judgment. Two moral people may make different choices, and defend their choices as ethical. So whose morality is programmed into the automaton, and how is that decided and how is it effected, in software?

    …then how exactly is it any different from the non-automated now? We no more (in fact, probably less) get to decide whose subjective judgement is exercised in these situations today than in the world of tomorrow. Throw out my contention that the ability to select the programmers avails us an opportunity to pick a more ethical code; you’re still left with the same scenario we have right now: maybe the guy behind the wheel is a saint, maybe he’s a sadist.

    To Seth, you say,

    In the past, the ethical judgments were generalizations, made based on averages. What we’re dealing with here are real-time evaluations of particular situations leading to decisions about what actions to take in those particular situations, actions leading to a set of particular consequences.

    But we’re not dealing with a genuine intelligence here. In the future, as the past, coded ethical judgements are equally likely to be made based on averages. Casting this as “real-time evaluations” both over-emphasizes the role of the computer and under-appreciates the extent to which this is already the case… falling back to someone else’s airbag scenario, the accelerometers are evaluated in real-time leading to particular actions with particular consequences. That’s not much different from your rifleman robot scenario. Someone programmed those odds based on number-crunching in both cases. In both cases, they may or may not make a more ethical judgement than a real rifleman, or real driver, both of whom would be in the same position with respect to other real riflemen or drivers.

    The complexity of the coding will surely increase and the many branching consequences will certainly grow, but the mechanics of the process are no different. Nor is the fact that this is all equally the case with humans who are currently in these situations. The ambiguity and subjectivity are unchanged.

    If you want to argue that morality by committee is folly or that corporate malfeasance is likely or that a Cylon apocalypse is surely coming, I think those are reasonable topics for debate as we increasingly automate potentially lethal systems. Otherwise, I still haven’t heard anything suggesting that doing so in any way negatively impacts the morality of the decision-making over what we already have today.

  25. Nick

    Bill,

    You find my opening example extreme, which is true. I don’t think it is so extreme as to be impossible – in fact some version of the scenario probably happens fairly often – and hence such situations would have to be anticipated and accounted for, somehow, in the programming of the driver-less car (should the driver-less car progress beyond its current attentive-backup-human-driver-monitoring-the-situation phase).

    But let me go ahead and sketch out a much less extreme and much more common scenario. Your driver-less car is driving you home from work in the evening. It’s dark, and you live in a wooded area, with lots of bushes along the road. You’re drinking a beer and listening to some tunes on Spotify – hey, why not? – but then, suddenly, a dog rushes out of the bushes into the road, and it freezes directly in front of your car. The dog is so close that it would be impossible for the brakes to stop the car quickly enough to avoid a grisly collision. But you’re going slowly enough that if the car swerves off the road to avoid the dog, the car will suffer fairly severe damage – in the thousands of dollars, certainly, and possibly a total loss should the airbags deploy – but you will almost certainly suffer no serious injuries beyond, say, a broken nose. So what does the car do: swerve off the road or kill Rover?

    I think you’d agree that different people would react in different ways in this situation, even though they have only a split second to react. To some, the idea of hitting a living thing would be so repellent that they’d immediately swerve and take their chances with a crash. Others would be averse to damaging their car just to save a dumb animal and would run the beast over. Still others would just freeze, so startled that no ethical consideration even enters their mind. None of them are acting in an immoral fashion, by their own lights.

    Surely, your driver-less car knows that this dog is a living thing and not some inanimate object. So what does its program tell it to do? What’s a dog’s life worth, exactly, to a driver-less car – or, to be more precise, to the car’s designers?

  26. Seth Finkelstein

    Nick, forgive me another of my pet peeves: reading science fiction sadly doesn’t help nearly as much as SF writers would hope, since outside the tech community nobody listens or cares.

    You ask “So what does its program tell it to do?”. This goes back to my half-joke that “programmers are really the least significant people here”. The outline is pretty clear.

    1) The initial system is going to be set to try to brake and/or take evasive action whenever possible. This is because engineering matters! You postulate the car knows it’s a *dog*. But it will be a long time before any such system will be able to distinguish “dog” from “child” from “crawling adult”. The lawsuits for any error would be ruinous. The manufacturers aren’t going to want to deal with headlines of “The Frankenstein cars which could KILL YOUR CHILD!!!”. If anyone complains that they suffered minor injuries and damaged their car over a dog, they’re going to be told “Yeah, it was a dog, but what if it was a little kid? Can’t tell the difference. Better safe than sorry.”

    2) Many, many, years in the future, long after it’s accepted that brake/evasive action is the right default, sensors will get better so that people will revisit this. It will then be a fight between the businesses or government payers of medical expenses, versus the product liability and accident insurers, as to whether the technology is really good enough at detecting “animal” to warrant changing the setting. Now the headline will be “KILLER CARS!” on both sides. The result will depend heavily on the state of the technology and the relative strength of the lobbying forces.

    3) Somewhere between 1 and 2, hot-rodders are going to change the setting for their own car. This will be forbidden by their insurance companies. But the hot-rodders will do it anyway. The inevitable tragedies will have a small silver lining of adding to the empirical knowledge of whether it actually works in the field.

    Transportation is a heavily regulated industry. It’s not like it’s brains-in-jars wondering what world they can dream up.

  27. Nick

    I actually think it will be the opposite. The system will indeed know it’s a dog or small animal, not a human, and hence insurance companies and corporate lawyers will demand that the car be programmed to run the animal over since the settlement costs and litigation risks would be far lower than if the car suffered heavy damage and the occupant suffered even slight injuries.

    The dog dies. Every time.

  28. Chet

    So what does its program tell it to do?

    Doesn’t it do whatever is consistent with whatever certification process was required to allow that software to be installed into an autonomous car? Again, I’m not understanding the perspective that these are vast frontiers of unanswered questions. How technologies perform in the field under various circumstances is, for the most part, always unknowable with certainty, but the central focus of any engineering discipline and any certification process is to decide how our technologies should perform when they’re operating correctly.

    Asking whether the car veers off the road or hits the dog is not a fundamentally different question than “suppose an Airbus A320 hits a flock of geese on departure out of La Guardia and is forced to make a water landing in the Hudson. Do the wings snap off on impact, or remain attached?” It’s a question of engineering, not moral philosophy.

  29. Nick

    Sorry, Chet, but you’re being obtuse. “Do the wings snap off or remain attached?” is indeed an engineering question — of course the goal is to have the wings remain intact. “Do you kill the dog or damage the car?” is an ethical question – there is no right answer. So somebody, or some group of somebodies, must make an ethical judgment before the engineers go to work.

  30. Joshua

    I would argue that this hypothetical would never happen. Why? Because the car would be programmed to go the correct speed regardless of the speed limit.

    If the car were driving on a road next to a sidewalk without a railing and there were pedestrians, it wouldn’t be going 50 mph; it would slow down as needed, given that there is a chance that humans will behave unpredictably.

    In the example above, any bridge where cars are going 50 mph will have a guardrail that is difficult for pedestrians to cross. If not, the car will drive at a slower speed.

    To answer the question about a dark forest: self-driving cars rely on a multitude of sensors besides cameras (LIDAR, RADAR, GPS). In fact, they probably rely on cameras the least. Unlike humans, the car will have 360° vision and a 3D map of the surroundings. I imagine it will have infrared vision as well by the time they are released. In the end, the car will have 10-1000x better perception than humans and immediate reaction time. It will slow down when it sees that deer on the edge of the woods and stop the microsecond it detects the deer running towards the road, unlike humans, who begin to brake only once the deer is directly in front of them.

    I agree that this conversation needs to happen, but the likelihood of even being in a situation like the one the post suggests will be a ten-thousandth of that for a human driver.

  31. Neil

    This is an easy one. Save the kids. Two reasons: (1) the driver accepts a greater burden of responsibility when entering the car, which is a lethal weapon, be it automated or otherwise; (2) they’re kids. Go ahead and write it into the algorithm, case closed.

    The rifleman example is harder, but only because the only options are shoot or don’t shoot. However, if there is a third option – say, send a warning to the personnel carrier of a 68% chance of danger – then it’s easy, too.

    The dog example is more complex. It makes me wonder if this is ever an ethical choice for a human being, who is likely to simply react out of reflex or instinct. Sure, in retrospect, we’d say that the driver identified two options, deliberated between them, elected the one they felt was right, and should be judged for this decision. But that characterization of the events is a bit of a sham. On the other hand, the automated car DOES have time to transform the situation from the realm of instinct to the realm of ethics. And you’re right — the dog dies every time – just not because of ethics, but because of money.

  32. seth y

    It seems to me that the implications of this hypothetical are that the decision (to the extent there is one in a split second) is essentially outsourced by the driver to the programmer. I agree with much of the criticism that this is an imperfect hypothetical meant to invoke fear of machines, but I disagree with the criticism that there is no meaningful moral distinction between the decision that must be made by the programmer and the decision that must be made by the driver. The driver has to balance his own interests against those of the children. Thus, a fully selfless driver may feel compelled to drive off the bridge, even if his suffering exceeds that of the children. Conversely, if he does not drive off the bridge, he faces a lifetime of guilt and second-guessing. A selfish driver may do the opposite. The programmer, however, does not have to make a choice between himself and others, but can consider matters objectively, resulting in a better outcome even leaving aside the benefits of time and consideration.

    Also, from a society-wide utilitarian standpoint, it is probably better if there are agreed-upon programming standards that dictate how the car should behave in such circumstances rather than inconsistent individual decisions. From a quasi-Rawlsian perspective, if anyone could be the child crossing the road or the driver, but whatever rule is adopted will be adopted uniformly, a more just result would occur. My guess is that the best and safest outcome will be to simply program cars to preserve the life of the driver at all costs, so there are no cars driving off bridges when they incorrectly perceive roadkill to be pedestrians, but that is not really the issue at hand. Whatever the relevance of moral quandaries such as these, the benefits of automated cars so overwhelm the slight danger that exists that it would be insane to slow their development in any way over concerns such as these.

    Automated weapons are much, much more complicated because their essential function is to do harm, not to provide safe transportation. The danger in them is not that programmers may impose a false morality so much as that the limits of programming perfection impose significant risks that unjustified harm will be done on a massive scale, with no ability to stop it until it is too late. Better not to risk an automated soldier killing 50 civilians because of a programming glitch. Hence the need for human oversight over the decision to do harm.

  33. Jeremy Friesner

    Regarding the children-on-the-bridge question, I believe the best answer is that if the autocar ever gets into a situation like this, its programming has already failed to handle the situation correctly.

    A properly designed car would have detected the children’s presence ahead of time, noted them as a potential hazard, and slowed down in advance so that when a child fell into the street, the car would have time to come to a stop before it hit the child. A conscientious human driver would do the same.

  34. Ken Adler

    The computer would know that there are not supposed to be things on the bridge. The computerized car would probably try to make contact with these things and upon failure to make contact, would slow down as the program would conclude that this might be a problem. Maybe frisky kids.

  35. Steve

    This whole conversation/debate has been wonderful and, from my perspective as a developer, fascinating. I haven’t read all the responses yet, but I wanted to chime in here on Nick’s second scenario, the robotic infantryman. As a former paratrooper, I find this one really interesting:

    First off, that’s a real-world situation that was and is being played out numerous times in Iraq and Afghanistan. How many stories have there been of checkpoint guards lighting up a vehicle full of kids and innocent people because the driver didn’t slow down or stop, whether from a language barrier, fear, or cultural misunderstanding? Those are terrible situations but were routinely filed under escalation-of-force incidents, and few if any of the soldiers involved were tried, as they followed the rules of engagement. On a more benign level, we allow this to happen in the US on a daily basis with ‘stop and frisk’ laws, where the police are allowed to stop and frisk anyone on the street if they have reasonable suspicion, and overwhelmingly these stop-and-frisks are of minorities. You’re basically talking profiling here: taking statistics, applying them to the population, and using them for various purposes. How is this morally any different from your scenario? You’re just moving it from a cop on a beat or a soldier at a checkpoint to a robot/machine in a similar situation.

    Specifically, to your scenario: if we use Asimov’s no-lethal-action imperative, we are still allowing the robotic sentry a multitude of options. Let’s accept that our robotic soldier will be much more qualified on the firing range than your average human; it would be capable of precise targeting. In this scenario I could see it commanding the suspicious character to halt (in the local language and dialect), while simultaneously sending a warning signal to the troop carrier to also halt, as there is an unresolved, potentially harmful situation in its path (and if it’s a smart vehicle, it’ll not only automatically stop but probably pick up the GPS location of the suspicious character, figure the potential munitions being carried, the blast radius, etc., and move the troop carrier back to the safest position available). A human could not accomplish this, as the chances of them speaking the local language would be slim (most infantrymen are not linguists), and they wouldn’t be able to tell the suspicious character to stop and signal the truck at the same time (at least with clarity, while keeping their weapon trained on the suspicious character).

    Say our suspicious character ignores the audio warning and continues moving in the direction of the vehicle. Now the advanced marksmanship of the robot comes into play. It could lay down precise rounds in the path of this person, essentially a series of shots across the bow, so to speak, probably also calculating ricochet and fragmentation factors to stay within Asimov’s imperative. This is usually enough to deter most people, or at least cause them to reconsider their actions.

    And yet still he continues. Clearly the suspicious character’s behavior has pushed the insurgent-probability estimate past a higher threshold, though there is still no definitive proof. (By the way, all of this is also being relayed in video and audio back to human handlers with the capability to shut the robot down at any time.) But there’s no need for lethal (meaning deadly) action, as our robot has superior aim and can stop the suspected insurgent with a round to the knee or some other disabling part of the body without killing him.
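    Steve's escalation ladder reads almost like pseudocode already. Here is a toy sketch of it, with the threat-score thresholds invented purely for illustration; real rules of engagement are nothing so tidy.

```python
# Toy escalation-of-force ladder mirroring the sequence described above:
# verbal warning -> warning shots -> disabling shot. The threat-score
# thresholds are invented for illustration and reflect no real ROE.
LADDER = [
    (0.2, "issue a verbal halt command; signal the troop carrier to stop"),
    (0.5, "fire warning shots into the subject's path"),
    (0.8, "fire a disabling shot at a non-vital part of the body"),
]

def next_action(threat_score):
    """Return the most forceful rung whose threshold the score has crossed."""
    for threshold, action in reversed(LADDER):
        if threat_score >= threshold:
            return action
    # Below every threshold: keep watching, keep humans in the loop.
    return "continue observing; relay video and audio to human handlers"
```

    The point of the structure is that lethal force never appears as a rung at all: the ladder tops out at a disabling shot, with human handlers able to intervene at every stage.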

    So, like human soldiers manning a checkpoint with escalation-of-force rules, our robot can also be instilled with them. Is the robot, with its algorithms and statistics, any better or worse than the human with statistics and human failings?

    I don’t want to come off as defending robotic soldiers; they kind of creep me out anyway (Skynet and all that). But I also don’t think the human versions are any better.

    Here’s a different scenario: which is better, 3 or 4 troops at a checkpoint who yesterday on patrol lost one of their own to a suicide bomber and are hated by the local populace? They spent the night drinking in memory of their buddy and are now standing in the midday heat, pissed off, hung over, hot, and looking to avenge his death. Or 3 or 4 robotic soldiers who yesterday lost a comrade to a suicide bomber but are indifferent to it?

    When a group of local teenagers, who hate the occupation, approach the checkpoint taunting and cursing and things begin to escalate who will have the better decision making process?

  36. Chet

    of course the goal is to have the wings remain intact.

    But it might not be! You might naively think that the goal of a car’s unibody would be to remain intact during any collision, but in fact unibodies are designed with “crumple zones” to control the deformation of the body during a crash. Sometimes it’s better to design failure into the system so that you can control it, rather than try to design a system completely impervious to failure.

    So maybe it makes sense to have the wings snap off, under certain crash circumstances. The wings are usually where the fuel tanks are, for instance. Maybe it makes sense to have those separate and be left behind rather than rupture and present a hazard to flight crew, rescue workers on the scene, and passengers.

    Making that decision has a “moral” dimension, in the sense that the decision should be based on what saves the most lives over the service lifetime of the device. But for the most part they’re practical engineering decisions, and evaluating them in the context of extremely unlikely corner cases is precisely the wrong way to approach them. You don’t design like that. So the answers to your questions – the dog question, the rifleman question, the car question – are all the same: the program follows whatever it was programmed to do, which was to maximize safety and minimize risk across all situations the device is likely to encounter, not just the specific situations you pose.

  37. Nick

    Thanks, again, for the comments.

    Joshua, re: “the car would be programmed to go the correct speed regardless of the speed limit … [slowing down] as needed given that there is a chance that humans will behave unpredictably” Surely you’re mistaken. Driver-less cars will not be programmed to always slow down to whatever speed minimizes the probability of doing any harm in unpredictable circumstances. The public would not stand for it. Can you imagine the outrage if, every time there was a pedestrian or bicyclist in the vicinity who might unexpectedly move into traffic, a driver-less car would automatically slow down to, say, 15 miles an hour? First, during the period when driver-less cars share the road with human drivers, you would get enormous amounts of road rage and lots of rear-end collisions. And even when it’s all driver-less cars, people would rebel against the delays. Have you ever seen how human beings drive? No, minimizing the risk of accident or injury has never been the overriding criterion in driver decisions or, for that matter, political decisions about driving — and it will not be in the future. Unless, of course, the driver-less car arrives in the context of a techno-fascist state.

  38. Cynic

    Interesting stuff.

    It’s not entirely clear to me whether you’re trying to figure out what the software will do, as a predictive matter, or what it ought to do, as a moral one. I’ll restrict myself to the former. I cannot imagine any corporation successfully peddling a software control system that, in some circumstances, deliberately kills its operator. Nor, for that matter, one that deliberately mows down innocent children. So I’ve got two words for you: Kobayashi Maru.

    Americans, outside of moral philosophy seminars, don’t generally acknowledge the existence of un-winnable scenarios. And behavioral psychologists have demonstrated, time and again, that we tend to overestimate our own capacities. So I suspect that an operator-free vehicle manufacturer would respond like this:

    Our driverless vehicles represent an astonishing advance for passenger safety, and have saved thousands of lives every year since their introduction. Their responsive systems act to prevent collisions before human drivers can even register the danger, much less respond. Regrettably, even with the best available technology, some collisions are inevitable. When these do occur, the rapid braking and instant evasion systems serve to mitigate their impact and reduce their hazards. There remains no question that both passengers and pedestrians are far less likely to suffer injuries when cars are operated by integrated software and sensors.

    They will, in short, program the vehicle to come to a safe stop as quickly as possible – and to slow as much as possible when a collision is unavoidable. This will be called something like ARMS: Accident Reduction and Mitigation System, with the acronym stressing power and agency, even if the actual purpose of the software is to strip the operator of precisely those things. They’ll stress the relative safety and superiority of their systems, and the overall reduction in injury and death.

    That may, in moral terms, be no different than a car programmed to kill pedestrians. But in rhetorical terms, it’s hugely different.

    It’s also, as the law presently stands, the only legal option. Neither the manufacturer nor the owner has created the hazard. There’s no duty to rescue in this circumstance – and if there were, it wouldn’t extend to endangering the owner’s safety, much less sacrificing his own life. Nor does this meet standards of reckless operation or depraved indifference. The liability here falls squarely on the children. And if that’s true of your first case, it’s even clearer in the case of the dog. A car programmed to take what you present as the moral choice – sacrificing its passenger to save the children – would expose its manufacturer to more-or-less unlimited liability.

    In general, I think, these issues are far less novel than you suggest. They were hashed out during an earlier age of technological innovation, with the advent of railroads. That brought with it the invention of modern liability law – the diminution, for example, of the fellow-servant rule, and the advent of liability for a manufacturer or operator who fails to take reasonable safety precautions or operates an inherently hazardous machine. But we didn’t require our freight trains to carry an automatic derailment switch that operators could trigger when they saw a schoolbus stuck on the tracks, even if sacrificing the engineer and fireman to save the schoolchildren might, in theory, be more ‘moral.’ And I’m not sure how the scenario you posit is really any different.

    Similarly, we’ve been debating targeting and morality during war for a long, long time now. Soldiers operate under Rules of Engagement, which function (at least in theory) like an algorithm – providing them with a checklist of circumstances necessary for the application of varying levels of force. A soldier at a Kabul checkpoint approached by a speeding car faces precisely the dilemma you outlined above – the need to act upon a probabilistic assessment using insufficient and imperfect information. Her commanders give her the ROE, and a set of guidelines for the escalation of force – a program, if you will. It will matter whether it’s a soldier or a computer that applies those guidelines, but not necessarily in the ways that you seem to imply. Both will apply them imperfectly, but in different ways. And it’s not unreasonable to wonder if putting remotely-operated units on the battlefield might, perhaps, allow for more restrictive Rules of Engagement. If that armored vehicle coming down the street weren’t actually full of soldiers but were instead a remotely operated drone, then the sentinel could accept a much higher degree of risk before pulling the trigger. Of course, by removing human aversion from the equation and reducing the risk to soldiers, it might lower the bar to the employment of lethal force or the willingness to engage in conflicts. Many of these things cut both ways.

    ‘Programming consciences,’ as you put it, is something that every command structure or bureaucracy already does. It’s that larger moral calculus here that really fascinates me.

  39. James

    I think Jeremy has made an important point, as well as those commenting that the machine has a much greater ability to respond than a human.

    Stipulating, however, that there will be an instance at some point that it comes down to choosing between the driver and the pedestrian, I have two thoughts.

    First, in the scenario given (drive off the bridge or hit the kids), this strikes me as more of a problem for a theoretical conscience than a real one. Does anyone really expect a driver would choose to go off the bridge? I certainly don’t. It’s unfortunate for the kids and their family, but suicide in defense of others is not expected of a moral conscience.

    Secondly, we will resolve these future dilemmas as we resolve current ones: When the rules before the event are unclear, we have courts to figure it out after the fact, and give input to the next generation of rulesets.

  40. Timothy

    (C) The computer will slow the car to a school zone speed (or slower) as appropriate for the relative proximity of pedestrians. In other words, the moral decision will be to slow the f*** down in that situation while actually speeding up in lower risk areas. Also, the car will be vastly lighter and lower powered because the occupant protection is already part of automated driving, and the automated driver doesn’t need zero to sixty in 5 seconds to impress its girlfriend. (The automated driver will coordinate pacing through intersections and passing with little acceleration required.) Thus stopping distance will be comparatively trivial and, as with aircraft, the computer would be allowed to destroy the brakes if necessary to stop. Moreover, if the occupant isn’t wearing a seat belt to be ready for a sudden stop then the car won’t even move (or will safely pull over and stop if already moving).

  41. Katherine

    I’m with Joshua and Jeremy. My first thought was “why is the car going 50 mph around pedestrians?” The first requirement for a viable autonomous car is that it be smart enough to *not* just cruise along at the posted speed limit, but to adjust to changing conditions at least as well as a human would.

    There’s actually quite a lot of existing law regarding the responsibilities of designers and manufacturers of life critical systems. Combine that with existing law regarding the responsibilities of motor vehicle operators and you’ve got at least a framework for an autonomous car’s “judgment” engine.

  42. Nick

    I probably should have made it 35 mph. I’m a bit of a lead foot.

  43. Josh

    Honestly, if I were driving that car, I would probably run off the bridge _and_ fail to miss the children.

  44. Nick

    My two cents:

    1. “A pedestrian gets Googled”, http://youtu.be/a93xjkQFDyM

    2. “In March 2011, a Predator parked at the camp started its engine without any human direction, even though the ignition had been turned off and the fuel lines closed. Technicians concluded that a software bug had infected the ‘brains’ of the drone, but never pinpointed the problem. ‘After that whole starting-itself incident, we were fairly wary of the aircraft and watched it pretty closely,’ an unnamed Air Force squadron commander testified to an investigative board, according to a transcript. ‘Right now, I still think the software is not good.’”

    http://www.washingtonpost.com/world/national-security/remote-us-base-at-core-of-secret-operations/2012/10/25/a26a9392-197a-11e2-bd10-5ff056538b7c_story.html

  45. Kelly Norton

    I’m slightly ashamed to say it, but as a programmer, my first thought was to put this in the user settings for the software (a.k.a. punt). Of course, then the debate centers on what the default settings should be.

  46. Nick

    re: “my first thought was to put this in the user settings for the software”

    I actually thought about that, as something along those lines strikes me as being in the realm of the possible.

    Say, before purchasing a driver-less vehicle, you have to take a psychological test, and then the car is programmed based on the test results. Your ethics are reflected in its operations. That also gets rid of the default settings problem.
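    If ethics-by-questionnaire ever became a real feature, the settings surface might look something like this. This is pure speculation riffing on Kelly's and Nick's comments; every name and weight below is invented for illustration, and no manufacturer exposes anything like it.

```python
from dataclasses import dataclass

@dataclass
class EthicsProfile:
    """Hypothetical per-owner ethics settings, derived from a psych test."""
    protect_occupant_weight: float      # 0.0 = fully selfless, 1.0 = fully selfish
    child_pedestrian_multiplier: float  # extra weight given to harm to children

def harm_cost(profile, occupant_harm, pedestrian_harm, pedestrians_are_children):
    """Score a candidate outcome; lower is better. Weights come from the profile."""
    ped = pedestrian_harm * (profile.child_pedestrian_multiplier
                             if pedestrians_are_children else 1.0)
    w = profile.protect_occupant_weight
    return w * occupant_harm + (1.0 - w) * ped
```

    A “fully selfless” profile (weight 0.0) would always steer the car toward whichever outcome minimizes pedestrian harm, which is exactly the default-settings debate Kelly raised.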

  47. Daniel

    1) Stopping from 35 mph with full braking force takes very little distance in a modern car – on the order of 10 to 15 yards. The human brain takes more time to process the visual input than the actual act of stopping the car would take. Computer reaction time would be nearly instantaneous, provided proper programming.

    Assuming that there was not enough time to avoid the children and that the bridge had only one lane and there was no room for the car to swerve past the children, the guard rail on a properly designed and constructed bridge would be able to stop the car from going over.
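    Daniel's figure can be sanity-checked with the standard constant-deceleration formula, d = v² / (2a). Assuming a peak deceleration of about 0.9 g (plausible for good tires on dry pavement, but an assumption, not a measured value), braking from 35 mph takes roughly 14 meters, about 15 yards, before adding any reaction distance, which for a computer is nearly zero:

```python
G = 9.81  # gravitational acceleration, m/s^2
MPH_TO_MS = 0.44704  # miles per hour -> meters per second

def braking_distance_m(speed_mph, decel_g=0.9):
    """Pure braking distance from the kinematic relation d = v^2 / (2a).

    decel_g is an assumed peak deceleration (~0.9 g for good tires on
    dry pavement); real values vary with surface, tires, and load.
    Reaction distance is excluded, on the premise that a computer's
    reaction time is effectively negligible.
    """
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * decel_g * G)
```

    At 35 mph this works out to about 13.9 m (roughly 15 yards); at 50 mph it roughly doubles, to about 28 m, which is part of why the bridge scenario is so much more forgiving at lower speeds.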

    2) Sensors in your car would be able to see the dog coming: thermal, radar, sonar, whatever. Darkness and thick bushes aren’t a barrier to the right combination of sensor technology. Programming would allow the car to slow down as appropriate.

    Even if none of the car’s sensors could see the dog until it cleared the bushes, as soon as it did and the on-board computer calculated its path, the car would apply braking, which would be able to stop a car going at residential speeds.

    But let’s say it’s a narrow street with cars parked on either side, providing objects much harder for sensors to see through, and the dog darts between two cars directly into the path of your car. Sorry, Spot, but you’ve been run over. Sensor data from the car can show there was nothing to be done. Perhaps Spot’s owner should have closed the gate to their fence?

    3) Google has developed a neural network that allows a computer to recognize different objects. It’s a ways off, but your car also wouldn’t be starting from scratch. http://www.theverge.com/2012/6/26/3117956/google-x-object-recognition-research-youtube

    As said previously, a lot of split-second decisions made in vehicles are only split-second decisions because the human brain wasn’t made to process decisions while going at 35 mph or higher. Our brain’s decision-making speed is no match for a computer connected to an array of sensors providing 360-degree coverage.

    The technology won’t be perfect and there will be sudden and unexpected scenarios, but there will be fewer of them than you seem to assume.

  48. BobN

    The obvious answer is to use a Microsoft self-driving car which would use Bing maps to make decisions. The car would just smoothly drive off the bridge and right onto the river or its bank which are level with the road surface.

  49. Nick

    One theme I’m hearing from a lot of these comments is that, with enough data and analytical muscle, ethical ambiguity will evaporate like morning dew under a hot summer sun.

    We are nothing if not children of our time.

  50. Katherine

    Ethical ambiguity won’t evaporate, but it won’t manifest itself in such simplistic ways as this.

    Rather, what are the ethical responsibilities of the driver if/when the software fails? If the car deliberately decides to hit the kids, is the driver obligated to hurl himself off the bridge?

    (Assuming, of course, that the manual override even still exists. But if it doesn’t, that raises all kinds of interesting issues about whether outside actors — police, manufacturers, angry spouses — can make the car do something you don’t like.)