Moral code

So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?

This is the type of scenario that NYU psychology professor Gary Marcus considers as he ponders the rapid approach of a time when “it will no longer be optional for machines to have ethical systems.” As we begin to have computer-controlled cars, robots, and other machines operating autonomously out in the chaotic human world, situations will inevitably arise in which the software has to choose between a set of bad, even horrible, alternatives. How do you program a computer to choose the lesser of two evils? What are the criteria, and how do you weigh them? Since we humans aren’t very good at codifying responses to moral dilemmas ourselves, particularly when the precise contours of a dilemma can’t be predicted ahead of its occurrence, programmers will find themselves in an extraordinarily difficult situation. And one assumes that they will carry a moral, not to mention a legal, burden for the code they write.

The military, which already operates automated killing machines, will likely be the first to struggle in earnest with the problem. Indeed, as Spencer Ackerman noted yesterday, the U.S. Department of Defense has just issued a directive that establishes rules “designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.” One thing the Pentagon hopes to ensure is that, when autonomous weapons use force, “appropriate levels of human judgment” are incorporated into the decisions. But nowhere is the world more chaotic than in a war zone, and as fighting machines gain more sophistication and autonomy and are given more responsibility, “unintended engagements” will happen. Barring some major shift in strategy, a military robot or drone will eventually be in an ambiguous situation and have to make a split-second decision with lethal consequences. Shoot, or hold fire?

We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.

54 thoughts on “Moral code”

  1. Seth Finkelstein

    Nick, forgive me another of my pet peeves: reading science fiction sadly doesn’t help nearly as much as SF writers would hope, since outside the tech community, nobody listens or cares.

    You ask “So what does its program tell it to do?”. This goes back to my half-joke that “programmers are really the least significant people here”. The outline is pretty clear.

    1) The initial system is going to be set to try to brake and/or take evasive action whenever possible. This is because engineering matters! You postulate the car knows it’s a *dog*. But it will be a long time before any such system will be able to distinguish “dog” from “child” from “crawling adult”. The lawsuits for any error would be ruinous. The manufacturers aren’t going to want to deal with headlines of “The Frankenstein cars which could KILL YOUR CHILD!!!”. If anyone complains that they suffered minor injuries and damaged their car over a dog, they’re going to be told “Yeah, it was a dog, but what if it was a little kid? Can’t tell the difference. Better safe than sorry.”

    2) Many, many years in the future, long after it’s accepted that brake/evasive action is the right default, sensors will get better so that people will revisit this. It will then be a fight between the businesses or government payers of medical expenses, versus the product liability and accident insurers, as to whether the technology is really good enough at detecting “animal” to warrant changing the setting. Now the headline will be “KILLER CARS!” on both sides. The result will depend heavily on the state of the technology and the relative strength of the lobbying forces.

    3) Somewhere between 1 and 2, hot-rodders are going to change the setting for their own car. This will be forbidden by their insurance companies. But the hot-rodders will do it anyway. The inevitable tragedies will have a small silver lining of adding to the empirical knowledge of whether it actually works in the field.

    Transportation is a heavily regulated industry. It’s not like it’s brains-in-jars wondering what world they can dream up.

  2. Nick Post author

    I actually think it will be the opposite. The system will indeed know it’s a dog or small animal, not a human, and hence insurance companies and corporate lawyers will demand that the car be programmed to run the animal over since the settlement costs and litigation risks would be far lower than if the car suffered heavy damage and the occupant suffered even slight injuries.

    The dog dies. Every time.

  3. Chet

    So what does its program tell it to do?

    Doesn’t it do whatever is consistent with whatever certification process was required to allow that software to be installed into an autonomous car? Again, I’m not understanding the perspective that these are vast frontiers of unanswered questions. How technologies perform in the field under various circumstances is, for the most part, unknowable with certainty, but the central focus of any engineering discipline and any certification process is to decide how our technologies should perform when they’re operating correctly.

    Asking whether the car veers off the road or hits the dog is not a fundamentally different question than “suppose an Airbus A320 hits a flock of geese on departure out of La Guardia and is forced to make a water landing in the Hudson. Do the wings snap off on impact, or remain attached?” It’s a question of engineering, not moral philosophy.

  4. Nick Post author

    Sorry, Chet, but you’re being obtuse. “Do the wings snap off or remain attached?” is indeed an engineering question — of course the goal is to have the wings remain intact. “Do you kill the dog or damage the car?” is an ethical question – there is no right answer. So somebody, or some group of somebodies, must make an ethical judgment before the engineers go to work.

  5. Joshua

    I would argue that this hypothetical would never happen. Why? Because the car would be programmed to go the correct speed regardless of the speed limit.

    If the car were driving on a road next to a sidewalk without a railing and there were pedestrians, it wouldn’t be going 50 mph; it would slow down as needed, given that there is a chance that humans will behave unpredictably.

    In the example above, any bridge where cars are going 50 mph will have a guardrail that is difficult for pedestrians to cross. If not, the car will drive at a slower speed.

    To answer the question about a dark forest, self-driving cars rely on a multitude of sensors besides cameras (LIDAR, RADAR, GPS). In fact, they probably rely on cameras the least. Unlike humans, the car will have 360-degree vision and a 3D map of the surroundings. I imagine it will have infrared vision as well by the time they are released. In the end, the car will have 10-1000x better perception than humans and immediate reaction time. It will slow down when it sees that deer on the edge of the woods and stop the microsecond it detects the deer running towards the road, unlike humans, who begin to brake only once the deer is directly in front of them.
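    To make that concrete, here is a rough sketch of the kind of rule I have in mind: track every hazard the sensors report and cap the car’s speed so that it can always stop short of the nearest one. This is purely illustrative; the deceleration and latency numbers are assumptions of mine, not anything from a real system.

        import math

        BRAKING_DECEL = 7.0   # m/s^2, assumed firm-but-comfortable braking
        REACTION_TIME = 0.05  # s, assumed sensor-to-brake latency

        def max_safe_speed(distance_m):
            """Highest speed from which the car can still stop before the hazard.
            Solves v * t + v^2 / (2 * a) = d for v."""
            a, t, d = BRAKING_DECEL, REACTION_TIME, max(distance_m, 0.0)
            return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

        def target_speed(hazard_distances_m, speed_limit_mps):
            """Drive at the limit unless a tracked hazard forces something lower."""
            if not hazard_distances_m:
                return speed_limit_mps
            return min(speed_limit_mps, max_safe_speed(min(hazard_distances_m)))

        # A deer tracked 25 m ahead caps the car to about 18 m/s (roughly 41 mph),
        # even though the posted limit is 22 m/s (roughly 50 mph):
        # target_speed([25.0], 22.0)  ->  ~18.4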

    I agree that this convo needs to happen, but the likelihood of ever being in a situation like the one the post suggests will be a ten-thousandth of that of a human driver.

  6. Neil

    This is an easy one. Save the kids. Two reasons: (1) the driver accepts a greater burden of responsibility when entering the car, which is a lethal weapon, be it automated or otherwise; (2) they’re kids. Go ahead and write it into the algorithm, case closed.

    The rifleman example is harder, but only because the only options are shoot or don’t shoot. However, if there is a third option – say, send a warning to the personnel carrier of a 68% chance of danger – then it’s easy, too.

    The dog example is more complex. It makes me wonder if this is ever an ethical choice for a human being, who is likely to simply react out of reflex or instinct. Sure, in retrospect, we’d say that the driver identified two options, deliberated between them, elected the one they felt was right, and should be judged for this decision. But that characterization of the events is a bit of a sham. On the other hand, the automated car DOES have time to transform the situation from the realm of instinct to the realm of ethics. And you’re right — the dog dies every time – just not because of ethics, but because of money.

  7. seth y

    It seems to me that the implications of this hypothetical are that the decision (to the extent there is one in a split second) is essentially outsourced by the driver to the programmer. I agree with much of the criticism that this is an imperfect hypothetical meant to evoke fear of machines, but I disagree with the criticism that there is no meaningful moral distinction between the decision that must be made by the programmer and the decision that must be made by the driver. The driver has to balance his own interests against those of the children. Thus, a fully selfless driver may feel compelled to drive off the bridge, even if his suffering exceeds that of the children. Conversely, if he does not drive off the bridge, he faces a lifetime of guilt and second-guessing. A selfish driver may do the opposite. The programmer, however, does not have to make a choice between himself and others, but can consider matters objectively, resulting in a better outcome even leaving aside the benefits of time and consideration.

    Also, from a society-wide utilitarian standpoint, it is probably better if there are agreed-upon programming standards that dictate how the car should behave in such circumstances rather than inconsistent individual decisions. From a quasi-Rawlsian perspective, if anyone could be the child crossing the road or the driver, but whatever rule is adopted will be adopted uniformly, a more just result would occur. My guess is that it will be the best and safest outcome to simply program cars to preserve the life of the driver at all costs, so there are no cars driving off bridges when they incorrectly perceive roadkill to be pedestrians, but that is not really the issue at hand. Whatever the relevance of moral quandaries such as these, the benefits of automated cars so overwhelm the slight danger that exists that it would be insane to slow their development in any way over concerns such as these.

    Automated weapons are much much more complicated because their essential function is to do harm, not to provide safe transportation. The danger in them is not that programmers may impose a false morality as much as that the limits of programming perfection impose significant risks that unjustified harm will be done on a massive scale with no ability to stop it until it is too late. Better not to risk the automated soldier killing 50 civilians because of a programming glitch. Hence the need to have human oversight over the decision to do harm.

  8. Jeremy Friesner

    Regarding the children-on-the-bridge question, I believe the best answer is that if the autocar ever gets into a situation like this, its programming has already failed to handle the situation correctly.

    A properly designed car would have detected the children’s presence ahead of time, noted them as a potential hazard, and slowed down in advance so that when a child fell into the street, the car would have time to come to a stop before it hit the child. A conscientious human driver would do the same.

  9. Ken Adler

    The computer would know that there are not supposed to be things on the bridge. The computerized car would probably try to make contact with these things and upon failure to make contact, would slow down as the program would conclude that this might be a problem. Maybe frisky kids.

  10. Steve

    This whole conversation/debate has been wonderful and, from my perspective as a developer, fascinating. I haven’t read all the responses yet but wanted to chime in here on Nick’s second scenario, the robotic infantryman. As a former paratrooper I find this one really interesting:

    First off, that’s a real-world situation that was and is being played out numerous times in Iraq and Afghanistan. How many stories have there been of checkpoint guards lighting up a vehicle full of kids and innocent people because the driver didn’t slow down or stop, whether it was a language barrier, fear, or cultural misunderstanding? Those are terrible situations, but they were routinely filed under escalation-of-force incidents, and few if any of the soldiers involved were tried, as they followed the rules of engagement. On a more benign level, we allow this to happen in the US on a daily basis with ‘stop and frisk’ laws, where the police are allowed to stop and frisk anyone on the street if they have reasonable suspicion, and overwhelmingly these stops and frisks are of minorities. You’re basically talking about profiling here: taking statistics, applying them to the population, and using them for various purposes. How is this morally any different from your scenario? You’re just moving it from a cop on a beat or a soldier at a checkpoint to a robot/machine in a similar situation.

    Specifically, to your scenario: if we use Asimov’s no-lethal-action imperative, we are still allowing the robotic sentry a multitude of options. Let’s accept that our robotic soldier will be much more qualified on the firing range than your average human; it would be capable of precise targeting. In this scenario I could see it commanding the suspicious character to halt (in the local language and dialect), while simultaneously sending a warning signal to the troop carrier to also halt, as there is an unresolved, potentially harmful situation in its path (and if it’s a smart vehicle it’ll not only automatically stop but probably pick up the GPS location of the suspicious character, figure potential munitions being carried, blast radius, etc., and move the troop carrier back to the safest position available). A human could not accomplish this, as the chances of them speaking the local language would be slim (most infantrymen are not linguists) and they wouldn’t be able to tell the suspicious character to stop and signal the truck at the same time (at least with clarity and while keeping their weapon trained on the suspicious character).

    Say our suspicious character ignores the audio warning and continues moving in the direction of the vehicle. Now the advanced marksmanship of the robot comes into play. It could lay down precise rounds in the path of this person, essentially a series of shots across the bow, so to speak, and probably also calculate ricochet and breakage factors to stay within Asimov’s imperative. This is usually enough to deter most people, or at least cause them to reconsider their actions.

    And yet still he continues. Clearly the suspicious character has surpassed a certain level of profiling, raising the estimated insurgent probability, though there is still no definitive proof. (BTW, all of this is also being relayed in video and audio back to human handlers, who have the capability to shut the robot down at any time.) But there’s no need for lethal action, as our robot has superior aim and can stop the suspected insurgent with a round to the knee or some other disabling part of the body without killing him.
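    Since I’m a developer, I can’t resist sketching what that escalation ladder might look like. This is purely illustrative pseudo-logic of my own, not anything real; the thresholds are invented, and it deliberately stops short of deciding on lethal force, per the imperative above.

        from enum import Enum, auto

        class Action(Enum):
            OBSERVE = auto()
            VERBAL_WARNING = auto()   # halt command in the local language, alert the carrier
            WARNING_SHOTS = auto()    # precise rounds placed well clear of the person
            DISABLING_SHOT = auto()   # non-lethal stop; everything relayed to human handlers
            DEFER_TO_HUMAN = auto()   # anything beyond this requires a human decision

        def escalation_step(threat_score, warnings_ignored, human_override=None):
            """One pass through an escalation-of-force ladder. Thresholds are made up."""
            if human_override is not None:
                return human_override          # handlers can take over at any point
            if threat_score < 0.3:
                return Action.OBSERVE
            if warnings_ignored == 0:
                return Action.VERBAL_WARNING
            if warnings_ignored == 1:
                return Action.WARNING_SHOTS
            if threat_score < 0.9:
                return Action.DISABLING_SHOT
            return Action.DEFER_TO_HUMAN       # lethal force is never decided here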

    So, like human soldiers manning a checkpoint with escalation-of-force rules, our robot can also be instilled with them. Is the robot, with its algorithms and statistics, any better or worse than the human with statistics and human failings?

    I don’t want to come off as defending robotic soldiers, they kind of creep me out anyway (Skynet and all that), but I also don’t think the human version is any better.

    Here’s a different scenario. Which is better: 3 or 4 troops at a checkpoint who yesterday on patrol lost one of their own to a suicide bomber and are hated by the local populace, who spent the night drinking in memory of their buddy, and who are now standing in the midday heat, pissed off, hung over, hot, and looking to avenge his death? Or 3 or 4 robotic soldiers who yesterday lost a comrade to a suicide bomber but are indifferent to it?

    When a group of local teenagers, who hate the occupation, approach the checkpoint taunting and cursing and things begin to escalate, who will have the better decision-making process?

  11. Chet

    “of course the goal is to have the wings remain intact.”

    But it might not be! You might naively think that the goal of a car’s unibody would be to remain intact during any collision, but in fact, they design car unibodies with “crumple zones” to control the deformation of the body during a collision. Sometimes it’s better to design failure into the system so that you can control it, rather than try to design a system completely impervious to failure.

    So maybe it makes sense to have the wings snap off, under certain crash circumstances. The wings are usually where the fuel tanks are, for instance. Maybe it makes sense to have those separate and be left behind rather than rupture and present a hazard to flight crew, rescue workers on the scene, and passengers.

    Making that decision has a “moral” dimension, in the sense that the decision should be based on what saves the most lives over the service lifetime of the device. But for the most part they’re practical engineering decisions, and evaluating them in the context of extremely unlikely corner cases is precisely the wrong way to approach them. You don’t design like that. So the answers to your questions – the dog question, the rifleman question, the car question – are all the same: the program does whatever it was programmed to do, which was to maximize safety and minimize risk across all situations the device is likely to encounter, not just the specific situations you pose.

  12. Nick Post author

    Thanks, again, for the comments.

    Joshua, re: “the car would be programmed to go the correct speed regardless of the speed limit … [slowing down] as needed given that there is a chance that humans will behave unpredictably” Surely you’re mistaken. Driver-less cars will not be programmed to always slow down to whatever speed minimizes the probability of doing any harm in unpredictable circumstances. The public would not stand for it. Can you imagine the outrage if, every time there was a pedestrian or bicyclist in the vicinity who might unexpectedly move into traffic, a driver-less car would automatically slow down to, say, 15 miles an hour? First, during the period when driver-less cars share the road with human drivers, you would get enormous amounts of road rage and lots of rear-end collisions. And even when it’s all driver-less cars, people would rebel against the delays. Have you ever seen how human beings drive? No, minimizing the risk of accident or injury has never been the overriding criterion in driver decisions or, for that matter, political decisions about driving — and it will not be in the future. Unless, of course, the driver-less car arrives in the context of a techno-fascist state.

  13. Cynic

    Interesting stuff.

    It’s not entirely clear to me whether you’re trying to figure out what the software will do, as a normative matter, or what it ought to do, as a moral one. I’ll restrict myself to the former. I cannot imagine any corporation successfully peddling a software control system that, in some circumstances, deliberately kills its operator. Nor, for that matter, one that deliberately mows down innocent children. So I’ve got two words for you: Kobayashi Maru.

    Americans, outside of moral philosophy seminars, don’t generally acknowledge the existence of un-winnable scenarios. And behavioral psychologists have demonstrated, time and again, that we tend to overestimate our own capacities. So I suspect that an operator-free vehicle manufacturer would respond like this:

    Our driverless vehicles represent an astonishing advance for passenger safety, and have saved thousands of lives every year since their introduction. Their responsive systems act to prevent collisions before human drivers can even register the danger, much less respond. Regrettably, even with the best available technology, some collisions are inevitable. When these do occur, the rapid braking and instant evasion systems serve to mitigate their impact and reduce their hazards. There remains no question that both passengers and pedestrians are far less likely to suffer injuries when cars are operated by integrated software and sensors.

    They will, in short, program the vehicle to come to a safe stop as quickly as possible – and to slow as much as possible when a collision is unavoidable. This will be called something like ARMS: Accident Reduction and Mitigation System, with the acronym stressing power and agency, even if the actual purpose of the software is to strip the operator of precisely those things. They’ll stress the relative safety and superiority of their systems, and the overall reduction in injury and death.
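    In control-logic terms, the whole system probably boils down to something this unglamorous. (A sketch of my own, with invented names and numbers, offered only to make the point.)

        def arms_decision(obstacle_distance_m, speed_mps, max_decel_mps2=9.0):
            """Accident Reduction and Mitigation: stop if possible; otherwise shed
            as much speed as the brakes allow before impact. Illustrative only."""
            stopping_distance = speed_mps ** 2 / (2 * max_decel_mps2)
            if obstacle_distance_m >= stopping_distance:
                return "brake_to_stop"         # collision avoided entirely
            # Collision unavoidable: brake at full force anyway to minimize impact
            # speed, plus whatever evasive steering keeps the car on the roadway.
            return "full_brake_and_evade"

    Notice that nothing in there weighs the passenger against the pedestrian. It simply minimizes kinetic energy at impact, which is precisely the framing the press release sells.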

    That may, in moral terms, be no different than a car programmed to kill pedestrians. But in rhetorical terms, it’s hugely different.

    It’s also, as the law presently stands, the only legal option. Neither the manufacturer nor the owner has created the hazard. There’s no duty to rescue in this circumstance – and if there were, it wouldn’t extend to endangering the owner’s safety, much less sacrificing his own life. Nor does this meet standards of reckless operation or depraved indifference. The liability here falls squarely on the children. And if that’s true of your first case, it’s even clearer in the case of the dog. A car programmed to take what you present as the moral choice – sacrificing its passenger to save the children – would expose its manufacturer to more-or-less unlimited liability.

    In general, I think, these issues are far less novel than you suggest. They were hashed out during an earlier age of technological innovation, with the advent of railroads. That brought with it the invention of modern liability law – the diminution, for example, of the fellow-servant rule, and the advent of liability for a manufacturer or operator who fails to take reasonable safety precautions or operates an inherently hazardous machine. But we didn’t require our freight trains to carry an automatic derailment switch that operators could trigger when they saw a schoolbus stuck on the tracks, even if sacrificing the engineer and fireman to save the schoolchildren might, in theory, be more ‘moral.’ And I’m not sure how the scenario you posit is really any different.

    Similarly, we’ve been debating targeting and morality during war for a long, long time now. Soldiers operate under Rules of Engagement, which function (at least in theory) like an algorithm – providing them with a checklist of circumstances necessary for the application of varying levels of force. A soldier at a Kabul checkpoint approached by a speeding car faces precisely the dilemma you outlined above – the need to act upon a probabilistic assessment using insufficient and imperfect information. Her commanders give her the ROE, and a set of guidelines for the escalation of force – a program, if you will. It will matter whether it’s a soldier or a computer that applies those guidelines, but not necessarily in the ways that you seem to imply. Both will apply them imperfectly, but in different ways. But it’s not unreasonable to wonder if putting remotely-operated units on the battlefield might, perhaps, allow for more restrictive Rules of Engagement. If that armored vehicle coming down the street weren’t actually full of soldiers but were instead a remotely operated drone, then the sentinel could accept a much higher degree of risk before pulling the trigger. Of course, by removing human aversion from the equation and reducing the risk to soldiers, it might lower the bar to employment of lethal force or the willingness to engage in conflicts. Many of these things cut both ways.

    ‘Programming consciences,’ as you put it, is something that every command structure or bureaucracy already does. It’s that larger moral calculus here that really fascinates me.

  14. James

    I think Jeremy has made an important point, as well as those commenting that the machine has a much greater ability to respond than a human.

    Stipulating, however, that there will be an instance at some point that it comes down to choosing between the driver and the pedestrian, I have two thoughts.

    First, in the scenario given (drive off the bridge or hit the kids), this strikes me as more of a problem for a theoretical conscience than a real one. Does anyone really expect a driver would choose to go off the bridge? I certainly don’t. It’s unfortunate for the kids and their family, but suicide in defense of others is not expected of a moral conscience.

    Secondly, we will resolve these future dilemmas as we resolve current ones: When the rules before the event are unclear, we have courts to figure it out after the fact, and give input to the next generation of rulesets.

  15. Timothy

    (C) The computer will slow the car to a school zone speed (or slower) as appropriate for the relative proximity of pedestrians. In other words, the moral decision will be to slow the f*** down in that situation while actually speeding up in lower risk areas. Also, the car will be vastly lighter and lower powered because the occupant protection is already part of automated driving, and the automated driver doesn’t need zero to sixty in 5 seconds to impress its girlfriend. (The automated driver will coordinate pacing through intersections and passing with little acceleration required.) Thus stopping distance will be comparatively trivial and, as with aircraft, the computer would be allowed to destroy the brakes if necessary to stop. Moreover, if the occupant isn’t wearing a seat belt to be ready for a sudden stop then the car won’t even move (or will safely pull over and stop if already moving).
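    Concretely, the speed rule could be as simple as a lookup keyed on how close the nearest pedestrian is. (The tiers below are invented for illustration; real values would fall out of the stopping-distance math.)

        # Target speed caps (mph) by distance to the nearest detected pedestrian.
        SPEED_TIERS = [
            (5.0, 5),     # within 5 m: walking pace
            (15.0, 15),   # within 15 m: school-zone speed
            (40.0, 25),   # within 40 m: residential speed
        ]

        def pedestrian_speed_cap(nearest_pedestrian_m, posted_limit_mph):
            """Pick the strictest cap that applies; otherwise the posted limit holds."""
            for max_distance_m, cap_mph in SPEED_TIERS:
                if nearest_pedestrian_m <= max_distance_m:
                    return min(cap_mph, posted_limit_mph)
            return posted_limit_mph   # no one nearby: drive the posted limit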

  16. Katherine

    I’m with Joshua and Jeremy. My first thought was “why is the car going 50 mph around pedestrians?” The first requirement for a viable autonomous car is that it be smart enough to *not* just cruise along at the posted speed limit, but to adjust to changing conditions at least as well as a human would.

    There’s actually quite a lot of existing law regarding the responsibilities of designers and manufacturers of life-critical systems. Combine that with existing law regarding the responsibilities of motor vehicle operators and you’ve got at least a framework for an autonomous car’s “judgment” engine.

  17. Josh

    Honestly, if I were driving that car, I would probably run off the bridge _and_ fail to miss the children.

  18. Sander Duivestein

    Nick,

    My two cents:

    1. “A pedestrian gets Googled”, http://youtu.be/a93xjkQFDyM

    2. “In March 2011, a Predator parked at the camp started its engine without any human direction, even though the ignition had been turned off and the fuel lines closed. Technicians concluded that a software bug had infected the “brains” of the drone, but never pinpointed the problem. “After that whole starting-itself incident, we were fairly wary of the aircraft and watched it pretty closely,” an unnamed Air Force squadron commander testified to an investigative board, according to a transcript. “Right now, I still think the software is not good.”

    http://www.washingtonpost.com/world/national-security/remote-us-base-at-core-of-secret-operations/2012/10/25/a26a9392-197a-11e2-bd10-5ff056538b7c_story.html

  19. Kelly Norton

    I’m slightly ashamed to say it, but as a programmer, my first thought was to put this in the user settings for the software (AKA punt). Of course, then the debate centers around what the default settings should be.

  20. Nick Post author

    re: “my first thought was to put this in the user settings for the software”

    I actually thought about that, as something along those lines strikes me as being in the realm of the possible.

    Say, before purchasing a driver-less vehicle, you have to take a psychological test, and then the car is programmed based on the test results. Your ethics are reflected in its operations. That also gets rid of the default settings problem.

  21. Daniel

    1) Stopping from 35 mph with full braking force takes very little distance (on the order of 10-15 yards) in a modern car. It takes the human brain longer to process the visual input than it takes the car to actually stop. Computer reaction time would be nearly instantaneous, provided proper programming.
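    A quick back-of-the-envelope check, assuming roughly 1 g of braking deceleration (which a modern car on dry pavement can manage):

        MPH_TO_MPS = 0.44704
        G = 9.81                        # m/s^2

        speed_mps = 35 * MPH_TO_MPS     # ~15.6 m/s
        decel = 1.0 * G                 # assume ~1 g of braking on dry pavement
        braking_distance_m = speed_mps ** 2 / (2 * decel)

        print(round(braking_distance_m, 1), "m")   # ~12.5 m, i.e. about 13-14 yards

    A computer’s ~0.1 s of latency adds only another 1.5 m or so of travel before the brakes bite, whereas a human’s second or more of reaction time adds 15 m or more.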

    Assuming that there was not enough time to avoid the children and that the bridge had only one lane and there was no room for the car to swerve past the children, the guard rail on a properly designed and constructed bridge would be able to stop the car from going over.

    2) Sensors in your car would be able to see the dog coming. Thermal, Radar, Sonar, whatever. Darkness and thick bushes aren’t a barrier to the right combination of sensor technology. Programming would allow the car to slow down as appropriate.

    Even if none of the car’s sensors could see the dog until it cleared the bushes, as soon as it did and the on-board computer calculated its path, the car would apply braking, which would be able to stop a car going at residential speeds.

    But let’s say it’s a narrow street with cars parked on either side, providing objects much harder for sensors to see through, and the dog darts between two cars directly into the path of your car. Sorry, Spot, but you’ve been run over. Sensor data from the car can show there was nothing to be done. Perhaps Spot’s owner should have closed the gate to their fence?

    3) Google has developed a neural network that allows a computer to recognize different objects. It’s a ways off, but your car also wouldn’t be starting from scratch. http://www.theverge.com/2012/6/26/3117956/google-x-object-recognition-research-youtube

    As said previously, a lot of split-second decisions made in vehicles are only split-second decisions because the human brain wasn’t made to process decisions while going 35 mph or faster. Our brain’s decision-making speed is no match for a computer connected to an array of sensors providing 360-degree coverage.

    The technology won’t be perfect and there will be sudden and unexpected scenarios, but there will be fewer than you seem to assume.

  22. BobN

    The obvious answer is to use a Microsoft self-driving car which would use Bing maps to make decisions. The car would just smoothly drive off the bridge and right onto the river or its bank which are level with the road surface.

  23. Nick Post author

    One theme I’m hearing from a lot of these comments is that, with enough data and analytical muscle, ethical ambiguity will evaporate like morning dew under a hot summer sun.

    We are nothing if not children of our time.

  24. Katherine

    Ethical ambiguity won’t evaporate, but it won’t manifest itself in such simplistic ways as this.

    Rather, what are the ethical responsibilities of the driver if/when the software fails? If the car deliberately decides to hit the kids, is the driver obligated to hurl himself off the bridge?

    (Assuming, of course, that the manual override even still exists. But if it doesn’t, that raises all kinds of interesting issues about whether outside actors — police, manufacturers, angry spouses — can make the car do something you don’t like.)
