The artificial morality of the robot warrior

Great strides have been made in recent years in the development of combat robots. The US military has deployed ground robots, aerial robots, marine robots, stationary robots, and (reportedly) space robots. The robots are used for both reconnaissance and fighting, and further rapid advances in their design and capabilities can be expected in the years ahead. One consequence of these advances is that robots will gain more autonomy, which means they will have to act in uncertain situations without direct human instruction. That raises a large and thorny challenge: how do you program a robot to be an ethical warrior?

The Times of London this week pointed to an extensive report on military robots, titled Autonomous Military Robotics: Risk, Ethics, and Design, that was prepared in December 2008 for the US Navy by the Ethics & Emerging Technologies Group at California Polytechnic State University. In addition to providing a useful overview of the state of the art in military robots, the report offers a fascinating examination of how software writers might go about programming what the authors call “artificial morality” into machines.

The authors explain why it’s imperative that we begin to explore robot morality:

Perhaps robot ethics has not received the attention it needs, at least in the US, given a common misconception that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated, harking back to a time when computers were simpler and their programs could be written and understood by a single person. Now, programs with millions of lines of code are written by teams of programmers, none of whom knows the entire program; hence, no individual can predict the effect of a given command with absolute certainty, since portions of large programs may interact in unexpected, untested ways … Furthermore, increasing complexity may lead to emergent behaviors, i.e., behaviors not programmed but arising out of sheer complexity.

Related major research efforts also are being devoted to enabling robots to learn from experience, raising the question of whether we can predict with reasonable certainty what the robot will learn. The answer seems to be negative, since if we could predict that, we would simply program the robot in the first place, instead of requiring learning. Learning may enable the robot to respond to novel situations, given the impracticality and impossibility of predicting all eventualities on the designer’s part. Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments, rather than the carefully‐structured domain of a factory.

The authors also note that “military robotics have already failed on the battlefield, creating concerns with their deployment (and perhaps even more concern for more advanced, complicated systems) that ought to be addressed before speculation, incomplete information, and hype fill the gap in public dialogue.” They point to a mysterious 2008 incident when “several TALON SWORDS units—mobile robots armed with machine guns—in Iraq were reported to be grounded for reasons not fully disclosed, though early reports claim the robots, without being commanded to, trained their guns on ‘friendly’ soldiers; and later reports denied this account but admitted there had been malfunctions during the development and testing phase prior to deployment.” They also report that in 2007 “a semi‐autonomous robotic cannon deployed by the South African army malfunctioned, killing nine ‘friendly’ soldiers and wounding 14 others.” These failures, along with some spectacular failures of robotic systems in civilian applications, raise “a concern that we … may not be able to halt some (potentially‐fatal) chain of events caused by autonomous military systems that process information and can act at speeds incomprehensible to us, e.g., with high‐speed unmanned aerial vehicles.”

In the section of the report titled “Programming Morality,” the authors describe some of the challenges of creating the software that will ensure that robotic warriors act ethically on the battlefield:

Engineers are very good at building systems to satisfy clear task specifications, but there is no clear task specification for general moral behavior, nor is there a single answer to the question of whose morality or what morality should be implemented in AI …

The choices available to systems that possess a degree of autonomy in their activity and in the contexts within which they operate, and greater sensitivity to the moral factors impinging upon the course of actions available to them, will eventually outstrip the capacities of any simple control architecture. Sophisticated robots will require a kind of functional morality, such that the machines themselves have the capacity for assessing and responding to moral considerations. However, the engineers that design functionally moral robots confront many constraints due to the limits of present‐day technology. Furthermore, any approach to building machines capable of making moral decisions will have to be assessed in light of the feasibility of implementing the theory as a computer program.

After reviewing a number of possible approaches to programming a moral sense into machines, the authors recommend an approach that combines the imposition of “top-down” rules with the development of a capacity for “bottom-up” learning:

A top‐down approach would program rules into the robot and expect the robot to simply obey those rules without change or flexibility. The downside … is that such rigidity can easily lead to bad consequences when events and situations unforeseen or insufficiently imagined by the programmers occur, causing the robot to perform badly or simply do horrible things, precisely because it is rule‐bound.

A bottom‐up approach, on the other hand, depends on robust machine learning: like a child, a robot is placed into variegated situations and is expected to learn through trial and error (and feedback) what is and is not appropriate to do. General, universal rules are eschewed. But this too becomes problematic, especially as the robot is introduced to novel situations: it cannot fall back on any rules to guide it beyond the ones it has amassed from its own experience, and if those are insufficient, then it will likely perform poorly as well.

As a result, we defend a hybrid architecture as the preferred model for constructing ethical autonomous robots. Some top‐down rules are combined with machine learning to best approximate the ways in which humans actually gain ethical expertise … The challenge for the military will reside in preventing the development of lethal robotic systems from outstripping the ability of engineers to assure the safety of these systems.
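To make the hybrid idea concrete, here is a minimal sketch, in Python, of what such an architecture might look like in caricature: a set of inviolable top-down rules filters candidate actions, and a bottom-up scorer, nudged by feedback, chooses among whatever survives. The toy domain and every name in it (Action, TOP_DOWN_RULES, LearnedScorer, choose_action) are illustrative assumptions, not anything specified in the report.

    # Illustrative sketch of a hybrid "top-down rules + bottom-up learning" architecture.
    # Everything here is a hypothetical toy, not code from (or endorsed by) the report.

    from dataclasses import dataclass, field
    from typing import Callable, Optional

    @dataclass(frozen=True)
    class Action:
        name: str
        target_is_combatant: bool      # toy stand-in for target discrimination
        expected_collateral: float     # toy stand-in for proportionality (0.0 to 1.0)

    # --- Top-down layer: explicit rules the robot may never override ---
    Rule = Callable[[Action], bool]    # a rule returns True if the action is permissible

    TOP_DOWN_RULES: list[Rule] = [
        lambda a: a.target_is_combatant,        # never target non-combatants
        lambda a: a.expected_collateral < 0.2,  # crude proportionality threshold
    ]

    def permissible(action: Action) -> bool:
        """An action passes only if every top-down rule allows it."""
        return all(rule(action) for rule in TOP_DOWN_RULES)

    # --- Bottom-up layer: preferences learned from feedback ---
    @dataclass
    class LearnedScorer:
        """Keeps a running score per action name, nudged toward reward feedback."""
        scores: dict[str, float] = field(default_factory=dict)
        learning_rate: float = 0.1

        def score(self, action: Action) -> float:
            return self.scores.get(action.name, 0.0)

        def update(self, action: Action, reward: float) -> None:
            old = self.scores.get(action.name, 0.0)
            self.scores[action.name] = old + self.learning_rate * (reward - old)

    # --- Hybrid decision procedure ---
    def choose_action(candidates: list[Action], scorer: LearnedScorer) -> Optional[Action]:
        """Filter by the rules first, then pick the highest-scoring survivor."""
        allowed = [a for a in candidates if permissible(a)]
        if not allowed:
            return None  # no permissible action: hold fire / defer to a human
        return max(allowed, key=scorer.score)

    if __name__ == "__main__":
        scorer = LearnedScorer()
        options = [
            Action("engage_vehicle", target_is_combatant=True, expected_collateral=0.1),
            Action("engage_building", target_is_combatant=True, expected_collateral=0.6),
            Action("engage_crowd", target_is_combatant=False, expected_collateral=0.9),
        ]
        chosen = choose_action(options, scorer)
        print("chosen:", chosen.name if chosen else "none (defer to human)")
        if chosen:
            scorer.update(chosen, reward=1.0)  # feedback from a human reviewer

The division of labor is the point of the sketch: the learned layer can only reorder actions the rule layer has already permitted, and can never promote one the rules have vetoed, which is roughly what the authors mean by combining top-down constraint with bottom-up learning.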

The development of autonomous robot warriors stirs concerns beyond just safety, the authors acknowledge:

Some have [suggested that] the rise of such autonomous robots creates risks that go beyond specific harms to societal and cultural impacts. For instance, is there a risk of (perhaps fatally?) affronting human dignity or cherished traditions (religious, cultural, or otherwise) in allowing the existence of robots that make ethical decisions? Do we ‘cross a threshold’ in abrogating this level of responsibility to machines, in a way that will inevitably lead to some catastrophic outcome? Without more detail and reason for worry, such worries as this appear to commit the ‘slippery slope’ fallacy. But there is worry that as robots become ‘quasi‐persons,’ even under a ‘slave morality’, there will be pressure to eventually make them into full‐fledged Kantian‐autonomous persons, with all the risks that entails. What seems certain is that the rise of autonomous robots, if mishandled, will cause popular shock and cultural upheaval, especially if they are introduced suddenly and/or have some disastrous safety failures early on.

The good news, according to the authors, is that emotionless machines have certain built-in ethical advantages over human warriors. “Robots,” they write, “would be unaffected by the emotions, adrenaline, and stress that cause soldiers to overreact or deliberately overstep the Rules of Engagement and commit atrocities, that is to say, war crimes. We would no longer read (as many) news reports about our own soldiers brutalizing enemy combatants or foreign civilians to avenge the deaths of their brothers in arms—unlawful actions that carry a significant political cost.” Of course, this raises deeper issues, which the authors don’t address: Can ethics be cleanly disassociated from emotion? Would the programming of morality into robots eventually lead, through bottom-up learning, to the emergence of a capacity for emotion as well? And would, at that point, the robots have a capacity not just for moral action but for moral choice – with all the messiness that goes with it?

4 thoughts on “The artificial morality of the robot warrior”

  1. Seth Finkelstein

    “to the emergence of a capacity for emotion as well”

    Hmm … what is emotion?

    The following is geeky, yet I think it illuminates the issue: On the series Star Trek: The Next Generation, why was the android character Lt. Data described as having “no emotions” (besides the obvious hackery that robots or androids don’t have them)? Yet he readily displayed likes, dislikes, initiative, curiosity, a moderate amount of ambition, and so on. Arguably pride and a bit of self-pity too. Maybe he didn’t have rage, lust, or humor, but that’s about spectrum, not entire absence.

    If you have autonomy and action, doesn’t that at some point add up to desire?

    By the way, if a robot is built with the capacity for moral action and moral choice – so what? (in a non-technical sense). It’s not like there’s a notable lack of opportunity. And trying to find the balance in a complicated weapon between going off too readily, and not working at all, is a problem dating back to the first mechanical trap.

  2. David Evans

    An early description of autonomous kill-capable weapons was, revealingly, “fourth generation land mines”. This was of course dropped when the international treaty on land mines was introduced. Last February at a RUSI/BCS event, ethicists, military lawyers, operational personnel, procurement, engineers and scientists were brought together to look at this. It was fascinating to see the penny drop across some of these groups, as some of the implications became clear. The conclusion was that unless you can program the rules of engagement into a robot, or you can guarantee that all those in range are combatants, then you may be committing a war crime simply by deploying the weapon. That was a major shock for the procurement people, who realised they may be spending money on developing weapon systems it would be illegal to use.

  3. LaneLester

    Given our lack of success in “programming” ethical human warriors, the prospect is not very good for robots. I’m sure the military views it as at least one advantage that robots will have no compunction about performing any acts at all.

    Lane

    “No human being has the right, under any circumstances, to initiate force against another human being, nor to advocate or delegate its initiation.”

  4. Tom Lord

    Ethics and morality seem to be largely deep features of human physiology, with a little bit of culture layered on. It’s not very meaningful to talk about programming these things. They are busy, instead, redefining the terms.

    -t