Lethal autonomous robots are coming

[Image: Battle of Thermopylae]

In Geneva today, United Nations special rapporteur Christof Heyns presented his report on lethal autonomous robots, or LARs, to the Human Rights Council. You can download the full report, which is methodical, dispassionate, and chilling, here.

LARs, which Heyns defines as “weapon systems that, once activated, can select and engage targets without further human intervention,” have not yet been deployed in wars or other conflicts, but the technology to produce them is very much in reach. It’s just a matter of taking the human decision-maker out of the hurly-burly of the immediate “kill loop” and leaving the firing decision to algorithms (i.e., abstract protocols scripted by humans in calmer circumstances; a toy sketch of that distinction follows the excerpt below). Governments with the capability to field such weapons “indicate that their use during armed conflict or elsewhere is not currently envisioned,” but history, as Heyns points out, suggests that such assurances are subject to revision without warning:

It should be recalled that aeroplanes and drones were first used in armed conflict for surveillance purposes only, and offensive use was ruled out because of the anticipated adverse consequences. Subsequent experience shows that when technology that provides a perceived advantage over an adversary is available, initial intentions are often cast aside. Likewise, military technology is easily transferred into the civilian sphere. If the international legal framework has to be reinforced against the pressures of the future, this must be done while it is still possible.
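To make the idea of leaving the firing decision to a pre-scripted protocol concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical and invented for illustration (the names, the threat score, the threshold); it shows only the structural difference between a remote-controlled system, where a person confirms each engagement, and a LAR, where the protocol scripted in advance decides alone once activated.

```python
# Toy sketch only. All names, scores, and thresholds are hypothetical,
# invented for illustration; no real weapon system works this way as written.
from dataclasses import dataclass

@dataclass
class Contact:
    identifier: str
    threat_score: float  # assumed to come from upstream sensors/classifiers

ENGAGE_THRESHOLD = 0.9  # arbitrary value chosen for the sketch

def human_in_the_loop(contact: Contact) -> bool:
    """Remote-controlled model: the algorithm only recommends;
    a human makes the actual firing decision."""
    if contact.threat_score >= ENGAGE_THRESHOLD:
        answer = input(f"Engage {contact.identifier}? [y/N] ")
        return answer.strip().lower() == "y"
    return False

def autonomous(contact: Contact) -> bool:
    """LAR model: once activated, the protocol scripted in advance
    selects and engages with no further human intervention."""
    return contact.threat_score >= ENGAGE_THRESHOLD

if __name__ == "__main__":
    print("autonomous decision:", autonomous(Contact("track-42", 0.93)))
```

The targeting logic in the two functions is identical; all that changes is where the final decision lives. That is part of why, as the report notes below, bright lines are so hard to draw.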

Another complicating factor, and one that makes the issue of LARs even more pressing, is that “the nature of robotic development generally makes it a difficult subject of regulation”:

Bright lines are difficult to find. Robotic development is incremental in nature. Furthermore, there is significant continuity between military and non-military technologies. The same robotic platforms can have civilian as well as military applications, and can be deployed for non-lethal purposes (e.g. to defuse improvised explosive devices) or be equipped with lethal capability (i.e. LARs). Moreover, LARs typically have a composite nature and are combinations of underlying technologies with multiple purposes.

The importance of the free pursuit of scientific study is a powerful disincentive to regulate research and development in this area. Yet “technology creep” in this area may over time and almost unnoticeably result in a situation which presents grave dangers to core human values and to the international security system.

The UN report makes it clear that there are practical advantages as well as drawbacks to using LARs in place of soldiers and airmen:

Robots may in some respects serve humanitarian purposes. While the current emergence of unmanned systems may be related to the desire on the part of States not to become entangled in the complexities of capture, future generations of robots may be able to employ less lethal force, and thus cause fewer unnecessary deaths. Technology can offer creative alternatives to lethality, for instance by immobilizing or disarming the target. Robots can be programmed to leave a digital trail, which potentially allows better scrutiny of their actions than is often the case with soldiers and could therefore in that sense enhance accountability.

The progression from remote controlled systems to LARs, for its part, is driven by a number of other considerations. Perhaps foremost is the fact that, given the increased pace of warfare, humans have in some respects become the weakest link in the military arsenal and are thus being taken out of the decision-making loop. The reaction time of autonomous systems far exceeds that of human beings, especially if the speed of remote-controlled systems is further slowed down through the inevitable time-lag of global communications. States also have incentives to develop LARs to enable them to continue with operations even if communication links have been broken off behind enemy lines.

LARs will not be susceptible to some of the human shortcomings that may undermine the protection of life. Typically they would not act out of revenge, panic, anger, spite, prejudice or fear. Moreover, unless specifically programmed to do so, robots would not cause intentional suffering on civilian populations, for example through torture. Robots also do not rape.

Yet robots have limitations in other respects as compared to humans. Armed conflict and IHL often require human judgement, common sense, appreciation of the larger picture, understanding of the intentions behind people’s actions, and understanding of values and anticipation of the direction in which events are unfolding. Decisions over life and death in armed conflict may require compassion and intuition. Humans – while they are fallible – at least might possess these qualities, whereas robots definitely do not. While robots are especially effective at dealing with quantitative issues, they have limited abilities to make the qualitative assessments that are often called for when dealing with human life. Machine calculations are rendered difficult by some of the contradictions often underlying battlefield choices. A further concern relates to the ability of robots to distinguish legal from illegal orders.

While LARs may thus in some ways be able to make certain assessments more accurately and faster than humans, they are in other ways more limited, often because they have restricted abilities to interpret context and to make value-based calculations.

Beyond the obvious moral and technical questions, one of the greatest and most insidious risks of autonomous killer robots, Heyns writes, is that they can erode the “built-in constraints that humans have against going to war,” notably “our aversion to getting killed, losing loved ones, or having to kill other people”:

Due to the low or lowered human costs of armed conflict to States with LARs in their arsenals, the national public may over time become increasingly disengaged and leave the decision to use force as a largely financial or diplomatic question for the State, leading to the “normalization” of armed conflict. LARs may thus lower the threshold for States for going to war or otherwise using lethal force, resulting in armed conflict no longer being a measure of last resort.

It seems clear that the time to think about lethal autonomous robots is now. Writes Heyns: “This report is a call for pause, to allow serious and meaningful international engagement with this issue.” Once LARs are deployed, he implies, almost certainly correctly, it will probably be too late to restrict their use. So here we find ourselves in the midst of a case study, with extraordinarily high stakes, about whether or not society is capable of weighing the costs and benefits of a particular technology before it goes into use and of choosing a course rather than having a course imposed on it.

11 thoughts on “Lethal autonomous robots are coming”

  1. Max

    Dunno. Does anybody have any serious doubt that LARs will happen? The logic behind the development will, obviously, be “if we don’t do it, then Kim or Ahmadinejad or whoever will do it, and we’ll be unprotected!” And the problem is that the logic will be right. Any dictator worth his salt will develop any and all weapons [s]he possibly can. As well [s]he should. I don’t see how this is different in principle from nuclear and thermonuclear weapons, which, by the way, have even less discretion in whom they kill (by several orders of magnitude) than a conventionally armed LAR would. This is just another round of the perpetual arms race, and does not even seem all that scary, really.

    What is scary, actually, is a more general trend in which technology is a lever that, over time, becomes a) more powerful (an atomic bomb does more damage than a club), and b) more easily accessible (50 years ago only two superpowers were able to build atomic bombs; now it’s any somewhat advanced state; fifty years into the future, Al Qaeda’s spiritual successors will be baking them in the caves of Afghanistan or wherever). So the question then is whether humanity will figure out a way to keep the wackos away from the club that’s long enough to kill everybody…

  2. MK

    Max: I understand your logic that, as a human race, we will always be pursuing “the next big breakthrough,” and this does apply to the arms race. However, the atomic bomb, even though it is extremely destructive and does not discern between intended targets and innocent bystanders, is still controlled by human beings. And even though we humans are definitely flawed, biased, and bad decision-makers, we still have a sense of guilt and humanity, and we try to preserve life when we can (at least most humans do). So even though we are far from perfect and still learning, we aren’t that bad (most of the time).

    So my question to you is this: explain to me why a LAR is not any scarier than the atom bomb. We still control the atom bomb. It can’t go find some place and decide to go off by itself. It can’t make decisions on its own. It can’t harm anyone on its own. And it is only as dangerous as the people who control it. (Which, granted, taking into account who has them at this point and who will in the future, is frightening enough as it is….)

    But any “thing” that has the capability to move on its own and make decisions on its own, and that also carries the ability to kill people, is FAR more frightening to me than the atom bomb.

    Just my two cents…..

  3. yt75

    When you look at those Apache videos (where the helicopters are really high, I think), with the dialogue between the guy at the trigger and some remote guy giving “please engage” orders, both looking at the same video feed, you really sense the “dilution” of the decision to kill.
    But energy and maintenance remain the key constraint for these LARs, and energy is also, in fact, the basic reason for current wars, with more or less totally brainwashed “citizens.”
    How many Americans know that US oil production peaked in 1970, and that this, much more than the “embargo” story, was the key reason for the first oil shock? 1%? 0.1%? 0.01%?

  4. max

    MK: the bomb is scarier because, if deployed, it kills far more people than a stupid drone with a machine gun. If I remember correctly, a decent-sized (250-megaton, let’s say) warhead, detonated high enough, turns a country the size of Poland into a glassy parking lot. Yeah, it is controlled by humans, sure. But in my humble opinion, LARs with conventional weapons are just not scary. Now, if we deploy LARs with [thermo]nuclear warheads, that would be scary, yes.

  5. david martin

    If all goes as planned, I think it is inevitable. Review the parameters before the LARs are released onto the battlespace, and pass laws similar to those governing merchantability. My concern is a virus in the code that could shut off external communication and let the LAR continue unimpeded.

  6. Daniel C.

    The erosion of the reasons for not going to war is a particularly chilling thought, and an accurate one, I believe. I imagine these LARs might be very effective for escalating tense situations between hostile bordering nations.

    The only amendment I’d suggest is that it will most likely be too late to restrict their use not once they are deployed, but once they are developed.
