In Geneva today, United Nations special rapporteur Christof Heyns presented his report on lethal autonomous robots, or LARs, to the Human Rights Council. You can download the full report, which is methodical, dispassionate, and chilling, here.
LARs, which Heyns defines as “weapon systems that, once activated, can select and engage targets without further human intervention,” have not yet been deployed in wars or other conflicts, but the technology to produce them is very much within reach. It’s just a matter of taking the human decision-maker out of the hurly-burly of the immediate “kill loop” and leaving the firing decision to algorithms (i.e., abstract protocols scripted by humans in calmer circumstances). Governments with the capability to field such weapons “indicate that their use during armed conflict or elsewhere is not currently envisioned,” but history, as Heyns points out, suggests that such assurances are subject to revision without warning:
It should be recalled that aeroplanes and drones were first used in armed conflict for surveillance purposes only, and offensive use was ruled out because of the anticipated adverse consequences. Subsequent experience shows that when technology that provides a perceived advantage over an adversary is available, initial intentions are often cast aside. Likewise, military technology is easily transferred into the civilian sphere. If the international legal framework has to be reinforced against the pressures of the future, this must be done while it is still possible.
Another complicating factor, and one that makes the issue of LARs even more pressing, is that “the nature of robotic development generally makes it a difficult subject of regulation”:
Bright lines are difficult to find. Robotic development is incremental in nature. Furthermore, there is significant continuity between military and non-military technologies. The same robotic platforms can have civilian as well as military applications, and can be deployed for non-lethal purposes (e.g. to defuse improvised explosive devices) or be equipped with lethal capability (i.e. LARs). Moreover, LARs typically have a composite nature and are combinations of underlying technologies with multiple purposes.
The importance of the free pursuit of scientific study is a powerful disincentive to regulate research and development in this area. Yet “technology creep” in this area may over time and almost unnoticeably result in a situation which presents grave dangers to core human values and to the international security system.
The UN report makes it clear that there are practical advantages as well as drawbacks to using LARs in place of soldiers and airmen:
Robots may in some respects serve humanitarian purposes. While the current emergence of unmanned systems may be related to the desire on the part of States not to become entangled in the complexities of capture, future generations of robots may be able to employ less lethal force, and thus cause fewer unnecessary deaths. Technology can offer creative alternatives to lethality, for instance by immobilizing or disarming the target. Robots can be programmed to leave a digital trail, which potentially allows better scrutiny of their actions than is often the case with soldiers and could therefore in that sense enhance accountability.
The progression from remote controlled systems to LARs, for its part, is driven by a number of other considerations. Perhaps foremost is the fact that, given the increased pace of warfare, humans have in some respects become the weakest link in the military arsenal and are thus being taken out of the decision-making loop. The reaction time of autonomous systems far exceeds that of human beings, especially if the speed of remote-controlled systems is further slowed down through the inevitable time-lag of global communications. States also have incentives to develop LARs to enable them to continue with operations even if communication links have been broken off behind enemy lines.
LARs will not be susceptible to some of the human shortcomings that may undermine the protection of life. Typically they would not act out of revenge, panic, anger, spite, prejudice or fear. Moreover, unless specifically programmed to do so, robots would not cause intentional suffering on civilian populations, for example through torture. Robots also do not rape.
Yet robots have limitations in other respects as compared to humans. Armed conflict and IHL often require human judgement, common sense, appreciation of the larger picture, understanding of the intentions behind people’s actions, and understanding of values and anticipation of the direction in which events are unfolding. Decisions over life and death in armed conflict may require compassion and intuition. Humans – while they are fallible – at least might possess these qualities, whereas robots definitely do not. While robots are especially effective at dealing with quantitative issues, they have limited abilities to make the qualitative assessments that are often called for when dealing with human life. Machine calculations are rendered difficult by some of the contradictions often underlying battlefield choices. A further concern relates to the ability of robots to distinguish legal from illegal orders.
While LARs may thus in some ways be able to make certain assessments more accurately and faster than humans, they are in other ways more limited, often because they have restricted abilities to interpret context and to make value-based calculations.
Beyond the obvious moral and technical questions, one of the greatest and most insidious risks of autonomous killer robots, Heyns writes, is that they can erode the “built-in constraints that humans have against going to war,” notably “our aversion to getting killed, losing loved ones, or having to kill other people”:
Due to the low or lowered human costs of armed conflict to States with LARs in their arsenals, the national public may over time become increasingly disengaged and leave the decision to use force as a largely financial or diplomatic question for the State, leading to the “normalization” of armed conflict. LARs may thus lower the threshold for States for going to war or otherwise using lethal force, resulting in armed conflict no longer being a measure of last resort.
It seems clear that the time to think about lethal autonomous robots is now. Writes Heyns: “This report is a call for pause, to allow serious and meaningful international engagement with this issue.” Once LARs are deployed, he implies, almost certainly correctly, it will be too late to restrict their use. So here we find ourselves in the midst of a case study, with extraordinarily high stakes, about whether society is capable of weighing the costs and benefits of a particular technology before it goes into use, and of choosing a course rather than having a course imposed on it.