The rapid advance of robotic weapons is beginning to stir some intriguing, and disturbing, questions about the future rules of war. The Register and New Scientist point to a presentation by John Canning, an engineer with the U.S. Naval Surface Warfare Center, who argues that we’ve come to an important juncture in the history of warfare, one at which military robots will increasingly have the ability to autonomously select and destroy targets without human guidance. “With regard to Armed Autonomous Systems,” Canning writes, “the critical issue is the ability for the weapon to discriminate a legal target.”
Up to now, he notes, there has been “a requirement to maintain an operator in the ‘weapons release’-loop to avoid the possibility of accidentally killing someone. [A human] operator is effectively ‘welded’ to each armed unmanned system for this purpose.” But this requirement for human control undermines the performance benefits and cost savings that can now be gained through “the employment of large numbers of armed unmanned systems.”
Canning argues that, when it comes to the use of sophisticated warbots, the military needs to establish clear rules of engagement. In particular, he recommends that machines be allowed to autonomously target only other machines: “let’s design our armed unmanned systems to automatically ID, target, and neutralize or destroy the weapons used by our enemies – not the people using the weapons. This gives us the possibility of disarming a threat force without the need for killing them … In those instances where we find it necessary to target the human (i.e. to disable the command structure), the armed unmanned systems can be remotely controllable by human operators who are ‘in-the-weapons-control-loop.'” The ability to switch from an autonomous machine-killing mode to a human-directed people-killing mode should, he says, be built into military robots.
The Register’s Lewis Page notes that there would seem to be practical obstacles to imposing such targeting restrictions: “It isn’t really made clear how [the] rule could really be applied in these cases. Doppler radar is going to have trouble distinguishing between attacking manned jets and incoming missiles, for instance. Even if the two could be swiftly and reliably differentiated, adding a human reaction and decision period in an air-defence scenario may not be a survivable thing to do.” It’s a fair point – how exactly do you program a warbot to, as Canning puts it, “discriminate a legal target”? – but as we look ahead to the prospect of ever more sophisticated autonomous weapons, the questions Canning is raising seem like very good ones to ask.