Rules for warbots

The rapid advance of robotic weapons is beginning to stir some intriguing, and disturbing, questions about the future rules of war. The Register and New Scientist point to a presentation by John Canning, an engineer with the U.S. Naval Surface Warfare Center, who argues that we’ve come to an important juncture in the history of warfare in which military robots will increasingly have the ability to autonomously select and destroy targets without human guidance. “With regard to Armed Autonomous Systems,” Canning writes, “the critical issue is the ability for the weapon to discriminate a legal target.”

Up to now, he notes, there has been “a requirement to maintain an operator in the ‘weapons release’-loop to avoid the possibility of accidentally killing someone. [A human] operator is effectively ‘welded’ to each armed unmanned system for this purpose.” But this requirement for human control undermines the performance benefits and cost savings that can now be gained through “the employment of large numbers of armed unmanned systems.”

Canning argues that, when it comes to the use of sophisticated warbots, the military needs to establish clear rules of engagement. In particular, he recommends that machines should only be able to autonomously target other machines: “let’s design our armed unmanned systems to automatically ID, target, and neutralize or destroy the weapons used by our enemies – not the people using the weapons. This gives us the possibility of disarming a threat force without the need for killing them … In those instances where we find it necessary to target the human (i.e. to disable the command structure), the armed unmanned systems can be remotely controllable by human operators who are ‘in-the-weapons-control-loop.’” The ability to switch from an autonomous machine-killing mode to a human-directed people-killing mode should be built into military robots, he says.

The Register’s Lewis Page notes that there would seem to be some practical obstacles to imposing the targeting restrictions: “It isn’t really made clear how [the] rule could really be applied in these cases. Doppler radar is going to have trouble distinguishing between attacking manned jets and incoming missiles, for instance. Even if the two could be swiftly and reliably differentiated, adding a human reaction and decision period in an air-defence scenario may not be a survivable thing to do.” It’s a fair point – how exactly do you program a warbot to, as Canning puts it, “discriminate a legal target”? – but as we look ahead to the prospect of ever more sophisticated autonomous weapons, the questions Canning is raising seem like very good ones to ask.
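To make the distinction concrete, Canning’s proposed rule of engagement can be sketched as a simple gating function. This is a purely illustrative sketch, not anything from Canning’s presentation; the type names and the `may_engage` function are invented, and the hard part – reliably classifying a track as machine or human in the first place, which is exactly Page’s objection – is assumed away:

```python
from dataclasses import dataclass
from enum import Enum, auto

class TargetKind(Enum):
    MATERIEL = auto()   # weapons, vehicles, incoming munitions
    HUMAN = auto()
    UNKNOWN = auto()    # classification failed or ambiguous

@dataclass
class Target:
    track_id: str
    kind: TargetKind

def may_engage(target: Target, human_authorized: bool) -> bool:
    """Canning's rule, caricatured: machines may be engaged
    autonomously; humans only with an operator in the loop."""
    if target.kind is TargetKind.MATERIEL:
        return True               # autonomous mode permitted
    if target.kind is TargetKind.HUMAN:
        return human_authorized   # operator must pull the trigger
    return False                  # never engage an unclassified track autonomously
```

The sketch makes the weak point visible: everything hinges on the classifier that assigns `TargetKind`, and an adversary – or a Doppler radar that can’t tell a manned jet from a missile – ends up in the `UNKNOWN` branch, where the rule gives no useful answer.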

5 thoughts on “Rules for warbots”

  1. IsaacGarcia

    “If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions.”

    The above quote is from Theodore Kaczynski’s “Unabomber Manifesto” [as quoted by Bill Joy, who was quoting Ray Kurzweil’s book “The Age of Spiritual Machines.”]

    A broader discussion of the topic can be found in Bill Joy’s April 2000 essay in Wired Magazine: “Why The Future Doesn’t Need Us.”

    Bill Joy goes on to write:

    “Perhaps it is always hard to see the bigger impact while you are in the vortex of a change. Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists; we have long been driven by the overarching desire to know that is the nature of science’s quest, not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own.”

    Scary Stuff.

  2. Mike

    Somehow I suspect that John Canning is asking the wrong questions.

    Why are we wasting so many man-hours and resources on the development of armed autonomous systems? When we watch and read of today’s events at Virginia Tech, and reflect that some humans can kill indiscriminately, what hope have we of controlling robots?

    In his fiction, Asimov provided us with the seminal Laws of Robotics; Canning and his colleagues should be attempting to apply those laws, not create new ones.

    (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    (2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

    (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Recent friendly-fire “accidents” have shown how difficult it is for humans to apply rules of engagement; it is folly of the highest order to believe that robots will fare any better.

    When will we learn?

  3. Anthony Cowley

    The scary part about working with military applications of robotics is actually the subtlety of our relationship with our technology. I think many people would gain a better understanding of the situation if they understood that there is no such thing as a robot. That is, there is no point at which something becomes a robot. If a person launches a missile that automatically tracks a target, yet the missile strikes the wrong target, whose fault is it? It could be said that a robotic missile chose to kill an innocent bystander.

    I think the most practical and realistic viewpoint is to say that a person releasing an autonomous warbot is, effectively, pulling the trigger. Robots don’t kill people; people kill people. If there is outrage about machines killing people, then we should spend a lot more time worrying about our own inclinations to unleash these weapons than about sets of moral codes for the machines to follow.

  4. Seth Finkelstein

    Between carpet-bombing and landmines, I can’t get worked up over this issue. It’s purely playing to the audience’s Frankenstein complex – Oh no, the machines will go bad and attack us!

    Mines are already machines which can be said to make their own “decisions” to kill based on very simple algorithms – usually roughly “Is there somebody near me?”. More sophisticated algorithms won’t make anything worse.

    Sure, someday there’ll be something like the hound in Fahrenheit 451, which tracks someone by DNA fragments and tries to kill them. And a clever person will decoy the machine and get it to kill someone else. That’s war.

  5. seamusmccauley

    A rule that warbots can only target other warbots amounts to a decision to squander a decisive technical advantage on the battlefield. It seems unlikely that military strategists will agree to limit the potential effectiveness of their forces in this way, much as it would have been unlikely in an earlier age for nations with machine guns to agree that the guns could only be deployed in the event that they were attacked by an army similarly armed.

Comments are closed.