Programming the moral robot


The U.S. Navy’s Office of Naval Research is funding an effort, by scientists at Tufts, Brown, and RPI, to develop military robots capable of moral reasoning:

The ONR-funded project will first isolate essential elements of human moral competence through theoretical and empirical research. Based on the results, the team will develop formal frameworks for modeling human-level moral reasoning that can be verified. Next, it will implement corresponding mechanisms for moral competence in a computational architecture.

That sounds straightforward. But hidden in those three short sentences are, so far as I can make out, at least eight philosophical challenges of extraordinary complexity:

  • Defining “human moral competence”
  • Boiling that competence down to a set of isolated “essential elements”
  • Designing a program of “theoretical and empirical research” that would lead to the identification of those elements
  • Developing mathematical frameworks for explaining moral reasoning
  • Translating those frameworks into formal models of moral reasoning
  • “Verifying” the outputs of those models as truthful
  • Embedding moral reasoning into computer algorithms
  • Using those algorithms to control a robot operating autonomously in the world
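
To make items four through six concrete: the simplest kind of “formal framework” on offer is a fragment of deontic logic. The two rules below are a deliberately minimal sketch, with hypothetical predicates of my own invention, not anything from the ONR project:

    O(\lnot \mathrm{Engage}(r, t)) \leftarrow \mathrm{Noncombatant}(t)
    P(\mathrm{Engage}(r, t)) \leftarrow \mathrm{Combatant}(t) \land \lnot \mathrm{CiviliansNearby}(t)

Here O is the obligation operator and P the permission operator; “verifying” such a model means proving that the robot’s controller never selects an action the O-rules forbid. Notice that every predicate on the right-hand side quietly assumes the first three challenges have already been solved.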

Barring the negotiation of a worldwide ban, which seems unlikely for all sorts of reasons, military robots that make life-or-death decisions about human beings are coming (if they’re not already here). So efforts to program morality into robots are themselves now morally necessary. It’s highly unlikely, though, that the efforts will be successful — unless, that is, we choose to cheat on the definition of success.

Selmer Bringsjord, head of the Cognitive Science Department at RPI, and Naveen Govindarajulu, a postdoctoral researcher working with him, are focused on how to engineer ethics into a robot so that moral logic is intrinsic to these artificial beings. Since the scientific community has yet to establish what constitutes morality in humans, the challenge for Bringsjord and his team is severe.

We’re trying to reverse-engineer something that wasn’t engineered in the first place.

7 thoughts on “Programming the moral robot”

  1. Daniel Cole

    And yet we’re still getting bludgeoned with the truism that philosophy is useless to science at best, and perniciously misleading at worst. If Hume could’ve just tried a little harder to nail that perfect algorithm, then the Treatise wouldn’t have needed to be so long and complicated. Morality is probably really simple and elegant, right? We probably just need to cut through the noise and all of that useless ambiguity.

    I’d love to think that it will be fun to watch engineers flounder trying to derive “ought” from “is” and the like, but since we’re talking about military robots and deadlines here, my amusement is giving way to a bout of nausea. Wait until they believe they’ve succeeded, then we’ll see what’s pernicious. At least philosophy didn’t instantiate all of its worst mistakes in autonomous killing machines.

    Wait a minute…I think I backed myself into a corner here : )

  2. yt75

    Technology is a monstrous book, always evolving, but always dead at the same time.

    Nietzsche wrote:
    “These Englishmen are no race of philosophers. Bacon signifies an attack on the spirit of philosophy in general; Hobbes, Hume, and Locke have been a debasement and a devaluing of the idea of a ‘philosopher’ for more than a century. Kant raised himself and rose up in reaction against Hume. It was Locke of whom Schelling was entitled to say, ‘Je méprise Locke’ [I despise Locke]. In the struggle with the English mechanistic dumbing down of the world, Hegel and Schopenhauer (along with Goethe) were unanimous – both of these hostile fraternal geniuses in philosophy, who moved away from each other towards opposite poles of the German spirit and in the process wronged each other, as only brothers can. What’s lacking in England, and what has always been missing, that’s something that semi-actor and rhetorician Carlyle understood well enough, the tasteless muddle-headed Carlyle, who tried to conceal under his passionate grimaces what he understood about himself, that is, what was lacking in Carlyle – a real power of spirituality, a real profundity of spiritual insight, in short, philosophy.”

    One can see these days how right he was, judging by our regressive and boring “technology thinkers.”

  3. Brutus

    Is a moral robot another attempt at a labor-saving device, which predictably creates more problems than it solves? Where is the strategic demand for such a beast, and why on earth would anyone trust it with life-and-death decisions?

    Decision-making in a computational framework generally involves a complicated decision tree: a series of yes/no questions leading to some simplified outcome, with pathways that can fall into irresolvable recursion (a toy sketch follows below). It’s not anything close to AI, but it’s a step in that direction (though I believe we’re still light-years away from engineering such a thing). Humans, in contrast, don’t think according to linear designs, and they manage recursion pretty well under normal circumstances.
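
    A deliberately toy illustration of such a tree, with every predicate a hypothetical placeholder rather than anything from the ONR project, showing how completely the judgement collapses into a handful of booleans:

        # A toy yes/no decision tree of the kind described above. Every
        # predicate is a hypothetical placeholder; the point is how little
        # of a moral judgement survives the flattening into booleans.

        def engagement_decision(target_identified: bool,
                                is_combatant: bool,
                                civilians_nearby: bool) -> str:
            """Walk a fixed chain of yes/no questions to a single label."""
            if not target_identified:
                return "hold"    # no positive identification
            if not is_combatant:
                return "hold"    # never engage a non-combatant
            if civilians_nearby:
                return "defer"   # kick the hard case up to a human
            return "engage"      # every branch ends in one crude outcome

        print(engagement_decision(True, True, True))   # -> defer

    Everything morally interesting (who counts as a combatant, what “nearby” means) has to be decided before the tree is ever reached.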

    On a philosophical level, we’re remaking the world for machines and seeking to become them, as prosthetics and mind/computer interfaces evolve to the point that we become cyborgs. Why aren’t we content to be merely human? Why the drive to become super- or posthuman?

  4. Nick Post author

    Why the drive to become super- or posthuman?

    “Human beings are ashamed to have been born instead of made.” – Günther Anders, 1956

    That’s the most concise explanation of contemporary Silicon Valley that I’ve come across.

  5. Seth

    Nick, you’re showing exactly the divide I keep talking about, but which I approach from the tech-positive side. Why be limited to what you were *born* with, when you can *make* something that exceeds it, including yourself? Do shoes mean you’re ashamed of your feet? Do eyeglasses mean you’re ashamed of your eyes? Yeah, Kurzweil and co. take this to an extreme – but the other extreme is living in caves and eating raw meat, because anything else is technology (is fire OK? knives? spears? – why aren’t we content to be hunters with our bare hands?)

  6. Faza (TCM)

    Even before diving into the problems you’ve outlined, Nick (with which I agree), I see one fundamental issue that makes the matter a non-starter: the lack of any definition of “morality” itself.

    What are “mores,” at the end of the day, other than customs particular to a specific place, time, and culture? To even begin to be “morally competent,” one must first at least be aware of the norms determining value judgements. These will always be somewhat arbitrary. For instance, what – if anything – would we propose as a differentiating factor for killing combatants vs. non-combatants? On a personal level, a “better them than me” mode of thinking is applicable, but not so when weighing the death of a human against the destruction of a machine (and if ever a military autonomous machine starts thinking along those lines, we’ll be in serious trouble – to say nothing of the glorious failure of the aforementioned project).

    As with a lot of things, I cannot help thinking that the only way to do this is to implement some manner of “judgement key” and accept the consequences (and responsibility) when it is applied. Such a machine will not be “moral” in any realistic sense (which – in itself – seems impossible to achieve at present), but will rather be the expression of the “morality” of its creators and directors – as much as morality is possible in the context of military application.

  7. Timothy

    “Human beings are ashamed to have been born instead of made.” – Günther Anders, 1956
    The same, or a similar, sensibility was inherent in “Frankenstein,” circa 1816. In Mary Shelley’s view, science was “Male Science.” The shame was attached to men not being able to create life without women. This shame goes back at least to the god Saturn, who ate his children, and to Zeus, who gave birth to Athena from his head. It is inherent in “Pinocchio,” and perhaps something of it remains in robotics. Like the wooden boy built by a man, are these drones currently immoral robots that will magically become human if they learn morality?
