Programming the moral robot

The U.S. Navy’s Office of Naval Research is funding an effort by scientists at Tufts, Brown, and RPI to develop military robots capable of moral reasoning:

The ONR-funded project will first isolate essential elements of human moral competence through theoretical and empirical research. Based on the results, the team will develop formal frameworks for modeling human-level moral reasoning that can be verified. Next, it will implement corresponding mechanisms for moral competence in a computational architecture.

That sounds straightforward. But hidden in those three short sentences are, so far as I can make out, at least eight philosophical challenges of extraordinary complexity:

  • Defining “human moral competence”
  • Boiling that competence down to a set of isolated “essential elements”
  • Designing a program of “theoretical and empirical research” that would lead to the identification of those elements
  • Developing mathematical frameworks for explaining moral reasoning
  • Translating those frameworks into formal models of moral reasoning
  • “Verifying” the outputs of those models as truthful
  • Embedding moral reasoning into computer algorithms
  • Using those algorithms to control a robot operating autonomously in the world
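
To see what the seventh item alone involves, consider a minimal sketch, in Python, of a rule-based “moral filter.” Everything in it is invented for illustration (the rule, the action descriptions, the assumption that an action’s consequences are knowable in advance), and it bears no relation to the ONR team’s actual architecture. Mostly it shows how much of the philosophical work such a program must take as already finished:

```python
# A hypothetical, drastically simplified "moral filter." The rule set,
# the predicates, and the consequence predictions are all invented;
# the challenges listed above are exactly what this toy assumes away.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    harms_noncombatant: bool  # assumes consequences are knowable in advance
    fulfills_order: bool


# A "formal framework" reduced to a list of named prohibitions.
RULES = [
    ("never harm a noncombatant", lambda a: not a.harms_noncombatant),
]


def permitted(action: Action) -> bool:
    """An action is permitted only if it violates no rule."""
    return all(check(action) for _, check in RULES)


strike = Action("engage target", harms_noncombatant=True, fulfills_order=True)
hold = Action("hold fire", harms_noncombatant=False, fulfills_order=False)

for action in (strike, hold):
    print(action.name, "->", "permitted" if permitted(action) else "forbidden")
```

All the hard questions live off-stage: who writes RULES, how conflicts between rules get resolved, and how a machine could ever know, before acting, that harms_noncombatant is true.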

Barring the negotiation of a worldwide ban, which seems unlikely for all sorts of reasons, military robots that make life-or-death decisions about human beings are coming (if they’re not already here). So efforts to program morality into robots are themselves now morally necessary. It’s highly unlikely, though, that the efforts will be successful — unless, that is, we choose to cheat on the definition of success.

Selmer Bringsjord, head of the Cognitive Science Department at RPI, and Naveen Govindarajulu, a post-doctoral researcher working with him, are focused on how to engineer ethics into a robot so that moral logic is intrinsic to these artificial beings. Since the scientific community has yet to establish what constitutes morality in humans, the challenge for Bringsjord and his team is severe.

We’re trying to reverse-engineer something that wasn’t engineered in the first place.

Overload, situational and ambient

The following is a reposting of one of Rough Type’s greatest hits. It originally appeared in these pages on March 7, 2011.

“It’s not information overload. It’s filter failure.” That was the main theme of a thoughtful and influential talk that Clay Shirky gave at a technology conference back in 2008. It’s an idea that’s easy to like both because it feels intuitively correct and because it’s reassuring: better filters will help reduce information overload, and better filters are things we can actually build. Information overload isn’t an inevitable side effect of information abundance. It’s a problem that has a solution. So let’s roll up our sleeves and start coding.

There was one thing that bugged me, though, about Shirky’s idea, and it was this paradox: The quality and speed of our information filters have been improving steadily for a few centuries, and have been improving extraordinarily quickly for the last two decades, and yet our sense of being overloaded with information is stronger than ever. If, as Shirky argues, improved filters will reduce overload, then why haven’t they done so up until now? Why don’t we feel that information overload is subsiding as a problem rather than getting worse? The reason, I’ve come to believe, is that Shirky’s formulation gets it precisely backwards. Better filters don’t mitigate information overload; they intensify it. It would be more accurate to say: “It’s not information overload. It’s filter success.”

But let me back up a little, because it’s actually more complicated than that. One of the traps we fall into when we talk about information overload is that we’re usually talking about two very different things as if they were one thing. Information overload actually takes two forms, which I’ll call situational overload and ambient overload, and they need to be treated separately.

Situational overload is the needle-in-the-haystack problem: You need a particular piece of information – in order to answer a question of one sort or another – and that piece of information is buried in a bunch of other pieces of information. The challenge is to pinpoint the required information, to extract the needle from the haystack, and to do it as quickly as possible. Filters have always been pretty effective at solving the problem of situational overload. The introduction of indexes and concordances – made possible by the earlier invention of alphabetization – helped solve the problem with books. Card catalogues and the Dewey decimal system helped solve the problem with libraries. Train and boat schedules helped solve the problem with transport. The Readers’ Guide to Periodical Literature helped solve the problem with magazines. And search engines and other computerized navigational and organizational tools have helped solve the problem with online databases.

Whenever a new information medium comes along, we tend to quickly develop good filtering tools that enable us to sort and search the contents of the medium. That’s as true today as it’s ever been. In general, I think you could make a strong case that, even though the amount of information available to us has exploded in recent years, the problem of situational overload has continued to abate. Yes, there are still frustrating moments when our filters give us the hay instead of the needle, but for most questions most of the time, search engines and other digital filters, or software-based, human-powered filters like email or Twitter, are able to serve up good answers in an eyeblink or two.
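
The mechanism behind every one of these situational filters, from the concordance to the search engine, is the same, and it can be sketched in a few lines of Python: precompute a map from terms to locations, so that finding the needle no longer means scanning the whole haystack. The documents here are made up for illustration:

```python
# A minimal inverted index: the shared logic of book indexes, card
# catalogues, and search engines. Build the map once, look up in one step.

from collections import defaultdict

documents = {
    1: "train schedule for the northern line",
    2: "boat schedule for the harbor ferry",
    3: "a history of the card catalogue",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.split():
        index[term].add(doc_id)

# One lookup instead of a scan through every document.
print(sorted(index["schedule"]))  # -> [1, 2]
```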

Situational overload is not the problem. When we complain about information overload, what we’re usually complaining about is ambient overload. This is an altogether different beast. Ambient overload doesn’t involve needles in haystacks. It involves haystack-sized piles of needles. We experience ambient overload when we’re surrounded by so much information that is of immediate interest to us that we feel overwhelmed by the never-ending pressure of trying to keep up with it all. We keep clicking links, keep hitting the refresh key, keep opening new tabs, keep checking email in-boxes and RSS feeds and Facebook notifications, keep scanning Amazon and Netflix recommendations – and yet the pile of interesting information never shrinks.

The cause of situational overload is too much noise. The cause of ambient overload is too much signal.

The great power of modern digital filters lies in their ability to make information that is of inherent interest to us immediately visible to us. The information may take the form of personal messages or updates from friends or colleagues, broadcast messages from experts or celebrities whose opinions or observations we value, headlines and stories from writers or publications we like, alerts about the availability of various other sorts of content on favorite subjects, or suggestions from recommendation engines – but it all shares the quality of being tailored to our particular interests. It’s all needles. And modern filters don’t just organize that information for us; they push the information at us as alerts, updates, streams. We tend to point to spam as an example of information overload. But spam is just an annoyance. The real source of information overload, at least of the ambient sort, is the stuff we like, the stuff we want. And as filters get better, that’s exactly the stuff we get more of.

It’s a mistake, in short, to assume that as filters improve they have the effect of reducing the information we have to look at. As today’s filters improve, they expand the information we feel compelled to take notice of. Yes, they winnow out the uninteresting stuff (imperfectly), but they deliver a vastly greater supply of interesting stuff. And precisely because the information is of interest to us, we feel pressure to attend to it. As a result, our sense of overload increases. This is not an indictment of modern filters. They’re doing precisely what we want them to do: find interesting information and make it visible to us. But it does mean that if we believe that improving the workings of filters will save us from information overload, we’re going to be very disappointed. The technology that creates the problem is not going to make the problem go away. If you really want a respite from information overload, pray for filter failure.
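
A bit of invented arithmetic makes the point concrete. Hold the daily supply of information and the reader’s reading speed fixed, and every gain in a filter’s recall enlarges the pile of needles demanding attention. All the numbers in this sketch are made up; the direction of the effect is what matters:

```python
# Toy model of "filter success": the better the filter, the more genuinely
# interesting items it surfaces, and the longer it takes to keep up.
# Every number here is invented for illustration.

CORPUS_SIZE = 1_000_000    # items published per day
INTERESTING_RATE = 0.001   # fraction truly relevant to one reader
MINUTES_PER_ITEM = 2       # attention cost per surfaced item

def surfaced(recall: float, precision: float) -> tuple[int, float]:
    """Interesting items surfaced per day, and hours needed to read it all."""
    needles = int(CORPUS_SIZE * INTERESTING_RATE * recall)
    total_items = needles / precision  # needles plus the hay that leaks through
    return needles, total_items * MINUTES_PER_ITEM / 60

for recall, precision in [(0.1, 0.5), (0.5, 0.8), (0.9, 0.95)]:
    needles, hours = surfaced(recall, precision)
    print(f"recall {recall:.0%}, precision {precision:.0%}: "
          f"{needles} needles/day, {hours:.0f} hours to keep up")
```

Even a perfect filter, with recall and precision both at 100 percent, would surface a thousand needles a day in this toy world. Only a worse filter shrinks the pile.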

Image: Randy Sears.

From MOOCs to OCs

“When I called a MOOC a lousy product I wasn’t kidding,” says Sebastian Thrun, the prime mover of the modern MOOC movement and the vast hype that came to surround it, in a new interview at Pando Daily. The fatal flaw in the “classic MOOC,” Thrun now says, is that it was free. You can only have a decent MOOC if you get rid of the MO and just have the OC.

“It’s not a MOOC [anymore] because we end up charging for it,” Thrun says, in describing the new online courses offered by his company, Udacity, which require students to pay a fee to receive a “service layer” of mentorship. “I feel confident asking people for money because their money is better spent on this than doing a free course and dropping out after a week.”

But doesn’t charging tuition subvert the grand promise that free online courses would “democratize” higher education?

Replies Thrun: “All our material is still available for free. If you’re a student who can’t afford the service layer you can take the MOOC, on demand, at your own pace. If you’re affluent, we can do a much better job with you, we can make magic happen.”

The poor get the “lousy.” The affluent get the “magic.”

As history professor Jonathan Rees delicately puts it, “Pardon me while I go vomit.”

But Thrun deserves the last word: “I am a total friend of honesty.”

Image: “Wreck of School House,” from Library of Congress.

The end of the beginning

“If we automate our judgment-making and execute it at web scale,” Google and other aggregators have long told us, “then we absolve ourselves of responsibility for our judgments.” To which the Court of Justice of the European Union today replied, “No, you don’t.”

35. In this connection, it should be pointed out that the processing of personal data carried out in the context of the activity of a search engine can be distinguished from and is additional to that carried out by publishers of websites […]

38. Inasmuch as the activity of a search engine is therefore liable to affect significantly, and additionally compared with that of the publishers of websites, the fundamental rights to privacy and to the protection of personal data, the operator of the search engine as the person determining the purposes and means of that activity must ensure, within the framework of its responsibilities, powers and capabilities, that the activity meets the requirements of Directive 95/46 in order that the guarantees laid down by the directive may have full effect and that effective and complete protection of data subjects, in particular of their right to privacy, may actually be achieved.

39. Finally, the fact that publishers of websites have the option of indicating to operators of search engines, by means in particular of exclusion protocols such as ‘robot.txt’ or codes such as ‘noindex’ or ‘noarchive’, that they wish specific information published on their site to be wholly or partially excluded from the search engines’ automatic indexes does not mean that, if publishers of websites do not so indicate, the operator of a search engine is released from its responsibility for the processing of personal data that it carries out in the context of the engine’s activity.

That feels kind of seismic.
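
For readers unfamiliar with the mechanisms paragraph 39 alludes to: the exclusion protocol is a plain-text file (the actual filename is robots.txt) served from a site’s root, and “noindex” and “noarchive” are values of a per-page robots meta tag. A typical robots.txt entry looks like this:

```
User-agent: *        # applies to all crawlers
Disallow: /private/  # do not fetch anything under /private/
```

And a page that should stay out of indexes and caches declares, in its HTML head:

```
<meta name="robots" content="noindex, noarchive">
```

The court’s point is that a publisher’s failure to flip these switches does not release the search engine’s operator from its own responsibility for the processing.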

Image: Daniel Oines.

A complicated courtship

In early April, two articles appeared in leading European newspapers — “Fear of Google” in Frankfurter Allgemeine Zeitung, and “Google, or the Road to Serfdom” in Le Monde — criticizing the consolidation of commercial and cultural power in the hands of Google and other large Internet companies. Shortly afterward, Eric Schmidt offered a rebuttal, in the form of an open letter to Europe published in FAZ. “On a continent in search of economic hope, the Internet represents the main motor of economic growth,” he wrote. On the cultural side of the equation, he went on, the Net was due equal praise: “Around the world, people admire Europe’s art, food, and lifestyle. The Internet makes these cultural treasures available to all.”

Rejecting any further regulation of his company, Schmidt argued that the online market should be allowed to develop unfettered. He pointed to Google’s recent advertising pact with Germany’s Axel Springer as a model for the kind of cooperative business approach that he believes will keep Europe from becoming an “innovation desert”:

It was a complicated courtship. For years, German publisher Axel Springer challenged us on issue after issue, from copyright to competition. I travelled to Germany numerous times to meet Springer executives to propose a different path – profitable partnership. I argued that through innovation we could build new business models and achieve mutual benefit from emerging mobile and social technologies. Late last year, we walked down the aisle and signed a multi-year partnership for automated advertising, covering both web and mobile. …

While many other European publishers including such marquee names as the Telegraph and the Guardian have signed similar partnerships, some publishers in Europe still seem to believe that the best way forward lies in calling for heavy-handed regulation, pushing for new copyright charges on links to their articles and calling for antitrust action against companies such as Facebook, Amazon and us. … If adopted, this approach creates serious economic dangers. Above all, it risks creating an innovation desert in Europe. Some companies will leave and, worse still, others will never get off the ground – blocked by rules designed to protect incumbents. I am convinced that a better, more prosperous model exists through cooperation and commercial agreements [such as] our path-breaking advertising deal with Axel Springer.

In a long, remarkable reply to Schmidt, also published in FAZ a week later, Axel Springer’s chief executive, Mathias Döpfner, offered a very different view of the “profitable partnership” between his company and Google:

We are afraid of Google. I must state this very clearly and frankly, because few of my colleagues dare do so publicly. And as the biggest among the small, perhaps it is also up to us to be the first to speak out in this debate. You wrote it yourself in your book: “We believe that modern technology platforms, such as Google, Facebook, Amazon and Apple, are even more powerful than most people realize (…), and what gives them power is their ability to grow – specifically, their speed to scale. Almost nothing, short of a biological virus, can scale as quickly, efficiently or aggressively as these technology platforms and this makes the people who build, control, and use them powerful too.” … In the long term I’m not so sure about the users. Power is soon followed by powerlessness. And this is precisely the reason why we now need to have this discussion in the interests of the long-term integrity of the digital economy’s ecosystem. This applies to competition, not only economic, but also political. It concerns our values, our understanding of the nature of humanity, our worldwide social order and, from our own perspective, the future of Europe. … It is we the people who have to decide whether or not we want what you are asking of us – and what price we are willing to pay for it.

Döpfner went on to question Google’s self-image as a cultural hero:

On the Internet, in the beautiful colorful Google world, so much seems to be free of charge: from search services up to journalistic offerings. In truth we are paying with our behavior – with the predictability and commercial exploitation of our behavior. Anyone who has a car accident today, and mentions it in an e-mail, can receive an offer for a new car from a manufacturer on his mobile phone tomorrow. Terribly convenient. Today, someone surfing high-blood-pressure web sites, who automatically betrays his notorious sedentary lifestyle through his Jawbone fitness wristband, can expect a higher health insurance premium the day after tomorrow. Not at all convenient. Simply terrible. It is possible that it will not take much longer before more and more people realize that the currency of his or her own behavior exacts a high price: the freedom of self-determination. And that is why it is better and cheaper to pay with something very old fashioned – namely money.

Google is the world’s most powerful bank – but dealing only in behavioral currency. Nobody capitalizes on their knowledge about us as effectively as Google. This is impressive and dangerous.

At the end of the month, Harvard business professor and Berkman Center associate Shoshana Zuboff provided another perspective on the exchange between Schmidt and Döpfner:

Six years ago I asked Eric Schmidt what corporate innovations Google was putting in place to ensure that its interests were aligned with its end users. Would it betray their trust? Back then his answer stunned me. He and Google’s founders control the super-voting class B stock. This allows them, he explained, to make decisions without regard to short-term pressure from Wall Street. Of course, it also insulates them from every other kind of influence. There was no wrestling with the creation of an inclusive, trustworthy, and transparent governance system. There was no struggle to institutionalize scrutiny and feedback. Instead Schmidt’s answer was the quintessence of absolutism: “trust me; I know best.” At that moment I knew I was in the presence of something new and dangerous whose effects reached beyond narrow economic contests and into the heart of everyday life.

Mr. Schmidt’s open letter to Europe shows evidence of such absolutism. Democratic oversight is characterized as “heavy-handed regulation.” The “Internet”, “Web”, and “Google” are referenced interchangeably, as if Google’s interests stand for the entire Web and Internet. That’s a magician’s sleight of hand intended to distract from the real issue. Google’s absolutist pursuit of its interests is now regarded by many as responsible for the Web’s fading prospects as an open information platform in which participants can agree on rules, rights, and choice.

“We are beyond the realm of economics here,” wrote Zuboff. “This is not merely a conversation about free markets; it’s a conversation about free people”:

We often hear that our privacy rights have been eroded and secrecy has grown. But that way of framing things obscures what’s really at stake. Privacy hasn’t been eroded. It’s been expropriated. … Instead of many people having some privacy rights, nearly all the rights have been concentrated in the hands of a few. On the one hand, we have lost the ability to choose what we keep secret, and what we share. On the other, Google, the NSA, and others in the new zone have accumulated privacy rights. How? Most of their rights have come from taking ours without asking. But they also manufactured new rights for themselves, the way a forger might print currency. They assert a right to privacy with respect to their surveillance tactics and then exercise their choice to keep those tactics secret.

Finally – and this is key – the new concentration of privacy rights is institutionalized in the automatic undetectable functions of a global infrastructure that most of the world’s people also happen to think is essential for basic social participation. This turns ordinary life into the daily renewal of a 21st century Faustian pact.

A complicated courtship, indeed. Read the whole exchange. It’s fascinating.

Image: Roberto Baca.

The poetics of progress

“I meet an American sailor,” writes Alexis de Tocqueville in his 1840 masterwork Democracy in America, “and I ask him why the vessels of his country are constituted so as not to last for long, and he answers me without hesitation that the art of navigation makes such rapid progress each day, that the most beautiful ship would soon become nearly useless if it lasted beyond a few years. In these chance words said by a coarse man and in regard to a particular fact, I see the general and systematic idea by which a great people conducts all things.”

Far more than a marketing ploy, planned obsolescence is an expression of a deep, romantic faith in technology. It’s a faith that Tocqueville saw as central to the American soul, argues Benjamin Storey in an illuminating essay in The New Atlantis:

For Tocqueville, technology is not a set of morally neutral means employed by human beings to control our natural environment. Technology is an existential disposition intrinsically connected to the social conditions of modern democratic peoples in general and Americans in particular. On this view, to be an American democrat is to be a technological romantic. Nothing is so radical or difficult to moderate as a romantic passion, and the Americans Tocqueville observed accepted only frail and minimal restraints on their technophilia. We have long since broken many of those restraints in our quest to live up to our poetic self-image. …

Democratic peoples, Tocqueville [writes], “imagine an extreme point where liberty and equality meet and merge,” and, in our less sober moments, we believe that technology can help us get there by so thoroughly vanquishing natural scarcity and the limits of human nature that we can eliminate unfreedom and inequality as such. We might be able to improve the human condition so far that what seemed in the past to be permanent facts of human life — ruling and being ruled, wealth and poverty, virtue and vice — can be left behind as we achieve the full realization of our democratic ideal of liberty and equality.

The glory of this view manifests itself in admirable technical skill and an outpouring of ingenious, if disposable, goods. But when embraced as a philosophy, a way of seeing the world, it turns destructive.

Not content with the obvious truth that our technical know-how has made us, on average, healthier and more prosperous than peoples of the past, we insist that it has also made us happier and better — indeed, that human happiness and virtue are technical problems, problems our rightly-celebrated practical know-how can settle, once and for all. Tocqueville saw how the terminology of commerce in the 1830s was coming to penetrate all aspects of American language, “the first instrument of thought.” As our technological utopian project advances, as our science enters further into the domain of the human heart and mind, we come to see our lives less in terms of joys, virtues, sins, and miseries and more in terms of chemical imbalances, hormones, good moods, and depressions — material problems susceptible to technological solutions, not moral challenges or existential conditions with which we must learn to live.

We are flawed not because we are flawed but because we were born into an insufficiently technologized world.

Image of Oculus Rift: Wikipedia.