
The luddite McLuhan

Marshall McLuhan was such a slyboots. He kills me. He continues to be known, of course, as the enthusiastic prophet of the coming electronic utopia, the guy who slathered intellectual grease on progress’s rails. The skeptical, sometimes dystopian, subtext of his work went largely unnoticed when he was alive, and it’s even more submerged today.

This weekend I was reading through Understanding Me, a collection of interviews with McLuhan, and I came upon this telling passage from a 1966 TV interview with the journalist Robert Fulford:

Fulford: What kind of world would you rather live in? Is there a period in the past or a possible period in the future you’d rather be in?

McLuhan: No, I’d rather be in any period at all as long as people are going to leave it alone for a while.

Fulford: But they’re not going to, are they?

McLuhan: No, and so the only alternative is to understand everything that’s going on, and then neutralize it as much as possible, turn off as many buttons as you can, and frustrate them as much as you can. I am resolutely opposed to all innovation, all change, but I am determined to understand what’s happening because I don’t choose just to sit and let the juggernaut roll over me. Many people seem to think that if you talk about something recent, you’re in favor of it. The exact opposite is true in my case. Anything I talk about is almost certain to be something I’m resolutely against, and it seems to me the best way of opposing it is to understand it, and then you know where to turn off the button.

The Sun interview

I have the honor of being the designated interviewee in the March issue of The Sun magazine. The interview, by Arnie Cooper, covers a lot of ground, and it’s been posted in its entirety on The Sun’s site. Here’s a taste:

Cooper: Do you think computers have harmed our relationship with nature?

Carr: I certainly think they’ve gotten in the way of our relationship to nature. As we increasingly connect with the world through computer screens, we’re removing ourselves from direct sensory contact with nature. In other words, we’re learning to substitute symbols of reality for reality itself. I think that’s particularly true for children who’ve grown up surrounded by screens from a young age. You could argue that this isn’t necessarily something new, that it’s just a continuation of what we saw with other electronic media like radio or tv. But I do think it’s an amplification of those trends.

Cooper: What about the interactivity of the Internet? Isn’t it a step above the passivity that television engenders?

Carr: The interactivity of the Net brings a lot of benefits, which is one of the main reasons we spend so much time online. It lets us communicate with one another more efficiently, and it gives us a powerful new means of sharing our opinions, pursuing our interests and hobbies with others, and disseminating our creative works through, for instance, blogs, social networks, YouTube, and photo-publishing sites. Those benefits are real and shouldn’t be denigrated. But I’m wary of drawing sharp distinctions between “active” and “passive” media. Are we really “passive” when we’re immersed in a great novel or a great movie or listening to a great piece of music? I don’t think so. I think we’re deeply engaged, and our intellect is extremely active. When we view or read or listen to something meaningful, when we devote our full attention to it, we broaden and deepen our minds. The danger with interactive media is that they draw us away from quieter and lonelier pursuits. Interactivity is compelling because its rewards are so easy and immediate, but they’re often also superficial.

Secret agent moth

Elsewhere on the robotics front, the U.S. Defense Advanced Research Projects Agency (Darpa) is making good progress towards its goal of turning insects into remote-controlled surveillance and monitoring instruments. Three years ago, Darpa launched its Hybrid Insect Micro-Electro-Mechanical Systems (HI-MEMS) project, with the intent, as described by IEEE Spectrum, of creating “moths or other insects that have electronic controls implanted inside them, allowing them to be controlled by a remote operator. The animal-machine hybrid will transmit data from mounted sensors, which might include low-grade video and microphones for surveillance or gas sensors for natural-disaster reconnaissance. To get to that end point, HI-MEMS is following three separate tracks: growing MEMS-insect hybrids, developing steering electronics for the insects, and finding ways to harvest energy from them to power the cybernetics.”

Papers presented this month at the IEEE International Solid-State Circuits Conference described breakthroughs that promise to help the agency fulfill all three goals. One group of researchers, from the Boyce Thompson Institute for Plant Research, has succeeded in inserting “silicon neural interfaces for gas sensors … into insects during the pupal phase.” Another group, affiliated with MIT, has created a “low-power ultrawide-band radio” and “a digital baseband processor.” Both are tiny and light enough to be attached to a cybernetic moth. The group has also developed a “piezoelectric energy-harvesting system that scavenges power from vibrations” as a moth beats its wings. The system may be able to supply the power required by the camera and transmitter.

Now, where the hell did I stick that can of Raid?

The artificial morality of the robot warrior

Great strides have been made in recent years in the development of combat robots. The US military has deployed ground robots, aerial robots, marine robots, stationary robots, and (reportedly) space robots. The robots are used for both reconnaissance and fighting, and further rapid advances in their design and capabilities can be expected in the years ahead. One consequence of these advances is that robots will gain more autonomy, which means they will have to act in uncertain situations without direct human instruction. That raises a large and thorny challenge: how do you program a robot to be an ethical warrior?

The Times of London this week pointed to an extensive report on military robots, titled Autonomous Military Robotics: Risk, Ethics, and Design, that was prepared in December for the US Navy by the Ethics + Emerging Sciences Group at California Polytechnic State University. In addition to providing a useful overview of the state of the art in military robots, the report offers a fascinating examination of how software writers might go about programming what the authors call “artificial morality” into machines.

The authors explain why it’s imperative that we begin to explore robot morality:

Perhaps robot ethics has not received the attention it needs, at least in the US, given a common misconception that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated, harking back to a time when computers were simpler and their programs could be written and understood by a single person. Now, programs with millions of lines of code are written by teams of programmers, none of whom knows the entire program; hence, no individual can predict the effect of a given command with absolute certainty, since portions of large programs may interact in unexpected, untested ways … Furthermore, increasing complexity may lead to emergent behaviors, i.e., behaviors not programmed but arising out of sheer complexity.

Related major research efforts also are being devoted to enabling robots to learn from experience, raising the question of whether we can predict with reasonable certainty what the robot will learn. The answer seems to be negative, since if we could predict that, we would simply program the robot in the first place, instead of requiring learning. Learning may enable the robot to respond to novel situations, given the impracticality and impossibility of predicting all eventualities on the designer’s part. Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments, rather than the carefully‐structured domain of a factory.

The authors also note that “military robotics have already failed on the battlefield, creating concerns with their deployment (and perhaps even more concern for more advanced, complicated systems) that ought to be addressed before speculation, incomplete information, and hype fill the gap in public dialogue.” They point to a mysterious 2008 incident when “several TALON SWORDS units—mobile robots armed with machine guns—in Iraq were reported to be grounded for reasons not fully disclosed, though early reports claim the robots, without being commanded to, trained their guns on ‘friendly’ soldiers; and later reports denied this account but admitted there had been malfunctions during the development and testing phase prior to deployment.” They also report that in 2007 “a semi‐autonomous robotic cannon deployed by the South African army malfunctioned, killing nine ‘friendly’ soldiers and wounding 14 others.” These failures, along with some spectacular failures of robotic systems in civilian applications, raise “a concern that we … may not be able to halt some (potentially‐fatal) chain of events caused by autonomous military systems that process information and can act at speeds incomprehensible to us, e.g., with high‐speed unmanned aerial vehicles.”

In the section of the report titled “Programming Morality,” the authors describe some of the challenges of creating the software that will ensure that robotic warriors act ethically on the battlefield:

Engineers are very good at building systems to satisfy clear task specifications, but there is no clear task specification for general moral behavior, nor is there a single answer to the question of whose morality or what morality should be implemented in AI …

The choices available to systems that possess a degree of autonomy in their activity and in the contexts within which they operate, and greater sensitivity to the moral factors impinging upon the course of actions available to them, will eventually outstrip the capacities of any simple control architecture. Sophisticated robots will require a kind of functional morality, such that the machines themselves have the capacity for assessing and responding to moral considerations. However, the engineers that design functionally moral robots confront many constraints due to the limits of present‐day technology. Furthermore, any approach to building machines capable of making moral decisions will have to be assessed in light of the feasibility of implementing the theory as a computer program.

After reviewing a number of possible approaches to programming a moral sense into machines, the authors recommend an approach that combines the imposition of “top-down” rules with the development of a capacity for “bottom-up” learning:

A top‐down approach would program rules into the robot and expect the robot to simply obey those rules without change or flexibility. The downside … is that such rigidity can easily lead to bad consequences when events and situations unforeseen or insufficiently imagined by the programmers occur, causing the robot to perform badly or simply do horrible things, precisely because it is rule‐bound.

A bottom‐up approach, on the other hand, depends on robust machine learning: like a child, a robot is placed into variegated situations and is expected to learn through trial and error (and feedback) what is and is not appropriate to do. General, universal rules are eschewed. But this too becomes problematic, especially as the robot is introduced to novel situations: it cannot fall back on any rules to guide it beyond the ones it has amassed from its own experience, and if those are insufficient, then it will likely perform poorly as well.

As a result, we defend a hybrid architecture as the preferred model for constructing ethical autonomous robots. Some top‐down rules are combined with machine learning to best approximate the ways in which humans actually gain ethical expertise … The challenge for the military will reside in preventing the development of lethal robotic systems from outstripping the ability of engineers to assure the safety of these systems.
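To make the report’s hybrid architecture a little more concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it comes from the Navy report itself: the rules, the Action fields, and the LearnedCritic stand-in for a machine-learning component are all hypothetical. The point is only the shape of the design, in which top-down rules act as a hard filter on candidate actions and a bottom-up learned component ranks whatever the rules permit.

    # Illustrative sketch only: a toy "hybrid architecture" in the report's sense,
    # combining hard top-down rules with a bottom-up learned component.
    # All names, fields, and rules here are hypothetical, not taken from the report.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Optional, Tuple

    @dataclass
    class Action:
        name: str
        harms_noncombatants: bool
        target_confirmed_hostile: bool

    # Top-down layer: inviolable rules, checked before anything else.
    RULES: List[Tuple[str, Callable[[Action], bool]]] = [
        ("never harm noncombatants", lambda a: not a.harms_noncombatants),
        ("engage only confirmed hostiles", lambda a: a.target_confirmed_hostile),
    ]

    @dataclass
    class LearnedCritic:
        # Bottom-up layer: scores actions from accumulated feedback, standing in
        # for whatever machine-learning model a real system would use.
        scores: Dict[str, float] = field(default_factory=dict)

        def score(self, action: Action) -> float:
            return self.scores.get(action.name, 0.0)

        def give_feedback(self, action: Action, reward: float) -> None:
            self.scores[action.name] = self.scores.get(action.name, 0.0) + reward

    def choose_action(candidates: List[Action], critic: LearnedCritic) -> Optional[Action]:
        # The rules act as a hard filter; the learned critic ranks what survives.
        permitted = [a for a in candidates if all(check(a) for _, check in RULES)]
        if not permitted:
            return None  # no permissible option: hold fire and defer to a human operator
        return max(permitted, key=critic.score)

    if __name__ == "__main__":
        critic = LearnedCritic()
        options = [
            Action("fire on vehicle", harms_noncombatants=True, target_confirmed_hostile=True),
            Action("track and report", harms_noncombatants=False, target_confirmed_hostile=True),
        ]
        critic.give_feedback(options[1], reward=1.0)  # prior trials favored restraint
        chosen = choose_action(options, critic)
        print(chosen.name if chosen else "defer to human operator")  # prints "track and report"

Even in a toy like this, the two failure modes the authors describe are visible: if the rules are too rigid, the filter may leave no permissible action in a situation the programmers never imagined, and if the learned critic has never encountered a situation, it has nothing sensible to say about it.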

The development of autonomous robot warriors stirs concerns beyond just safety, the authors acknowledge:

Some have [suggested that] the rise of such autonomous robots creates risks that go beyond specific harms to societal and cultural impacts. For instance, is there a risk of (perhaps fatally?) affronting human dignity or cherished traditions (religious, cultural, or otherwise) in allowing the existence of robots that make ethical decisions? Do we ‘cross a threshold’ in abrogating this level of responsibility to machines, in a way that will inevitably lead to some catastrophic outcome? Without more detail and reason for worry, such worries as this appear to commit the ‘slippery slope’ fallacy. But there is worry that as robots become ‘quasi‐persons,’ even under a ‘slave morality’, there will be pressure to eventually make them into full‐fledged Kantian‐autonomous persons, with all the risks that entails. What seems certain is that the rise of autonomous robots, if mishandled, will cause popular shock and cultural upheaval, especially if they are introduced suddenly and/or have some disastrous safety failures early on.

The good news, according to the authors, is that emotionless machines have certain built-in ethical advantages over human warriors. “Robots,” they write, “would be unaffected by the emotions, adrenaline, and stress that cause soldiers to overreact or deliberately overstep the Rules of Engagement and commit atrocities, that is to say, war crimes. We would no longer read (as many) news reports about our own soldiers brutalizing enemy combatants or foreign civilians to avenge the deaths of their brothers in arms—unlawful actions that carry a significant political cost.” Of course, this raises deeper issues, which the authors don’t address: Can ethics be cleanly disassociated from emotion? Would the programming of morality into robots eventually lead, through bottom-up learning, to the emergence of a capacity for emotion as well? And would, at that point, the robots have a capacity not just for moral action but for moral choice – with all the messiness that goes with it?

The avatar of my father

HORATIO: O day and night, but this is wondrous strange.

The Singularity – the prophesied moment when artificial intelligence leaps ahead of human intelligence, rendering man both obsolete and immortal – has been jokingly called “the rapture of the geeks.” But to Ray Kurzweil, the most famous of the Singularitarians, it’s no joke. In a profile in the current issue of Rolling Stone (not available online), Kurzweil describes how, in the wake of the Singularity, it will become possible not only to preserve living people for eternity (by uploading their minds into computers) but to resurrect the dead.

Kurzweil looks forward in particular to his reunion with his beloved father, Fredric, who died in 1970. “Kurzweil’s most ambitious plan for after the Singularity,” writes Rolling Stone’s David Kushner, “is also his most personal”:

Using technology, he plans to bring his dead father back to life. Kurzweil reveals this to me near the end of our conversation … In a soft voice, he explains how the resurrection would work. “We can find some of his DNA around his grave site – that’s a lot of information right there,” he says. “The AI will send down some nanobots and get some bone or teeth and extract some DNA and put it all together. Then they’ll get some information from my brain and anyone else who still remembers him.”

When I ask how exactly they’ll extract the knowledge from his brain, Kurzweil bristles, as if the answer should be obvious: “Just send nanobots into my brain and reconstruct my recollections and memories.” The machines will capture everything: the piggyback ride to the grocery store, the bedtime reading of Tom Swift, the moment he and his father rejoiced when the letter of acceptance from MIT arrived. To provide the nanobots with even more information, Kurzweil is safeguarding the boxes of his dad’s mementos, so the artificial intelligence has as much data as possible from which to reconstruct him. Father 2.0 could take many forms, he says, from a virtual-reality avatar to a fully functioning robot … “If you can bring back life that was valuable in the past, it should be valuable in the future.”

There’s a real poignancy to Kurzweil’s dream of bringing his dad back to life by weaving together strands of DNA and strands of memory. I could imagine a novel – by Ray Bradbury, maybe – constructed around his otherworldly yearning. Death makes strange even the most rational of minds.

Cloud gazing

For those of you who just can’t get enough of this cloud thing, here’s some weekend reading. Berkeley’s Reliable Adaptive Distributed Systems Laboratory – the RAD Lab, as it’s groovily known – has a new white paper, Above the Clouds: A Berkeley View of Cloud Computing, that examines the economics of the cloud model, from both a user’s and a supplier’s perspective, and lays out the opportunities and obstacles that will likely shape the development of the industry in the near to medium term. And, in the new issue of IEEE Spectrum, Randy Katz surveys the state of the art in the construction of cloud data centers.

Another little IBM deal

On August 12, 1981, 28 long years ago, IBM introduced its personal computer, the IBM PC. Hidden inside was an operating system called MS-DOS, which the computing giant had licensed from a pipsqueak company named Microsoft. IBM didn’t realize it at the time, but the deal, which allowed Microsoft to retain ownership of the operating system and license it to other companies, turned out to be the seminal event in defining the commercial landscape of the computing business throughout the ensuing PC era. IBM, through the deal, anointed Microsoft as the dominant company of that era.

Today, as a new era in computing dawns, IBM announced another deal, this time with Amazon Web Services, a pipsqueak in the IT business but an early leader in cloud computing. Under the deal, corporations and software developers will be able to run IBM’s commercial software in Amazon’s cloud. As the Register’s Timothy Prickett Morgan reports, “IBM announced that it would be deploying a big piece of its database and middleware software stack on Amazon’s Elastic Compute Cloud (EC2) service. The software that IBM is moving out to EC2 includes the company’s DB2 and Informix Dynamic Server relational databases, its WebSphere Portal and sMash mashup tools, and its Lotus Web Content Management program … The interesting twist on the Amazon-IBM deal is that Big Blue is going to let companies that have already bought software licenses run that software out on the EC2 cloud, once the offering is generally available.”

Prickett Morgan also notes, “If compute clouds want to succeed as businesses instead of toys, they have to run the same commercial software that IT departments deploy internally on their own servers. Which is why [the] deal struck between IBM and Amazon’s Web Services subsidiary is important, perhaps more so for Amazon than for Big Blue.”

It doesn’t seem like such a big deal, and it probably isn’t. But you never know. The licensing of MS-DOS seemed like small potatoes when it happened. Could the accidental kingmaker have struck again?

UPDATE: Dana Gardner speculates on the upshot.