The free arts and the servile arts

This post, published on February 22, 2009, is the first installment in Rough Type’s series “The Realtime Chronicles.”

I have taken it upon myself to mash up the words of Steve Gillmor, posted yesterday at TechCrunchIT, and the words of the priest and theologian Andrew Louth, published in 2003 at the Times Higher Education site:

Gillmor: We’re at the threshold of the realtime moment. The advent of a reasonably realtime message bus over public networks has changed something about the existing infrastructure in ways that are not yet important to a broad section of Internet dwellers. The numbers are adding up — 175 million Facebook users, tens of thousands of instant Twitter followers, constant texting and video chats among the teenage crowd.

The standard attack on realtime is that it is the new crack. We’re all addicted to our devices, to the flow of alerts, messages, and bite-sized information chunks. We no longer have time for blog posts, refreshing our Twitter streams for pointers to what our friends think is important. It’s the revenge of the short attention span brought on by 30-second television ads — the myth of multi-tasking spread across a sea of factoids that Nick Carr fears will destroy scholarship and ultimately thinking. Of course this is true and also completely irrelevant.

Louth: The medieval university was a place that made possible a life of thought, of contemplation. It emerged in the 12th century from the monastic and cathedral schools of the early Middle Ages where the purpose of learning was to allow monks to fulfil their vocation, which fundamentally meant to come to know God. Although knowledge of God might be useful in various ways, it was sought as an end in itself. Such knowledge was called contemplation, a kind of prayerful attention.

The evolution of the university took the pattern of learning that characterised monastic life – reading, meditation, prayer and contemplation – out of the immediate context of the monastery. But it did not fundamentally alter it. At its heart was the search for knowledge for its own sake. It was an exercise of freedom on the part of human beings, and the disciplines involved were to enable one to think freely and creatively. These were the liberal arts, or free arts, as opposed to the servile arts to which a man is bound if he has in mind a limited task.

In other words, in the medieval university, contemplation was knowledge of reality itself, as opposed to that involved in getting things done. It corresponded to a distinction in our understanding of what it is to be human, between reason conceived as puzzling things out and that conceived as receptive of truth. This understanding of learning has a history that goes back to the roots of western culture. Now, this is under serious threat, and with it our notion of civilisation.

Gillmor: My daughter told her mother today that her boyfriend was spending too much time on IM and video-chat, and not enough on getting his homework done. She actually said these words: “I told him you have to get away from the computer sometimes, turn it off, give yourself time to think.” This is the same daughter who will give up anything — makeup, TV, food — just as long as I don’t take her computer or iPhone away.

So realtime is the new crack, and even the naivest of our culture realizes it can eat our brains. But does that mean we will stop moving faster and faster? No. Does that mean we will give up our blackberries when we become president? No. Then what will happen to us?

Louth: Western culture, as we have known it from the time of classical Greece onwards, has always recognised that there is more to human life than a productive, well-run society. If that were not the case, then, as Plato sourly suggests, we might just as well be communities of ants or bees. But there is more than that, a life in which the human mind glimpses something beyond what it can achieve. This kind of human activity needs time in which to be undistracted and open to ideas.

Gillmor: The browser brought us an explosion of Web pages. The struggle became one of time and location; RSS and search to the rescue. The time from idea to publish to consumption approached realtime. The devices then took charge, widening the amount of time to consume the impossible flow. The Blackberry expanded work to all hours. The iPhone blurred the distinction between work and play. Twitter blurred personal and public into a single stream of updates. Facebook blurred real and virtual friendships. That’s where we are now.

Louth: Martin Heidegger made a distinction between the world that we have increasingly shaped to our purposes and the earth that lay behind all this, beyond human fashioning. The world is something we know our way around. But if we lose sight of the realm of the earth, then we have lost touch with reality. It was, for Heidegger, the role of the poet to preserve a sense of the earth, to break down our sense of security arising from familiarity with the world. We might think of contemplation, the dispassionate beholding of reality, in a similar way, preventing us from mistaking the familiar tangle of assumption and custom for reality, a tangle that modern technology and the insistent demands of modern consumerist society can easily bind into a tight web.

The Realtime Chronicles continues in these posts:

Real time is realtime

Realtime kills real space

More present than the present

The energy

How many tweets does an earthquake make?

A new chapter in the theory of messages

Deriving real value from the social graph

The stream

Twitter dot dash (reissue)

Hashmobs

The unripened word

2 minutes ago from Tweetie

The New York Real Times

The eternal conference call

Does my tweet look fat?

Raising the realtime child

The crystal stream

Nowness

New frontiers in social networking

Exile from realtime

What realtime is before it’s realtime

Worldstream of consciousness

Conversation points

Absence of Like

Automating the feels

Ambient tweetability

Pret-a-twitter and the bespoke tweet

Ambient reality

My computer, my doppeltweeter

The soma cloud

Facebook’s automated conscience

Jonathan Swift’s smartphone

The seconds are just packed

Chatbots are saints

Image: Sam Cox.

Secret agent moth

Elsewhere on the robotics front, the U.S. Defense Advanced Research Projects Agency (Darpa) is making good progress towards its goal of turning insects into remote-controlled surveillance and monitoring instruments. Three years ago, Darpa launched its Hybrid Insect Micro-Electro-Mechanical Systems (HI-MEMS) project, with the intent, as described by IEEE Spectrum, of creating “moths or other insects that have electronic controls implanted inside them, allowing them to be controlled by a remote operator. The animal-machine hybrid will transmit data from mounted sensors, which might include low-grade video and microphones for surveillance or gas sensors for natural-disaster reconnaissance. To get to that end point, HI-MEMS is following three separate tracks: growing MEMS-insect hybrids, developing steering electronics for the insects, and finding ways to harvest energy from them to power the cybernetics.”

Papers presented this month at the IEEE International Solid-State Circuits Conference described breakthroughs that promise to help the agency fulfill all three goals. One group of researchers, from the Boyce Thompson Institute for Plant Research, has succeeded in inserting “silicon neural interfaces for gas sensors … into insects during the pupal phase.” Another group, affiliated with MIT, has created a “low-power ultrawide-band radio” and “a digital baseband processor.” Both are tiny and light enough to be attached to a cybernetic moth. The group has also developed a “piezoelectric energy-harvesting system that scavenges power from vibrations” as a moth beats its wings. The system may be able to supply the power required by the camera and transmitter.

Now, where the hell did I stick that can of Raid?

The artificial morality of the robot warrior

Great strides have been made in recent years in the development of combat robots. The US military has deployed ground robots, aerial robots, marine robots, stationary robots, and (reportedly) space robots. The robots are used for both reconnaissance and fighting, and further rapid advances in their design and capabilities can be expected in the years ahead. One consequence of these advances is that robots will gain more autonomy, which means they will have to act in uncertain situations without direct human instruction. That raises a large and thorny challenge: how do you program a robot to be an ethical warrior?

The Times of London this week pointed to an extensive report on military robots, titled Autonomous Military Robotics: Risk, Ethics, and Design, which was prepared in December for the US Navy by the Ethics & Emerging Technologies Group at California Polytechnic State University. In addition to providing a useful overview of the state of the art in military robots, the report provides a fascinating examination of how software writers might go about programming what the authors call “artificial morality” into machines.

The authors explain why it’s imperative that we begin to explore robot morality:

Perhaps robot ethics has not received the attention it needs, at least in the US, given a common misconception that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated, harking back to a time when computers were simpler and their programs could be written and understood by a single person. Now, programs with millions of lines of code are written by teams of programmers, none of whom knows the entire program; hence, no individual can predict the effect of a given command with absolute certainty, since portions of large programs may interact in unexpected, untested ways … Furthermore, increasing complexity may lead to emergent behaviors, i.e., behaviors not programmed but arising out of sheer complexity.

Related major research efforts also are being devoted to enabling robots to learn from experience, raising the question of whether we can predict with reasonable certainty what the robot will learn. The answer seems to be negative, since if we could predict that, we would simply program the robot in the first place, instead of requiring learning. Learning may enable the robot to respond to novel situations, given the impracticality and impossibility of predicting all eventualities on the designer’s part. Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments, rather than the carefully‐structured domain of a factory.

The authors also note that “military robotics have already failed on the battlefield, creating concerns with their deployment (and perhaps even more concern for more advanced, complicated systems) that ought to be addressed before speculation, incomplete information, and hype fill the gap in public dialogue.” They point to a mysterious 2008 incident when “several TALON SWORDS units—mobile robots armed with machine guns—in Iraq were reported to be grounded for reasons not fully disclosed, though early reports claim the robots, without being commanded to, trained their guns on ‘friendly’ soldiers; and later reports denied this account but admitted there had been malfunctions during the development and testing phase prior to deployment.” They also report that in 2007 “a semi‐autonomous robotic cannon deployed by the South African army malfunctioned, killing nine ‘friendly’ soldiers and wounding 14 others.” These failures, along with some spectacular failures of robotic systems in civilian applications, raise “a concern that we … may not be able to halt some (potentially‐fatal) chain of events caused by autonomous military systems that process information and can act at speeds incomprehensible to us, e.g., with high‐speed unmanned aerial vehicles.”

In the section of the report titled “Programming Morality,” the authors describe some of the challenges of creating the software that will ensure that robotic warriors act ethically on the battlefield:

Engineers are very good at building systems to satisfy clear task specifications, but there is no clear task specification for general moral behavior, nor is there a single answer to the question of whose morality or what morality should be implemented in AI …

The choices available to systems that possess a degree of autonomy in their activity and in the contexts within which they operate, and greater sensitivity to the moral factors impinging upon the course of actions available to them, will eventually outstrip the capacities of any simple control architecture. Sophisticated robots will require a kind of functional morality, such that the machines themselves have the capacity for assessing and responding to moral considerations. However, the engineers that design functionally moral robots confront many constraints due to the limits of present‐day technology. Furthermore, any approach to building machines capable of making moral decisions will have to be assessed in light of the feasibility of implementing the theory as a computer program.

After reviewing a number of possible approaches to programming a moral sense into machines, the authors recommend an approach that combines the imposition of “top-down” rules with the development of a capacity for “bottom-up” learning:

A top‐down approach would program rules into the robot and expect the robot to simply obey those rules without change or flexibility. The downside … is that such rigidity can easily lead to bad consequences when events and situations unforeseen or insufficiently imagined by the programmers occur, causing the robot to perform badly or simply do horrible things, precisely because it is rule‐bound.

A bottom‐up approach, on the other hand, depends on robust machine learning: like a child, a robot is placed into variegated situations and is expected to learn through trial and error (and feedback) what is and is not appropriate to do. General, universal rules are eschewed. But this too becomes problematic, especially as the robot is introduced to novel situations: it cannot fall back on any rules to guide it beyond the ones it has amassed from its own experience, and if those are insufficient, then it will likely perform poorly as well.

As a result, we defend a hybrid architecture as the preferred model for constructing ethical autonomous robots. Some top‐down rules are combined with machine learning to best approximate the ways in which humans actually gain ethical expertise … The challenge for the military will reside in preventing the development of lethal robotic systems from outstripping the ability of engineers to assure the safety of these systems.
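
As a purely illustrative aside, here is a minimal sketch, in Python, of how such a hybrid architecture might be wired together. Nothing in it comes from the report: the Action fields, the two hard rules, and the stub scoring function are hypothetical stand-ins.

```python
# Illustrative only: a hybrid "top-down rules + bottom-up learning" decision
# filter. A layer of fixed rules can veto any candidate action; a learned
# scorer ranks whatever the rules permit. All names, rules, and numbers here
# are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    target_is_combatant: bool
    expected_collateral: float  # 0.0 (none) to 1.0 (severe), a made-up scale

def violates_rules(action: Action) -> bool:
    """Top-down layer: hard constraints the system may never break."""
    if not action.target_is_combatant:
        return True  # never engage non-combatants
    if action.expected_collateral > 0.3:
        return True  # reject actions with high expected collateral damage
    return False

def learned_preference(action: Action) -> float:
    """Bottom-up layer: a stub standing in for a model trained from experience."""
    return 1.0 - action.expected_collateral

def choose_action(candidates: list[Action]) -> Optional[Action]:
    """Hybrid decision: filter by the rules first, then rank what remains."""
    permitted = [a for a in candidates if not violates_rules(a)]
    if not permitted:
        return None  # no ethically permissible option; hold fire
    return max(permitted, key=learned_preference)

if __name__ == "__main__":
    options = [
        Action("engage_clear_target", True, 0.1),
        Action("engage_near_civilians", True, 0.6),
        Action("engage_unidentified", False, 0.0),
    ]
    print(choose_action(options))  # -> the low-collateral, clear-target option
```

The structural point is simply that the learned layer can only rank options the rule layer has already admitted: the rules cap the damage bad learning can do, while the learning supplies the flexibility that rigid rules lack.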

The development of autonomous robot warriors stirs concerns beyond just safety, the authors acknowledge:

Some have [suggested that] the rise of such autonomous robots creates risks that go beyond specific harms to societal and cultural impacts. For instance, is there a risk of (perhaps fatally?) affronting human dignity or cherished traditions (religious, cultural, or otherwise) in allowing the existence of robots that make ethical decisions? Do we ‘cross a threshold’ in abrogating this level of responsibility to machines, in a way that will inevitably lead to some catastrophic outcome? Without more detail and reason for worry, such worries as this appear to commit the ‘slippery slope’ fallacy. But there is worry that as robots become ‘quasi‐persons,’ even under a ‘slave morality’, there will be pressure to eventually make them into full‐fledged Kantian‐autonomous persons, with all the risks that entails. What seems certain is that the rise of autonomous robots, if mishandled, will cause popular shock and cultural upheaval, especially if they are introduced suddenly and/or have some disastrous safety failures early on.

The good news, according to the authors, is that emotionless machines have certain built-in ethical advantages over human warriors. “Robots,” they write, “would be unaffected by the emotions, adrenaline, and stress that cause soldiers to overreact or deliberately overstep the Rules of Engagement and commit atrocities, that is to say, war crimes. We would no longer read (as many) news reports about our own soldiers brutalizing enemy combatants or foreign civilians to avenge the deaths of their brothers in arms—unlawful actions that carry a significant political cost.” Of course, this raises deeper issues, which the authors don’t address: Can ethics be cleanly disassociated from emotion? Would the programming of morality into robots eventually lead, through bottom-up learning, to the emergence of a capacity for emotion as well? And would, at that point, the robots have a capacity not just for moral action but for moral choice – with all the messiness that goes with it?

The avatar of my father

HORATIO: O day and night, but this is wondrous strange.

The Singularity – the prophesied moment when artificial intelligence leaps ahead of human intelligence, rendering man both obsolete and immortal – has been jokingly called “the rapture of the geeks.” But to Ray Kurzweil, the most famous of the Singularitarians, it’s no joke. In a profile in the current issue of Rolling Stone (not available online), Kurzweil describes how, in the wake of the Singularity, it will become possible not only to preserve living people for eternity (by uploading their minds into computers) but to resurrect the dead.

Kurzweil looks forward in particular to his reunion with his beloved father, Fredric, who died in 1970. “Kurzweil’s most ambitious plan for after the Singularity,” writes Rolling Stone’s David Kushner, “is also his most personal”:

Using technology, he plans to bring his dead father back to life. Kurzweil reveals this to me near the end of our conversation … In a soft voice, he explains how the resurrection would work. “We can find some of his DNA around his grave site – that’s a lot of information right there,” he says. “The AI will send down some nanobots and get some bone or teeth and extract some DNA and put it all together. Then they’ll get some information from my brain and anyone else who still remembers him.”

When I ask how exactly they’ll extract the knowledge from his brain, Kurzweil bristles, as if the answer should be obvious: “Just send nanobots into my brain and reconstruct my recollections and memories.” The machines will capture everything: the piggyback ride to the grocery store, the bedtime reading of Tom Swift, the moment he and his father rejoiced when the letter of acceptance from MIT arrived. To provide the nanobots with even more information, Kurzweil is safeguarding the boxes of his dad’s mementos, so the artificial intelligence has as much data as possible from which to reconstruct him. Father 2.0 could take many forms, he says, from a virtual-reality avatar to a fully functioning robot … “If you can bring back life that was valuable in the past, it should be valuable in the future.”

There’s a real poignancy to Kurzweil’s dream of bringing his dad back to life by weaving together strands of DNA and strands of memory. I could imagine a novel – by Ray Bradbury, maybe – constructed around his otherworldly yearning. Death makes strange even the most rational of minds.

Cloud gazing

For those of you who just can’t get enough of this cloud thing, here’s some weekend reading. Berkeley’s Reliable Adaptive Distributed Systems Laboratory – the RAD Lab, as it’s groovily known – has a new white paper, Above the Clouds: A Berkeley View of Cloud Computing, that examines the economics of the cloud model, from both a user’s and a supplier’s perspective, and lays out the opportunities and obstacles that will likely shape the development of the industry in the near to medium term. And, in the new issue of IEEE Spectrum, Randy Katz surveys the state of the art in the construction of cloud data centers.

Another little IBM deal

On August 12, 1981, 28 long years ago, IBM introduced its personal computer, the IBM PC. Hidden inside was an operating system called MS-DOS, which the computing giant had licensed from a pipsqueak company named Microsoft. IBM didn’t realize it at the time, but the deal, which allowed Microsoft to maintain its ownership of the operating system and to license it to other companies, turned out to be the seminal event in defining the commercial landscape for the computing business throughout the ensuing PC era. IBM, through the deal, anointed Microsoft as the dominant company of that era.

Today, as a new era in computing dawns, IBM announced another deal, this time with Amazon Web Services, a pipsqueak in the IT business but an early leader in cloud computing. Under the deal, corporations and software developers will be able to run IBM’s commercial software in Amazon’s cloud. As the Register’s Timothy Prickett Morgan reports, “IBM announced that it would be deploying a big piece of its database and middleware software stack on Amazon’s Elastic Compute Cloud (EC2) service. The software that IBM is moving out to EC2 includes the company’s DB2 and Informix Dynamic Server relational databases, its WebSphere Portal and sMash mashup tools, and its Lotus Web Content Management program … The interesting twist on the Amazon-IBM deal is that Big Blue is going to let companies that have already bought software licenses run that software out on the EC2 cloud, once the offering is generally available.”

Prickett Morgan also notes, “If compute clouds want to succeed as businesses instead of toys, they have to run the same commercial software that IT departments deploy internally on their own servers. Which is why [the] deal struck between IBM and Amazon’s Web Services subsidiary is important, perhaps more so for Amazon than for Big Blue.”

It doesn’t seem like such a big deal, and it probably isn’t. But you never know. The licensing of MS-DOS seemed like small potatoes when it happened. Could the accidental kingmaker have struck again?

UPDATE: Dana Gardner speculates on the upshot.

The automatically updatable book

Your library has been successfully updated.
The next update is scheduled for 09:00 tomorrow.
Click this message to continue reading.

One of the things that happens when books and other writings start to be distributed digitally through web-connected devices like the Kindle is that their text becomes provisional. Automatic updates can be sent through the network to edit the words stored in your machine – similar to the way that, say, software on your PC can be updated automatically today. This can, obviously, be a very useful service. If you buy a tourist guide to a city and one of the restaurants it recommends goes out of business, the recommendation can easily be removed from all the electronic versions of the guide. So you won’t end up heading off to a restaurant that doesn’t exist – something that happens fairly regularly with printed guides, particularly ones that are a few years old. If the city guide is published only in electronic form through connected devices, the old recommendation in effect disappears forever – it’s erased from the record. It’s as though the recommendation was never made.
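
To see why the text becomes provisional, it helps to picture the update mechanism at its crudest. The sketch below is hypothetical (it is not how the Kindle or any real e-reader actually syncs), but it shows how a publisher-pushed patch could silently rewrite a locally stored book, leaving no copy of the old wording behind:

```python
# Hypothetical sketch of a "silent" e-book update: the device stores the text
# locally and, on sync, applies find-and-replace patches pushed from the
# publisher's servers. No real device's protocol is described here.

book = {
    "title": "City Guide",
    "text": "For dinner, try Chez Albert on Rue Cler. Reservations advised.",
}

# A pushed patch: the old wording is overwritten and nothing records that it existed.
patches = [
    {
        "find": "try Chez Albert on Rue Cler. Reservations advised.",
        "replace": "try Bistro Lumiere on Rue Cler.",
    },
]

def sync(book: dict, patches: list[dict]) -> dict:
    """Apply each patch in place; afterward the earlier text simply no longer exists."""
    for patch in patches:
        book["text"] = book["text"].replace(patch["find"], patch["replace"])
    return book

print(sync(book, patches)["text"])
# -> For dinner, try Bistro Lumiere on Rue Cler.
```

In the guidebook case this is a benign correction, and the stale recommendation just vanishes.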

Which is okay for guidebooks, but what about for other books? If you look ahead, speculatively, to a time when more and more books start being published only in electronic versions and distributed through Kindles, smartphones, PCs, and other connected devices, does history begin to become as provisional as the text in the books? Stephanie at UrbZen sketches out the dark scenario:

Consider that for everything we gain with a Kindle—convenience, selection, immediacy—we’re losing something too. The printed word—physically printed, on paper, in a book—might be heavy, clumsy or out of date, but it also provides a level of permanence and privacy that no digital device will ever be able to match. In the past, restrictive governments had to ban whole books whose content was deemed too controversial, inflammatory or seditious for the masses. But then at least you knew which books were being banned, and, if you could get your hands on them, see why. Censorship in the age of the Kindle will be more subtle, and much more dangerous.

Consider what might happen if a scholar releases a book on radical Islam exclusively in a digital format. The US government, after reviewing the work, determines that certain passages amount to national security threat, and sends Amazon and the publisher national security letters demanding the offending passages be removed. Now not only will anyone who purchases the book get the new, censored copy, but anyone who had bought the book previously and then syncs their Kindle with Amazon—to buy another book, pay a bill, whatever—will, probably unknowingly, have the old version replaced by the new, “cleaned up” version on their device. The original version was never printed, and now it’s like it didn’t even exist. What’s more, the government now has a list of everyone who downloaded both the old and new versions of the book.

Stephanie acknowledges that this scenario may come off as “a crazy conspiracy theory spun by a troubled mind with an overactive imagination.” And maybe that’s what it is. Still, she’s right to raise the issue. The unanticipated side effects of new technologies often turn out to be their most important effects. Printed words are permanent. Electronic words are provisional. The difference is vast and the implications worth pondering.