Atomic balm

Is the worm turning? Are we tiring of fiddling with symbols on displays, watching the pixels flow? Are we beginning to yearn for stuff again? Are things the new thing? Genevieve Bell, a top Intel researcher, tells The Atlantic that she senses the answer is yes:

We’ve been in a decade of dematerialization, all the markers of identity. You and I, when we were younger, knew how to talk about ourselves, to ourselves and others, through physical stuff–music, the books on our shelves, photos. We’ve gone through a period where a lot of that content is dematerialized. It became virtual. You could send people playlists, but it’s not the same as having someone go through your record collection. It had a different sort of intimacy.

And it doesn’t surprise me that after 10 years of early-adoptive dematerialization, all the identity work and now the seduction of physical objects has come back in full force. Now it’s kind of a pendulum: we move between the virtual and the real a great deal. And we have historically–that’s hardly a new thing. I suspect that part of what we’re seeing with the Etsy maker and that whole spectrum is a kind of need for physical things because so much has become digital, and in fact, what’s being manifested in some of these places is really a reprise of physical stuff. Physicality has kind of come back.

How strong is the rematerialization countertrend? I don’t think we know yet. Probably less strong than Bell suggests, I’d guess. Still, it’s interesting to consider that, when it comes to the way we behave today, we don’t really know for sure what’s mere faddishness and what’s enduring. Sleep lightly, avatars.

Moral code

So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?

This is the type of scenario that NYU psychology professor Gary Marcus considers as he ponders the rapid approach of a time when “it will no longer be optional for machines to have ethical systems.” As we begin to have computer-controlled cars, robots, and other machines operating autonomously out in the chaotic human world, situations will inevitably arise in which the software has to choose between a set of bad, even horrible, alternatives. How do you program a computer to choose the lesser of two evils? What are the criteria, and how do you weigh them? Since we humans aren’t very good at codifying responses to moral dilemmas ourselves, particularly when the precise contours of a dilemma can’t be predicted ahead of its occurrence, programmers will find themselves in an extraordinarily difficult situation. And one assumes that they will carry a moral, not to mention a legal, burden for the code they write.
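
To make the difficulty concrete, here is a minimal sketch, in Python, of what a lesser-of-two-evils routine might look like once it has been reduced to an expected-harm calculation. Every name and number in it is a placeholder; nothing here reflects anything Google or any automaker has actually disclosed.

```python
# A purely hypothetical sketch, not anyone's actual system: reduce "the
# lesser of two evils" to minimizing expected harm. The arithmetic is
# trivial; assigning the harm numbers is the burden described above.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    probability: float  # chance this outcome occurs if the action is taken
    harm: float         # harm score; choosing this scale is the ethical act

def expected_harm(outcomes):
    return sum(o.probability * o.harm for o in outcomes)

def choose_action(actions):
    """Return the action whose possible outcomes carry the least expected harm."""
    return min(actions, key=lambda name: expected_harm(actions[name]))

# Placeholder numbers only; no one knows how to set them defensibly.
actions = {
    "swerve off the bridge": [Outcome("driver killed", 0.3, 1.0)],
    "stay on course":        [Outcome("three children struck", 0.9, 3.0)],
}
print(choose_action(actions))  # -> "swerve off the bridge"
```

The min() call is the easy part. Deciding what belongs in the harm column, and what weight a child's life carries against a passenger's, is the part nobody will want to put their name to.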

The military, which already operates automated killing machines, will likely be the first to struggle in earnest with the problem. Indeed, as Spencer Ackerman noted yesterday, the U.S. Department of Defense has just issued a directive that establishes rules “designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.” One thing the Pentagon hopes to ensure is that, when autonomous weapons use force, “appropriate levels of human judgment” are incorporated into the decisions. But nowhere is the world more chaotic than in a war zone, and as fighting machines gain more sophistication and autonomy and are given more responsibility, “unintended engagements” will happen. Barring some major shift in strategy, a military robot or drone will eventually be in an ambiguous situation and have to make a split-second decision with lethal consequences. Shoot, or hold fire?

We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.

Reflections

The glass mirror, which began to be widely produced in the 16th century, tends to be characterized as a tool of self-love: one gazes at the image in the glass as Narcissus gazed at the reflection in the water. But, as Lewis Mumford suggests in his 1934 masterwork Technics and Civilization, the mirror is better characterized as a tool of self-loathing:

The self in the mirror corresponds to the physical world that was brought to light by natural science in the same epoch: it was the self in abstracto, only part of the real self, the part that one can divorce from the background of nature and the influential presence of other men. But there is a value in this mirror personality that more naive cultures did not possess. If the image one sees in the mirror is abstract, it is not ideal or mythical: the more accurate the physical instrument, the more sufficient the light on it, the more relentlessly does it show the effects of age, disease, disappointment, frustration, slyness, covetousness, weakness — these come out quite as clearly as health and joy and confidence. Indeed, when one is completely whole and at one with the world one does not need the mirror: it is in the period of psychic disintegration that the individual personality turns to the lonely image to see what in fact is there and what he can hold on to; and it was in the period of cultural disintegration that men began to hold the mirror up to outer nature.

It is the vanity of neuroticism more than the vanity of narcissism that the mirror encourages.

Social networks like Facebook are also reflective media, but the image of us that they return, insistently, is very different from the one presented by the glass. What’s reflected by the network is not the part of the self “that one can divorce from … the influential presence of other men.” Rather, it is the part of the self that one cannot divorce from the social milieu. It is, in that sense, more “mythical” than physical. We project an idealized version of the self, formed for social consumption, and the reflection we receive, continually updated, reveals how the image was actually interpreted by society. We can then adjust the projection in response to the reflection, in hopes of bringing the reflection closer to the projected ideal. And so it goes. The “influential presence of other men” becomes inescapable. It is there, tangibly so, even when we are alone. The image reflected in the screen remains a lonely image, but it reflects not outer nature but outer society: the light of others’ eyes.

What we see in the mirror may be, literally and figuratively, dispiriting, but at least it sets us on firm ground. The glass can be monomaniacally cruel, but it is always monomaniacally fair. The screen’s disintegration of the self is more insidious, if only because what’s reflected never precisely matches what’s projected. There’s nothing to “hold on to,” in Mumford’s words. There’s nothing “there.”

Head-mounted displays for reality augmentation: a survey

“Head-mounted”: It’s a lovely term, and one we’ll be hearing more frequently as our mortal frame becomes scaffolding for gadgetry. Now seems a good time to take a glance at the state of the art in head mountables.

In a patent application, Microsoft revealed its plans for a “head-mounted display” — essentially a pair of eyeglasses fitted out with a microchip, a camera, a location sensor, a network connection, and some other stuff:

[Image: Microsoft's head-mounted glasses, from the patent application]

Designed to be worn at “live events” — of both the “scripted” (e.g., opera) and the “semi-random” (e.g., baseball) varieties — the glasses will project an overlay of “supplemental information” on the proceedings, according to the application. For instance:

In FIG. 1B [below], supplemental information display elements 160, 162 and 164 is provided regarding the game in progress. Within a display device 150, a first display element of supplemental information 160 provides information regarding the pitcher 24. This indicates that the pitcher “Joe Smith” has a record of 14 wins and 900 losses, and a 2.23 earned run average (ERA).* Display element 162 provides supplemental information regarding the batter 23. The display element 162 illustrates that the batter’s name is Willie Randolph, that the batter has a 0.303 batting average, and a 0.755 slugging percentage. Display element 164 positioned at the bottom of the display indicates the score of the game. Various types of indicators and various types of supplemental information may be provided within the course of a live event. Each display element may contain all or a subset of information regarding the object.

Wow. That sounds almost as good as watching a game on TV. I’ve only recently come to realize that the great historical shortcoming of our leisure activities has been their lack of data intensity.
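
Strip away the patent-speak and those numbered "display elements" amount to a simple overlay data model. Here is a rough sketch of it; the class and field names are invented for illustration, not taken from the filing:

```python
# A rough sketch of the overlay model the patent describes: display
# elements, each tied to an on-field object and carrying all or a subset
# of that object's supplemental information. Names here are invented,
# not taken from the filing.
from dataclasses import dataclass, field

@dataclass
class DisplayElement:
    target: str                               # the object being annotated
    info: dict = field(default_factory=dict)  # all or a subset of its stats

overlay = [
    DisplayElement("pitcher Joe Smith", {"W-L": "14-900", "ERA": 2.23}),
    DisplayElement("batter Willie Randolph", {"AVG": 0.303, "SLG": 0.755}),
    DisplayElement("scoreboard", {"score": "TBD"}),  # the excerpt gives no score
]

for element in overlay:
    print(f"{element.target}: {element.info}")
```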

Microsoft’s head-mounted display is a variation on Google Glass, a more streamlined, multi-purpose piece of reality-augmentation eyewear that, circulating in prototype form, has already impressed the impressionable. In Google’s view of the future, a head-mounted display will insert a layer of media technology between, among many other things, mothers and their babies, enriching the traditionally data-parched maternal experience:

[Image: a Google Glass session, mother and augmented baby]

Google and Microsoft are far from the only players in head-mounted displays. Earlier this year, Apple was granted a patent for “methods and apparatus for treating the peripheral area of a user’s field of view in a head mounted display, and thereby creating improved comfort and usability for head mounted displays.” Vuzix, a company that makes virtual-reality gear for gamers and the military, plans to introduce its Smart Glasses M100, an Android-based augmented-reality wearable for civilians, next spring. The device looks like a cross between a Bluetooth headset and a windshield wiper:

[Image: the Vuzix Smart Glasses M100]

And Olympus, the Japanese camera maker, also has a head-mounted display, the MEG4.0, in the works. It’s shaping up to be a particularly stylish number:

All these products spring from a common source, the fabled X-Ray Specs, the first head-mounted display explicitly designed for geek wish-fulfillment:

Head-mounted displays can augment reality in many ways, of course, and it’s worth remembering that some of the most information-rich examples predate the digital era:

Not even the Nez Perce war bonnet, though, can match the reality-augmenting power of the greatest head-mounted display ever created:

One question worth keeping in mind when evaluating the new crop of head-mounted devices is whether they will end up broadening the augmentational capacity of the human eye or narrowing it.

___________________

*14 wins and 900 losses, with a 2.23 ERA? That Joe Smith is one unlucky pitcher.

Photo credits: Mom and augmented baby, Google; M100, Vuzix; MEG4.0, Olympus; Chief Joseph, public domain; eyes, Paolo Neoz (Flickr).

· · · — — — · · ·

I’m generally upbeat about progress, particularly when it involves smartphones, but I confess that this New Scientist story, headlined “Knuckles and Nails Get Invite to the Touchscreen Party,” is sending some unpleasant electrical pulses through my nervous system. Seems there’s this Carnegie Mellon computer scientist named Chris Harrison who has “built a prototype smartphone that can distinguish between touches from the knuckle, fingertip and even fingernail.” The phone “listens for the acoustic and vibrational differences between the three different types of touch. A fingertip could select an object while a knuckle tap could work like the right-click on a computer mouse and open up a submenu, for example.”

“For example”: That sounds ominous, doesn’t it?

Harrison thinks that our current crop of digital devices woefully underutilizes our digits. “A big problem with touchscreens right now is that they are very simplistic, relative to the capability of our hands,” he says. “We could do so much more.” He has started a company to commercialize the technology, and he’s in talks with smartphone manufacturers to incorporate his rap-and-tap sensor into future models.
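
For what it's worth, the interaction layer on top of such a sensor could be wired up in a few lines. Here is a minimal sketch, assuming a hypothetical upstream classifier that has already done the acoustic work; none of the names below come from Harrison's actual code or API:

```python
# Hypothetical dispatch layer on top of a touch-type classifier. It assumes
# something upstream has already used the acoustic and vibrational signature
# to label the touch; nothing here is Harrison's actual code or API.
def handle_touch(touch_type, target):
    actions = {
        "fingertip": target.select,       # ordinary tap: select the object
        "knuckle": target.open_submenu,   # works like a right-click
        "fingernail": target.annotate,    # unspecified in the article; placeholder
    }
    if touch_type not in actions:
        raise ValueError(f"unrecognized touch type: {touch_type!r}")
    actions[touch_type]()

class DemoTarget:
    def select(self):       print("selected")
    def open_submenu(self): print("submenu opened")
    def annotate(self):     print("annotated")

handle_touch("knuckle", DemoTarget())  # prints "submenu opened"
```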

I’m sure Harrison is right, physiologywise. Fingers are wonderful inventions, and we should make the most of them. But then I start to think of the sound of all those little fingernail taps and knuckle raps, and my thoughts grow dark. Have you ever sat — on a plane, say — next to one of those guys who like to tap their fingers incessantly? Oh God. The human brain, perversely, loves to amplify those irritating little taps until they take on the quality of a tympani crescendo. Now imagine that everywhere you go you encounter a touchscreen percussion section. No sooner do you enter a public place — or, hell, your own home — than your ears are pricked by all manner of random rhythmical patterns, which your brain, snapping to attention, dutifully amplifies. Ambient tinnitus! Everywhere! Nonstop!

I can hear the future. It sounds something like this:

Media and expression: theses in tweetform

1. The complexity of the medium is inversely proportional to the eloquence of the message.

2. Hypertext is a more conservative medium than text.

3. The best medium for the nonlinear narrative is the linear page.

4. Twitter is a more ruminative medium than Facebook.

5. The introduction of digital tools has never improved the quality of an art form.

6. The returns on interactivity quickly turn negative.

7. In the material world, doing is knowing; in media, the opposite is often true.

8. Facebook’s profitability is directly tied to the shallowness of its members: hence its strategy.

9. Increasing the intelligence of a network tends to decrease the intelligence of those connected to it.

10. The one new art form spawned by the computer – the videogame – is the computer’s prisoner.

11. Personal correspondence grows less interesting as the speed of its delivery quickens.

12. Programmers are the unacknowledged legislators of the world.

13. The album cover turned out to be indispensable to popular music.

14. The pursuit of followers on Twitter is an occupation of the bourgeoisie.

15. Abundance of information breeds delusions of knowledge among the unwary.

16. No great work of literature could have been written in hypertext.

17. The philistine appears ideally suited to the role of cultural impresario online.

18. Television became more interesting when people started paying for it.

19. Instagram shows us what a world without art looks like.

20. Online conversation is to oral conversation as a mask is to a face.


It’s screens all the way down

With the launch this weekend of Nintendo’s dual-screen Wii U, we seem to be crossing some new Rubicon of Virtuality. It’s not that the ability to control or augment one screen with another screen is new — you’ve been able to use a smartphone to control a TV for years — but the Wii U promises to take the two-screen lifestyle to a whole new level. We’re going to see, pretty much immediately, an explosion of innovation in the creation of experiences involving the simultaneous use of two screens. The explosion will begin in the world of videogames, but then it will spread outward, like a mushroom cloud, to many other realms.

In gaming, the incorporation of a little touchscreen monitor into a controller promises some big benefits — notably, in helping remedy the kludginess that has long characterized multiplayer action games on consoles — but it also marks, as game critic Chris Suellentrop points out, a capitulation to the tyranny of the screen. With the Wii U, Nintendo retreats from the original Wii interface, which was designed to bring a whole-body physicality to videogaming, in order to accommodate “the new mode of living that Apple’s iPhone and iPad have introduced.” We won’t be happy, it seems, until the screen wields total control over our eyes, our fingers, our minds — until its suzerainty extends to all the precincts of the cortex.

The computer screen has always been a powerful tool for dividing attention. It wraps us in a funhouse of sensory stimuli, indulges our primal instinct to shift our focus rapidly in response to changes in our environment. The dual-screen interface magnifies the effect; it divides the divisions, slices our fragmented consciousness into micro-strips. It’s the perfect interface for the natural-born scatterbrain.

You might think that the double-screen interface is, physiologically, about as far as we humans can go. We only have two hands, after all — not to mention a single field of vision. But I’m not so sure we’ve reached our limit just yet. Imagine playing a Wii U game while also wearing a Google Glass. A three-screen interface! It’s entirely doable. At that point, we’ll have pretty much augmented the reality out of reality. We’ll have rocketed our way into the astral plane.