Charcoal, shale, cotton, tangerine, sky

Those, I hear, are the official names of the colors that Google Glass will come in when the head-mounted computer is released, sometime in the next year or so, into what Larry Page this week called “the normal world.” Let me repeat those color names, because they’re beautiful and earthy and soothing:

Charcoal

Shale

Cotton

Tangerine

Sky

“More delicate than the historians’ are the map-makers’ colors,” wrote Elizabeth Bishop, and more delicate still are the marketers’.

It’s hard not to be reminded of the palette of the third generation of iMacs, released back in 2000:

Graphite

Indigo

Ruby

Sage

Snow

The Glass palette strikes me as even better, even more evocative. It may even surpass Simon & Garfunkel’s great herbal palette:

Parsley

Sage

Rosemary

Thyme

That’s a little too green-centric for a product line, anyway.

It does worry me just a little bit, though, that the Glass palette eschews green altogether. Is that a political statement? In fact, now that I think about it, the Glass palette places a disconcerting emphasis on fossil fuels. Charcoal? Shale? One can almost smell the carbon dioxide rising into Sky, almost see Cotton and Tangerine wilting in the heat. Maybe they should have included Tar Sands as a color option.

No, that would have been a downer. “Charcoal” has a much nicer lilt to it. Its emotional connotations diverge from its real-world denotations, in a way that nicely underscores both the semiotic and the marketing possibilities of reality augmentation.

What would be really cool is if the color of your Glass also determined the way the device augmented your reality. So if you wore Charcoal, you’d get this dark, goth view of the world, but if you sported Tangerine it would be like seeing existence through the eyes of a high-school cheerleader on game day. Cotton would put you into a super-mellow, slightly catatonic state of mind. Sky would give you a New Age perspective — all crystalline and feathery. Shale would be totally businesslike, the Joe Friday reality.

As for me, I’m going to hold out for Mushroom.

“IT Doesn’t Matter” at 10

My article “IT Doesn’t Matter” came out in the Harvard Business Review ten years ago this month. At Network World, Ann Bednarz has a retrospective on the article and the reaction to it, as well as an interview with me.

After the article appeared, I tracked some of the reactions to it here. Many of the links, alas, have gone dead over the last ten years, but the rundown still provides a sense of where IT stood back then, between the dot-com bust and the arrival of the cloud.

Bay Area talk: May 14

If you’re looking for something to do in San Francisco Tuesday evening, I will be having a discussion with Thomas Goetz, the former executive editor of Wired, at the Nourse Theatre at 7:30 pm. The event is part of the California Academy of Sciences’ “Conversations on Science” series, held in association with City Arts & Lectures. You can buy tickets and get more information here.

The Shallows: cartoon edition

As I was writing The Shallows, I kept thinking, “Man, if I could only draw, I’d bag all these freaking words and do this as a cartoon.” Now, thanks to the talented animators at Epipheo, my dream has been realized:

My favorite part is when I burn in videogame hell.

Speak no evil

Slashdot notes that Google has filed for a patent on what it calls a “policy violation checker,” which comprises “methods and systems for identifying problematic phrases in an electronic document, such as an e-mail.” Here’s how it works:

A context of an electronic document may be detected. A textual phrase entered by a user is captured. The textual phrase is compared against a database of phrases previously identified as being problematic phrases. If the textual phrase matches a phrase in the database, the user is alerted via an in-line notification, based on the detected context of the electronic document.

“Problematic phrases,” Google explains, “include, but are not limited to, phrases that present policy violations, have legal implications, or are otherwise troublesome to a company, business, or individual.”

The patent application, which was published last week, sketches out various ways the service might work. For instance, the “in-line notification” could take the form of the immediate “underlining or highlighting” of the problematic or troublesome word or phrase as it’s typed. The notification could also be accompanied by “a hyperlink to a webpage.” The system could also use “machine learning techniques to identify problematic phrases without human intervention.” Most interesting of all, the system could be programmed “to alert a third party to a match between a textual phrase and a phrase in the database.” For instance, “if a user creates a text document, presentation, or other document with a problematic phrase, the policy violation checker may notify a member of the legal department of the existence of the document.”
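
To make the mechanics concrete, here is a minimal sketch, in Python, of the flow the application describes. Everything in it is an assumption for illustration: the phrase database, the detect_context heuristic, and the notify_legal callback are hypothetical stand-ins, not anything taken from the filing or from Google.

```python
# A toy version of the "policy violation checker" flow sketched in the
# patent application: detect the document's context, compare typed text
# against a database of problematic phrases, alert in-line, and optionally
# notify a third party. All names and data here are hypothetical.

# Hypothetical phrase database; the filing leaves its contents open.
PROBLEMATIC_PHRASES = {
    "off the books": "legal",
    "insider information": "legal",
}

def detect_context(document: str) -> str:
    """Crude stand-in for the patent's context detection."""
    return "e-mail" if document.startswith("Subject:") else "document"

def check_phrase(typed: str, document: str, notify_legal=None):
    """Return an in-line notification if the typed text matches a
    problematic phrase; optionally alert a third party."""
    context = detect_context(document)
    for phrase, category in PROBLEMATIC_PHRASES.items():
        if phrase in typed.lower():
            if notify_legal is not None and category == "legal":
                notify_legal(phrase, document)  # the "third party" alert
            return f'[{context}] flagged: "{phrase}" may be problematic'
    return None  # nothing to flag

# Example: the phrase is caught as it's typed into an e-mail.
print(check_phrase("Let's keep this off the books.", "Subject: Q3 numbers"))
# -> [e-mail] flagged: "off the books" may be problematic
```

A real checker would presumably run incrementally on each keystroke and lean on the machine-learning classification the filing mentions; this sketch only shows the lookup-and-alert shape.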

One can imagine all sorts of immediate applications for a service that highlights and records “problematic phrases” as you type them. But it strikes me that the policy violation checker’s real potential will emerge only when Google perfects its neuronal interface — the one that Sergey Brin described as “a little version of Google that you just plug into your brain.” At that point, policy-violation checking could become preemptive. The moment a problematic thought entered your mind, you would be alerted to the looming transgression, and the thought would be deleted before it even reached the expression stage. No one else would need to know the incident had ever occurred, except, of course, the designated third party.

Photo via x-ray delta one.

The digital dualism of the rodent mind

“It’s a really welcome addition to the growing field of rodent virtual reality.” So says Northwestern University neurobiologist Daniel Dombeck in commenting on a new study, published yesterday by Science, that compares what goes on in rats’ brains when they navigate digitally created spaces with what goes on in their noggins when they navigate the real world. Rats, like humans, have place cells, which are neurons that fire reliably at particular locations and, it’s believed, play a key role in the brain’s creation of cognitive maps. The study reveals that place cells are considerably less active in virtual reality (VR) than in the real world (RW):

When Mayank Mehta, a neurophysicist at the University of California (UC), Los Angeles, compared the activity of place cells in rats running along a real, linear track with place cell activity in the rats running in virtual reality, he saw some surprising differences. In the real world, about 45% of the rats’ place cells fired at some point along the track. In virtual reality, only 22% did. “Half of the neurons just shut up,” he says.

Individual place cells also behaved radically differently in VR than they do in RW:

On a real track, [a particular place cell] would fire when [the rat] had taken two steps away from the start [of the track], and then again when the animal reached the same spot on its return trip. But in virtual reality, something odd happened. Rather than firing a second time when the rat reached the same place on its return trip, [the cell] fired when the rat was two steps away from the opposite end of the track … That’s like the same place cell in your brain firing when you’ve taken two steps away from your door and then when you’ve taken two steps away from your car. Instead of encoding a position in absolute space, the place cell seems to be keeping track of the rat’s relative distance along the (virtual) track. [Mehta] says, “This never happens in the real world.”

Mehta thinks that the difference may stem from the lack of “proximal cues” — environmental smells, sounds, and textures that provide clues to location — in the digital world:

And considering that when those cues disappear, the rat’s cognitive map appears to change from one based on absolute space to one based on relative distance, proximal cues might be the key component to how those mental maps work in the real world.

Rats’ sensory perception of the world differs from that of people — rats don’t see very well, for instance — and that (among other things) makes it hard to know whether human brains react to VR and RW in the same way. But the study at least hints at the richness of our perception of the world — a richness that is very much embodied in our physical being even though it may be hidden from our conscious mind. To me, this raises an important but rarely heard question about so-called augmented reality (AR), particularly the use of computers to add an extra layer of visual information to our conscious perception of the world: Is augmented reality also diminished reality? In other words, by adding input to one (conscious) layer of perception, do you end up degrading other (conscious and/or unconscious) layers of perception?

Is RW + AR > RW or is RW + AR < RW?

And does it matter?

Photo by UCLA Neurology.

Calling Norman Bates

First: The guys most excited about Google Glass are the same guys who install a gold-toned shower head onto chrome plumbing.

Second: Scoble looks particularly appealing when moist.

Third: I was starting to get really nervous that Glass might actually become a fashionable facial accessory, but, as Marcus Wohlsen suggests, this photo pretty much squelches that possibility.

Fourth: Scoble is my new hero.

Photo: Selfie* by Robert Scoble.

*(One hopes.)