Monthly Archives: August 2010

New frontiers in social networking

The big news this week is the launch of a National Science Foundation-funded study aimed at “developing the NeuroPhone system, the first Brain-Mobile phone Interface (BMI) that enables neural signals from consumer-level wireless electroencephalography (EEG) headsets worn by people as they go about their everyday lives to be interfaced to mobile phones and combined with existing sensor streams on the phone (e.g., accelerometers, gyroscopes, GPS) to enable new forms of interaction, communications and human behavior modeling.”

More precisely, the research, being conducted at Dartmouth College, is intended to accomplish several goals, including developing “new energy-efficient techniques and algorithms for low-cost wireless EEG headsets and mobile phones for robust sensing, processing and duty cycling of neural signals using consumer devices,” inventing “new learning and classification algorithms for the mobile phone to extract and infer cognitively informative signals from EEG headsets in noisy mobile environments,” and actually deploying “networked NeuroPhone systems with a focus on real-time multi-party neural synchrony and the networking, privacy and sharing of neural signals between networked NeuroPhones.”
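For the curious, here is roughly what “extracting cognitively informative signals” can mean in practice. The sketch below is my own illustration, not the Dartmouth team’s code; it uses the textbook approach of estimating alpha-band (8–12 Hz) power in a short window of EEG samples and thresholding it to guess whether the wearer’s eyes are closed. The sampling rate and threshold are assumed values.

```python
import numpy as np

SAMPLE_RATE_HZ = 128        # assumed headset sampling rate
ALPHA_BAND = (8.0, 12.0)    # alpha rhythm: strongest when eyes are closed

def alpha_power(window: np.ndarray) -> float:
    """Mean spectral power in the alpha band for one window of EEG samples."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / SAMPLE_RATE_HZ)
    in_band = (freqs >= ALPHA_BAND[0]) & (freqs <= ALPHA_BAND[1])
    return float(spectrum[in_band].mean())

def eyes_closed(window: np.ndarray, threshold: float = 50.0) -> bool:
    """Crude one-feature classifier: elevated alpha power suggests closed eyes.
    The threshold is a placeholder; a deployed system would learn it per user."""
    return alpha_power(window) > threshold
```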

I’ve always thought that the big problem with existing realtime social networking systems, such as Facebook and Twitter, is that they require active and deliberate participation on the part of individual human nodes (or “beings”) – i.e., typing out messages on keypads or other input devices – which not only introduces systemic delays incompatible with true realtime communication but also entails the possibility of the subjective distortion of status updates. NeuroPhones promise, by obviating the need for conscious human agency in the processing and transmission of updates, to bring us much closer to fulfilling the true realtime ideal, opening up enormous new opportunities not only in “human behavior modeling” but also in marketing.

Plus, “real-time multi-party neural synchrony” sounds like a lot of fun. I personally can’t wait to switch my NeuroPhone into vibration mode.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

UPDATE: Here is a picture of a prototype of the NeuroPhone:

[Image: NeuroPhone prototype (neurophone.jpg)]

And here is a paper in which the researchers describe the project. They note, at the end, that “sniffing packets could take on a very new meaning if brain-mobile phone interfaces become widely used. Anyone could simply sniff the packets out of the air and potentially reconstruct the ‘thoughts’ of the user. Spying on a user and detecting something as simple as them thinking yes or no could have profound effects. Thus, securing brain signals over the air is an important challenge.”
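To make that challenge concrete: the usual remedy is to authenticate and encrypt every packet before it reaches the air. Here is a minimal sketch, my own illustration rather than anything from the paper, of a phone-side relay sealing each EEG packet with AES-GCM using Python’s cryptography package; the key-provisioning step and the packet layout are assumptions.

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# 128-bit key, assumed to be provisioned when the headset is paired with the phone.
key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

def seal_packet(eeg_payload: bytes, packet_counter: int) -> bytes:
    """Encrypt and authenticate one EEG packet before it goes over the air.
    The 96-bit nonce must never repeat under the same key, so it is derived
    from a monotonically increasing packet counter."""
    nonce = packet_counter.to_bytes(12, "big")
    return nonce + aead.encrypt(nonce, eeg_payload, None)

def open_packet(sealed: bytes) -> bytes:
    """Verify and decrypt; raises InvalidTag if the packet was tampered with."""
    nonce, ciphertext = sealed[:12], sealed[12:]
    return aead.decrypt(nonce, ciphertext, None)
```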

The medium is the … squirrel!

A couple of weeks ago, MIT’s Nicholas Negroponte, chairman of the One Laptop per Child initiative, foretold the death of the printed book. Today, he foretells the death of book-reading: “I love the iPad, but my ability to read any long-form narrative has more or less disappeared, as I am constantly tempted to check e-mail, look up words or click through.”

The unread message

Five neuroscientists get into a raft. That might be the start of a mildly funny joke, but in this case it’s the premise of an article by Matt Richtel in today’s New York Times, the latest installment in the paper’s series on “computers and the brain.” Richtel accompanies the scientists as they float down a remote stretch of the San Juan River in Utah, beyond the reach of cell towers and wi-fi signals. The impetus for the trip was, Richtel reports, “to understand how heavy use of digital devices and other technology changes how we think and behave, and how a retreat into nature might reverse those effects.”

Two of the neuroscientists start the trip believing that the Net and related technologies can undermine people’s ability to pay attention, impeding deep thinking and even causing psychological problems. The other three are more sanguine about the effects of the technologies. To see what transpires, you’ll need to read the article.

The piece raises one particular idea that I found to be intriguing, and troubling. As the trip proceeds, the scientists begin to wonder “whether attention and focus can take a hit when people merely anticipate the arrival of more digital stimulation”:

“The expectation of e-mail seems to be taking up our working memory,” [Johns Hopkins professor Steven] Yantis says.

Working memory is a precious resource in the brain. The scientists hypothesize that a fraction of brain power is tied up in anticipating e-mail and other new information — and that they might be able to prove it using imaging.

“To the extent you have less working memory, you have less space for storing and integrating ideas and therefore less to do the reasoning you need to do,” says [University of Illinois professor Art] Kramer, floating nearby.

In The Shallows, I review a series of studies that indicate that the fast-paced delivery of messages and other information online overloads working memory, leading to a state of perpetual distractedness. In my research I didn’t come across the idea that the mere anticipation of receiving a fresh burst of information would also add to our cognitive load. But it makes sense. Research shows, for example, that office workers tend to glance at their email inbox 30 or more times an hour, which seems to me to be pretty clear evidence that even when we’re not reading messages we’re thinking about receiving messages – not just emails, but texts, Facebook updates, tweets, and so on.

This would also help explain why the Net continues to distract us even when we’re not online. Part of our mind is still thinking about that new message that might have just arrived in our inbox. What makes that hypothetical unread message particularly distracting is that it could actually be important. You won’t know until you’ve read it. Admit it: The suspense is killing you.

Brave New Google

In an interview published today in the Wall Street Journal, Google CEO Eric Schmidt lays out the next stage in his company’s ambitious plan to replace human agency with automated data processing, freeing us all from the nuisance of thinking:

“We’re trying to figure out what the future of search is,” Mr. Schmidt acknowledges. “I mean that in a positive way. We’re still happy to be in search, believe me. But one idea is that more and more searches are done on your behalf without you needing to type.”

“I actually think most people don’t want Google to answer their questions,” he elaborates. “They want Google to tell them what they should be doing next.”

Let’s say you’re walking down the street. Because of the info Google has collected about you, “we know roughly who you are, roughly what you care about, roughly who your friends are.” Google also knows, to within a foot, where you are. Mr. Schmidt leaves it to a listener to imagine the possibilities: If you need milk and there’s a place nearby to get milk, Google will remind you to get milk. It will tell you a store ahead has a collection of horse-racing posters, that a 19th-century murder you’ve been reading about took place on the next block.

Says Mr. Schmidt, a generation of powerful handheld devices is just around the corner that will be adept at surprising you with information that you didn’t know you wanted to know. “The thing that makes newspapers so fundamentally fascinating—that serendipity—can be calculated now. We can actually produce it electronically,” Mr. Schmidt says.

Awesome! I’ve always thought that the worst thing about serendipity was its randomness.

I hope Google will also be able to tell me the best candidate to vote for in elections. I find that such a burden.

Privacy matters

The Wall Street Journal has been running an important series about the collection and exploitation of personal information on the Net. As part of that series, it is featuring a debate today between me and the Cato Institute’s Jim Harper about online privacy – more particularly, the tradeoff between privacy and personalization.

My essay begins like this:

In a 1963 Supreme Court opinion, Chief Justice Earl Warren observed that “the fantastic advances in the field of electronic communication constitute a great danger to the privacy of the individual.” The advances have only accelerated since then, along with the dangers. Today, as companies strive to personalize the services and advertisements they provide over the Internet, the surreptitious collection of personal information is rampant. The very idea of privacy is under threat.

Most of us view personalization and privacy as desirable things, and we understand that enjoying more of one means giving up some of the other. To have goods, services and promotions tailored to our personal circumstances and desires, we need to divulge information about ourselves to corporations, governments or other outsiders.

This tradeoff has always been part of our lives as consumers and citizens. But now, thanks to the Net, we’re losing our ability to understand and control those tradeoffs — to choose, consciously and with awareness of the consequences, what information about ourselves we disclose and what we don’t. Incredibly detailed data about our lives is being harvested from online databases without our awareness, much less our approval …

And here’s the start of Harper’s piece:

If you surf the web, congratulations! You are part of the information economy. Data gleaned from your communications and transactions grease the gears of modern commerce. Not everyone is celebrating, of course. Many people are concerned and dismayed—even shocked—when they learn that “their” data are fuel for the World Wide Web.

Who is gathering the information? What are they doing with it? How might this harm me? How do I stop it?

These are all good questions. But rather than indulging the natural reaction to say “stop,” people should get smart and learn how to control personal information. There are plenty of options and tools people can use to protect privacy—and a certain obligation to use them. Data about you are not “yours” if you don’t do anything to control them. Meanwhile, learning about the information economy can make clear its many benefits …

Charlie bit my cognitive surplus

“You can say this for the technological revolution; it’s cut way down on television.” So writes Rebecca Christian in a column for the Telegraph Herald in Dubuque. She’s not alone in assuming that the increasing amount of time we devote to the web is reducing the time we spend watching TV. It’s a common assumption. And, like many common assumptions, it’s wrong. Despite the rise of digital media – or perhaps because of it – Americans are watching more TV than ever.

The Nielsen Company has been tracking media use for decades, and it reported last year that in the first quarter of 2009, the amount of time Americans spend watching TV hit its highest level ever – the average American was watching 156 hours and 24 minutes of TV a month. Now, Nielsen has come out with an update for the first quarter of 2010. Once again, TV viewing has hit a new record, with the average American now watching 158 hours and 25 minutes of TV a month, a gain of just over two hours in the past twelve months. Although two-thirds of Americans now have broadband Internet access at home, TV viewing continues its seemingly inexorable rise.

And the Nielsen TV numbers actually understate our consumption of video programming, because the time we spend viewing video on our computers and cell phones is also going up. The average American with Internet access is now watching 3 hours and 10 minutes of video on Net-connected computers every month, Nielsen reports, and the average American with a video-capable cell phone is watching an additional 3 hours and 37 minutes of video on his or her phone every month. Not surprisingly, expanding people’s access to video programming increases their consumption of that programming. The spread of high-definition digital TVs and broadcasts appears to be another factor propelling TV viewing upward, says Nielsen.

What about the young? Surely, so-called “digital natives” are watching less TV, right? Nope. The young, too, continue to ratchet up their TV viewing. A recent study of media habits by Deloitte showed, in fact, that over the past year people in the 14-to-26 age bracket increased their TV watching by a greater percentage than any other age group. An extensive Kaiser Family Foundation study released earlier this year found that while young people appear to be spending a little less time in front of TV sets today than they did five years ago, that decline is offset by increased viewing of television programming on computers, cell phones, and iPods. Overall, “the proliferation of new ways to consume TV content has actually led to an increase of 38 minutes of daily TV consumption” by the young, reports Kaiser. Nielsen, too, finds that TV viewing continues to rise among children, teens, and young adults.

What about the rise of amateur media production, abetted by sites like YouTube? That trend, at least, must be shifting us away from media consumption. Wrong again. As Bradley Bloch explained in a recent Huffington Post article, the ease with which amateur media productions can be distributed online actually has the paradoxical effect of increasing people’s media consumption even more than it increases their media production. “Even if we count posting a LOLcat as a creative act,” observes Bloch, “there are many more people looking at LOLcats than there are creating them.” Bloch runs the numbers on one oft-viewed YouTube entertainment: “One of the most popular videos on YouTube, ‘Charlie bit my finger – again!’ depicting a boy sticking his fingers in his little brother’s mouth, has been viewed 211 million times. Something that took 56 seconds to create – and which was only intended to be seen by the boys’ godfather – has sucked up the equivalent of 1600 people working 40 hours a week for a year. Now that’s leverage.” By giving us easy and free access to millions of short-form video programs, the web allows us to cram ever more video-viewing into the nooks and crannies of our daily lives.
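Bloch’s figure is easy to verify. A quick back-of-the-envelope check, purely illustrative:

```python
# Check Bloch's arithmetic: total viewing time of "Charlie bit my finger"
# versus 1,600 people working 40-hour weeks for a year.
views = 211_000_000
clip_seconds = 56

total_hours = views * clip_seconds / 3600      # about 3.28 million hours
work_year_hours = 40 * 52                      # one full-time year: 2,080 hours
person_years = total_hours / work_year_hours

print(f"{total_hours:,.0f} hours, or about {person_years:,.0f} person-years")
# prints: 3,282,222 hours, or about 1,578 person-years -- close to Bloch's 1,600
```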

To give an honest accounting of the effects of the Net on media consumption, you need to add the amount of time that people spend consuming web media to the amount of time they already spend consuming TV and other traditional media. Once you do that, it becomes clear that the arrival of the web has not reduced the time people spend consuming media but increased it substantially. As consumption-oriented Internet devices, like the iPad, grow more popular, we will likely see even greater growth in media consumption. The web, in other words, marks a continuation of a long-term cultural trend, not a reversal of it.

Take it away, Charlie: