Category Archives: Realtime

Exile from realtime

I’ve got a bad case of the shakes today, and it has nothing to do with the M-80s and bottle rockets going off into the wee hours last night. No, over the long weekend I was cast out of realtime. I had no warning, no time to prepare for my reentry into the drab old chronological order. I feel like a refugee living in a crappy tent in a muddy field on the outskirts of some godforsaken country. I know exactly how T. S. Eliot felt when he wrote “Ridiculous the waste sad time / Stretching before and after.”

What happened is that Google turned off its spigot of realtime results. I still see the “Realtime” option in the drop-down list of search options, but when I click on it, it returns nothing. Just a horrifying whiteness, like a marble tombstone before the letters are carved. And the “Latest” option for arranging results that used to appear in the left-hand column of search tools has been replaced by “Past hour.” Past hour? Are you kidding me? Why not just say “Eternity”? I freaking lived in “Latest,” with its single page of perpetually updated results, punctuated by pithy little tweets from all manner of avatarial life. It was pure pre-algorithmic democracy, visceral as raw beef.

Now, the stream is dry.

Apparently this all stems from some tiff between Google and Twitter. The two Internet Goliaths – okay, one Goliath and one mini-Goliath – had a pact that allowed Google to stream tweets in its results, but that agreement went kaput on Saturday. So on Sunday morning Google put a cork in the firehose. And left me an exile from realtime.

Time itself is contingent on the vagaries of online competition. As flies to wanton boys are we to the Gods of the Net.

I take some solace from a statement that came out of the Plex on Sunday: “We’ve temporarily disabled google.com/realtime. We’re exploring how to incorporate our recently launched Google+ project into this functionality going forward, so stay tuned.”

Living in realtime is all about staying tuned. Staying tuned is the way we live today. Rest assured, Googlers, that I will keep hitting Refresh until “this functionality” returns. The alternative is too distressing to ponder. I need my Now.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

New frontiers in social networking

The big news this week is the launch of a National Science Foundation-funded study aimed at “developing the NeuroPhone system, the first Brain-Mobile phone Interface (BMI) that enables neural signals from consumer-level wireless electroencephalography (EEG) headsets worn by people as they go about their everyday lives to be interfaced to mobile phones and combined with existing sensor streams on the phone (e.g., accelerometers, gyroscopes, GPS) to enable new forms of interaction, communications and human behavior modeling.”

More precisely, the research, being conducted at Dartmouth College, is intended to accomplish several goals, including developing “new energy-efficient techniques and algorithms for low-cost wireless EEG headsets and mobile phones for robust sensing, processing and duty cycling of neural signals using consumer devices,” inventing “new learning and classifications algorithms for the mobile phone to extract and infer cognitively informative signals from EEG headsets in noisy mobile environments,” and actually deploying “networked NeuroPhone systems with a focus on real-time multi-party neural synchrony and the networking, privacy and sharing of neural signals between networked NeuroPhones.”
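Strip away the grant-speak and the pipeline being described is simple enough in outline: sample a neural signal, pull out a few informative frequency-band features, and let the phone classify them. For the curious, here is a toy sketch of that idea in Python. The synthetic signal, the band definitions, and the relaxed-versus-alert rule are my own illustrative inventions, not anything taken from the Dartmouth project:

    import numpy as np

    FS = 128  # sampling rate (Hz), typical of consumer EEG headsets

    def band_power(signal, fs, low, high):
        # Average spectral power of the signal between low and high Hz.
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        mask = (freqs >= low) & (freqs < high)
        return spectrum[mask].mean()

    def classify(signal, fs=FS):
        # Toy rule, invented for illustration: compare alpha (8-13 Hz)
        # and beta (13-30 Hz) power to guess "relaxed" vs. "alert".
        alpha = band_power(signal, fs, 8, 13)
        beta = band_power(signal, fs, 13, 30)
        return "relaxed" if alpha > beta else "alert"

    # Synthetic one-second epoch: a 10 Hz alpha rhythm buried in noise.
    t = np.arange(FS) / FS
    epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(FS)
    print(classify(epoch))  # most likely "relaxed"

The real system would wrap something like this in the duty cycling, denoising, and energy budgeting the proposal mentions; the point is only that the “inference” is ordinary signal processing.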

I’ve always thought that the big problem with existing realtime social networking systems, such as Facebook and Twitter, is that they require active and deliberate participation on the part of individual human nodes (or “beings”) – i.e., typing out messages on keypads or other input devices – which not only introduces systemic delays incompatible with true realtime communication but also entails the possibility of the subjective distortion of status updates. NeuroPhones promise, by obviating the need for conscious human agency in the processing and transmission of updates, to bring us much closer to fulfilling the true realtime ideal, opening up enormous new opportunities not only in “human behavior modeling” but also in marketing.

Plus, “real-time multi-party neural synchrony” sounds like a lot of fun. I personally can’t wait to switch my NeuroPhone into vibration mode.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

UPDATE: Here is a picture of a prototype of the NeuroPhone:

[image: neurophone.jpg]

And here is a paper in which the researchers describe the project. They note, at the end, that “sniffing packets could take on a very new meaning if brain-mobile phone interfaces become widely used. Anyone could simply sniff the packets out of the air and potentially reconstruct the ‘thoughts’ of the user. Spying on a user and detecting something as simple as them thinking yes or no could have profound effects. Thus, securing brain signals over the air is an important challenge.”
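The remedy they gesture at is ordinary cryptographic hygiene applied to an extraordinary payload: encrypt every packet before it leaves your scalp. A minimal sketch, using the symmetric Fernet scheme from Python’s third-party cryptography package; the packet contents and the key handling are invented for illustration:

    from cryptography.fernet import Fernet

    # In practice the key would be provisioned when headset and phone
    # are paired; here we simply generate one for the demonstration.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # A hypothetical raw EEG packet: 128 one-byte samples.
    packet = bytes(range(128))

    token = cipher.encrypt(packet)          # what actually travels over the air
    assert cipher.decrypt(token) == packet  # only the key holder recovers the signal

A sniffer plucking the token out of the air learns nothing about the content of your thoughts, though it can still see that you are having them.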

Nowness

“Ripeness,” Shakespeare told us, “is all.” The Bard did not anticipate the realtime web. On the New Net, ripeness is nothing. Nowness is all, as David Gelernter tells us in his essay “Time to Start Taking the Web Seriously.” Web 2.0 was supposed to bring us a creative outpouring of “social production.” Instead it’s tossed us into the rapids of instant communication. The Web has become a vast multimedia telephone system, where everyone is on the same party line, exchanging millions of bite-sized updates and alerts with every tick of the clock. Google, Facebook, Twitter: the Net’s commercial giants are locked in a fierce competitive battle to speed up “the stream.”

The Net’s bias, Gelernter explains, is toward the fresh, the new, the now. Nothing is left to ripen. History gets lost in the chatter. But, he suggests, we can correct that bias. We can turn the realtime stream into a “lifestream,” tended by historians, along which the past will crystallize into rich, digital deposits of knowledge. We will leap beyond Web 2.0 to “the post-Web,” where all the views are long.

It’s a pretty vision. I wish I could believe it. There are times when human beings are able to correct the bias of a technology. There are other times when we make the bias of an instrument our own. Everything we’ve seen in the development of the Net over the past 20 years, and, indeed, in the development of mass media over the past 50 years, indicates that what we’re seeing today is an example of the latter phenomenon. We are choosing nowness over ripeness.

This post, which appeared originally, in a slightly different form, at Edge.org, is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

The crystal stream

David Gelernter peers into the ineffable nowness of realtime:

Nowness is one of the most important cultural phenomena of the modern age: the western world’s attention shifted gradually from the deep but narrow domain of one family or village and its history to the (broader but shallower) domains of the larger community, the nation, the world. The cult of celebrity, the importance of opinion polls, the decline in the teaching and learning of history, the uniformity of opinions and attitudes in academia and other educated elites — they are all part of one phenomenon. Nowness ignores all other moments but this. In the ultimate Internet culture, flooded in nowness like a piazza flooded in sea water, drenched in a tropical downpour of nowness, everyone talks alike, dresses alike, thinks alike.

And then, like his forerunner Vannevar Bush, he conjures up a future in which a technology is refashioned to solve the problem it created:

Once we understand the inherent bias in an instrument, we can correct it. The Internet has a large bias in favor of now. Using lifestreams (which arrange information in time instead of space), historians can assemble, argue about and gradually refine timelines of historical fact … Images, videos and text will accumulate around such streams. Eventually they will become shared cultural monuments in the Cybersphere. Before long, all personal, familial and institutional histories will take visible form in streams. A lifestream is tangible time: as life flashes past on waterskis across time’s ocean, a lifestream is the wake left in its trail. Dew crystallizes out of the air along cool surfaces; streams crystallize out of the Cybersphere along veins of time. As streams begin to trickle and then rush through the spring thaw in the Cybersphere, our obsession with “nowness” will recede, the dykes will be repaired and we will clean up the damaged piazza of modern civilization.

Around every technological bend lies utopia, where the streams are crystal and the levees never break.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

Raising the realtime child

Amazingly enough, tomorrow will mark the one-year anniversary of the start of Rough Type’s Realtime Chronicles. Time flies, and realtime flies like a bat out of hell.

Since I began writing the series, I have received innumerable emails and texts from panicked parents worried that they may be failing in what has become the central challenge of modern parenting: ensuring that children grow up to be well adapted to the realtime environment. These parents are concerned – and rightly so – that their kids will be at a disadvantage in the realtime milieu in which we all increasingly live, work, love, and compete for the small bits of attention that, in the aggregate, define the success, or failure, of our days. If maladapted to realtime existence, these parents understand, their progeny will end up socially ostracized, with few friends and even fewer followers. “Can we even be said to be alive,” one agitated young mother wrote me, “if our status updates go unread?” The answer, of course, is no. In the realtime environment, the absence of interactive stimuli, even for brief periods of “time,” may result in a state of reflective passivity indistinguishable from nonexistence.

On a more practical level, a lack of realtime skills is sure to constrain a young person’s long-term job prospects. At best, he or she will be fated to spend his or her days involved in some form of manual labor, possibly even working out of doors with severely limited access to screens. At worst, he or she will have to find a non-tenure-track position in academia.

Fortunately, raising the realtime child is not difficult. The newborn human infant, after all, leads a purely realtime existence, immersed entirely in the “stream” of realtime alerts and stimuli. As long as the child is kept in the crosscurrents of the messaging stream from the moment of parturition – the biological womb replaced immediately with the Wi-Fi and/or 3G womb – adaptation to the realtime environment will likely be seamless and complete. It is only when a sense that time may consist of something other than the immediate moment is allowed to impinge on the child’s consciousness that maladaptation to realtime becomes a possibility. Hence, the most pressing job for the parent is to ensure that the realtime child is kept in a device-rich networked environment at all times.

[image: realtimekids.jpg]

[photo credit: Wesley Fryer; CC BY 2.0]

It is also essential that the realtime child never be allowed to run a cognitive surplus. His or her mental accounts must always be kept in perfect balance, with each synaptic firing being immediately deployed for a well-defined chore, preferably involving the manipulation of symbols on a computer screen in a collaborative social-production exercise. If cognitive cycles are allowed to go to waste, the child may drift into an introspective “dream state” outside the flow of the realtime stream. It is wise to ensure that your iPhone is well populated with apps suitable for children, as this will provide a useful backup should your child break, lose, or otherwise be separated from his or her own network-enabled devices. Printed books should in general be avoided, as they also tend to promote an introspective dream state, though multifunctional devices that include e-reading apps, such as Apple’s forthcoming iPad, are permissible.

The out-of-doors poses particular problems for the realtime child, as nature has in the past earned a reputation for inspiring states of introspectiveness and even contemplativeness in impressionable young people. (Some psychologists even suggest that looking out a window may be dangerous to the mental health of the realtime child.) Sometimes it is simply impractical to keep a child from interacting with the natural world. At these moments, it is all the more important that a child be outfitted with portable electronic devices, including music players, smartphones, and gaming instruments, in order to ensure no break in the digital stream. If you are not able to physically accompany your child on expeditions into the natural world, it is a good idea to send text messages to your child every few minutes just to be on the safe side. The establishment of Twitter accounts for children is also highly recommended.

[image: bloggedchild.jpg]

[photo credit: Robert Scoble; CC BY 2.0]

The challenges of keeping your child in a realtime environment can be trying, but remember: history is on your side. The realtime environment becomes increasingly ubiquitous with each passing day. It is also important to remember that one of the great joys of modern parenthood is documenting your realtime infant’s or toddler’s special moments through texts, tweets, posts, uploaded photos, and YouTube clips. The realtime child presents ideal messaging-fodder for the realtime parent.

Realtime is a journey that you and your child take together. Every moment is unique because every moment is disconnected from both the one that precedes it and the one that follows it. Realtime is a state of perpetual renewal and unending and undifferentiated stimulus. The joy of infancy continues forever.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

Does my tweet look fat?

As the velocity of communication approaches realtime, language compresses.

Think about it. When people originally started talking about Twitter, the first thing they’d always mention was the 140-character limit that the service imposes on tweets. So short! Who can say anything in 140 lousy characters? Crazy!

And it’s true that when a person who is used to longer forms of writing starts emitting tweets, keeping to just 140 characters can be a challenge. You actually have to think a bit about how to squeeze your thoughts to fit the format. It doesn’t take long, though, for a twitterer to adapt to the new medium, and once you’re fully adapted something funny happens. Not only does the sense that 140 characters is a constraint disappear; 140 characters starts to seem, well, long. Your own tweets shrink, and it becomes kind of annoying when somebody actually uses the full 140 characters. Jeez, I’m going to skip that tweet. It’s too long.

The same thing has happened, of course, with texting. Who sends a 160-character text? A 160-character text would feel downright Homeric. And that’s what a 140-character tweet is starting to feel like, too.

I think our alphabetic system of writing may be doomed. It doesn’t work well with realtime communication. That’s why people are forced to use all sorts of abbreviations and symbols – the alphabet’s just too damn slow. In the end, I bet we move back to a purely hieroglyphic system of writing, with the number of available symbols limited to what can fit onto a smartphone keypad. Honestly, I think that communicating effectively in realtime requires no more than 25 or 30 units of meaning.

Give me 30 glyphs and a URL shortener, and I’m good.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

The eternal conference call

What goes around comes around, if always a little faster.

Remember when we first started using email, back in the foggy depths of the twentieth century? The great thing about email, everyone said and everyone believed, was that it was an asynchronous communications medium. (Yes, that’s how we used to talk.) Email cured the perceived shortcomings of telephone calls, which dominated our work lives. The ring of your phone would butt into whatever you happened to be doing at that moment, and you had no choice but to answer the damn thing (it might be your boss or your client, after all), and then you had no choice but to respond immediately to whatever the person on the other end was saying or asking. The telephone was realtime and it was synchronous, and those were bad things. One of the major roles of the traditional secretary was to add a buffer to the endless stream of phone calls: paying someone to screen your calls was a kludgy way to make a synchronous medium act sort of like an asynchronous one.

When voicemail entered the scene, people cheered at first, but it actually only made matters worse. The phone became an even more demanding medium. The voicemail light was always blinking, and when you listened to a voicemail, you felt compelled to respond immediately. There was a reason we called it “voicemail hell.”

And don’t even get me started about conference calls.

Email delivered us from the telephone’s realtime stream. Suddenly, we controlled, individually, our main communications medium, rather than vice versa. We could choose when to read our email, and, more important, we could choose when to respond – and whom to respond to. The buffer was built into the technology. Even taking just a few minutes to think about a message often led to a more thoughtful response than an immediate, half-baked phone reply. After email took hold in offices, you always had a few doofus laggards who continued to rely on the phone and voicemail. They were widely despised: synchronous dinosaurs lumbering through the pleasant pastures of asynchronous Internet communication.
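For those who think in code rather than in office furniture, the distinction is just the difference between a blocking call and a mailbox. A toy sketch in Python – the phone-versus-inbox framing is mine, purely for illustration:

    import queue

    inbox = queue.Queue()

    def handle(message):
        # Stand-in for actually dealing with a message.
        return "re: " + message

    def phone_call(message):
        # Synchronous: the recipient is interrupted and must answer now.
        return handle(message)

    def send_email(message):
        # Asynchronous: the message waits in a buffer until the
        # recipient chooses to look at it.
        inbox.put(message)

    send_email("lunch thursday?")
    send_email("status report?")

    # Hours later, on the recipient's schedule, not the sender's:
    while not inbox.empty():
        print(handle(inbox.get()))

The buffer – the queue – is the whole point. The phone had to rent one (the secretary); email got one for free.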

But email also did something else, the consequences of which we didn’t fully foresee. It dramatically reduced the transaction costs of personal communication. You had to think at least a little bit before placing a phone call, not just because it might cost you a few cents but because you knew you were going to interrupt the other person. Is this really necessary, or can it wait? Email removed that calculation from the equation. Everything was worth an email. (As direct marketers and spammers also soon discovered.) And there was the wonderful CC field and the even more wonderful Reply All button. Broadcasting, cumbersome with the phone, became easy with email.

Goodbye voicemail hell. Welcome to email hell.

Turns out, we were mistaken about email all along. Asynchrony was never actually a good thing. It was simply an artifact of a paucity of bandwidth. Or so we’re told today, as the realtime stream – texts, tweets, Facebook updates – o’erbrims its banks, and out on the horizon rises the all-consuming Wave. In “Wave New World,” an article in the current edition of Time, Lev Grossman writes:

Keep in mind that until the mid-1990s, when e-mail went mainstream, the network environment was very different. Bandwidth was a scarce resource. You had your poky modem and liked it. Which is why e-mail was created in the image of the paper-postal system: tiny squirts of electronic text. But now we’re rolling in bandwidth … And yet we’re still passing one another little electronic notes. Google Wave rips up that paradigm and embraces the power of the networked, collaborative, postpaper world.

Jessica Vascellaro makes a similar point in heralding “the end of the email era” in today’s Wall Street Journal:

We all still use email, of course. But email was better suited to the way we used to use the Internet—logging off and on, checking our messages in bursts. Now, we are always connected, whether we are sitting at a desk or on a mobile phone. The always-on connection, in turn, has created a host of new ways to communicate that are much faster than email, and more fun. Why wait for a response to an email when you get a quicker answer over instant messaging? [Email] seems boring compared to services like Google Wave.

The flaw of synchronous communication has been repackaged as the boon of realtime communication. Asynchrony, once our friend, is now our enemy. The transaction costs of interpersonal communication have fallen below zero: It costs more to leave the stream than to stay in it. The approaching Wave promises us the best of both worlds: the realtime immediacy of the phone call with the easy broadcasting capacity of email. Which is also, as we’ll no doubt come to discover, the worst of both worlds. Welcome to the conference call that never ends. Welcome to Wave hell.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.