Category Archives: Realtime

Ambient tweetability

I have seen the future, and it is not Bruce Springsteen. It is the inline tweet:


When Twitter came along, back in 2006, it seemed like a godsend. It made our lives so much easier. Media sharing became a snap. No longer did you have to go through the tedious process of writing a blog post and formulating links. Goodbye to all that “a href=” crap and those soul-draining <> whatchamacallits. You grabbed a snippet, and you tweeted it out to the world. It was almost like a single fluid movement. I don’t know precisely how many keystrokes Twitter has saved humanity, but I’m pretty sure that the resulting expansion of cognitive surplus is non-trivial.

Since then, though, we have become more fully adapted to the realtime environment and, frankly, tweeting has come to feel kind of tedious itself. It’s not the mechanics of the actual act of tweeting so much as the mental drain involved in (a) reading the text of an article and (b) figuring out which particular textual fragment is the most tweet-worthy. That whole pre-tweeting cognitive process has become a time-sink.

That’s why the arrival of the inline tweet — the readymade tweetable nugget, prepackaged, highlighted, and activated with a single click — is such a cause for celebration. The example above comes from a C.W. Anderson piece posted today by the Nieman Journalism Lab. “When is news no longer what is new but what matters?” Who wouldn’t want to tweet that? It’s exceedingly pithy. The New York Times has also begun to experiment with inline tweets, and it’s already seeing indications that the inclusion of prefab tweetables increases an article’s overall tweet count. I think the best thing about the inline tweet is that you no longer have to read, or even pretend to read, what you tweet before you tweet it. Assuming you trust the judgment of a publication’s in-house tweet curator, or tweet-curating algorithm, you can just look for the little tweety bird icon, give the inline snippet a click, and be on your way. Welcome to linking without thinking!

[an afterthought]

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.

Automating the feels


It’s been hard not to feel a deepening of the soul as the palette of online emotion signifiers has expanded from sparse typographic emoticons to colorful and animated emoji. Some cynics believe that emotions have no place in the realtime stream, but in fact the stream is full of feels, graphically expressed, fully machine-readable, and entailing minimal latency drain. Evan Selinger puts the emoji trend into perspective:

The mood graph has arrived, taking its place alongside the social graph (most commonly associated with Facebook), citation-link graph and knowledge graph (associated with Google), work graph (LinkedIn and others), and interest graph (Pinterest and others). Like all these other graphs, the mood graph will enable relevance, customization, targeting; search, discovery, structuring; advertising, purchasing behaviors, and more.

The arrival of the mood graph comes at the same time that facial-recognition and eye-tracking apps are beginning to blossom. The camera, having looked outward so long, is finally turning inward. Vanessa Wong notes the release, by the online training firm Mindflash, of FocusAssist for the iPad, which

uses the tablet’s camera to track a user’s eye movements. When it senses that you’ve been looking away for more than a few seconds (because you were sending e-mails, or just fell asleep), it pauses the [training] course, forcing you to pay attention—or at least look like you are—in order to complete it.

The next step is obvious: automating the feels. Whenever you write a message or update, the camera in your smartphone or tablet will “read” your eyes and your facial expression, precisely calculate your mood, and append the appropriate emoji. Not only does this speed up the process immensely, but it removes the requirement for subjective self-examination and possible obfuscation. Automatically feeding objective mood readings into the mood graph helps purify and enrich the data even as it enhances the efficiency of the realtime stream. For the three parties involved in online messaging—sender, receiver, and tracker—it’s a win-win-win.
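The imagined pipeline reduces to a very small lookup. A purely speculative sketch, assuming none of this exists: `read_mood` stands in for whatever facial-expression classifier the camera would feed, and the emoji table is invented for illustration:

```python
# Speculative sketch of "automating the feels": read a mood from the
# camera, map it to an emoji, append it to the outgoing message.
# No such API exists; read_mood is a placeholder classifier.
MOOD_EMOJI = {
    "joy": "😂",
    "sadness": "😢",
    "anger": "😠",
    "neutral": "😐",
}

def read_mood(frame) -> str:
    # Stand-in for a facial-expression model; always reports "neutral" here.
    return "neutral"

def auto_emote(message: str, frame=None) -> str:
    mood = read_mood(frame)
    # Objective mood reading appended, no subjective self-examination required.
    return f"{message} {MOOD_EMOJI.get(mood, '😐')}"

print(auto_emote("Running late"))  # -> "Running late 😐"
```

The point of the sketch is the asymmetry: the sender supplies only the message, while the mood annotation flows straight from sensor to graph.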

Some people feel a certain existential nausea when contemplating these trends. Selinger, for one, is wary of some of the implications of the mood graph:

The more we rely on finishing ideas with the same limited words (feeling happy) and images (smiley face) available to everyone on a platform, the more those pre-fabricated symbols structure and limit the ideas we express. … [And] drop-down expression makes us one-dimensional, living caricatures of G-mail’s canned responses — a style of speech better suited to emotionless computers than flesh-and-blood humans. As Marshall McLuhan observed, just as we shape our tools, they shape us too. It’s a two-way street.

Robinson Meyer, meanwhile, finds himself “creeped out” by FocusAssist:

FocusAssist forces people to perform a very specific action with their eyeballs, on behalf of “remote organizations,” so that they may learn what the organization wants them to learn. Forcing a human’s attention through algorithmic surveillance: It’s the stuff of A Clockwork Orange. …

How long until a feature like FocusAssist is rebranded as AttentionMonitor and included in a MOOC, or a University of Phoenix course? How long until an advertiser forces you to pay attention to its ad before you can watch the video that follows? And how long, too, until FocusAssist itself is used outside of the context it was designed for?

All worthy concerns, I’m sure, but I sense they arrive too late. We need to remember what Norbert Wiener wrote more than sixty years ago:

I have spoken of machines, but not only of machines having brains of brass and thews of iron. When human atoms are knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood. What is used as an element in a machine, is in fact an element in the machine.

The raw material now encompasses emotion as well as flesh and blood. If you have an emotion that is unencapsulated in an emoji and unread by an eye-tracking app—that fails to become an element of the machine—did you really feel it? Probably not. At least by automating this stuff, you’ll always know you felt something.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.

Absence of Like


We have already suggested, in an earlier installment of The Realtime Chronicles, that “our new transcendentalism is one in which individual human operatives, acting in physical isolation as nodes on a network, achieve the unity of an efficient cybernetic system through the optimized exchange of parsimonious messages over a universal realtime bus.” To recapitulate: this idea draws on both (1) Norbert Wiener’s observation, in The Human Use of Human Beings, that

society can only be understood through a study of the messages and the communication facilities which belong to it; and … in the future development of these messages and communication facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever-increasing part

and (2) the following, more recent observation by Vanessa Grigoriadis, made in a 2009 article in New York magazine:

This is the promise of Facebook, the utopian hope for it: the triumph of fellowship; the rise of a unified consciousness; peace through superconnectivity, as rapid bits of information elevate us to the Buddha mind, or at least distract us from whatever problems are at hand. In a time of deep economic, political, and intergenerational despair, social cohesion is the only chance to save the day, and online social networks like Facebook are the best method available for reflecting—or perhaps inspiring—an aesthetic of unity.

There has long been, among a certain set of fussy Internet intellectuals, a sense of dissatisfaction with, if not outright hostility toward, Facebook’s decision to offer the masses a “Like” button for purposes of automated affiliation signaling without also offering a “Dislike” button for purposes of automated dis-affiliation signaling. This controversy, if that’s not too strong a word, bubbled up again recently when Good Morning America reported that Facebook “soon plans to roll out ways to better understand why you don’t like something in your News Feed.” This was immediately misconstrued, in the popular realtime media, to mean that Facebook was going to introduce some type of Dislike button. “We’re Getting Close to a Facebook ‘Dislike’ Button,” blurted Huffpo. Nonsense. All that our dominant supranational social network is doing is introducing a human-to-machine messaging system that will better enable the automated identification and eradication of offensive content. It’s just part of the necessary work of cleansing the stream of disturbing material that has the potential to disrupt the emerging “aesthetic of unity.”

The pro-Dislike crowd, in addition to being on the wrong side of history, don’t really understand the nature and functioning of the Like button. They believe it offers no choice, that it is a unitary decision mechanism, a switch forever stuck in the On position. Nothing could be further from the truth. The Like button, in actuality, provides us with a binary choice: one may click the button, or one may leave the button unclicked. The choice is not between Like and Dislike but rather between Like and Absence of Like, the latter being a catch-all category of non-affiliation encompassing not only Dislike but also Not Sure and No Opinion and Don’t Care and Ambivalent and Can’t Be Bothered and Not in the Mood to Deal with This at the Moment and I Hate Facebook — the whole panoply, in other words, of states of non-affiliation with particular things or beings. By presenting a clean binary choice — On/Off; True/False — the Like button serves the overarching goal of bringing human communication and machine communication into closer harmony. By encapsulating the ambiguities of affect and expression that plague the kludgy human brain and its messaging systems into a single “state” (Absence of Like), the Like button essentially rids us of these debilitating ambiguities and hence tightens our cohesion with machines and with one another.

Consider the mess that would be made if Facebook were to offer us both a Like and a Dislike button. We would no longer have a clean binary choice. We would have three choices: click the Like button, click the Dislike button, or leave both buttons unclicked. Such ternarity has no place in a binary system. And that’s the best-case scenario. Imagine if we were allowed to click both the Like and the Dislike button simultaneously, leaving our mind in some kind of non-discrete, non-machine-readable state. One doesn’t even want to contemplate the consequences. The whole system might well seize up. In short: the Like button provides us with a binary affiliation choice that rids affiliation of ambiguity and promotes  the efficient operation of the cybernetic system underpinning and animating the social graph.

Isolating Dislike as a choice would also, as others have pointed out, have the problematic result of introducing negativity into the stream, hence muddying the waters in a way that would threaten the aesthetic of unity and perpetuate the “economic, political, and intergenerational despair” that accompanies active dis-affiliation. Here, too, we see the wisdom of folding the state of Dislike into the broader state of Absence of Like as a step toward the eventual eradication of the state of Dislike. Optimizing the cybernetic system is a process of diminishing the distinction between human information processing and machine information processing. So-called humanists may rebel, but they are slaves to the states of ambiguity and despair that are artifacts of a hopelessly flawed and convoluted system of internal and external messaging that predates the establishment of the universal realtime bus.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.

Photo of women programming ENIAC from OUP.

Conversation points


Though rigorously formal, machine communication is characterized by a lack of courtesy. When computers converse, they dispense with pleasantries, with digressions about family and weather, with all manner of roundaboutness. They stick, with a singlemindedness that, in a traditional human context, would almost seem a form of violence, to the protocol. Realtime messaging allows no time for fussy niceties. Anything that reduces efficiency threatens the network. One must get on with it. One must stay on point.

I say traditional human context because there is a real question as to the continued viability of that context as more human conversation moves onto the universal realtime bus. As we tune ourselves to the rhythms of the machine, can we afford the inefficiencies of courtesy? Nick Bilton, in a recent New York Times piece, argues that “social norms just don’t make sense to people drowning in digital communication.” We owe it to each other, he suggests, to optimize the efficiency of our interpersonal communications, to switch from the conversational mode of old to the machine mode of now. What defined politeness in the past—the use of “hello” and “goodbye,” of “dear” and “yours,” even of first and last names—now defines impoliteness, as such customary niceties “waste” the time of the recipient of the message. More than that, though, the demand for optimal efficiency needs to set the tone, writes Bilton, for all conversation. We shouldn’t ask a person about tomorrow’s weather forecast, since that information is readily available online. We shouldn’t ask a stranger for directions, since directions are readily available through Google Maps. We shouldn’t use a voice call when an email will do, and we shouldn’t use an email when a text will do. Bilton quotes Baratunde Thurston: “I have decreasing amounts of tolerance for unnecessary communication because it is a burden and a cost.”

Fuddy-duddies reacted with horror to Bilton’s column. One reader, calling Bilton a “sociopath,” wrote, “While I applaud The Times’s apparent effort to reach out to children, you go too far when you give them a platform on your pages to express their opinions, which have all the hallmarks of immaturity and gracelessness of their age group.” But Bilton has a point. I think most of us have experienced the annoyance that attends an email or text that contains the single word “Thanks!” It does feel like an unnecessary interruption, a little extra time-suck in a world of time-suckiness.

But there’s a blind spot in Bilton’s view. The big question isn’t, “Are conversational pleasantries becoming unnecessary and even annoying?” The answer to that is, “Yeah.” The big question is, “What does it say about us that we’re coming to see conversational pleasantries as unnecessary and even annoying?” What does it mean to be intolerant of “unnecessary communication,” even when it involves those closest to you? In a response to Bilton, Evan Selinger pointed out that it’s a mistake to judge “etiquette norms” by standards of efficiency: “They’re actually about building thoughtful and pro-social character.” Demanding efficient communication on the part of others reflects, Selinger went on, a “selfish desire to dictate the terms of a relationship.” There is a kind of sociopathology at work when we begin to judge conversations by the degree to which they intrude on our personal efficiency. We turn socializing into an extension of economics.

It’s hard to blame the net. The trend toward demanding efficiency in our social lives has been building for a long time. Indeed, the best response to Bilton came from Theodor Adorno in his 1951 book Minima Moralia:

The practical orders of life, while purporting to benefit man, serve in a profit economy to stunt human qualities, and the further they spread the more they sever everything tender. For tenderness between people is nothing other than awareness of the possibility of relations without purpose … If time is money, it seems moral to save time, above all one’s own, and such parsimony is excused by consideration for others. One is straightforward. Every sheath interposed between men in their transactions is a disturbance to the functioning of the apparatus, in which they are not only incorporated but with which they proudly identify themselves.

What are Bilton and Thurston doing but identifying themselves with the apparatus of communication?

To dispense with courtesy, to treat each other with “familiar indifference,” to send messages “without address or signature”: these are all, Adorno wrote, “random symptoms of a sickness of contact.” Lacking all patience for circuitous conversation, for talk that flows without practical purpose, we assume the view that “the straight line [is] the shortest distance between two people, as if they were points.”

Adorno saw a budding “brutality” behind the growing emphasis on efficiency in personal communications. That may be going too far. But we do seem to risk a numbing of our facility for tenderness and generosity when we come to see aimless chatter and unnecessary pleasantries as no more than burdens and costs, drains on our precious time. “In text messages,” writes Bilton, “you don’t have to declare who you are, or even say hello.” For the efficiency-minded, that would certainly seem to constitute progress in the media of correspondence. But, in this case, allowing the mechanism of communication to determine the terms of communication could also be seen as a manifestation of what Adorno termed “an ideology for treating people as things.”

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

Photo by Jo@net.

Worldstream of consciousness

Yale computer scientist David Gelernter sketches, on a napkin, the future of everything:


I sketched almost the exact same thing on a napkin one Saturday night 35 years ago while listening to a Country Joe and the Fish album.

Gelernter also verbalizes the concept in a Wired piece:

By adding together every timestream on the net — including the private lifestreams that are just beginning to emerge — into a single flood of data, we get the worldstream: a way to picture the cybersphere as a whole. … Instead of today’s static web, information will flow constantly and steadily through the worldstream into the past. … What people really want is to tune in to information. Since many millions of separate lifestreams will exist in the cybersphere soon, our basic software will be the stream-browser: like today’s browsers, but designed to add, subtract, and navigate streams. … Stream-browsers will help us tune in to the information we want by implementing a type of custom-coffee blender: We’re offered thousands of different stream “flavors,” we choose the flavors we want, and the blender mixes our streams to order.

Executive summary:

Jamba Juice + Starbucks + SiriusXM = Future of Culture

Once you get past the mumbo-jumbo, this all sounds like old news. “Today’s static web”? The stream replaced the page as the web’s dominant metaphor a few years ago. Gelernter’s vision is the Zuckerbergian personal-timeline view of the web, in which every person sits at the center of his or her own little cyber-universe as swirls of custom-fit information stream in and then turn into “the past.” And it’s the Google Now “search without searching” vision of continuous, preemptive delivery of relevant info. “Finally, the web — soon to become the cybersphere — will no longer resemble a chaotic cobweb,” concludes Gelernter. “Instead, billions of users will spin their own tales, which will merge seamlessly into an ongoing, endless narrative” — all funneled through “the same interface.” It’s not so much post-web as anti-web. Imagine Whitman’s “Song of Myself” as a media production, with tracking and ads.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

The posthastism post

Realtime makes history of primetime. Hours before the competition unfolds on TV, I already know that Gaby takes the all-around gold, that Viktoria cries in despair, and that Aly loses the bronze to Aliya on a technicality. The report arrives before the event, mediawise. Prediction: The next Summer Olympics will be broadcast on the History Channel.

Language, like time, warps back on itself, and so we have a new movement — or is it an antimovement? — called posthastism, which in prerealtime, when we had time to think, would probably have been called postposthastism. From an interview with Hans Ulrich Obrist:

In his talk at Tate Modern last week, Tino Sehgal talked a lot about slowness, and how it was a key aspect of the way he engages with the world in his work. As someone known for your hyper-productivity, how do you relate to this idea of slowness?

I’m interested in resisting the homogenization of time: so it’s a matter of making it faster and slower. For art, slowness has always been very important. The experience of seeing art slows us down. Actually, we have just founded a movement with Shumon Basar and Joseph Grima last week called posthastism, where we go beyond haste. Joseph Grima was in Malta, and he had this sudden feeling of posthaste. Shumon and I picked up on it and we had a trialogue, which went on for a week on Blackberry messenger. Posthastism. [Reading from a sheet of paper hastily brought in by his research assistant] As Joseph said: “Periphery is the new epicenter,” “post-Fordism is still hastism because it’s immaterial hastism, which could lead now’s posthastism.” One more thing to quote is “delays are revolutions,” which was a good exhibition title.

Was Joseph Grima really feeling posthaste in Malta or was he experiencing its opposite? “Posthaste” comes from an instruction written on letters a few centuries ago: “Haste, Post, Haste.” Which meant: Get it there quicker than quick. Run, mailman, run! We’ve always yearned for realtime, even when messages moved at footspeed. But now we really have it. #hastetwitterhaste seems unnecessary — an immateriality in an age of immaterial hastism.

Obrist is right, though: realtime is homogenized time and hence needs to be resisted. So sign me up for posthastism, posthaste. “Delays are revolutions”: that’s a slogan I can march under. My manifesto:

— Never respond to a text until at least 24 hours have passed.

— Wait four days or more before replying to an email.

— Tweet about things that happened a month ago.

— Stop your Facebook Timeline at the turn of the last century.

— Watch the Olympics on NBC after dinner.

The revolution, it turns out, will be televised. On tape delay. Viva primetime!

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

What realtime is before it’s realtime

They say that there’s a brief interlude, measured in milliseconds, between the moment a thought arises in the cellular goop of our brain and the moment our conscious mind becomes aware of that thought. That gap, they say, swallows up our free will and all its attendant niceties. After the fact, we pretend that something we think of as “our self” came up with something we think of as “our thought,” but that’s all just make-believe. In reality, they say, we’re mere automatons, run by some inscrutable Oz hiding behind a synaptical curtain.

The same thing goes for sensory perception. What you see, touch, hear, smell are all just messages from the past. It takes time for the signals to travel from your sensory organs to your sense-making brain. Milliseconds. You live, literally, in the past.

Now is then. Always.

As the self-appointed chronicler of realtime, as realtime’s most dedicated cyber-scribe, I find this all unendurably depressing. The closer our latency-free networks and devices bring us to realtime, the further realtime recedes. The net trains us to think not in years or seasons or months or weeks or days or hours or even minutes. It trains us to think in seconds and fractions of seconds. Google says that if it takes longer than the blink of an eye for a web page to load, we’re likely to bolt for greener pastures. Microsoft says that if a site lags 250 milliseconds behind competing sites, it can kiss its traffic goodbye. The SEOers know the score (even if they don’t know the tense):

Back in 1999 the acceptable load time for a site is 8 seconds. It decreased to 4 seconds in 2004, and 2 seconds in 2009. These are based on the study of the behavior of the online shoppers. Our expectations already exceed the 2-second rule, and we want it faster. This 2012, we’re going sub-second.

And yet, as we become more conscious of each passing millisecond, it becomes harder and harder to ignore the fact that we’re always a moment behind the real, that what we imagine to be realtime is really just pseudorealtime. A fraud.

They say a man never steps into the same stream twice. But that same man will never step into a web stream even once. It’s long gone by the time he becomes conscious of his virtual toe hitting the virtual flow. That tweet/text/update/alert you read so hungrily? It may as well be an afternoon newspaper tossed onto your front stoop by some child-laborer astride a banana bike. It’s yesterday.

But there’s hope. The net, Andrew Keen reports on the eve of Europe’s Le Web shindig, is about to get, as the conference’s official theme puts it, “faster than realtime.” What does that mean? The dean of social omnipresence, Robert Scoble, explains: “It’s when the server brings you a beer before you ask for it because she already knows what you drink!” Le Web founder Loic Le Meur says to Keen, “We’ve arrived in the future”:

Online apps are getting to know us so intimately, he explained, that we can know things before they happen. To illustrate his point, Le Meur told me about his use of Highlight, a social location app which offers illuminating data about nearby people who have signed up for the network like – you guessed it – the digitally omniscient Robert Scoble. Highlight enabled Le Meur to literally know the future before it happened because, he says, it is measuring our location all of the time. “I opened the door before he was there because I knew he was coming,” Le Meur told me excitedly about a recent meeting that he had in the real world with Scoble.

I opened the door before he was there because I knew he was coming. I could repeat that sentence to myself endlessly – it’s that beautiful. And it’s profound. Our apps will anticipate our synapses. Our apps will deliver our pre-conscious thoughts to our consciousness before they’ve even become pre-conscious thoughts. The net will out-Oz Oz. Life will become redundant, but that seems a small price to pay for a continuous preview of real realtime.

Le Meur states the obvious to Keen:

We have “no choice but to fully embrace” today’s online products, Le Meur told me about technology which he describes as “unheralded” in history.

We’ve never had any choice. Choice is an illusion. But now, as our gadgets tap into pre-realtime on our behalf, we’ll at least know of the choice we never really made before we’ve even had the chance to not really make it. Yes, indeed. We’ve arrived in the future, and the future isn’t even there yet. But, like Scoble, it’s about to show up.

Now where the hell’s that beer?

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.