Category Archives: Realtime

The soma cloud


“The computer could program the media to determine the given messages a people should hear in terms of their overall needs, creating a total media experience absorbed and patterned by all the senses. … By such orchestrated interplay of all media, whole cultures could now be programmed in order to improve and stabilize their emotional climate.” —Marshall McLuhan, 1969

“The experiment manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed. This tested whether exposure to emotions led people to change their own posting behaviors, in particular whether exposure to emotional content led people to post content that was consistent with the exposure — thereby testing whether exposure to verbal affective expressions leads to similar verbal expressions, a form of emotional contagion.” —Kramer et al., 2014

“I’m excited to announce that we’ve agreed to acquire Oculus VR, the leader in virtual reality technology. … This is really a new communication platform. By feeling truly present, you can share unbounded spaces and experiences with the people in your life. Imagine sharing not just moments with your friends online, but entire experiences and adventures.” —Mark Zuckerberg, 2014

The strategy behind the Oculus acquisition has become much clearer to me over the last week. Haters gonna hate, worrywarts gonna worry, but I for one am looking forward to Facebook’s Oculus Rift experiments. Once the company is able to manipulate “entire experiences and adventures,” rather than just bits and pieces of text, the realtime engineering of a more harmonious and stabilized emotional climate may well become possible. I predict that the next great opportunity in wearables lies in finger-mountables — in particular, the Oculus Networked Mood Ring. We’ll all wear them, as essential Rift peripherals, and they’ll all change color simultaneously, depending on the setting that Zuck dials into the Facebook Soma Cloud.

I know, I know: this is all just blue-sky dreaming for now. But as the poet said, in dreams begin realities.

At least I think that’s what he said.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.

Image: detail of cover of paperback edition of Brave New World.


My computer, my doppeltweeter


Broadway, as you’ll recall, was the nickname of the fellow that 50 Cent hired to ghost his tweets. “The energy of it is all him,” Broadway said of the simulated stream he produced for his boss. Or, as Baudrillard put it: “Ecstasy of information: simulation. Truer than true.”

Now that we’re all microcelebrities, we need to democratize Broadway. No mortal can keep up with Twitter, Facebook, Instagram, Tumblr, LinkedIn, Snapchat, etc., all by himself/herself. There’s just not enough realtime in the day. We all need a doppeltweeter to channel our energy.

Since the ability to clone Broadway is still three or four years out, Google is stepping into the breach by automating the maintenance of one’s social media presence. The company, as the BBC reports, was earlier this week granted a patent for “automated generation of suggestions for personalized reactions in a social network.” The description of the anticipated service is poetic:

A suggestion generation module includes a plurality of collector modules, a credentials module, a suggestion analyzer module, a user interface module and a decision tree. The plurality of collector modules are coupled to respective systems to collect information accessible by the user and important to the user from other systems such as e-mail systems, SMS/MMS systems, micro blogging systems, social networks or other systems. The information from these collector modules is provided to the suggestion analyzer module. The suggestion analyzer module cooperates with the user interface module and the decision tree to generate suggested reactions or messages for the user to send.

Translation: At this point, we have so much information on you that we know you better than you know yourself, so you may as well let us do your social networking for you.
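For anyone who wants to picture the plumbing, here is a toy sketch of how such a suggestion pipeline might be wired up. Every class and function name below is hypothetical; the patent describes its collector modules, suggestion analyzer, and decision tree only in prose.

```python
# Toy sketch of the patent's "suggestion generation module": collector modules
# gather items from various services, a suggestion analyzer applies some
# decision logic, and a suggested reaction falls out. All names here are
# invented for illustration; nothing comes from Google's actual implementation.

from dataclasses import dataclass

@dataclass
class Item:
    source: str   # e.g. "email", "sms", "social"
    author: str
    text: str

def collect(collectors):
    """Stand-in for the 'plurality of collector modules'."""
    items = []
    for collector in collectors:
        items.extend(collector())   # each collector returns a list of Items
    return items

def suggest_reaction(item):
    """Stand-in for the 'suggestion analyzer module' and its decision tree."""
    text = item.text.lower()
    if "new job" in text or "promoted" in text:
        return f"Congratulations, {item.author}!"
    if "birthday" in text:
        return f"Happy birthday, {item.author}!"
    return None   # nothing to suggest for this item

# One fake "social network" collector feeding the pipeline.
fake_feed = lambda: [Item("social", "Alice", "Thrilled to announce my new job!")]
for item in collect([fake_feed]):
    reaction = suggest_reaction(item)
    if reaction:
        print(f"Suggested reply to {item.author}: {reaction}")
```

The patent’s credentials module and user interface module are left out of the sketch.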

Google notes that the automation of personal messaging will help people avoid embarrassing social faux pas:

Many users use online social networking for both professional and personal uses. Each of these different types of use has its own unstated protocol for behavior. It is extremely important for the users to act in an adequate manner depending upon which social network on which they are operating. For example, it may be very important to say “congratulations” to a friend when that friend announces that she/he has gotten a new job. This is a particular problem as many users subscribe to many social different social networks. With an ever increasing online connectivity and growing list of online contacts and given the amount of information users put online, it is possible for a person to miss such an update.

A computer will generate a personal “congratulations!” note to send to a friend, and upon the reception of the note, the friend’s computer will respond with a personal “thanks!” note, which will trigger the generation of a “no problem!” note. I think this is getting very close to the social networking system Mark Zuckerberg has always dreamed about. When confronted with an unstated protocol for behavior, it’s best to let the suggestion analyzer module do the talking.

Beyond the practical stream-management benefits, there’s a much bigger story here. The Google message-automation service promises to at last close the realtime loop: A computer running personalization algorithms will generate your personal messages. These computer-generated messages, once posted or otherwise transmitted, will be collected online by other computers and used to refine your personal profile. Your refined personal profile will then feed back into the personalization algorithms used to generate your messages, resulting in a closer fit between your  computer-generated messages and your computer-generated persona. And around and around it goes until a perfect stasis between self and expression is achieved. The thing that you once called “you” will be entirely out of the loop at this point, of course, but that’s for the best. Face it: you were never really very good at any of this anyway.
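For the mechanically minded, the loop is easy enough to render as a toy sketch. Every function here is invented for illustration; no actual Facebook or Google system is implied.

```python
# Toy sketch of the closed realtime loop described above: a profile generates
# "your" message, the posted message refines the profile, and the refined
# profile generates the next message. All names invented for illustration.

profile = {"exclamation_rate": 0.5}   # your computer-generated persona, reduced to one number

def generate_message(profile):
    """The personalization algorithm writes your message from your profile."""
    punctuation = "!" if profile["exclamation_rate"] > 0.5 else "."
    return "Congrats on the new job" + punctuation

def update_profile(profile, message):
    """Other computers read the posted message and refine the profile."""
    observed = 1.0 if message.endswith("!") else 0.0
    profile["exclamation_rate"] = 0.9 * profile["exclamation_rate"] + 0.1 * observed
    return profile

# Around and around it goes, settling into a stasis between self and expression.
for _ in range(5):
    message = generate_message(profile)
    profile = update_profile(profile, message)
    print(message, profile)
```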

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here. Image from Google patent filing.


Ambient Reality


People are forever buttonholing me on the street and saying, “Nick, what comes after realtime?” It’s a good question, and I happen to know the answer: Ambient Reality. Ambient Reality is the ultimate disruption, as it alters the actual fabric of the universe. We begin living in the prenow. Things happen before they happen. “Between the desire / And the spasm,” wrote T. S. Eliot, “Falls the Shadow.” In Ambient Reality, the Shadow goes away. Spasm precedes desire. In fact, it’s all spasm. We enter what I call Uninterrupted Spasm State, or USS.

In “How the Internet of Things Changes Everything,” a new and seemingly machine-written article in Foreign Affairs, two McKinsey consultants write of “the interplay” between “the most disruptive technologies of the coming decade: the mobile Internet and the Internet of Things.” The “mobile-ready Internet of Things,” as they term it, will have “a profound, widespread, and transformative impact on how we live and work.” For instance, “by combining a digital camera in a wearable device with image-recognition software, a shopper can automatically be fed comparative pricing information based on the image of a product captured by the camera.” That’s something to look forward to, but the McKinseyites are missing the big picture. They underestimate the profundity, the ubiquity, and the transformativeness of the coming disruption. In Ambient Reality, there is no such thing as “a shopper.” Indeed, the concept of “shopping” becomes anachronistic. Goods are delivered before the urge to buy them manifests itself in the conscious mind. Demand is ambient, as are pricing comparisons. They become streams in the cloud.

EBay strategist John Sheldon gets closer to the truth when he describes, in a new Wired piece, the concept of “ambient commerce”:

Imagine setting up a rule in Nike+, he says, to have the app order you a new pair of shoes after you run 300 miles. … Now consider an even more advanced scenario. A shirt has a sensor that detects moisture. And you find yourself stuck out in the rain without an umbrella. Not too many minutes after the downpour starts, a car pulls up alongside you. A courier steps out and hands you an umbrella — or possibly a rain jacket, depending on what rules you set up ahead of time for such a situation.

I ask you: Are there no bounds to the dreams of our innovators?
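Mechanically, Sheldon’s Nike+ rule is nothing more exotic than a threshold trigger. A minimal sketch, with every name made up for the occasion (no real Nike+ or eBay API is being called here):

```python
# Minimal sketch of an "ambient commerce" rule: when accumulated mileage
# crosses a threshold, place an order. Hypothetical names throughout.

REORDER_AT_MILES = 300

class ShoeRule:
    def __init__(self, threshold=REORDER_AT_MILES):
        self.threshold = threshold
        self.miles = 0.0
        self.ordered = False

    def record_run(self, miles, place_order):
        """Accumulate mileage; fire the order exactly once at the threshold."""
        self.miles += miles
        if self.miles >= self.threshold and not self.ordered:
            place_order("running shoes, same model, same size")
            self.ordered = True

rule = ShoeRule()
for run in [5.0, 10.0, 290.0]:
    rule.record_run(run, place_order=lambda item: print("Ordering:", item))
```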

Comments Wired‘s Marcus Wohlsen, “Though it might be hard to believe, the logistics of delivering that umbrella are likely more complex than the math behind detecting the water.” That is indeed hard to believe.

But even these scenarios fail to capture the full power of Ambient Reality. They assume some agency is required on the part of the consumer. One has to “set up a rule” about the lifespan of one’s sneakers. One has to pre-program a choice between umbrella and rain jacket. In Ambient Reality, no such agency is required. Personal decisions are made prenow, by communications among software-infused things. The sensors in your feet and in your sneakers are in constant communication not only with each other but with the cloud. When a new pair of sneakers is required, the new pair is automatically printed on your 3-D printer at home. The style of the sneakers is chosen algorithmically based on your past behavior as well as contemporaneous neural monitoring. Choice is ambient. As for that “courier” who “steps out and hands you an umbrella” after the onset of precipitation, that’s just plain retrograde. The required consumer good will be delivered before the rain starts by an unmanned drone delivery aircraft. The idea that humans will be involved in delivery chores is ridiculous. In Ambient Reality, human effort will be restricted to self-actualization—in other words, ambient consumption. That’s the essence of USS.

I hardly need mention that, once the shower has passed, the drone will retrieve the umbrella in order to deliver it to another person facing an imminent rain event. All assets will be shared to optimize utilization. Think how rarely you use your umbrella today: that’s a sign of how broken society is.

We are on the verge, says Wohlsen, of “a utopian future in which running out of toilet paper at the wrong time will never, ever happen again.” That’s very true, but the never-run-out-of-toilet-paper utopia is actually a transitional utopia. In the ultimate utopia of Ambient Reality, there will be no need for toilet paper. But I’ll leave that for a future post.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.


Prêt-à-twitter and the bespoke tweet


A quick afterthought on that last post: I still think that the inline tweet is the future, but it strikes me that the currently emerging method of inline tweeting, which I have taken to calling prêt-à-twitter, is far from ideal. Who wants to get caught tweeting the same lousy tweet that everyone else is tweeting? It’s tacky. I mean: Attention, Wal-Mart Shoppers!

No, it just won’t do. We need to go, as quickly as possible, from prêt-à-twitter to the bespoke tweet. Here’s how I imagine it working: a publication captures personal data on its readers’ habits and literary/intellectual/political sensibilities (or procures said data from Facebook or maybe Twitter itself), and then, using some kind of simple text-parsing algorithm, it personalizes the inline tweets that are offered to each reader. When a reader alights on an article, he or she gets his or her own custom-tailored tweetables. That gives the reader a little distinctiveness in the marketplace of ideas. It’s also much more discreet. With bespoke inlines, you’re not broadcasting the fact that you didn’t actually read the piece you’re tweeting. Your little peccadillo stays between you and the algorithm.
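If the text-parsing algorithm sounds exotic, it needn’t be. A toy sketch, with scoring as naive as it looks; nothing here describes what any publication actually does:

```python
# Toy sketch of "bespoke" inline-tweet selection: score each candidate sentence
# of an article against a reader-interest profile and surface the best match.
# Purely illustrative; the interest profile and articles are invented.

def pick_tweetable(article_sentences, reader_interests):
    """Return the sentence sharing the most words with the reader's interests."""
    interests = {w.lower() for w in reader_interests}
    def score(sentence):
        return len(interests & {w.strip(".,!?").lower() for w in sentence.split()})
    return max(article_sentences, key=score)

sentences = [
    "The realtime stream rewards speed over reflection.",
    "Advertisers are the quiet beneficiaries of ambient attention.",
    "Nobody reads past the headline anymore.",
]
print(pick_tweetable(sentences, ["advertisers", "attention", "ambient"]))
# -> the second sentence, custom-tailored to this particular reader
```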

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.


Ambient tweetability

I have seen the future, and it is not Bruce Springsteen. It is the inline tweet:

[Image: an inline tweet embedded in a Nieman Journalism Lab article]

When Twitter came along, back in 2006, it seemed like a godsend. It made our lives so much easier. Media sharing became a snap. No longer did you have to go through the tedious process of writing a blog post and formulating links. Goodbye to all that “a href=” crap and those soul-draining <> whatchamacallits. You grabbed a snippet, and you tweeted it out to the world. It was almost like a single fluid movement. I don’t know precisely how many keystrokes Twitter has saved humanity, but I’m pretty sure that the resulting expansion of cognitive surplus is non-trivial.

Since then, though, we have become more fully adapted to the realtime environment and, frankly, tweeting has come to feel kind of tedious itself. It’s not the mechanics of the actual act of tweeting so much as the mental drain involved in (a) reading the text of an article and (b) figuring out which particular textual fragment is the most tweet-worthy. That whole pre-tweeting cognitive process has become a time-sink.

That’s why the arrival of the inline tweet — the readymade tweetable nugget, prepackaged, highlighted, and activated with a single click — is such a cause for celebration. The example above comes from a C.W. Anderson piece posted today by the Nieman Journalism Lab. “When is news no longer what is new but what matters?” Who wouldn’t want to tweet that? It’s exceedingly pithy. The New York Times has also begun to experiment with inline tweets, and it’s already seeing indications that the inclusion of prefab tweetables increases an article’s overall tweet count. I think the best thing about the inline tweet is that you no longer have to read, or even pretend to read, what you tweet before you tweet it. Assuming you trust the judgment of a publication’s in-house tweet curator, or tweet-curating algorithm, you can just look for the little tweety bird icon, give the inline snippet a click, and be on your way. Welcome to linking without thinking!

[an afterthought]

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.


Automating the feels


It’s been hard not to feel a deepening of the soul as the palette of online emotion signifiers has expanded from sparse typographic emoticons to colorful and animated emoji. Some cynics believe that emotions have no place in the realtime stream, but in fact the stream is full of feels, graphically expressed, fully machine-readable, and entailing minimal latency drain. Evan Selinger puts the emoji trend into perspective:

The mood graph has arrived, taking its place alongside the social graph (most commonly associated with Facebook), citation-link graph and knowledge graph (associated with Google), work graph (LinkedIn and others), and interest graph (Pinterest and others). Like all these other graphs, the mood graph will enable relevance, customization, targeting; search, discovery, structuring; advertising, purchasing behaviors, and more.

The arrival of the mood graph comes at the same time that facial-recognition and eye-tracking apps are beginning to blossom. The camera, having looked outward so long, is finally turning inward. Vanessa Wong notes the release, by the online training firm Mindflash, of FocusAssist for the iPad, which

uses the tablet’s camera to track a user’s eye movements. When it senses that you’ve been looking away for more than a few seconds (because you were sending e-mails, or just fell asleep), it pauses the [training] course, forcing you to pay attention—or at least look like you are—in order to complete it.

The next step is obvious: automating the feels. Whenever you write a message or update, the camera in your smartphone or tablet will “read” your eyes and your facial expression, precisely calculate your mood, and append the appropriate emoji. Not only does this speed up the process immensely, but it removes the requirement for subjective self-examination and possible obfuscation. Automatically feeding objective mood readings into the mood graph helps purify and enrich the data even as it enhances the efficiency of the realtime stream. For the three parties involved in online messaging—sender, receiver, and tracker—it’s a win-win-win.
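The pipeline is short enough to sketch. Everything below is hypothetical; read_mood() stands in for whatever facial-analysis library you care to imagine:

```python
# Back-of-the-envelope sketch of "automating the feels": read a mood from the
# camera, map it to an emoji, append it to the outgoing message. The
# read_mood() stub is a placeholder; no real facial-analysis API is used.

MOOD_TO_EMOJI = {
    "happy": "😊",
    "sad": "😢",
    "angry": "😠",
    "neutral": "😐",
}

def read_mood(camera_frame):
    """Stub: a real system would run facial-expression analysis here."""
    return "happy"

def auto_annotate(message, camera_frame):
    mood = read_mood(camera_frame)
    emoji = MOOD_TO_EMOJI.get(mood, "😐")
    return f"{message} {emoji}"   # objective feeling, appended for the mood graph

print(auto_annotate("Just got back from the dentist", camera_frame=None))
```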

Some people feel a certain existential nausea when contemplating these trends. Selinger, for one, is wary of some of the implications of the mood graph:

The more we rely on finishing ideas with the same limited words (feeling happy) and images (smiley face) available to everyone on a platform, the more those pre-fabricated symbols structure and limit the ideas we express. … [And] drop-down expression makes us one-dimensional, living caricatures of G-mail’s canned responses — a style of speech better suited to emotionless computers than flesh-and-blood humans. As Marshall McLuhan observed, just as we shape our tools, they shape us too. It’s a two-way street.

Robinson Meyer, meanwhile, finds himself “creeped out” by FocusAssist:

FocusAssist forces people to perform a very specific action with their eyeballs, on behalf of “remote organizations,” so that they may learn what the organization wants them to learn. Forcing a human’s attention through algorithmic surveillance: It’s the stuff of A Clockwork Orange. …

How long until a feature like FocusAssist is rebranded as AttentionMonitor and included in a MOOC, or a University of Phoenix course? How long until an advertiser forces you to pay attention to its ad before you can watch the video that follows? And how long, too, until FocusAssist itself is used outside of the context it was designed for?

All worthy concerns, I’m sure, but I sense they arrive too late. We need to remember what Norbert Wiener wrote more than sixty years ago:

I have spoken of machines, but not only of machines having brains of brass and thews of iron. When human atoms are knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood. What is used as an element in a machine, is in fact an element in the machine.

The raw material now encompasses emotion as well as flesh and blood. If you have an emotion that is unencapsulated in an emoji and unread by an eye-tracking app—that fails to  become an element of the machine—did you really feel it? Probably not. At least by automating this stuff, you’ll always know you felt something.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.


Absence of Like


We have already suggested, in an earlier installment of The Realtime Chronicles, “that our new transcendentalism is one in which individual human operatives, acting in physical isolation as nodes on a network, achieve the unity of an efficient cybernetic system through the optimized exchange of parsimonious messages over a universal realtime bus.” To recapitulate: this idea draws on both (1) Norbert Wiener’s observation, in The Human Use of Human Beings, that

society can only be understood through a study of the messages and the communication facilities which belong to it; and … in the future development of these messages and communication facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever-increasing part

and (2) the following, more recent observation by Vanessa Grigoriadis, made in a 2009 article in New York magazine:

This is the promise of Facebook, the utopian hope for it: the triumph of fellowship; the rise of a unified consciousness; peace through superconnectivity, as rapid bits of information elevate us to the Buddha mind, or at least distract us from whatever problems are at hand. In a time of deep economic, political, and intergenerational despair, social cohesion is the only chance to save the day, and online social networks like Facebook are the best method available for reflecting—or perhaps inspiring—an aesthetic of unity.

There has long been, among a certain set of fussy Internet intellectuals, a sense of dissatisfaction with, if not outright hostility toward, Facebook’s decision to offer the masses a “Like” button for purposes of automated affiliation signaling without also offering a “Dislike” button for purposes of automated dis-affiliation signaling. This controversy, if that’s not too strong a word,  bubbled up again recently when Good Morning America reported that  Facebook “soon plans to roll out ways to better understand why you don’t like something in your News Feed.” This was immediately misconstrued, in the popular realtime media, to mean that Facebook was going to introduce some type of Dislike button. “We’re Getting Close to a Facebook ‘Dislike’ Button,” blurted Huffpo. Nonsense. All that our dominant supranational social network is doing is introducing a human-to-machine messaging system that will better enable the automated identification and eradication of offensive content. It’s just part of the necessary work of cleansing the stream of disturbing material that has the potential to disrupt the emerging “aesthetic of unity.”

The pro-Dislike crowd, in addition to being on the wrong side of history, don’t really understand the nature and functioning of the Like button. They believe it offers no choice, that it is a unitary decision mechanism, a switch forever stuck in the On position. Nothing could be further from the truth. The Like button, in actuality,  provides us with a binary choice: one may click the button, or one may leave the button unclicked. The choice is not between Like and Dislike but rather between Like and Absence of Like, the latter being a catch-all category of non-affiliation encompassing not only Dislike but also Not Sure and No Opinion and Don’t Care and Ambivalent and Can’t Be Bothered and Not in the Mood to Deal with This at the Moment and I Hate Facebook — the whole panoply, in other words, of states of non-affiliation with particular things or beings. By presenting a clean binary choice — On/Off; True/False — the Like button serves the overarching goal of bringing human communication and machine communication into closer harmony. By encapsulating the ambiguities of affect and expression that plague the kludgy human brain and its messaging systems into a single “state” (Absence of Like), the Like button essentially rids us of these debilitating ambiguities and hence tightens our cohesion with machines and with one another.

Consider the mess that would be made if Facebook were to offer us both a Like and a Dislike button. We would no longer have a clean binary choice. We would have three choices: click the Like button, click the Dislike button, or leave both buttons unclicked. Such ternarity has no place in a binary system. And that’s the best-case scenario. Imagine if we were allowed to click both the Like and the Dislike button simultaneously, leaving our mind in some kind of non-discrete, non-machine-readable state. One doesn’t even want to contemplate the consequences. The whole system might well seize up. In short: the Like button provides us with a binary affiliation choice that rids affiliation of ambiguity and promotes  the efficient operation of the cybernetic system underpinning and animating the social graph.
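The arithmetic of the argument is easy to check. A toy illustration; nothing below reflects Facebook’s actual data model:

```python
# Toy illustration of the argument above: the Like button is a boolean, Like or
# Absence of Like, while adding a Dislike button yields three possible states.
# Nothing here reflects Facebook's actual data model.

from enum import Enum

class LikeOnly(Enum):
    LIKE = True
    ABSENCE_OF_LIKE = False          # dislike, not sure, can't be bothered, ...

class WithDislike(Enum):
    LIKE = "like"
    DISLIKE = "dislike"
    NEITHER = "neither"              # the ternarity that breaks the binary

print(len(LikeOnly), "states")       # 2
print(len(WithDislike), "states")    # 3
```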

Isolating Dislike as a choice would also, as others have pointed out, have the problematic result of introducing negativity into the stream, hence muddying the waters in a way that would threaten the aesthetic of unity and perpetuate the “economic, political, and intergenerational despair” that accompanies active dis-affiliation. Here, too, we see the wisdom of folding the state of Dislike into the broader state of Absence of Like as a step toward the eventual eradication of the state of Dislike. Optimizing the cybernetic system is a process of diminishing the distinction between human information processing and machine information processing. So-called humanists may rebel, but they are slaves to the states of ambiguity and despair that are artifacts of a hopelessly flawed and convoluted system of internal and external messaging that predates the establishment of the universal realtime bus.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.

Photo of women programming ENIAC from OUP.
