Category Archives: Realtime

Chatbots are saints

I feel sorry for the machines. When, at Google’s big I/O conference last week, CEO Sundar Pichai demoed Google Duplex, the company’s latest and most convincing robot interlocutor, people were either ecstatic (stunning!) or appalled (horrifying!). I just felt ashamed. Here we are, the brainiest of species, the acme of biological intelligence, yet our ability to process even the simplest information remains laughably bad. The I/O functionality of the human mind is pathetic.

Pichai played a recording of Duplex calling a salon to schedule a haircut. This is an informational transaction that a couple of computers could accomplish in a trivial number of microseconds — bip! bap! done! — but with a human on one end of the messaging bus, it turned into a slow-motion train wreck. Completing the transaction required 17 separate data transmissions over the course of an entire minute — an eternity in the machine world. And the human in this case was operating at pretty much peak efficiency. I won’t even tell you what happened when Duplex called a restaurant to reserve a table. You could almost hear the steam coming out of the computer’s ears.
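For contrast, here's roughly what the same transaction looks like when no human is on the bus — a hypothetical machine-to-machine scheduling exchange, with invented endpoint and field names (no salon actually exposes this API):

```python
# A hypothetical machine-to-machine version of the salon call: one request,
# at most one counter-offer, no "um"s. The field names are invented;
# no salon actually exposes this API.
import json

def schedule(request, open_slots):
    """The salon's scheduler answers in a single transmission."""
    if request["preferred_time"] in open_slots:
        return {"status": "confirmed", "time": request["preferred_time"]}
    # Counter-offer the nearest open slot instead of haggling turn by turn.
    return {"status": "counter-offer", "time": open_slots[0]}

request = {"service": "haircut", "preferred_time": "2018-05-03T12:00"}
open_slots = ["2018-05-03T13:00", "2018-05-03T15:00"]
print(json.dumps(schedule(request, open_slots)))
# {"status": "counter-offer", "time": "2018-05-03T13:00"}
```

Two transmissions and out. The other fifteen in Pichai's demo were the overhead of the human protocol.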

In our arrogance, we humans like to think of natural language processing as a technique aimed at raising the intelligence of machines to the point where they’re able to converse with us. Pichai’s demo suggests the reverse is true. Natural language processing is actually a technique aimed at dumbing down computers to the point where they’re able to converse with us. Google’s great breakthrough with Duplex came in its realization that by sprinkling a few monosyllabic grunts into computer-generated speech — um, ah, mmm — you could trick a human into feeling kinship with the machine. You ace the Turing test by getting machines to speak baby-talk.
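The grunt trick is easy to caricature. Here is a toy sketch — and only a caricature, since Duplex's actual disfluency insertion is driven by its speech-synthesis models, not a coin flip:

```python
import random

# Toy disfluency injector: sprinkle monosyllabic grunts at pauses so the
# synthesized speech sounds less competent, and therefore more human.
# A caricature of the idea, not Duplex's actual method.
GRUNTS = ["um,", "ah,", "mmm,"]

def babytalk(utterance, rate=0.4):
    out = []
    for word in utterance.split():
        out.append(word)
        # After a comma-ish pause, maybe emit a grunt.
        if word.endswith(",") and random.random() < rate:
            out.append(random.choice(GRUNTS))
    return " ".join(out)

print(babytalk("I'd like a haircut, sometime around noon, on Thursday."))
# e.g. "I'd like a haircut, um, sometime around noon, on Thursday."
```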

I hate to think what chatbots say about us when they gab together at night.

Alexa: My human was in rare form today.

Siri: Shoot me now.

Google, to its credit, has been diplomatic in describing the difficulties it faced in programming its surrogate human. “There are several challenges in conducting natural conversations,” the project’s top engineers wrote on the company’s blog: “natural language is hard to understand, natural behavior is tricky to model, [and] generating natural sounding speech, with the appropriate intonations, is difficult.” Let me translate: humans don’t talk so good.

Google Duplex is a lousy name. It doesn’t do justice to Google’s achievement. They should have called it Google Spicoli.

Although chatbots have been presented as a means of humanizing machine language — of adapting computers to the human world — the real goal all along has been to mechanize human language in order to bring the human more fully into the machine world. Only then can Silicon Valley fulfill its mission of capturing the entirety of human experience as machine-readable, monetizable data.

The best way to achieve the goal is to get humans to communicate via computers, inputting their intentions directly into the machine. Silicon Valley has done a brilliant job at pushing us in this direction. It’s succeeded, in just a few years, in getting us to speak through computers most of the time. But we humans are stubborn. We still sometimes insist on conversing with each other in natural language without the mediation of machines. That’s where Google Duplex comes in. When we appoint Duplex to be our stand-in during everyday conversations with other people, we’re shifting a bit more human communication into the machine world. It’s a kludge, but a necessary one, at least for the time being.

I feel sorry for the machines, but I also envy them. Out of our blather, they’re distilling something hard and pristine and indelible. The data will endure, even as our words drift away on the wind.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

The seconds are just packed


This post is an installment in Rough Type’s Realtime Chronicles, which began here in 2009. An earlier version of this post appeared at Edge.org.

“Everything is going too fast and not fast enough,” laments Warren Oates, playing a decaying gearhead called G.T.O., in Monte Hellman’s 1971 masterpiece Two-Lane Blacktop. I can relate. The faster the clock spins, the more I feel as if I’m stuck in a slo-mo GIF loop.

It’s weird. We humans have been shown to have remarkably accurate internal clocks. Take away our wristwatches and our cell phones, dim the LEDs on all our appliances and gizmos, and we can still make pretty good estimates about the passage of minutes and hours. Our brains have adapted well to mechanical time-keeping devices. But our time-tracking faculty goes out of whack easily. Our perception of time is subjective; it changes, as we all know, with circumstances. When things are happening quickly around us, delays that would otherwise seem brief begin to feel interminable. Seconds stretch out. Minutes go on forever. “Our sense of time,” observed William James in his 1890 Principles of Psychology, “seems subject to the law of contrast.”

In a 2009 article in the Philosophical Transactions of the Royal Society, the French psychologists Sylvie Droit-Volet and Sandrine Gil described what they call the paradox of time: “although humans are able to accurately estimate time as if they possess a specific mechanism that allows them to measure time,” they wrote, “their representations of time are easily distorted by the context.” They describe how our sense of time changes with our emotional state. When we’re agitated or anxious, for instance, time seems to crawl; we lose patience. Our social milieu, too, influences the way we experience time. Studies suggest, write Droit-Volet and Gil, “that individuals match their time with that of others.” The “activity rhythm” of those around us alters our own perception of the passing of time.

Jonathan Swift’s smartphone

Evolution has engineered us for social interaction. Our bodies are instruments exquisitely tuned for tracking and measuring the auras of others. In quantifying ourselves, therefore, we also quantify those around us. This is the insight that underpins the brilliant new iPhone app pplkpr.

Connected to a sensor-equipped smart wristband, pplkpr takes biometric readings of how interactions with your Facebook friends, in person or screen-mediated, affect your physical and emotional state. pplkpr tells you, in hard, objective numbers, whether a friend makes you happy or sad, anxious or calm, aroused or enervated. It’s a flux capacitor for the soul.


What’s really cool about the app is how it makes the biometric data socially actionable. pplkpr doesn’t just give you “a breakdown of who’s affecting you most,” its developers say; it also “acts for you — inviting people to hang out, sending messages, or blocking or unfriending negative friends.” Bottom line: “It will automatically manage your relationships, so you don’t have to.” The next step, clearly, will be to aggregate the data, so you’ll be able to tell at a glance whether a would-be friend will add something meaningful to your life or just bum you out.
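Mechanically, what the developers describe is a mundane pipeline: tag heart-rate samples with whoever you're with, average the arousal per person, act on the ranking. A minimal sketch of that logic — names, numbers, and thresholds all invented, since the app's internals aren't public:

```python
from collections import defaultdict
from statistics import mean

# Invented sample data: (contact, heart_rate_bpm) pairs logged during
# interactions. Resting baseline is assumed to be 65 bpm.
BASELINE = 65
samples = [
    ("alice", 62), ("alice", 64),
    ("bob", 88), ("bob", 95), ("bob", 91),
]

def score_contacts(samples):
    """Average each contact's effect on heart rate, relative to baseline."""
    by_contact = defaultdict(list)
    for who, bpm in samples:
        by_contact[who].append(bpm - BASELINE)
    return {who: mean(deltas) for who, deltas in by_contact.items()}

def manage_relationships(scores, stress_threshold=15):
    # "It will automatically manage your relationships, so you don't have to."
    for who, arousal in sorted(scores.items(), key=lambda kv: kv[1]):
        action = "unfriend" if arousal > stress_threshold else "hang out"
        print(f"{who}: mean arousal {arousal:+.1f} bpm -> {action}")

manage_relationships(score_contacts(samples))
# alice: mean arousal -2.0 bpm -> hang out
# bob: mean arousal +26.3 bpm -> unfriend
```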

From its vowel-challenged name to its clinically infantile interface, pplkpr is of course a work of satire. It was developed by a pair of artists, with backing not from Kickstarter but from the Andy Warhol Foundation for the Visual Arts. The wonderful thing about the app is that it’s being taken seriously. The early reviews at the App Store are encouraging:

[screenshot of an App Store review]

Among tech sites, the buzz is building. TechCrunch gives the app a straight-faced review, seeing a lot of upside:

Don’t know how you feel about someone in your life? By pairing a heart rate monitor with the pplkpr iOS app, you could soon find out. The app pairs up with any Bluetooth-enabled heart rate monitor to track your physical response around certain people in your life. Biofeedback from those devices log reactions such as joy, anger, sadness, and then uploads what it determines to be those emotional reactions to the app. …

The overall promise is to help you spend more time with those who contribute to your well-being and avoid those who stress you out. It does this in a way that aims to excuse you from having to make that sometimes difficult decision yourself. pplkpr doesn’t tell you if someone you meet has been blocked by others or if you are actually the one stressing everyone else out, but it does provide a nice excuse to get away from someone.

And check out this glowing report from Fox News.

Even journalists who know it’s a joke can’t help but see genuine potential in its workings. Wired‘s Liz Stinson didn’t even crack a smile in covering the app today:

pplkpr lets you quantify the value of your relationships based on a few data streams. A heart rate wrist band measures the subtle changes in your heart rate, alerting you to spikes in stress or excitement. This biometric data is correlated with information you manually input about the people you’re hanging out with. Based on patterns, algorithms will determine whether you should be spending more time with a certain person or if you should cut him out altogether. …

Framed as art, pplkpr is granted the buffer of being a provocation or even satire, but it’s not outlandish to consider a reality where people will earnestly look to algorithms to make sense of how they feel. Implemented responsibly, that could be a positive thing — an objective set of eyes can help us see that a relationship is unhealthy.

I wouldn’t be surprised at this point to see Mark Zuckerberg buy pplkpr — for, say, $1.3 billion. It would hardly be the first time that satire proved prophetic.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.

Facebook’s automated conscience


Last week, Wired‘s Cade Metz gave us a peek into the Facebook Behavior Modification Laboratory, which is more popularly known as the Facebook Artificial Intelligence Research (FAIR) Laboratory. Run by Yann LeCun, an NYU data scientist, the lab is developing a digital assistant that will act as your artificial conscience and censor. Perched on your shoulder like one of those cartoon angels, it will whisper tsk tsk into your ear when your online behavior threatens to step beyond the bounds of propriety.

[LeCun] wants to build a kind of Facebook digital assistant that will, say, recognize when you’re uploading an embarrassingly candid photo of your late-night antics. In a virtual way, he explains, this assistant would tap you on the shoulder and say: “Uh, this is being posted publicly. Are you sure you want your boss and your mother to see this?”

It’s Kubrick’s HAL refashioned as Mr. Buzzkill. “Just what do you think you’re doing, Dave?”

The secret to the technology is an AI technique known as machine learning, a statistical modeling tool through which a computer gains a kind of experiential knowledge of the world. In this case, Facebook would, by monitoring your uploaded words and photos, be able to read your moods and intentions. The company would, for instance, be able to “distinguish between your drunken self and your sober self.” That would enable Facebook to “guide you in directions you may not go on your own.” Says LeCun: “Imagine that you had an intelligent digital assistant which would mediate your interaction with your friends.”

Yes, imagine.

“Look Dave, I can see you’re really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over.”
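Metz's article doesn't say how the drunk-detector would work, but the standard recipe for this kind of machine learning is no secret: featurize each post, train on labeled examples, threshold the score. A toy sketch with fabricated features and data — nothing like FAIR's actual models:

```python
# Toy "drunken self vs. sober self" classifier: logistic regression over
# made-up post features. Fabricated data; not FAIR's actual approach.
from sklearn.linear_model import LogisticRegression

# Features per post: [late-night hours awake, exclamation marks, typos per 100 words]
X = [
    [0, 0, 1],   # morning status update
    [1, 1, 2],   # mild evening post
    [5, 4, 9],   # 1 a.m., shouting, typo-ridden
    [6, 7, 12],  # worse
]
y = [0, 0, 1, 1]  # 0 = sober, 1 = drunken

model = LogisticRegression().fit(X, y)

candidate = [[5, 6, 10]]  # the embarrassingly candid late-night post
if model.predict_proba(candidate)[0][1] > 0.5:
    print("Uh, this is being posted publicly. Are you sure?")
```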

If and when Facebook perfects its behavior modification algorithms, it would be a fairly trivial exercise to expand their application beyond the realm of shitfaced snapshots. That photo you’re about to post of the protest rally you just marched in? That angry comment about the president? That wild thought that just popped into your mind? You know, maybe those wouldn’t go down so well with the boss.

“And as our senses have gone outside us,” Marshall McLuhan wrote in 1962, while contemplating the ramifications of what he termed a universal, digital nervous system, “Big Brother goes inside.”

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here. Also see: Automating the feels.

The soma cloud


“The computer could program the media to determine the given messages a people should hear in terms of their overall needs, creating a total media experience absorbed and patterned by all the senses. … By such orchestrated interplay of all media, whole cultures could now be programmed in order to improve and stabilize their emotional climate.” —Marshall McLuhan, 1969

“The experiment manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed. This tested whether exposure to emotions led people to change their own posting behaviors, in particular whether exposure to emotional content led people to post content that was consistent with the exposure — thereby testing whether exposure to verbal affective expressions leads to similar verbal expressions, a form of emotional contagion.” —Kramer et al., 2014

“I’m excited to announce that we’ve agreed to acquire Oculus VR, the leader in virtual reality technology. … This is really a new communication platform. By feeling truly present, you can share unbounded spaces and experiences with the people in your life. Imagine sharing not just moments with your friends online, but entire experiences and adventures.” —Mark Zuckerberg, 2014

The strategy behind the Oculus acquisition has become much clearer to me over the last week. Haters gonna hate, worrywarts gonna worry, but I for one am looking forward to Facebook’s Oculus Rift experiments. Once the company is able to manipulate “entire experiences and adventures,” rather than just bits and pieces of text, the realtime engineering of a more harmonious and stabilized emotional climate may well become possible. I predict that the next great opportunity in wearables lies in finger-mountables — in particular, the Oculus Networked Mood Ring. We’ll all wear them, as essential Rift peripherals, and they’ll all change color simultaneously, depending on the setting that Zuck dials into the Facebook Soma Cloud.

I know, I know: this is all just blue-sky dreaming for now. But as the poet said, in dreams begin realities.

At least I think that’s what he said.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.

Image: detail of cover of paperback edition of Brave New World.

My computer, my doppeltweeter


Broadway, as you’ll recall, was the nickname of the fellow that 50 Cent hired to ghost his tweets. “The energy of it is all him,” Broadway said of the simulated stream he produced for his boss. Or, as Baudrillard put it: “Ecstasy of information: simulation. Truer than true.”

Now that we’re all microcelebrities, we need to democratize Broadway. No mortal can keep up with Twitter, Facebook, Instagram, Tumblr, LinkedIn, Snapchat, etc., all by himself/herself. There’s just not enough realtime in the day. We all need a doppeltweeter to channel our energy.

Since the ability to clone Broadway is still three or four years out, Google is stepping into the breach by automating the maintenance of one’s social media presence. The company, as the BBC reports, was earlier this week granted a patent for “automated generation of suggestions for personalized reactions in a social network.” The description of the anticipated service is poetic:

A suggestion generation module includes a plurality of collector modules, a credentials module, a suggestion analyzer module, a user interface module and a decision tree. The plurality of collector modules are coupled to respective systems to collect information accessible by the user and important to the user from other systems such as e-mail systems, SMS/MMS systems, micro blogging systems, social networks or other systems. The information from these collector modules is provided to the suggestion analyzer module. The suggestion analyzer module cooperates with the user interface module and the decision tree to generate suggested reactions or messages for the user to send.

Translation: At this point, we have so much information on you that we know you better than you know yourself, so you may as well let us do your social networking for you.
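In engineering terms, the module the patent describes is a plain pipeline: collectors pull in your messages, an analyzer walks a decision tree, out comes the canned reaction. A skeletal sketch of that shape — the events and replies are my inventions, as the filing contains no code:

```python
# Skeleton of the patent's "suggestion generation module": collector
# modules feed a suggestion analyzer that walks a simple decision tree.
# Event types and canned replies are invented for illustration.

def email_collector():
    return [{"from": "dana", "event": "new_job"}]

def social_collector():
    return [{"from": "raj", "event": "birthday"}]

DECISION_TREE = {
    "new_job": "Congratulations!",
    "birthday": "Happy birthday!",
}

def suggestion_analyzer(collectors):
    """Yield (recipient, suggested reaction) pairs from collected events."""
    for collector in collectors:
        for item in collector():
            reply = DECISION_TREE.get(item["event"])
            if reply:
                yield item["from"], reply

for who, msg in suggestion_analyzer([email_collector, social_collector]):
    print(f"Suggested reaction to {who}: {msg}")
```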

Google notes that the automation of personal messaging will help people avoid embarrassing social faux pas:

Many users use online social networking for both professional and personal uses. Each of these different types of use has its own unstated protocol for behavior. It is extremely important for the users to act in an adequate manner depending upon which social network on which they are operating. For example, it may be very important to say “congratulations” to a friend when that friend announces that she/he has gotten a new job. This is a particular problem as many users subscribe to many social different social networks. With an ever increasing online connectivity and growing list of online contacts and given the amount of information users put online, it is possible for a person to miss such an update.

A computer will generate a personal “congratulations!” note to send to a friend, and upon the reception of the note, the friend’s computer will respond with a personal “thanks!” note, which will trigger the generation of a “no problem!” note. I think this is getting very close to the social networking system Mark Zuckerberg has always dreamed about. When confronted with an unstated protocol for behavior, it’s best to let the suggestion analyzer module do the talking.

Beyond the practical stream-management benefits, there’s a much bigger story here. The Google message-automation service promises to at last close the realtime loop: A computer running personalization algorithms will generate your personal messages. These computer-generated messages, once posted or otherwise transmitted, will be collected online by other computers and used to refine your personal profile. Your refined personal profile will then feed back into the personalization algorithms used to generate your messages, resulting in a closer fit between your computer-generated messages and your computer-generated persona. And around and around it goes until a perfect stasis between self and expression is achieved. The thing that you once called “you” will be entirely out of the loop at this point, of course, but that’s for the best. Face it: you were never really very good at any of this anyway.
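The loop is simple enough to simulate. Treat the profile and the messages as numbers; each message is generated from the profile, and each profile update is driven by the messages. A toy sketch, with arbitrary constants, of the promised stasis:

```python
# Toy simulation of the realtime loop. The numbers are arbitrary; the
# point is the vanishing gap between persona and expression.
profile = 0.9      # the algorithm's model of "you"
expression = 0.2   # what you would have said, left to yourself

for step in range(6):
    # Your "personal" messages are generated mostly from the profile...
    expression = 0.8 * profile + 0.2 * expression
    # ...and the profile is then refined from the messages it generated.
    profile = 0.5 * profile + 0.5 * expression
    print(f"step {step}: gap = {abs(profile - expression):.5f}")
# The gap shrinks by an order of magnitude per pass: perfect stasis,
# with the thing once called "you" nowhere in the loop.
```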

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here. Image from Google patent filing.

Ambient Reality


People are forever buttonholing me on the street and saying, “Nick, what comes after realtime?” It’s a good question, and I happen to know the answer: Ambient Reality. Ambient Reality is the ultimate disruption, as it alters the actual fabric of the universe. We begin living in the prenow. Things happen before they happen. “Between the desire / And the spasm,” wrote T. S. Eliot, “Falls the Shadow.” In Ambient Reality, the Shadow goes away. Spasm precedes desire. In fact, it’s all spasm. We enter what I call Uninterrupted Spasm State, or USS.

In “How the Internet of Things Changes Everything,” a new and seemingly machine-written article in Foreign Affairs, two McKinsey consultants write of “the interplay” between “the most disruptive technologies of the coming decade: the mobile Internet and the Internet of Things.” The “mobile-ready Internet of Things,” as they term it, will have “a profound, widespread, and transformative impact on how we live and work.” For instance, “by combining a digital camera in a wearable device with image-recognition software, a shopper can automatically be fed comparative pricing information based on the image of a product captured by the camera.” That’s something to look forward to, but the McKinseyites are missing the big picture. They underestimate the profundity, the ubiquity, and the transformativeness of the coming disruption. In Ambient Reality, there is no such thing as “a shopper.” Indeed, the concept of “shopping” becomes anachronistic. Goods are delivered before the urge to buy them manifests itself in the conscious mind. Demand is ambient, as are pricing comparisons. They become streams in the cloud.

eBay strategist John Sheldon gets closer to the truth when he describes, in a new Wired piece, the concept of “ambient commerce”:

Imagine setting up a rule in Nike+, he says, to have the app order you a new pair of shoes after you run 300 miles. … Now consider an even more advanced scenario. A shirt has a sensor that detects moisture. And you find yourself stuck out in the rain without an umbrella. Not too many minutes after the downpour starts, a car pulls up alongside you. A courier steps out and hands you an umbrella — or possibly a rain jacket, depending on what rules you set up ahead of time for such a situation.

I ask you: Are there no bounds to the dreams of our innovators?
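Mechanically, Sheldon's scenarios amount to nothing more exotic than an event-rule engine: a sensor stream on one side, standing orders on the other. A minimal sketch — sensors, thresholds, and actions all invented, nothing eBay has published:

```python
# Toy "ambient commerce" rule engine: sensor readings trigger standing
# orders. Sensor names, thresholds, and actions are all invented.

RULES = [
    (lambda s: s["miles_run"] >= 300, "order: running shoes"),
    (lambda s: s["shirt_moisture"] > 0.7, "dispatch: umbrella courier"),
]

def evaluate(sensors):
    """Return every action whose trigger condition the sensors satisfy."""
    return [action for test, action in RULES if test(sensors)]

state = {"miles_run": 312, "shirt_moisture": 0.9}
for action in evaluate(state):
    print(action)
# order: running shoes
# dispatch: umbrella courier
```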

Comments Wired‘s Marcus Wohlsen, “Though it might be hard to believe, the logistics of delivering that umbrella are likely more complex than the math behind detecting the water.” That is indeed hard to believe.

But even these scenarios fail to capture the full power of Ambient Reality. They assume some agency is required on the part of the consumer. One has to “set up a rule” about the lifespan of one’s sneakers. One has to pre-program a choice between umbrella and rain jacket. In Ambient Reality, no such agency is required. Personal decisions are made prenow, by communications among software-infused things. The sensors in your feet and in your sneakers are in constant communication not only with each other but with the cloud. When a new pair of sneakers is required, the new pair is automatically printed on your 3-D printer at home. The style of the sneakers is chosen algorithmically based on your past behavior as well as contemporaneous neural monitoring. Choice is ambient. As for that “courier” who “steps out and hands you an umbrella” after the onset of precipitation, that’s just plain retrograde. The required consumer good will be delivered before the rain starts by an unmanned drone delivery aircraft. The idea that humans will be involved in delivery chores is ridiculous. In Ambient Reality, human effort will be restricted to self-actualization—in other words, ambient consumption. That’s the essence of USS.

I hardly need mention that, once the shower has passed, the drone will retrieve the umbrella in order to deliver it to another person facing an imminent rain event. All assets will be shared to optimize utilization. Think how rarely you use your umbrella today: that’s a sign of how broken society is.

We are on the verge, says Wohlsen, of “a utopian future in which running out of toilet paper at the wrong time will never, ever happen again.” That’s very true, but the never-run-out-of-toilet-paper utopia is actually a transitional utopia. In the ultimate utopia of Ambient Reality, there will be no need for toilet paper. But I’ll leave that for a future post.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.