Automating the feels

[Image: redwedding]

It’s been hard not to feel a deepening of the soul as the palette of online emotion signifiers has expanded from sparse typographic emoticons to colorful and animated emoji. Some cynics believe that emotions have no place in the realtime stream, but in fact the stream is full of feels, graphically expressed, fully machine-readable, and entailing minimal latency drain. Evan Selinger puts the emoji trend into perspective:

The mood graph has arrived, taking its place alongside the social graph (most commonly associated with Facebook), citation-link graph and knowledge graph (associated with Google), work graph (LinkedIn and others), and interest graph (Pinterest and others). Like all these other graphs, the mood graph will enable relevance, customization, targeting; search, discovery, structuring; advertising, purchasing behaviors, and more.

The arrival of the mood graph comes at the same time that facial-recognition and eye-tracking apps are beginning to blossom. The camera, having looked outward so long, is finally turning inward. Vanessa Wong notes the release, by the online training firm Mindflash, of FocusAssist for the iPad, which

uses the tablet’s camera to track a user’s eye movements. When it senses that you’ve been looking away for more than a few seconds (because you were sending e-mails, or just fell asleep), it pauses the [training] course, forcing you to pay attention—or at least look like you are—in order to complete it.
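
The gating logic Wong describes is simple enough to sketch. What follows is a minimal, hypothetical approximation in Python; Mindflash hasn’t published FocusAssist’s code, so the player interface, the gaze_on_screen predicate, and the three-second threshold are all my own assumptions:

```python
# Hypothetical sketch of FocusAssist-style attention gating; not Mindflash's
# actual implementation. Assumes a gaze_on_screen() predicate backed by the
# tablet camera and a course player exposing finished()/paused()/pause()/resume().

import time

LOOKAWAY_THRESHOLD = 3.0  # seconds of averted eyes before pausing (invented value)

def run_attention_gate(player, gaze_on_screen, poll_interval=0.25):
    """Pause the course whenever the viewer looks away for too long."""
    looked_away_at = None
    while not player.finished():
        if gaze_on_screen():
            looked_away_at = None
            if player.paused():
                player.resume()  # eyes are back on screen; let the training resume
        else:
            if looked_away_at is None:
                looked_away_at = time.time()  # start the look-away clock
            elif (time.time() - looked_away_at > LOOKAWAY_THRESHOLD
                  and not player.paused()):
                player.pause()  # force attention, or at least the look of it
        time.sleep(poll_interval)
```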

The next step is obvious: automating the feels. Whenever you write a message or update, the camera in your smartphone or tablet will “read” your eyes and your facial expression, precisely calculate your mood, and append the appropriate emoji. Not only does this speed up the process immensely, but it removes the requirement for subjective self-examination and possible obfuscation. Automatically feeding objective mood readings into the mood graph helps purify and enrich the data even as it enhances the efficiency of the realtime stream. For the three parties involved in online messaging—sender, receiver, and tracker—it’s a win-win-win.
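
None of this requires exotic technology. Here is a hedged sketch of the pipeline in Python: classify_expression is a stub standing in for whatever off-the-shelf facial-expression model one might plug in, and the mood-to-emoji mapping is my own invention, not any real mood-graph API:

```python
# Hypothetical "automated feels" pipeline: read the sender's face, infer a
# mood, append the matching emoji. Everything here is illustrative.

MOOD_TO_EMOJI = {  # invented mapping
    "happy": "😀",
    "sad": "😢",
    "angry": "😠",
    "surprised": "😲",
    "neutral": "😐",
}

def classify_expression(frame) -> str:
    """Stub: a real system would run a facial-expression model on the camera frame."""
    return "neutral"

def auto_annotate(message: str, selfie_frame) -> str:
    """Append an emoji matching the sender's detected mood;
    no subjective self-examination required."""
    mood = classify_expression(selfie_frame)
    return f"{message} {MOOD_TO_EMOJI.get(mood, '😐')}"
```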

Some people feel a certain existential nausea when contemplating these trends. Selinger, for one, is wary of some of the implications of the mood graph:

The more we rely on finishing ideas with the same limited words (feeling happy) and images (smiley face) available to everyone on a platform, the more those pre-fabricated symbols structure and limit the ideas we express. … [And] drop-down expression makes us one-dimensional, living caricatures of G-mail’s canned responses — a style of speech better suited to emotionless computers than flesh-and-blood humans. As Marshall McLuhan observed, just as we shape our tools, they shape us too. It’s a two-way street.

Robinson Meyer, meanwhile, finds himself “creeped out” by FocusAssist:

FocusAssist forces people to perform a very specific action with their eyeballs, on behalf of “remote organizations,” so that they may learn what the organization wants them to learn. Forcing a human’s attention through algorithmic surveillance: It’s the stuff of A Clockwork Orange. …

How long until a feature like FocusAssist is rebranded as AttentionMonitor and included in a MOOC, or a University of Phoenix course? How long until an advertiser forces you to pay attention to its ad before you can watch the video that follows? And how long, too, until FocusAssist itself is used outside of the context it was designed for?

All worthy concerns, I’m sure, but I sense they arrive too late. We need to remember what Norbert Wiener wrote more than sixty years ago:

I have spoken of machines, but not only of machines having brains of brass and thews of iron. When human atoms are knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood. What is used as an element in a machine, is in fact an element in the machine.

The raw material now encompasses emotion as well as flesh and blood. If you have an emotion that is unencapsulated in an emoji and unread by an eye-tracking app—that fails to become an element of the machine—did you really feel it? Probably not. At least by automating this stuff, you’ll always know you felt something.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.

4 thoughts on “Automating the feels”

  1. CS Clark

    7. Never show dismay! Never show resentment! A single flicker of the eyes could give you away.

    10 Top Tips For Passing Udacity Courses, Huffington Post, Retrieved: 13:00, 26th April, 2017

  2. grizzlymarmot

    Emoji often seem to be used to recover from a Freudian slip. In that sense, ignoring them is the best way to determine how people really feel.
    On the other hand, what if the choice of Emoji were itself the Freudian slip? I sense a naming opportunity! “Emojid – an ideogrammatical error that reveals the repressed feelings of the message sender.”
