The crew at the Rough Type Curation Lab nearly blew their deadline this year, but at the last moment they’ve coughed up a mixtape for the final evening of 2013. (Spotify required.) It opens with a full-body assault from Supergrass. And then it gets intense. You want to time this so that “The Midnight Choir” ends and “Left of the Dial” begins right at the stroke of twelve. Yes, the Curation Lab thinks at that level of temporal granularity.
Another bulletin from the near future:
“I don’t have a microchip in my head – yet,” says the man charged with transforming Google’s relations with the technology giant’s human users. But Scott Huffman does envisage a world in which Google microphones, embedded in the ceiling, listen to our conversations and interject verbal answers to whatever inquiry is posed.
Ceilings with ears. A dream come true.
It’s clear now that Google and Microsoft have to bury the hatchet, if only to collaborate on a system combining the Microsoft Nudge Bra with the Google Ambient Nag. So when the Nudge Bra picks up a stress-related eating urge, the Ambient Nag will be able to say something like, “Do you really want those Twizzlers?”
The voice from the ceiling is only the beginning. Eventually, Huffman suggests, the Ambient Nag will become indistinguishable from the voice of your conscience:
Google believes it can ultimately fulfil people’s data needs by sending results directly to microchips implanted into its user’s brains. … “If you think hard enough about certain words they can be picked up by sensors fairly easily. It’ll be interesting to see how that develops,” Mr Huffman said.
No one will ever get pudgy again.
If there’s one product category ripe for disruptive innovation, it’s lingerie. So it comes as no real surprise that Microsoft researchers have developed a smart bra. The self-quantifying garment is designed, write the researchers, to “perform emotion detection in a mobile, wearable system” as a means of triggering “just-in-time interventions to support behavior modification for emotional eating.”
The smart bra is outfitted with sensors that measure a woman’s stress level by tracking her heart rate, respiration, skin conductance, and body movements. The data is streamed from the bra to a behavior-modification smartphone app, called EmoTree, and then uploaded to “a Microsoft Azure Cloud” for storage and, one assumes, ad personalization purposes.
Here’s a schematic look at how the Microsoft Nudge Bra is wired:
The researchers provide an example of how the smart bra might be used to deliver behavioral nudges at opportune moments:
Sally has been home from work for a few hours, and she finds herself rather bored. An application on Sally’s mobile phone has also detected that she is bored by reading her physiological state through wearable sensors. Since this mobile application has previously learned that Sally is most susceptible to emotional eating when she is bored, the application provides an intervention to distract Sally and hopefully prevent her from eating at that moment.
I’m not sure this is exactly what Donna Haraway had in mind when she wrote her cyborg manifesto. There doesn’t seem to be much confusion of boundaries involved in a bra-based weight-management app.
Early tests of the smart bra were not altogether successful, it must be said. The device’s short battery life “resulted in participants having to finagle with their wardrobe throughout the day.” Another drawback of the breast-centric form factor is that it’s far from gender-neutral. Its usefulness is restricted to the female anatomy. “We tried to do the same thing for men’s underwear,” reported one of the researchers, “but it was too far away [from the heart].” That has always been a problem. Still, one can imagine other forms of behavior modification that may be facilitated by underpants sensors.
Google Glass, clearly, is just the visible tip of the approaching iceberg. One can only hope that these new underwearables will, when they finally come to market, be equipped with a vibrate mode.
Broadway, as you’ll recall, was the nickname of the fellow that 50 Cent hired to ghost his tweets. “The energy of it is all him,” Broadway said of the simulated stream he produced for his boss. Or, as Baudrillard put it: “Ecstasy of information: simulation. Truer than true.”
Now that we’re all microcelebrities, we need to democratize Broadway. No mortal can keep up with Twitter, Facebook, Instagram, Tumblr, LinkedIn, Snapchat, etc., all by himself/herself. There’s just not enough realtime in the day. We all need a doppeltweeter to channel our energy.
Since the ability to clone Broadway is still three or four years out, Google is stepping into the breach by automating the maintenance of one’s social media presence. The company, as the BBC reports, was earlier this week granted a patent for “automated generation of suggestions for personalized reactions in a social network.” The description of the anticipated service is poetic:
A suggestion generation module includes a plurality of collector modules, a credentials module, a suggestion analyzer module, a user interface module and a decision tree. The plurality of collector modules are coupled to respective systems to collect information accessible by the user and important to the user from other systems such as e-mail systems, SMS/MMS systems, micro blogging systems, social networks or other systems. The information from these collector modules is provided to the suggestion analyzer module. The suggestion analyzer module cooperates with the user interface module and the decision tree to generate suggested reactions or messages for the user to send.
Translation: At this point, we have so much information on you that we know you better than you know yourself, so you may as well let us do your social networking for you.
Google notes that the automation of personal messaging will help people avoid embarrassing social faux pas:
Many users use online social networking for both professional and personal uses. Each of these different types of use has its own unstated protocol for behavior. It is extremely important for the users to act in an adequate manner depending upon which social network on which they are operating. For example, it may be very important to say “congratulations” to a friend when that friend announces that she/he has gotten a new job. This is a particular problem as many users subscribe to many social different social networks. With an ever increasing online connectivity and growing list of online contacts and given the amount of information users put online, it is possible for a person to miss such an update.
A computer will generate a personal “congratulations!” note to send to a friend, and upon the reception of the note, the friend’s computer will respond with a personal “thanks!” note, which will trigger the generation of a “no problem!” note. I think this is getting very close to the social networking system Mark Zuckerberg has always dreamed about. When confronted with an unstated protocol for behavior, it’s best to let the suggestion analyzer module do the talking.
Beyond the practical stream-management benefits, there’s a much bigger story here. The Google message-automation service promises to at last close the realtime loop: A computer running personalization algorithms will generate your personal messages. These computer-generated messages, once posted or otherwise transmitted, will be collected online by other computers and used to refine your personal profile. Your refined personal profile will then feed back into the personalization algorithms used to generate your messages, resulting in a closer fit between your computer-generated messages and your computer-generated persona. And around and around it goes until a perfect stasis between self and expression is achieved. The thing that you once called “you” will be entirely out of the loop at this point, of course, but that’s for the best. Face it: you were never really very good at any of this anyway.
Rebecca Greenfield reports on the arrival of “extreme baby monitoring.” For a few hundred bucks, new parents will soon be able to outfit their putative bundles of joy with a variety of sensors—ankle monitors, “smart diapers,” even a networked onesie that sends respiration, temperature, and other data feeds to smartphones—that enable “a big-data approach to parenting.” Comments Greenfield, “By gathering information on your kid’s poop, sleep, and eating schedules, the idea goes, you can engineer a happier, healthier baby.” This does seem like an advance on the technology strategy I deployed in baby-rearing, which involved a pacifier and a martini.*
As a case in point, Greenfield tells the story of Yasmin Lucero, who meticulously tracked a variety of data on her baby Elle. Elle wasn’t a great sleeper—she cried a lot in her crib—and Yasmin hoped that Big Baby Data would unlock the reasons underlying the problem and point to a solution: “She wanted answers: Did she put Elle to bed too early? Too late? Give her too many naps? Parsing data, she thought, would help her figure it out.”
So, after months of grueling data collection and graphing, what did Big Data reveal? Absolutely nothing. “Per the data, Elle was just fussy.”
A waste of time? Not at all: “The results suggested Yasmin couldn’t engineer better naps, as she’d hoped. Just knowing that, however, made her feel better. ‘If you come to the conclusion that you have no control, then it’s okay to relax and just do whatever is convenient for you at the moment,’ she explained.” Let this be an inspiration to Big Data marketers. Large-scale data analysis may be a waste of time and money, but that doesn’t make it any less necessary. After all, how will you know that Big Data has nothing to tell you if you don’t invest in it?
Come to think of it, as a marketing strategy this would also work quite well for Ouija boards and the I Ching.
*Important legal notice: The baby gets the pacifier, the parent gets the martini.
[UPDATE 11/21: The FAA's new report on flight automation and safety has been released and can be read here.]
At an aviation conference held in Milan in November of 2010, Kathy Abbott, a top human-factors researcher with the Federal Aviation Administration, gave what she described as an early look at a major new report on flight automation. The findings Abbott presented were disturbing. An FAA review of flights between 2001 and 2009 implicated automation-related problems in a large percentage of crashes and dangerous incidents during those years. The Wall Street Journal‘s Andy Pasztor summed up Abbott’s remarks:
The study’s conclusions buttress the idea that a significant percentage of airline pilots rely excessively on computerized cockpit aids. Such adherence to computer-assisted piloting — and the confusion that can result when pilots fail to properly keep up with computer changes — increasingly are considered major factors in airliner crashes world-wide. … The errors included inappropriate control inputs by pilots and incorrect responses when trying to recover from aircraft upsets. … Focusing too much on manipulating flight-control computers, according to Ms. Abbott, often “distracts from managing the flight path of the airplane.”
The FAA indicated that the final report on the new research, which was intended as a follow-up to the agency’s landmark 1996 study on cockpit automation, would likely be released in 2011. It never appeared. It didn’t appear in 2012, either. During this time, I began to research the human impacts of computer automation (research that led to my article in the current Atlantic and that forms the basis of my next book, The Glass Cage). I made a couple of attempts to interview Abbott, which were politely but curtly rebuffed. I sensed that the FAA knew its research would be controversial, and it was being meticulous in preparing the report and its rollout.
In what seemed like a preview of the report’s conclusions, the agency released in January of this year a “Safety Alert for Operators”—SAFO 13002—that urged airlines to get their pilots to do more manual flying. Drawing on the ongoing research, the alert contained a warning:
Modern aircraft are commonly operated using autoflight systems (e.g., autopilot or autothrottle/autothrust). Unfortunately, continuous use of those systems does not reinforce a pilot’s knowledge and skills in manual flight operations. Autoflight systems are useful tools for pilots and have improved safety and workload management, and thus enabled more precise operations. However, continuous use of autoflight systems could lead to degradation of the pilot’s ability to quickly recover the aircraft from an undesired state.
Now, at long last, the FAA appears ready to release its full report, according to an article by Pasztor in today’s Journal. Pasztor has read a draft of the nearly 300-page document, and he summarizes its main thrust:
Relying too heavily on computer-driven flight decks — and problems that result when crews fail to properly keep up with changes in levels of automation — now pose the biggest threats to airliner safety world-wide, the study concluded. The results can range from degraded manual-flying skills to poor decision-making to possible erosion of confidence among some aviators when automation abruptly malfunctions or disconnects during an emergency.
The report includes the observation that, thanks to flight automation, pilots have grown “accustomed to watching things happen … instead of being proactive.” The pilot’s new role as “a manager of systems” can intrude on the actual flying of the plane.
None of this is unexpected. By now, there is a very large body of research on flight automation, dating back a couple of decades, that clearly demonstrates the risk of skill erosion as dependency on computers grows. The FAA study promises to play a vital role in bringing this research into the public eye. Beyond the important implications for the aviation profession, it will serve as a timely and general warning about the risks of relying too much on software, both in our work lives and our personal lives.
Image: Airbus A380 “glass cockpit.”
Babbage reports on an intriguing new study that links the landscape we’re in (or looking at) to the time scale of our thoughts:
Sitting in his remote cottage, baby son slumbering by his side, Samuel Taylor Coleridge pondered the little one’s future in “Frost at Midnight”. A study published in the Proceedings of the Royal Society suggests his “abstruser musings” were not that unusual, given his alentours. Mark van Vugt, of VU University in Amsterdam, and his colleagues found that country scenery of the sort Coleridge beheld inspires people to think about the future; concrete cityscapes encourage quick decisions aimed at immediate rewards.
To reach that conclusion Dr van Vugt and his team randomly assigned 47 participants either to look at three city photographs, or three country photographs, for two minutes each. After that participants were asked to pick between €100 ($135) now or a larger sum, which grew in €10 increments up to €170, in 90 days’ time. Those beholding natural landscapes made the switch to deferred gratification at a sum, known as the indifference point, that was 10% below those who scanned cityscapes. The same was true when another 43 volunteers were asked either to walk in an actual forest outside Amsterdam or in the city’s commercial area of Zuidas.
This reminds me of the work that’s been done on “attention restoration theory,” which posits a link between landscape and attentiveness. I described one relevant study in The Shallows:
A team of University of Michigan researchers, led by psychologist Marc Berman, recruited some three dozen people and subjected them to a rigorous, and mentally fatiguing, series of tests designed to measure the capacity of their working memory and their ability to exert top-down control over their attention. The subjects were then divided into two groups. Half of them spent about an hour walking through a secluded woodland park, and the other half spent an equal amount of time walking along busy downtown streets. Both groups then took the tests a second time. Spending time in the park, the researchers found, “significantly improved” people’s performance on the cognitive tests, indicating a substantial increase in attentiveness. Walking in the city, by contrast, led to no improvement in test results.
The researchers then conducted a similar experiment with another set of people. Rather than taking walks between the rounds of testing, these subjects simply looked at photographs of either calm rural scenes or busy urban ones. The results were the same. The people who looked at pictures of nature scenes were able to exert substantially stronger control over their attention, while those who looked at city scenes showed no improvement in their attentiveness. “In sum,” concluded the researchers, “simple and brief interactions with nature can produce marked increases in cognitive control.” Spending time in the natural world seems to be of “vital importance” to “effective cognitive functioning.”
I don’t find the results of these studies surprising. They match up pretty well with my own experience. What makes them valuable, I think, is the way they remind us that our minds are part of the world—something that’s easy to forget.
Image: detail from Constable’s “Landscape with Clouds.”