
Underwearables


If there’s one product category ripe for disruptive innovation, it’s lingerie. So it comes as no real surprise that Microsoft researchers have developed a smart bra. The self-quantifying garment is designed, write the researchers, to “perform emotion detection in a mobile, wearable system” as a means of triggering “just-in-time interventions to support behavior modification for emotional eating.”

The smart bra is outfitted with sensors that measure a woman’s stress level by tracking her heart rate, respiration, skin conductance, and body movements. The data is streamed from the bra to a behavior-modification smartphone app, called EmoTree, and then uploaded to “a Microsoft Azure Cloud” for storage and, one assumes, ad personalization purposes.

Here’s a schematic look at how the Microsoft Nudge Bra is wired:

[Image: schematic of the smart bra's sensors and data flow]
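
The paper doesn't publish the EmoTree code, so purely by way of illustration, the pipeline the researchers describe might be sketched like this. The sensor channels come from their description; the class, the crude "stress" heuristic, and the upload endpoint are all invented for this example.

    # Illustrative sketch only: sensor channels per the researchers' description;
    # the class, the heuristic, and the endpoint are hypothetical.
    from dataclasses import dataclass, asdict
    import json
    import urllib.request

    @dataclass
    class SensorReading:
        heart_rate: float        # beats per minute
        respiration: float       # breaths per minute
        skin_conductance: float  # microsiemens
        movement: float          # accelerometer magnitude

    def looks_stressed(r: SensorReading) -> bool:
        # Stand-in for the app's emotion classifier: elevated heart rate
        # plus elevated skin conductance counts as "stressed."
        return r.heart_rate > 100 and r.skin_conductance > 10.0

    def upload(r: SensorReading, url: str) -> None:
        # Ship the reading off to cloud storage (hypothetical endpoint).
        body = json.dumps(asdict(r)).encode("utf-8")
        req = urllib.request.Request(url, data=body,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    reading = SensorReading(heart_rate=108, respiration=22,
                            skin_conductance=12.5, movement=0.3)
    if looks_stressed(reading):
        print("Intervention: maybe step away from the refrigerator.")
    # upload(reading, "https://example.invalid/emotree/readings")

The point of the sketch is just the shape of the thing: continuous intimate measurement, a classifier, and a cloud on the other end.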

The researchers provide an example of how the smart bra might be used to deliver behavioral nudges at opportune moments:

Sally has been home from work for a few hours, and she finds herself rather bored. An application on Sally’s mobile phone has also detected that she is bored by reading her physiological state through wearable sensors. Since this mobile application has previously learned that Sally is most susceptible to emotional eating when she is bored, the application provides an intervention to distract Sally and hopefully prevent her from eating at that moment.

I’m not sure this is exactly what Donna Haraway had in mind when she wrote her cyborg manifesto. There doesn’t seem to be much confusion of boundaries involved in a bra-based weight-management app.

Early tests of the smart bra were not altogether successful, it must be said. The device’s short battery life “resulted in participants having to finagle with their wardrobe throughout the day.” Another drawback of the breast-centric form factor is that it’s far from gender-neutral. Its usefulness is restricted to the female anatomy. “We tried to do the same thing for men’s underwear,” reported one of the researchers, “but it was too far away [from the heart].” That has always been a problem. Still, one can imagine other forms of behavior modification that may be facilitated by underpants sensors.

Google Glass, clearly, is just the visible tip of the approaching iceberg. One can only hope that these new underwearables will, when they finally come to market, be equipped with a vibrate mode.

Big Data: essential even when useless


Rebecca Greenfield reports on the arrival of “extreme baby monitoring.” For a few hundred bucks, new parents will soon be able to outfit their putative bundles of joy with a variety of sensors—ankle monitors, “smart diapers,” even a networked onesie that sends respiration, temperature, and other data feeds to smartphones—that enable “a big-data approach to parenting.” Comments Greenfield, “By gathering information on your kid’s poop, sleep, and eating schedules, the idea goes, you can engineer a happier, healthier baby.” This does seem like an advance on the technology strategy I deployed in baby-rearing, which involved a pacifier and a martini.*

As a case in point, Greenfield tells the story of Yasmin Lucero, who meticulously tracked a variety of data on her baby Elle. Elle wasn’t a great sleeper—she cried a lot in her crib—and Yasmin hoped that Big Baby Data would unlock the reasons underlying the problem and point to a solution: “She wanted answers: Did she put Elle to bed too early? Too late? Give her too many naps? Parsing data, she thought, would help her figure it out.”

So, after months of grueling data collection and graphing, what did Big Data reveal? Absolutely nothing. “Per the data, Elle was just fussy.”

A waste of time? Not at all: “The results suggested Yasmin couldn’t engineer better naps, as she’d hoped. Just knowing that, however, made her feel better. ‘If you come to the conclusion that you have no control, then it’s okay to relax and just do whatever is convenient for you at the moment,’ she explained.” Let this be an inspiration to Big Data marketers. Large-scale data analysis may be a waste of time and money, but that doesn’t make it any less necessary. After all, how will you know that Big Data has nothing to tell you if you don’t invest in it?

Come to think of it, as a marketing strategy this would also work quite well for Ouija boards and the I Ching.

*Important legal notice: The baby gets the pacifier, the parent gets the martini.

The FAA’s automation report


[UPDATE 11/21: The FAA’s new report on flight automation and safety has been released and can be read here.]

At an aviation conference held in Milan in November of 2010, Kathy Abbott, a top human-factors researcher with the Federal Aviation Administration, gave what she described as an early look at a major new report on flight automation. The findings Abbott presented were disturbing. An FAA review of flights between 2001 and 2009 implicated automation-related problems in a large percentage of crashes and dangerous incidents during those years. The Wall Street Journal‘s Andy Pasztor summed up Abbott’s remarks:

The study’s conclusions buttress the idea that a significant percentage of airline pilots rely excessively on computerized cockpit aids. Such adherence to computer-assisted piloting — and the confusion that can result when pilots fail to properly keep up with computer changes — increasingly are considered major factors in airliner crashes world-wide. … The errors included inappropriate control inputs by pilots and incorrect responses when trying to recover from aircraft upsets. … Focusing too much on manipulating flight-control computers, according to Ms. Abbott, often “distracts from managing the flight path of the airplane.”

The FAA indicated that the final report on the new research, which was intended as a follow-up to the agency’s landmark 1996 study on cockpit automation, would likely be released in 2011. It never appeared. It didn’t appear in 2012, either. During this time, I began to research the human impacts of computer automation (research that led to my article in the current Atlantic and that forms the basis of my next book, The Glass Cage). I made a couple of attempts to interview Abbott, which were politely but curtly rebuffed. I sensed that the FAA knew its research would be controversial, and it was being meticulous in preparing the report and its rollout.

In what seemed like a preview of the report’s conclusions, the agency released in January of this year a “Safety Alert for Operators”—SAFO 13002—that urged airlines to get their pilots to do more manual flying. Drawing on the ongoing research, the alert contained a warning:

Modern aircraft are commonly operated using autoflight systems (e.g., autopilot or autothrottle/autothrust). Unfortunately, continuous use of those systems does not reinforce a pilot’s knowledge and skills in manual flight operations. Autoflight systems are useful tools for pilots and have improved safety and workload management, and thus enabled more precise operations. However, continuous use of autoflight systems could lead to degradation of the pilot’s ability to quickly recover the aircraft from an undesired state.

Now, at long last, the FAA appears ready to release its full report, according to an article by Pasztor in today’s Journal. Pasztor has read a draft of the nearly 300-page document, and he summarizes its main thrust:

Relying too heavily on computer-driven flight decks — and problems that result when crews fail to properly keep up with changes in levels of automation — now pose the biggest threats to airliner safety world-wide, the study concluded. The results can range from degraded manual-flying skills to poor decision-making to possible erosion of confidence among some aviators when automation abruptly malfunctions or disconnects during an emergency.

The report includes the observation that, thanks to flight automation, pilots have grown “accustomed to watching things happen … instead of being proactive.” The pilot’s new role as “a manager of systems” can intrude on the actual flying of the plane.

None of this is unexpected. By now, there is a very large body of research on flight automation, dating back a couple of decades, that clearly demonstrates the risk of skill erosion as dependency on computers grows. The FAA study promises to play a vital role in bringing this research into the public eye. Beyond the important implications for the aviation profession, it will serve as a timely and general warning about the risks of relying too much on software, both in our work lives and our personal lives.

Image: Airbus A380 “glass cockpit.”

The mind in the landscape


Babbage reports on an intriguing new study that links the landscape we’re in (or looking at) to the time scale of our thoughts:

Sitting in his remote cottage, baby son slumbering by his side, Samuel Taylor Coleridge pondered the little one’s future in “Frost at Midnight”. A study published in the Proceedings of the Royal Society suggests his “abstruser musings” were not that unusual, given his alentours. Mark van Vugt, of VU University in Amsterdam, and his colleagues found that country scenery of the sort Coleridge beheld inspires people to think about the future; concrete cityscapes encourage quick decisions aimed at immediate rewards.

To reach that conclusion Dr van Vugt and his team randomly assigned 47 participants either to look at three city photographs, or three country photographs, for two minutes each. After that participants were asked to pick between €100 ($135) now or a larger sum, which grew in €10 increments up to €170, in 90 days’ time. Those beholding natural landscapes made the switch to deferred gratification at a sum, known as the indifference point, that was 10% below those who scanned cityscapes. The same was true when another 43 volunteers were asked either to walk in an actual forest outside Amsterdam or in the city’s commercial area of Zuidas.
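
To make the study's key measure concrete, here is a back-of-the-envelope sketch. The 150-euro starting figure is hypothetical, not taken from the paper; only the "10% lower" relationship comes from the reported result.

    # Back-of-the-envelope illustration of the "indifference point."
    # The 150-euro figure is hypothetical, not from the study.
    city_point = 150.0               # delayed sum at which a city viewer agrees to wait
    nature_point = city_point * 0.9  # reported result: roughly 10% lower for nature viewers

    for label, point in [("city", city_point), ("nature", nature_point)]:
        premium = point / 100.0 - 1.0  # extra return demanded for waiting 90 days
        print(f"{label}: indifference point {point:.0f} euros, premium {premium:.0%}")

A lower indifference point means a smaller premium is demanded for waiting, which is what the researchers read as a greater willingness to think beyond the immediate moment.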

This reminds me of the work that’s been done on “attention restoration theory,” which posits a link between landscape and attentiveness. I described one relevant study in The Shallows:

A team of University of Michigan researchers, led by psychologist Marc Berman, recruited some three dozen people and subjected them to a rigorous, and mentally fatiguing, series of tests designed to measure the capacity of their working memory and their ability to exert top-down control over their attention. The subjects were then divided into two groups. Half of them spent about an hour walking through a secluded woodland park, and the other half spent an equal amount of time walking along busy downtown streets. Both groups then took the tests a second time. Spending time in the park, the researchers found, “significantly improved” people’s performance on the cognitive tests, indicating a substantial increase in attentiveness. Walking in the city, by contrast, led to no improvement in test results.

The researchers then conducted a similar experiment with another set of people. Rather than taking walks between the rounds of testing, these subjects simply looked at photographs of either calm rural scenes or busy urban ones. The results were the same. The people who looked at pictures of nature scenes were able to exert substantially stronger control over their attention, while those who looked at city scenes showed no improvement in their attentiveness. “In sum,” concluded the researchers, “simple and brief interactions with nature can produce marked increases in cognitive control.” Spending time in the natural world seems to be of “vital importance” to “effective cognitive functioning.”

I don’t find the results of these studies surprising. They match up pretty well with my own experience. What makes them valuable, I think, is the way they remind us that our minds are part of the world—something that’s easy to forget.

Image: detail from Constable’s “Landscape with Clouds.”

Is software de-skilling programmers?


One of the themes of “The Great Forgetting,” my essay in the new Atlantic, is the spread of de-skilling into the professional work force. Through the nineteenth and twentieth centuries, the mechanization of industry led to the de-skilling of many manual trades, turning craftsmen into machine operators. As software automates intellectual labor, there are signs that a similar trend is influencing white-collar workers, from accountants to lawyers.

Software writers themselves don’t seem immune from the new de-skilling wave. The longtime Google programmer Vivek Haldar, responding to my essay on his personal blog, writes of the danger of de-skilling inherent in modern integrated development environments (IDEs) like Eclipse and Visual Studio. IDEs automate many routine coding tasks, and as they’ve grown more sophisticated they’ve taken on higher-level tasks like restructuring, or “refactoring,” code:

Modern IDEs are getting “helpful” enough that at times I feel like an IDE operator rather than a programmer. They have support for advanced refactoring. Linters can now tell you about design issues and code smells. The behavior all these tools encourage is not “think deeply about your code and write it carefully”, but “just write a crappy first draft of your code, and then the tools will tell you not just what’s wrong with it, but also how to make it better.”
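
For readers who don't spend their days in an IDE, here is a toy example, invented for illustration, of the kind of mechanical clean-up these tools now volunteer:

    # Before: the "crappy first draft." A linter will flag the unused import,
    # the parameter that shadows a built-in, and the hand-rolled loop that
    # sum() already provides.
    import os  # unused

    def total(list):
        s = 0
        for x in list:
            s = s + x
        return s

    # After: what the IDE's rename and refactoring suggestions nudge you toward.
    def total_price(prices):
        return sum(prices)

Nothing in the second version is wrong, which is precisely Haldar's point: the improvement arrives without the writer having to do much of the thinking.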

Haldar is not dismissing the benefits of IDEs, which, he argues, can lead to “a cleaner codebase” as well as greater productivity. His comments point to the essential tension that has always characterized technological de-skilling: the very real benefits of labor-saving technology come at the cost of a loss of human talent. The hard challenge is knowing where to draw the line—or just realizing that there is a line to be drawn.

Photo by Nathan Bergey.

Tote that barge, tweet that tweet


John Maynard Keynes believed that labor-saving technology would eventually create a utopia of leisure. (The date he had in mind was 2030.) Relieved of our narrow, demeaning jobs, we’d enjoy a wealth of pastimes. Marx, earlier, had a similar dream: “In communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.” Sign me up!

Ian Bogost suggests that what modern technology might be creating is a kind of parody of that utopia — a Rube Goldbergian treadmill of small, never-ending tasks. The regulation of online production is turning us into jittery information-processing generalists, jacks of all media trades. We’re all “hyperemployed,” whether we’re earning a decent wage or not:

Increasingly, online life in general feels like this. The endless, constant flow of email, notifications, direct messages, favorites, invitations. After that daybreak email triage, so many other icons on your phone boast badges silently enumerating their demands. Facebook notifications. Twitter @-messages, direct messages. Tumblr followers, Instagram favorites, Vine comments. Elsewhere too: comments on your blog, on your YouTube channel … New messages in the forums you frequent. Your Kickstarter campaign updates. Your Etsy shop. Your Ebay watch list. And then, of course, more email. Always more email. …

Even if there is more than a modicum of exploitation at work in the hyperemployment economy, the despair and overwhelm of online life doesn’t derive from that exploitation—not directly anyway. Rather, it’s a type of exhaustion cut of the same sort that afflicts the underemployed as well … The economic impact of hyperemployment is obviously different from that of underemployment, but some of the same emotional toll imbues both: a sense of inundation, of being trounced by demands whose completion yields only their continuance, and a feeling of resignation that any other scenario is unlikely or even impossible.

They gave us utopia, but they forgot the fishing rods.

Peak ebook?

I’ve been documenting the recent, surprisingly sharp decline in ebook sales growth. The falloff has continued through the first half of this year, with ebooks now showing clear signs of “stagnating” at about 25 percent of the overall U.S. book market, according to Digital Book World: “Once thought destined to reach 50% or 80% of all book buying and reading in the U.S., ebooks have stalled out on their way up to higher altitude.”

DBW bases that conclusion on a new study by the Book Industry Study Group, a publishing trade association, which uses data from Nielsen Book Research. The study shows that “for the past year or so, the share of all new ebooks sold — both in units and dollars — has been flat at about 30% and just under 15%, respectively.” A DBW chart drawn from the Nielsen numbers indicates that ebooks actually lost some market share during the second quarter of this year (a trend also seen in recent sales reports from the Association of American Publishers):

[Chart: ebook share of new books sold, in units and dollars]


The Nielsen data also reveals “a slow decline in the number of people who exclusively buy e-books.” Comments Nielsen’s Jo Henry: “It is clear from four annual research surveys that e-books are in the later stages of the innovation curve and have settled into reasonably predictable consumption patterns.”

Maybe this is just an anomaly and ebooks will eventually gain a second wind and start taking more share from printed books. Right now, though, it’s looking as though there’s a Gutenberg Firewall — and that ebooks have hit it.