“How Smartphones Hijack Our Minds”: sources

I draw on several studies in my Wall Street Journal essay “How Smartphones Hijack Our Minds.” Here are citations and links for anyone who would like to delve more deeply into the subject.

Three articles written or cowritten by Adrian Ward, formerly at the University of Colorado at Boulder and now at the University of Texas at Austin, were particularly valuable:

Ward, Duke, Gneezy, Bos, “Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity,” Journal of the Association for Consumer Research, 2017.

Ward, “Supernormal: How the Internet Is Changing Our Memories and Our Minds,” Psychological Inquiry, 2013.

Wegner, Ward, “How Google Is Changing Your Brain,” Scientific American, 2013.

Other studies cited, in the order mentioned:

Stothart, Mitchum, Yehnert, “The Attentional Cost of Receiving a Cell Phone Notification,” Journal of Experimental Psychology: Human Perception and Performance, 2015.

Clayton, Leshner, Almond, “The Extended iSelf: The Impact of iPhone Separation on Cognition, Emotion, and Physiology,” Journal of Computer-Mediated Communication, 2015.

Thornton, Faires, Robbins, Rollins, “The Mere Presence of a Cell Phone May Be Distracting: Implications for Attention and Task Performance,” Social Psychology, 2014. (I refer in particular to the second of two experiments described in this paper.)

Lee, Kim, McDonough, Mendoza, Kim, “The Effects of Cell Phone Use and Emotion-Regulation Style on College Students’ Learning,” Applied Cognitive Psychology, 2017.

Beland, Murphy, “Ill Communication: Technology, Distraction & Student Performance,” Labour Economics, 2016.

Przybylski, Weinstein, “Can You Connect with Me Now? How the Presence of Mobile Communication Technology Influences Face-to-Face Conversation Quality,” Journal of Social and Personal Relationships, 2013.

Misra, Cheng, Genevie, Yuan, “The iPhone Effect: The Quality of In-Person Social Interactions in the Presence of Mobile Devices,” Environment and Behavior, 2016.

Sparrow, Liu, Wegner, “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips,” Science, 2011.

William James’s observation that “the art of remembering is the art of thinking” comes from a lecture collected in the book Talks to Teachers on Psychology and to Students on Some of Life’s Ideals.

Cynthia Ozick’s reference to data as “memory without history” can be found in her essay “T.S. Eliot at 101,” published in the New Yorker in 1989.

Finally, at the start of the essay, I refer to Apple data showing that the average iPhone owner uses the device 80 times a day. This was disclosed in an Apple security presentation by Ivan Krstić last year. The figure refers to the number of times a device is unlocked during a day. Since it’s possible to check notifications without unlocking the phone, the figure likely understates the number of times people actually look at their phones during the day.

The amazing, mind-eating smartphone

In “How Smartphones Hijack Our Minds,” an essay in the Weekend Review section of the Wall Street Journal, I examine recent research into the ways smartphones influence our cognition and perception — even when we’re not using the devices.

Here’s a taste:

Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object in the environment that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.” Media and communication devices, from telephones to TV sets, have always tapped into this instinct. Whether turned on or switched off, they promise an unending supply of information and experiences. By design, they grab and hold our attention in ways natural objects never could.

But even in the history of captivating media, the smartphone stands out. It’s an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what [Adrian] Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it’s part of the surroundings — which it always is. Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library, and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That’s what a smartphone represents to us. No wonder we can’t take our minds off it.

Read on.

Image: Modes Rodriguez.

Mattel and Google: a double standard for AI toys?

Mattel yesterday pulled the plug on Aristotle, a planned smart-speaker-cum-baby-monitor developed by the company’s Nabi unit. The product had generated controversy since it was announced in January, with lawmakers, pediatricians, and child advocates raising concerns about how the device would collect data on and influence the behavior of children. The Washington Post summed up the concerns:

For one, the existence of a home hub for kids raised questions about data privacy for a vulnerable population. It also triggered broader concerns about how quickly companies are marketing products to parents without understanding how technology could affect early childhood development.

Senator Edward Markey and Congressman Joe Barton fired off a letter to Mattel CEO (and former Google exec) Margaret Georgiadis last week suggesting that Aristotle raises “serious privacy concerns as Mattel can build an in-depth profile of children and their family. It appears that never before has a device had the capability to so intimately look into the life of a child.” The letter sparked a new round of criticism in the press, with Jezebel calling Aristotle “creepy as hell” and Buzzfeed quoting a child privacy advocate arguing, “We shouldn’t be using kids as AI experiments … If we don’t know what the effect is, then we shouldn’t be putting that in children’s bedrooms.”

The letter and its fallout seem to be what prompted Mattel to announce yesterday that it wouldn’t go forward with the device.

At the very same moment Mattel was killing off Aristotle, Google was promoting new “kid friendly” accounts for its line of Google Home smart speakers:

We’re making Google Home more fun for the whole family, with 50+ new experiences for you to try out. Learn something new, or imagine with storytime. There are also plenty of fun activities; go on an adventure with Mickey Mouse, identify your alter ego with the Justice League D.C. Super heroes, or play Freeze Dance in your living room. These experiences will be supported by Family Link accounts on the Assistant, letting parents create accounts for their children under 13.

Even infants and toddlers can now be registered for Google accounts, allowing the company’s AI chatbot, Assistant, to collect data on them, talk with them, and tailor experiences for them. “We automatically collect and store certain information about the services your child uses and how your child uses them,” Google notes, deep in its privacy policies, “like when your child saves a picture in Google Photos, enters a query in Google Search, creates a document in Google Drive, talks to the Google Assistant, or watches a video in YouTube Kids.”

It’s hard for me to see much difference between Aristotle and Google Home with Family Link. Both raise concerns about children’s privacy, both allow companies to develop in-depth profiles of kids and their families, and both entail “using kids as AI experiments” without any clear understanding of how their development will be affected. Yet while the press hammered Mattel, it treated the Google news as benign, if not praiseworthy. “Google is making Home better for families and kids,” declared TechCrunch. “Google Assistant will tell your kids a bedtime story,” wrote Engadget. Buzzfeed chirped:

Home is now more kid-friendly, too. It can understand the way kids talk better, and includes more kid-friendly games, like “Which fruit are you?”. New commands include: “Hey Google, let’s learn”, or “let’s play a game,” or “tell me a story.” Google is also partnering with Disney to create kid-first experiences.

When a toy company tries to put a listening device into a kid’s bedroom, it’s creepy. When a tech giant does the same thing, it’s cool.

Facebook Rules

When television emerged as a fledgling medium in the middle years of the last century, it already had, in the form of the Federal Communications Commission, the Communications Act of 1934, and various other laws and precedents, a framework for regulating its content. The formal restrictions on the broadcasting of obscene, indecent, profane, prurient, and violent material, combined with the sensitivities of mainstream advertisers, defined the boundaries of Prime Time television through the fifties, sixties, and much of the seventies — until the spread of cable programming changed everything.

When the internet emerged as a medium in the 1990s, it was free of any such regulatory framework restricting its content. Indeed, an anything-goes ethos was as essential to the nature and ideals of the net as the family-friendly ethos was to the nature and ideals of TV during its formative decades. The net, in other words, escaped the sanitized Prime Time phase.

Or did it?

Today, Facebook released a set of “content guidelines for monetization” that might have been written by FCC bureaucrats in the 1950s. Among other things, the Facebook rules prohibit or restrict:

  • “Content that depicts family entertainment characters engaging in violent, sexualized, or otherwise inappropriate behavior, including videos positioned in a comedic or satirical manner.”
  • “Content that focuses on real world tragedies, including but not limited to depictions of death, casualties, physical injuries, even if the intention is to promote awareness or education.”
  • “Content that is incendiary, inflammatory, demeaning or disparages people, groups, or causes.”
  • “Content that is depicting threats or acts of violence against people or animals, [including] excessively graphic violence in the course of video gameplay.”
  • “Content where the focal point is nudity or adult content, including depictions of people in explicit or suggestive positions, or activities that are overly suggestive or sexually provocative.”
  • “Content that features coordinated criminal activity, drug use, or vandalism.”
  • “Content that depicts overly graphic images, blood, open wounds, bodily fluids, surgeries, medical procedures, or gore that is intended to shock or scare.”
  • “Content depicting or promoting the excessive consumption of alcohol, smoking, or drug use.”
  • “Inappropriate language.”

I’m not sure Petticoat Junction would have made it through that gauntlet.

These content restrictions were not imposed by government fiat; Facebook is imposing them on itself in response to growing public concerns about the net’s anything-goes ethos and, in particular, to advertisers’ growing worries about what Facebook VP Carolyn Everson terms “brand safety.” The fact that the rules allow little or no room for editorial judgment — is this image exploitative or journalistic? — reveals what happens when a tech firm becomes a media hub.

Some will welcome the sweeping new restrictions on content. Others will be appalled. What they make clear, though, is that the internet, as most experience it, has entered a new era, spurred by the consolidation of traffic into a handful of sites and apps run by companies whose fortunes hinge on their ability to keep advertisers happy. The internet is reliving the history of television, but in reverse. First came Anything Goes. Now comes Prime Time.

Image: George Carlin, circa 1972.

The internet as an innocent fraud

The paperback edition of Utopia Is Creepy is out today, September 12, from W. W. Norton & Company. Collecting seventy-nine of the best posts from Rough Type as well as sixteen essays and reviews I published between 2008 and 2016, the book, says Time, “punches a hole in Silicon Valley cultural hubris.”

Here’s an excerpt from the introduction:

“The most unfree souls go west, and shout of freedom.”
–D. H. Lawrence, Studies in Classic American Literature

The greatest of America’s homegrown religions — greater than Jehovah’s Witnesses, greater than the Church of Jesus Christ of Latter-day Saints, greater even than Scientology — is the religion of technology. John Adolphus Etzler, a Pittsburgher, sounded the trumpet in his 1833 testament The Paradise within the Reach of All Men. By fulfilling its “mechanical purposes,” he wrote, the United States would turn itself into a new Eden, a “state of superabundance” where “there will be a continual feast, parties of pleasures, novelties, delights and instructive occupations,” not to mention “vegetables of infinite variety and appearance.”

Similar predictions proliferated throughout the nineteenth and twentieth centuries, and in their visions of “technological majesty,” as the critic and historian Perry Miller wrote, we find the true American sublime. We may blow kisses to agrarians like Jefferson and tree-huggers like Thoreau, but we put our faith in Edison and Ford, Gates and Zuckerberg. It is the technologists who shall lead us.

The internet, with its disembodied voices and ethereal avatars, seemed mystical from the start, its unearthly vastness a receptacle for America’s spiritual yearnings and tropes. “What better way,” wrote Cal State philosopher Michael Heim in 1991, “to emulate God’s knowledge than to generate a virtual world constituted by bits of information?” In 1999, the year Google moved from a Menlo Park garage to a Palo Alto office, the Yale computer scientist David Gelernter wrote a manifesto predicting “the second coming of the computer,” replete with gauzy images of “cyberbodies drift[ing] in the computational cosmos” and “beautifully-laid-out collections of information, like immaculate giant gardens.” The millenarian rhetoric swelled with the arrival of Web 2.0. “Behold,” proclaimed Kevin Kelly in an August 2005 Wired cover story: We are entering a “new world,” powered not by God’s grace but by the web’s “electricity of participation.” It would be a paradise of our own making, “manufactured by users.” History’s databases would be erased, humankind rebooted. “You and I are alive at this moment.”

The revelation continues to this day, the technological paradise forever glittering on the horizon. Even money men have taken sidelines in starry-eyed futurism. In 2014, venture capitalist Marc Andreessen sent out a rhapsodic series of tweets — he called it a “tweetstorm” — announcing that computers and robots were about to liberate us all from “physical need constraints.” Echoing John Adolphus Etzler (and also Karl Marx), he declared that “for the first time in history” humankind would be able to express its full and true nature: “We will be whoever we want to be. The main fields of human endeavor will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure.” The only thing he left out was the vegetables.

Such prophecies might be dismissed as the prattle of overindulged rich guys, but for one thing: They’ve shaped public opinion. By spreading a utopian view of technology, a view that defines progress as essentially technological, they’ve encouraged people to switch off their critical faculties and give Silicon Valley entrepreneurs and financiers free rein in remaking culture to fit their commercial interests. If, after all, the technologists are creating a world of superabundance, a world without work or want, their interests must be indistinguishable from society’s. To stand in their way, or even to question their motives and tactics, would be self-defeating. It would serve only to delay the wonderful inevitable.

The Silicon Valley line has been given an academic imprimatur by theorists from universities and think tanks. Intellectuals spanning the political spectrum, from Randian right to Marxian left, have portrayed the computer network as a technology of emancipation. The virtual world, they argue, provides an escape from repressive social, corporate, and governmental constraints; it frees people to exercise their volition and creativity unfettered, whether as entrepreneurs seeking riches in the marketplace or as volunteers engaged in “social production” outside the marketplace. “This new freedom,” wrote law professor Yochai Benkler in his influential 2006 book The Wealth of Networks, “holds great practical promise: as a dimension of individual freedom; as a platform for better democratic participation; as a medium to foster a more critical and self-reflective culture; and, in an increasingly information-dependent global economy, as a mechanism to achieve improvements in human development everywhere.” Calling it a revolution, he went on, is no exaggeration.

Benkler and his cohorts had good intentions, but their assumptions were bad. They put too much stock in the early history of the web, when its commercial and social structures were inchoate, its users a skewed sample of the population. They failed to appreciate how the network would funnel the energies of the people into a centrally administered, tightly monitored information system organized to enrich a small group of businesses and their owners.

The network would indeed generate a lot of wealth, but it would be wealth of the Adam Smith sort — and it would be concentrated in a few hands, not widely spread. The culture that emerged on the network, and that now extends deep into our lives and psyches, is characterized by frenetic production and consumption — smartphones have made media machines of us all — but little real empowerment and even less reflectiveness. It’s a culture of distraction and dependency. That’s not to deny the benefits of having easy access to an efficient, universal system of information exchange. It is to deny the mythology that has come to shroud the system. And it is to deny the assumption that the system, in order to provide its benefits, had to take its present form.

Late in his life, the economist John Kenneth Galbraith coined the term “innocent fraud.” He used it to describe a lie or a half-truth that, because it suits the needs or views of those in power, is presented as fact. After much repetition, the fiction becomes common wisdom. “It is innocent because most who employ it are without conscious guilt,” Galbraith wrote. “It is fraud because it is quietly in the service of special interest.” The idea of the computer network as an engine of liberation is an innocent fraud.

A longer excerpt from the introduction appears in Aeon. The paperback edition of Utopia Is Creepy is available now at your local independent bookstore or from Barnes & Noble, Amazon, or Powell’s.

Oracles of the countertop

I have an op-ed in today’s New York Times about how domestic robots, which we always assumed would resemble ourselves when they entered our homes, have instead arrived in the form of chatbot-powered smart speakers. The shift from the Jetsons’ embodied Rosie to Amazon’s disembodied Alexa says something important about our times, I suggest. The piece begins:

From the moment we humans first imagined having mechanical servants at our beck and call, we’ve assumed they would be constructed in our own image. Outfitted with arms and legs, heads and torsos, they would perform everyday tasks that we’d otherwise have to do ourselves. Like the indefatigable maid Rosie on The Jetsons, the officious droid C-3PO in Star Wars and the tortured “host” Dolores Abernathy in Westworld, the robotic helpmates of popular culture have been humanoid in form and function.

It’s time to rethink our assumptions. A robot invasion of our homes is underway, but the machines — so-called smart speakers like Amazon Echo, Google Home and the forthcoming Apple HomePod — look nothing like what we expected. Small, squat and stationary, they resemble vases or cat food tins more than they do people.

Echo and its ilk do, however, share a crucial trait with their imaginary forebears: They illuminate the times. Whatever their shape, robots tell us something important about our technologies and ourselves. …

Read on.

Image: still from the 1940 film “Leave It to Roll-Oh.”