Gray area: Google’s truthiness problem

I never realized that the guy who wrote Straw Dogs is the very same guy who wrote Men Are from Mars, Women Are from Venus. Sometimes it boggles the mind what you discover through Google:

The absurdity of confusing these two writers is priceless, but Google’s cavalier willingness to allow its algorithms to publish misinformation and nonsense does raise important questions, both epistemological and ethical. Is it OK to run an AI when you know that it will spread falsehoods to the public — and on a massive scale? Is it OK to treat truth as collateral damage in the supposed march of progress? Google in its early days illuminated the possibilities of search algorithms. Now it seems determined to reveal their limits.

Update: Google is cavalier even when it comes to issuing death notices.

Being there

I have an essay, “The World Beyond the Screen,” in the catalog for the current exhibition Being There at the Louisiana Museum of Modern Art in Denmark. The essay begins:

The paradox of modern media is that in opening the world to us, it removes us from the world. To enjoy the informational and recreational bounties of the networked screen, whether it’s a television, a personal computer, or a smartphone, we have to withdraw from our immediate physical and social surroundings. Our mind, with its faculties of attention and perception, has to shift its focus to the simulated, or virtual, images and experiences served up by the technological system. To speak of our time as one of great “connectivity” is to see only half the picture. We disconnect to connect.

The ability of the human mind to remove itself, purposely, from its physical surroundings and enter an artificial environment is, so far as we can tell, a talent unique to our species. And it’s a talent that seems elementally important to us, as individuals and as a society. It’s what unlocks the realm of imagination, and it’s what gives rise to much of what’s most valuable and enduring in culture. When we speak of being “transported” by a work of literature or art, we’re acknowledging the power of the mind to wander beyond the here-and-now, beyond the physical and temporal bounds of immediate reality.

Yet our ability to remove ourselves from reality also brings peril. . . .

The exhibition runs until February 25th of next year. Copies of the catalog are available at the museum’s shop.

How smartphones hijack our minds

So you bought that new iPhone. If you’re like the typical owner, you’ll be pulling your phone out and using it some 80 times a day, according to data Apple collects. That means you’ll be consulting the glossy little rectangle nearly 30,000 times over the coming year. Your new phone, like your old one, will become your constant companion and trusty factotum — your teacher, secretary, confessor, guru. The two of you will be inseparable.

The smartphone is something new in the world. We keep the gadget within reach more or less around the clock, and we use it in countless ways, consulting its apps and checking its messages and heeding its alerts scores of times a day. The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.

We love our phones for good reasons. It’s hard to think of another product that has provided so many useful functions in such a handy form. But while our phones offer convenience and diversion, they also breed anxiety. Their extraordinary usefulness gives them an unprecedented hold on our attention and a vast influence over our thinking and behavior. So what happens to our minds when we allow a single tool such dominion over our perception and cognition?

Scientists have begun exploring that question — and what they’re discovering is both fascinating and troubling. Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.

Adrian Ward, a cognitive psychologist and marketing professor at the University of Texas at Austin, has been studying the way smartphones and the internet affect our thoughts and judgments for a decade. In his own work, as well as that of others, he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.

A 2015 Journal of Experimental Psychology study found that when people’s phones beep or buzz while they’re in the middle of a challenging task, their focus wavers, and their work gets sloppier — whether they check the phone or not. Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.

What the earlier research didn’t make clear is whether smartphones differ from the many other sources of distraction that crowd our lives. Dr. Ward suspected that our attachment to our phones has grown so intense that their mere presence might diminish our intelligence. Two years ago, he and three colleagues — Kristen Duke and Ayelet Gneezy from the University of California, San Diego, and Disney Research behavioral scientist Maarten Bos — began an ingenious experiment to test his hunch.

The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.

The results were striking. In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.

In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.

A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.

In an April article in the Journal of the Association for Consumer Research, Dr. Ward and his colleagues wrote that the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.” Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking. The fact that most of us now habitually keep our phones “nearby and in sight,” the researchers noted, only magnifies the mental toll.

Dr. Ward’s findings are consistent with other recently published research. In a similar but smaller 2014 study in the journal Social Psychology, psychologists at the University of Southern Maine found that people who had their phones in view, albeit turned off, during two demanding tests of attention and cognition made significantly more errors than did a control group whose phones remained out of sight. (The two groups performed about the same on a set of easier tests.)

In another study, published in Applied Cognitive Psychology this year, researchers examined how smartphones affected learning in a lecture class with 160 students at the University of Arkansas at Monticello. They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly. A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.

It isn’t just our reasoning that takes a hit when phones are around. Social skills and relationships seem to suffer as well. Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying. In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.” The downsides were strongest when “a personally meaningful topic” was being discussed. The experiment’s results were validated in a subsequent study by Virginia Tech researchers, published in 2016 in the journal Environment and Behavior.

The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of. But the findings shouldn’t be a surprise. Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.” Media and communications devices, from telephones to TV sets, have always tapped into this instinct. Whether turned on or switched off, they promise an unending supply of information and experiences. By design, they grab and hold our attention in ways natural objects never could.

But even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings. Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.

The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds. Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours, and we thank them by buying millions of the gadgets and downloading billions of the apps every year. Even prominent Silicon Valley insiders, such as Apple design chief Jonathan Ive and veteran venture capitalist Roger McNamee, have begun to voice concerns about the possible ill effects of their creations. Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”

A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple. The way a media device is designed and used exerts at least as much influence over our minds as does the information that the device disgorges.

As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores. In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.

Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it. The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”

Now that our phones have made it so easy to gather information online, our brains are likely off-loading even more of the work of remembering to technology. If the only thing at stake were memories of trivial facts, that might not matter. But, as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.” Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.

This story has a twist. It turns out that we aren’t very good at distinguishing the knowledge we keep in our heads from the information we find on our phones or computers. As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”

That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.

Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains. When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning. Upgrading our gadgets won’t solve the problem. We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.

This essay appeared originally, in a slightly different form, in the Wall Street Journal.

Image of iPhone X: Apple.

Design for misuse

Last Friday, New Yorker editor David Remnick interviewed Apple design chief Jony Ive as part of the magazine’s TechFest. Midway through the conversation came an interesting (and widely reported) exchange in which Ive expressed some regret about people’s use of his most celebrated creation, the iPhone:

Remnick: There’s a ubiquity about the iPhone and its imitators. And I wonder, … do you have any sense of how much you’ve changed life and the way daily life is lived, and the way our brains work? And how do you feel about it? Is it pure joy? Are you ambivalent about it in any way?

Ive: No, there’s — there’s certainly an awareness. I mean, I tend to be so completely preoccupied with what we’re working on at the moment. That tends to take the oxygen. Like any tool, you can see there’s wonderful use and then there’s misuse. …

Remnick: How can — how can they be misused? What’s a misuse of an iPhone?

Ive: I think perhaps constant use.

Remnick: Yes.

[Laughter]

It’s good to see Ive admitting that there may be a problem with people’s compulsive use of smartphones. But, as Business Insider’s Kif Leswing and others have pointed out, there’s something cynical about the designer’s attempt to shift the blame to the owners of the iPhone for “misusing” it. Remnick didn’t follow up by asking Ive to describe particular design decisions that he and his team have made to deter the iPhone’s “constant use,” but it would have been a fair question, and I’m pretty sure Ive wouldn’t have had much of an answer.

Everything we know about the iPhone and its development and refinement suggests it has been consciously and meticulously designed and marketed to encourage people to use it as much as possible, to treat it, even, as a fetish. Here, for example, is how Apple is promoting the new iPhone X at its web store:

If Apple’s “vision” has always been to create a phone “so immersive the device itself disappears into the experience,” it’s hard to credit Ive’s suggestion that people are misusing it by using it immersively. If “constant use” is a misuse of the iPhone, then the device has been designed for misuse. And the future we’re supposed to welcome will be one in which the smartphone becomes all the more encompassing, the line between gadget and experience blurring further.

If Ive is sincere in his belief that people should be more deliberate in their use of smartphones — and I believe he is — I’m sure he could find elegant ways to nudge people in that direction. One obvious possibility is to change the way the iPhone handles notifications. (I would guess Ive didn’t foresee how app makers, including Apple itself, would come to abuse the iPhone’s notification function, but I assume he would acknowledge that torrents of notifications push people to use the phone constantly.)

The iPhone does have a “Do Not Disturb” setting that turns off tactile and audio notification alerts (though it still allows notifications to appear on the home screen), but that setting is turned off by default. To put it another way, the iPhone’s default setting is “Disturb.” That could be reversed in the next iOS update. The default setting could be changed so that all notifications are turned off, requiring the user to make a conscious choice to turn on notifications, preferably app by app. And when the user turns on notifications, a notice might appear warning of the dangers of overusing the device.
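For what it’s worth, the prompt itself is already a single, well-defined step for app developers. Here is a minimal sketch, in Swift, of that opt-in moment using Apple’s UserNotifications framework; the stricter off-by-default policy and the overuse warning imagined above would have to live in iOS itself, and nothing here assumes how Apple might implement them:

```swift
import UserNotifications

// A sketch of an explicit, app-by-app opt-in for notifications.
// The off-by-default policy and any overuse warning would belong to
// the operating system; this shows only the single point at which a
// user makes a conscious choice to be interrupted.
func requestNotificationOptIn() {
    let center = UNUserNotificationCenter.current()
    center.getNotificationSettings { settings in
        // Prompt only users who have never been asked before.
        guard settings.authorizationStatus == .notDetermined else { return }
        center.requestAuthorization(options: [.alert, .sound, .badge]) { granted, error in
            if let error = error {
                print("Notification authorization failed: \(error)")
            } else {
                print(granted ? "User opted in to notifications."
                              : "User left notifications off.")
            }
        }
    }
}
```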

If Apple took steps to discourage the “constant use” of its most popular product, might that not put a small dent in the company’s profits? Probably so, but let me leave you with one other thing Ive said in the interview:

Ive: Steve [Jobs] was very clear that the goal of Apple was not to make money. And we were very, very disciplined and very clear about how we configured our goals.

Image: Apple.

Apples and atoms

The long conversation in the New York Review between Riccardo Manzotti and Tim Parks is bearing fruit:

Manzotti: The view that only the smallest constituents, atoms, are “real” is called smallism in science, or nihilism in philosophy, and it clashes with everyday experience and common sense in the most blatant way. As Democritus suggests, it’s self-defeating because it is conducted only with the aid of the senses, which it claims have no reality. The world we live in is a world of objects. Apples exist, too!

Parks: But an apple is made of atoms.

Manzotti: To be made of something is not the same as to be identical with it. … We live in a time when scientists seem to like nothing better than to expose our everyday view of reality as delusional. They say, “You see the color red, but in fact, out there are only atoms; there are no colors. You hear music, but out there, there are no sounds,” etc. This gives them the authority to describe an entirely different reality, in which deciding between chocolate or strawberry ice cream, say, is nothing more than a matter of warring cohorts of neurons transferring their electrical charges and chemical processes this way and that, while outside your brain there is only a flavorless world of atomic particles. It’s a vision that denies not only our existence — as people choosing between ice-cream flavors — but also the existence of the things we experience: the banana sundae, a new car, paintings, planets, smells, seas. All these macroscopic objects cease to be real. They are all merely subjective. Merely the product of your brain.

Parks: But what if that’s the truth of the matter?

Manzotti: It is not the truth. It is a profound misunderstanding. The notion that objects exist relative to each other, brought into existence by each other, does not clash with any scientific finding or demonstrated result. Only with smallism, which, again, is an idea, a theory, not a scientific finding. There are atoms, but there are also macroscopic objects, and the key to understanding why both categories exist and are equally real is that they exist relative to different things.

Image: Michele Dorsey Walfred.

The robots we deserve

“Your domestic problems are completely solved.” So says a robotics technician to a grateful housewife in “Leave It to Roll-Oh,” a promotional film produced by the Chevrolet Motor Company for the 1939 New York World’s Fair. The titular star of the picture, a “chromium-plated butler,” is an ambulatory automaton that looks like a beefy version of the tin man from The Wizard of Oz. Operated by remote control, the contraption can be commanded to perform various chores at the push of a button: Clean House, Get Dinner, Wash Dishes, Fix Furnace.

Although “just a daydream,” as the movie’s narrator comes to reveal, Roll-Oh personified the common conception of a household robot. From the moment we first imagined having mechanical servants at our beck and call, we’ve assumed they would be constructed in our own image. Outfitted with arms and legs, heads and torsos, they would perform everyday tasks that we’d otherwise have to do ourselves. From The Jetsons’ indefatigable maid Rosie, to the officious droid C-3PO in Star Wars, to Westworld’s tortured “host” Dolores Abernathy, the robotic helpmates of popular culture have been humanoid in form and function.

It’s time to revise our assumptions. A robot invasion of our homes is under way, but the machines — so-called smart speakers like Amazon Echo, Google Home, and the forthcoming Apple HomePod — look nothing like what we anticipated. Small, squat, and stationary, they resemble vases or cat-food tins more than they do people. Echo and its ilk do, however, share an important trait with their imaginary forebears: They illuminate the times. Whatever their shape, robots tell us something vital about our technologies and ourselves.

Smart speakers have been around just three years, but they already have a hold on us. Powered by “chatbots” like Siri and Alexa, the devices are in the midst of a sales boom. Some 35 million Americans now use the diminutive, talking computers — more than twice the number of just a year ago, according to estimates by eMarketer — and analysts predict sales will continue to surge in the coming months. Google just expanded its Home line, and Microsoft, Samsung, Facebook, and China’s Alibaba are all expected to enter the market soon.

The allure of the gadgets is obvious. Smart speakers are oracles of the countertop. They may not speak for the gods, but they do deliver useful reports on news, traffic, and weather. And they have other talents that their Delphic ancestor couldn’t even dream of. They can serve as DJs, spinning playlists of everything from blue-eyed soul to British grime. They can diagnose ailments and soothe anxieties. They can summon taxis and order pizzas. They can read bedtime stories to toddlers. They can even bark like a watchdog to scare off burglars. And they promise to be the major-domos of home automation, adjusting lights and thermostats, controlling appliances, and issuing orders to specialized robots like the Roomba vacuum cleaner.

Still, if you were looking forward to having a Rosie scurrying around your abode, feather duster in hand, an Echo feels like a letdown. It just sits there.

There are good reasons the domestic robot has taken such an uninspiring shape. Visualizing a nimble, sure-footed android is easy, but building one is hard. As Carnegie Mellon professor Illah Nourbakhsh explains in his book Robot Futures, it requires advances not only in artificial intelligence but in the complex hardware systems required for movement, perception, and dexterity. The human nervous system is a marvel of physical control, able to sense and respond fluidly to an ever-changing environment. Just maintaining one’s balance when standing upright entails a symphony of neural signals and musculoskeletal adjustments, almost all of which take place outside conscious awareness.

Achieving that kind of agility with silicon and steel lies well beyond the technical reach of today’s engineers. Despite steady progress in all fields of robotics, even the most advanced of today’s automatons still look and behave like parodies of human beings. They get flustered by mundane tasks like loading a dishwasher or dusting a shelf of knickknacks, never mind cooking a meal or repairing a furnace. As for multitalented robots able to shift flexibly among an array of everyday tasks: they remain science-fiction fantasies. Roll-Oh is still a no-go.

Meanwhile, thanks to rapid gains in networking, natural language processing, and miniaturization, it’s become simple to manufacture small, cheap computers that can understand basic questions and commands, gather and synthesize information from online databanks, and control other electronics. The technology industry has enormous incentives to promote such gadgets. Now that many of the biggest tech firms operate like media businesses, trafficking in information, they’re in a race to create new products to charm and track consumers. Smart speakers provide a powerful complement to smartphones in this regard. Equipped with sensitive microphones, they serve as in-home listening devices — benign-seeming bugs — that greatly extend the companies’ ability to monitor the habits and needs of individuals. Whenever you chat with a smart speaker, you’re disclosing valuable information about your routines and proclivities.

Beyond the technical and commercial challenges, there’s a daunting psychological barrier to constructing and selling anthropomorphic machines. No one has figured out how to bridge what computer scientists term the “uncanny valley” — the wide gap we sense between ourselves and imitations of ourselves. Because we humans are such social beings, our minds are exquisitely sensitive to the expressions, gestures, and manners of others. Any whiff of artificiality triggers revulsion. Humanoid robots seem creepy to us, and the more closely they’re designed to mimic us, the creepier they become. That puts roboticists in a bind: the more perfect their creations, the less likely we’ll want them in our homes. Lacking human features, smart speakers avoid the uncanny valley altogether.

Although they may not look like the robots we expected, smart speakers do have antecedents in our cultural fantasy life. The robot they most recall at the moment is HAL, the chattering eyeball in Stanley Kubrick’s sci-fi classic 2001: A Space Odyssey. But their current form — that of a standalone gadget — is not likely to be their ultimate form. They seem fated to shed their physical housing and turn into a sort of ambient digital companion. Alexa will come to resemble Samantha, the “artificially intelligent operating system” that beguiles the Joaquin Phoenix character in the movie Her. Through a network of tiny speakers, microphones, and sensors scattered around our homes, we’ll be able to converse with our solicitous AI assistants wherever and whenever we like.

Facebook founder and CEO Mark Zuckerberg spent much of last year programming a prototype of such a virtual agent. In a video released in December, he gave a demo of the system. Walking around his Silicon Valley home, he conducted a running dialogue with his omnipresent chatbot, calling on it to supply him with a clean t-shirt and toast bread for his breakfast, play movies and music, and entertain his infant daughter Max in her crib. Hooked up to outside cameras with facial-recognition software, the digitized Jeeves also acted as a sentry for the Zuckerberg compound, screening visitors and unlocking the gate.

Whether real or fictional, robots hold a mirror up to society. If Rosie and Roll-Oh embodied a twentieth-century yearning for domestic order and familial bliss, smart speakers symbolize our own, more self-absorbed time.

It seems apt that, as we come to live more of our lives virtually, through social networks and other simulations, our robots should take the form of disembodied avatars dedicated to keeping us comfortable in our media cocoons. Even as they spy on us, the gadgets offer sanctuary from the unruliness of reality, with its frictions and strains. They place us in an artificial world meticulously arranged to suit our bents and biases, a world that understands us and shapes itself automatically to our desires. Amazon’s decision to draw on classical mythology in naming its smart speaker was a masterstroke. Every Narcissus deserves an Echo.

This essay appeared originally, in a slightly shorter form and under the headline “These Are Not the Robots We Were Promised,” in the New York Times.

What they have wrought

Paul Lewis has a sharp, ominous article in this weekend’s Guardian about the misgivings some prominent Silicon Valley inventors are feeling over what they’ve created. Alumni from Google, Twitter, and Facebook worry that the products they helped design and market are having dire side effects, creating a society of compulsive, easily manipulated screen junkies.

Lewis describes how seemingly small design elements ended up having big effects on people’s behavior, from Facebook’s introduction of the Like button (a little dose of “social affirmation” that proved addictive to sender and receiver alike) to the company’s decision to switch its notification icon from the color blue to the color red (turning it from an unobtrusive reminder to an eye-grabbing “alarm signal”). Both the Like button and the red notification icon have become standards in social media apps.

Most illuminating is the story of the downward-swipe gesture used to refresh a feed. It was invented by Loren Brichter for his Tweetie app in 2009 and was adopted by Twitter when the company acquired Tweetie a year later. The “pull-to-refresh” feature has now become ubiquitous. But that raises a question: why does the gesture continue to be used now that it’s easy for social media companies to refresh their feeds automatically? The answer is that the tactile gesture is more seductive. Explains Lewis:

Brichter says he is puzzled by the longevity of the feature. In an era of push notification technology, apps can automatically update content without being nudged by the user. “It could easily retire,” he says. Instead it appears to serve a psychological function: after all, slot machines would be far less addictive if gamblers didn’t get to pull the lever themselves. Brichter prefers another comparison: that it is like the redundant “close door” button in some elevators with automatically closing doors. “People just like to push it.” …

“Smartphones are useful tools,” he says. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about. I’m not saying I’m mature now, but I’m a little bit more mature, and I regret the downsides.”
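The mechanics of the gesture, it’s worth noting, are now trivially easy to replicate, which is part of what makes Brichter’s ambivalence so striking. Here is a minimal sketch, in Swift, of how a feed might wire up pull-to-refresh with Apple’s standard UIRefreshControl; the feed-loading step is a hypothetical placeholder:

```swift
import UIKit

// A sketch of wiring pull-to-refresh into a feed with the standard
// UIRefreshControl, the system descendant of Brichter's gesture.
class FeedViewController: UITableViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        let refresh = UIRefreshControl()
        refresh.addTarget(self, action: #selector(reloadFeed), for: .valueChanged)
        tableView.refreshControl = refresh
    }

    // Hypothetical handler: fetch new items, then stop the spinner.
    @objc private func reloadFeed() {
        // A real app would load its latest posts here.
        tableView.refreshControl?.endRefreshing()
        tableView.reloadData()
    }
}
```

The seductiveness Lewis describes isn’t in those few lines of code; it’s in the business model they feed.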

Seemingly benign design tweaks turned into “psychologically manipulative” features because they were introduced into businesses that make their money by encouraging compulsive behavior. The more we poke and stroke the screen, the more data the companies collect and the more ads they dispense. Whatever the Like button started out as, it was quickly recognized to be the engine of a powerful feedback loop through which social media companies could track their users and monetize the resulting data. “There’s no ethics,” former Googler Tristan Harris tells Lewis.

Even the prominent venture capitalist Roger McNamee, an early investor in Google and Facebook, is feeling remorse:

[McNamee] identifies the advent of the smartphone as a turning point, raising the stakes in an arms race for people’s attention. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.” …

McNamee chooses his words carefully. “The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.” … But McNamee worries the behemoths he helped build may already be too big to curtail.

Lewis’s article happened to appear on the same day as my Wall Street Journal essay “How Smartphones Hijack Our Minds.” It’s a telling coincidence, I think, that the headline on Lewis’s piece is so similar: “‘Our Minds Can Be Hijacked’: The Tech Insiders Who Fear a Smartphone Dystopia.” It’s been clear for some time that smartphones and social-media apps are powerful distraction machines. They routinely divide our attention. But the “hijack” metaphor — I took it from Adrian Ward’s article “Supernormal” — implies a phenomenon different and more insidious than simple distraction. To hijack something is to seize control of it from its rightful owner. What’s up for grabs is your mind.