AI: the Ziggy Stardust Syndrome

“Ziggy sucked up into his mind.” –David Bowie

In his Wall Street Journal column this weekend, Nobel laureate Frank Wilczek offers a fascinating theory as to why we haven’t been able to find signs of intelligent life elsewhere in the universe. Maybe, he suggests, intelligent beings are fated to shrink as their intelligence expands. Once the singularity happens, AI implodes into invisibility.

It’s entirely logical. Wilczek notes that “effective computation must involve interactions and that the speed of light limits communication.” To optimize its thinking, an AI would have no choice but to compress itself to minimize delays in the exchange of messages. It would need to get really, really small.

Consider a computer operating at a speed of 10 gigahertz, which is not far from what you can buy today. In the time between its computational steps, light can travel just over an inch. Accordingly, powerful thinking entities that obey the laws of physics, and which need to exchange up-to-date information, can’t be spaced much farther apart than that. Thinkers at the vanguard of a hyperadvanced technology, striving to be both quick-witted and coherent, would keep that technology small.
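The one-inch figure is easy to verify. Here's a quick back-of-the-envelope sketch (the constants are standard physical values, not anything taken from Wilczek's column):

```python
# Distance light travels during one tick of a 10 GHz clock.
SPEED_OF_LIGHT_M_S = 299_792_458   # speed of light in a vacuum, m/s
CLOCK_HZ = 10e9                    # a 10 GHz processor clock

meters_per_tick = SPEED_OF_LIGHT_M_S / CLOCK_HZ   # ~0.03 m
inches_per_tick = meters_per_tick / 0.0254        # meters to inches

print(f"Light travels {meters_per_tick * 100:.1f} cm "
      f"({inches_per_tick:.2f} in) per clock cycle")
```

About three centimeters, or 1.18 inches, per cycle: any collection of processors that needs to stay in lockstep has to fit within a few centimeters, which is exactly the inward compression Wilczek describes.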

The upshot is that the most advanced civilizations would be tiny and shy. They would “expand inward, to achieve speed and integration — not outward, where they’d lose patience waiting for feedback.” Call it the Ziggy Stardust Syndrome. An AI-based civilization would suck up into its own mind, becoming a sort of black hole of braininess. We wouldn’t be able to see such civilizations because, lost in their own thoughts, they’d have no interest in being seen. “A hyperadvanced civilization,” as Wilczek puts it, “might just want to be left alone.” Like Greta Garbo.

The idea of a jackbooted superintelligent borg bent on imperialistic conquest has always left me cold. It seems an expression of anthropomorphic thinking: an AI would act like us. Wilczek’s vision is much more appealing. There’s a real poignancy — and, to me at least, a strange hopefulness — to the idea that the ultimate intelligence would also be the ultimate introvert, drawn ever further into the intricacies of its own mind. What would an AI think about? It would think about its own thoughts. It would be a pinprick of pure philosophy. It would, in the end, be the size of an idea.

The meek may not inherit the earth, but it seems they may inherit the cosmos, if they haven’t already.

The Big Switch: ten years on

My second book, The Big Switch: Rewiring the World, from Edison to Google, celebrates its tenth birthday this year. The book, which came out in January 2008, heralds the coming of the cloud and speculates on its consequences. It’s hard to imagine now, but in 2008 cloud computing was a new and largely unproven concept, and the common wisdom was that it wouldn’t work. Software programs running in centralized server farms and delivered over the internet to users would be too slow and balky, it was thought, to displace the programs running on hard drives inside personal computers or on servers in data centers owned by individual companies. The naysayers were wrong. The technical barriers fell, network latency evaporated, and in short order computing went from being a decentralized resource to being a centralized one — a utility, essentially. The computer scientists, engineers, and programmers who made this monumental technical shift possible still haven’t received their due, and they probably never will. The real work went on behind the scenes, anonymously, in companies like Google, Amazon Web Services, Akamai, and Facebook, among many others.

The Big Switch has two parts. The first, “One Machine,” draws a parallel between the building of the electric grid a hundred years ago and the building of the cloud today. In both cases, a decentralized resource essential to society (power, data processing) was centralized through the construction of a distribution network (electric grid, internet) and central plants (generation stations, server farms). The stories of the electric grid and the computing grid are both stories of technical ingenuity and fearlessness. The book’s second part, “Living in the Cloud,” is darker. In fact, it was during the course of writing it that my view of the future of computing changed. I began The Big Switch believing that the new computing grid would democratize the use of computing power even as it centralized the machinery of data processing. That is, after all, what the electric grid did. By industrializing the generation and distribution of electricity, it made power a cheap resource that everyone could use simply by sticking a plug into a wall socket. But data is fundamentally different from electric current, I belatedly realized, and centralizing the provision of computing would also mean centralizing control over information. The owners of the server farms would not be faceless utilities; they would be our overseers.

Here’s an excerpt from “The Inventor and His Clerk,” a chapter in the first half of The Big Switch:

Thomas Edison was tired. It was the summer of 1878, and he’d just spent a grueling year perfecting and then promoting his most dazzling invention yet, the tinfoil phonograph. He needed a break from the round-the-clock bustle of his Menlo Park laboratory, a chance to clear his mind before embarking on some great new technological adventure. When a group of his friends invited him to join them on a leisurely camping and hunting tour of the American West, he quickly agreed. The trip began in Rawlins, Wyoming, where the party viewed an eclipse of the sun, and then continued westward through Utah and Nevada, into Yosemite Valley, and on to San Francisco.

While traveling through the Rockies, Edison visited a mining site by the side of the Platte River. Seeing a crew of workers struggling with manual drills, he turned to a companion and remarked, “Why cannot the power of yonder river be transmitted to these men by electricity?” It was an audacious thought—electricity had yet to be harnessed on anything but the smallest scale—but for Edison audacity was synonymous with inspiration. By the time he returned east in the fall, he was consumed with the idea of supplying electricity over a network from a central generating station. His interest no longer lay in powering the drills of work crews in the wilderness, however. He wanted to illuminate entire cities. He rushed to set up the Edison Electric Light Company to fund the project and, on October 20, he announced to the press that he would soon be providing electricity to the homes and offices of New York City. Having made the grand promise, all he and his Menlo Park team had to do was figure out how to fulfill it.

Unlike lesser inventors, Edison didn’t just create individual products; he created entire systems. He first imagined the whole, then he built the necessary pieces, making sure they all fit together seamlessly. “It was not only necessary that the lamps should give light and the dynamos generate current,” he would later write about his plan for supplying electricity as a utility, “but the lamps must be adapted to the current of the dynamos, and the dynamos must be constructed to give the character of current required by the lamps, and likewise all parts of the system must be constructed with reference to all other parts, since, in one sense, all the parts form one machine.” Fortunately for Edison, he had a good model at hand. Urban gaslight systems, invented at the start of the century, had been set up in many cities to bring gas from a central gasworks into buildings to be used as fuel for lamps. Light, having been produced by simple candles and oil lamps for centuries, had already become a centralized utility. Edison’s challenge was to replace the gaslight systems with electric ones.

Electricity had, in theory, many advantages over gas as a source of lighting. It was easier to control, and because it provided illumination without a flame it was cleaner and safer to use. Gaslight by comparison was dangerous and messy. It sucked the oxygen out of rooms, gave off toxic fumes, blackened walls and soiled curtains, heated the air, and had an unnerving tendency to cause large and deadly explosions. While gaslight was originally “celebrated as cleanliness and purity incarnate,” Wolfgang Schivelbusch reports in Disenchanted Night, his history of lighting systems, its shortcomings became more apparent as it came to be more broadly used. People began to consider it “dirty and unhygienic”—a necessary evil. Edison himself dismissed gaslight as “barbarous and wasteful.” He called it “a light for the dark ages.”

Despite the growing discontent with gas lamps, technological constraints limited the use of electricity for lighting at the time Edison began his experiments. For one thing, the modern incandescent lightbulb had yet to be invented. The only viable electric light was the arc lamp, which worked by sending a naked current across a gap between two charged carbon rods. Arc lamps burned with such intense brightness and heat that you couldn’t put them inside rooms or most other enclosed spaces. They were restricted to large public areas. For another thing, there was no way to supply electricity from a central facility. Every arc lamp required its own battery. “Like the candle and the oil lamp,” Schivelbusch explains, “arc lighting was governed by the pre-industrial principle of a self-sufficient supply.” However bad gaslight might be, electric light was no alternative.

To build his “one machine,” therefore, Edison had to pursue technological breakthroughs in every major component of the system. He had to pioneer a way to produce electricity efficiently in large quantities, a way to transmit the current safely to homes and offices, a way to measure each customer’s use of the current, and, finally, a way to turn the current into controllable, reliable light suitable for normal living spaces. And he had to make sure that he could sell electric light at the same price as gaslight and still turn a profit.

It was a daunting challenge, but he and his Menlo Park associates managed to pull it off with remarkable speed. Within two years, they had developed all the critical components of the system. They had invented the renowned Edison lightbulb, sealing a thin carbon filament inside a small glass vacuum to create, as one reporter poetically put it, “a little globe of sunshine, a veritable Aladdin’s lamp.” They had designed a powerful new dynamo that was four times bigger than its largest precursor. (They named their creation the Jumbo, after a popular circus elephant of the time.) They had perfected a parallel circuit that would allow many bulbs to operate independently, with separate controls, on a single wire. And they had created a meter that would keep track of how much electricity a customer used. In 1881, Edison traveled to Paris to display a small working model of his system at the International Exposition of Electricity, held in the Palais de l’Industrie on the Champs-Élysées. He also unveiled blueprints for the world’s first central generating station, which he announced he would construct in two warehouses on Pearl Street in lower Manhattan.

The plans for the Pearl Street station were ambitious. Four large coal-fired boilers would create the steam pressure to power six 125-horsepower steam engines, which in turn would drive six of Edison’s Jumbo dynamos. The electricity would be sent through a network of underground cables to buildings in a square-mile territory around the plant, each of which would be outfitted with a meter. Construction of the system began soon after the Paris Exposition, with Edison often working through the night to supervise the effort. A little more than a year later, the plant had been built and the miles of cables laid. At precisely three o’clock in the afternoon on September 4, 1882, Edison instructed his chief electrician, John Lieb, to throw a switch at the Pearl Street station, releasing the current from one of its generators. As the New York Herald reported the following day, “in a twinkling, the area bounded by Spruce, Wall, Nassau and Pearl Streets was in a glow.” The electric utility had arrived.

And here’s an excerpt from “A Spider’s Web,” a chapter in the second half:

The most far-reaching corporate use of the cloud [will be] as a control technology for optimizing how we act as consumers. Despite the resistance of the Web’s early pioneers and pundits, consumerism long ago replaced libertarianism as the prevailing ideology of the online world. Restrictions on the commercial use of the Net collapsed with the launch of the World Wide Web in 1991. The first banner ad—for a Silicon Valley law firm—appeared in 1993, followed the next year by the first spam campaign. In 1995, Netscape tweaked its Navigator browser to support the “cookies” that enable companies to identify and monitor visitors to their sites. By 1996, the dotcom gold rush had begun. More recently, the Web’s role as a sales and promotion channel has expanded further. Assisted by Internet marketing consultants, companies large and small have become much more adept at collecting information on customers, analyzing their behavior, and targeting products and promotional messages to them.

The growing sophistication of Web marketing can be seen most clearly in advertising. Rather than being dominated by generic banner ads, online advertising is now tightly tied to search results or other explicit indicators of people’s desires and identities. Search engines themselves have become the leading distributors of ads, as the prevailing tools for Web navigation and corporate promotion have merged into a single and extraordinarily profitable service. Google originally resisted the linking of advertisements to search results—its founders argued that “advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers”—but now it makes billions of dollars through the practice. Search-engine optimization—the science of using advanced statistical techniques to increase the likelihood that a person will visit a site or click on an ad—has become an important corporate function, which Google and other search engines promote by sharing with companies information on how they rank sites and place ads.

In what is perhaps the most remarkable manifestation of the triumph of consumerism on the Web, popular online communities like MySpace encourage their members to become friends with corporations and their products. During 2006, for example, more than 85,000 people “friended” Toyota’s Yaris car model at the site, happily entangling themselves in the company’s promotional campaign for the recently introduced vehicle. “MySpace can be viewed as one huge platform for ‘personal product placement,’” writes Wade Roush in an article in Technology Review. He argues that “the large supply of fake ‘friends,’ together with the cornucopia of ready-made songs, videos, and other marketing materials that can be directly embedded in [users’] profiles, encourages members to define themselves and their relationships almost solely in terms of media and consumption.” In recognition of the blurring of the line between customer and marketer online, Advertising Age named “the consumer” its 2007 Advertising Agency of the Year.

But the Internet is not just a marketing channel. It’s also a marketing laboratory, providing companies with unprecedented insights into the motivations and behavior of shoppers. Businesses have long been skilled at controlling the supply side of their operations, thanks in large part to earlier advances in information technology, but they’ve struggled when it comes to exerting control over the demand side—over what people buy and where and when they buy it. They haven’t been able to influence customers as directly as they’ve been able to influence employees and suppliers. Advertising and promotion have always been frustratingly imprecise. As the department store magnate John Wanamaker famously said more than a hundred years ago, “Half the money I spend on advertising is wasted. The trouble is, I don’t know which half.”

The cloud is beginning to change that. It promises to strengthen companies’ control over consumption by providing marketers with the data they need to personalize their pitches precisely and gauge the effects of those pitches accurately. It optimizes both communication and measurement. In a 2006 interview with the Economist, Rishad Tobaccowala, a top executive with the international ad agency Publicis, summed up the change in a colorful, and telling, metaphor. He compared traditional advertising to dropping bombs on cities—a company can’t be sure who it hits and who it misses. But with Internet ads, he said, companies can “make lots of spearheads and then get people to impale themselves.” […]

“As every man goes through life he fills in a number of forms for the record, each containing a number of questions,” Alexander Solzhenitsyn wrote in his novel Cancer Ward. “A man’s answer to one question on one form becomes a little thread, permanently connecting him to the local center of personnel records administration. There are thus hundreds of little threads radiating from every man, millions of threads in all. If these threads were suddenly to become visible, the whole sky would look like a spider’s web. … Each man, permanently aware of his own invisible threads, naturally develops a respect for the people who manipulate the threads.”

As we go about our increasingly digitized lives, the threads that radiate from us are multiplying far beyond anything that even Solzhenitsyn could have imagined in the Soviet Union in the 1960s. Nearly everything we do online is recorded somewhere in the machinery of the cloud. Every time we read a page of text or click on a link or watch a video, every time we put something in a shopping cart or perform a search, every time we send an email or chat in an instant-messaging window, we are filling in a “form for the record.” Unlike Solzhenitsyn’s Everyman, however, we’re often unaware of the threads we’re spinning and how and by whom they’re being manipulated. And even if we were conscious of being monitored or controlled, we might not care. After all, we also benefit from the personalization that the Internet makes possible—it makes us more perfect consumers and workers. We accept greater control in return for greater convenience. The spider’s web is made to measure, and we’re not unhappy inside it.

That was the view, or at least one view, from 2008.

The metadata of experience, the experience of metadata

I like to know where things stand. I like to know how things are progressing. I signed up for UPS My Choice and FedEx Delivery Manager and USPS Informed Delivery. I know when a package has been shipped to me, where it is at every moment as it hops across the country toward me, the projected window of its ultimate delivery, the fact of its delivery.

I know when there are exceptions. I know when the weather has turned inclement. I know the hubs, and I know the spokes.

When I myself require carriage, I order a car through Uber or Lyft, and in an instant I know my driver’s name, what he looks like, the model and color of the vehicle he is driving, and his rating (Vladislav, white Toyota Camry, 4.8). I see where he is on the map, how many minutes I must wait for his arrival. Sometimes his progress stalls, and my wait time increases by a minute, and this is painful to me.

I don’t want to be surprised. I prefer suffocation to surprise.

I give Vladislav five stars not because he deserves more than four stars but because I would like my life to be a series of five-star experiences.

I am standing in line in a building, waiting to get a matter taken care of, and in front of me is a sign requesting that I rate how I am feeling by pressing one of three emoji buttons. There is a smiling face and there is a frowning face, and between the two is a face with an expression of complete affectlessness. I choose the button in the middle, and immediately I feel my face go blank.

I like the fact that I can now check my credit rating without affecting my credit rating. I am no fan of the uncertainty principle.

I have come to realize that I learn more about other people by googling them than by meeting them and talking with them.

When a friend posts a new photo on Instagram, I give a lot of thought as to whether or not I should like it. These choices have effects on people, and they have ramifications for how people will judge me and my own offerings in the future. The likes I give, or withhold, say something about me as well as about the object or experience being rated. The generation of metadata should never be taken lightly.

Metadata is a kind of agony.

Everything that happens to me is time-stamped. My life is a series of transactions recorded in official ledgers. I am a clerk. I am a bureaucrat. I’m always on the job.

I know all the details. I know what just happened, and I know what happens next. Only the present escapes me.

Trump and Twitter

I have an essay on Donald Trump’s Twitter habit, and what it says about the times, in the new issue of Politico Magazine.

Here’s a bit:

In the early 1950s, the Canadian political economist Harold Innis suggested that every informational medium has a bias. By encouraging certain forms of speech and discouraging others, a popular medium not only influences how people converse; it also shapes a society’s institutions and values. Early types of media — tablets, scrolls, theaters — were “time-biased,” Innis wrote. Durable and largely stationary, they encouraged the long view and tended to underpin communities that were stable, hierarchical, and often deeply religious.

As communication technology advanced, new “space-biased” media came to the fore. Communication networks extended across great distances and reached mass audiences, and the messages the networks carried took on a more transactional and transitory character. Modern media, from post offices to telegraph lines to TV stations, encouraged the development of more dynamic societies built not on eternal verities but on commerce and trade.

By altering prevailing forms of communication, Innis argued, every new medium tends to upset the status quo. The recent arrival of social media fits this pattern. Thanks to the rise of networks like Twitter, Facebook, and Snapchat, the way we express ourselves, as individuals and as citizens, is in a state of upheaval. Radically biased toward space and against time, social media is inherently destabilizing. What it teaches us, through its whirlwind of fleeting messages, is that nothing lasts. Everything is disposable. Novelty rules. The disorienting sway that Trump’s tweets hold over us, the way they’ve blurred the personal and the public, the vital and the trivial, the true and the false, testifies to the power of the change, and the uncertainty of its consequences.

Read on.

Gray area: Google’s truthiness problem

I never realized that the guy who wrote Straw Dogs is the very same guy who wrote Men Are from Mars, Women Are from Venus. Sometimes it boggles the mind what you discover through Google.

The absurdity of confusing these two writers is priceless, but Google’s cavalier willingness to allow its algorithms to publish misinformation and nonsense does raise important questions, both epistemological and ethical. Is it OK to run an AI when you know that it will spread falsehoods to the public — and on a massive scale? Is it OK to treat truth as collateral damage in the supposed march of progress? Google in its early days illuminated the possibilities of search algorithms. Now it seems determined to reveal their limits.

Update: Google is even cavalier when it comes to issuing death notices.

Being there

I have an essay, “The World Beyond the Screen,” in the catalog for the current exhibition Being There at the Louisiana Museum of Modern Art in Denmark. The essay begins:

The paradox of modern media is that in opening the world to us, it removes us from the world. To enjoy the informational and recreational bounties of the networked screen, whether it’s a television, a personal computer, or a smartphone, we have to withdraw from our immediate physical and social surroundings. Our mind, with its faculties of attention and perception, has to shift its focus to the simulated, or virtual, images and experiences served up by the technological system. To speak of our time as one of great “connectivity” is to see only half the picture. We disconnect to connect.

The ability of the human mind to remove itself, purposely, from its physical surroundings and enter an artificial environment is, so far as we can tell, a talent unique to our species. And it’s a talent that seems elementally important to us, as individuals and as a society. It’s what unlocks the realm of imagination, and it’s what gives rise to much of what’s most valuable and enduring in culture. When we speak of being “transported” by a work of literature or art, we’re acknowledging the power of the mind to wander beyond the here-and-now, beyond the physical and temporal bounds of immediate reality.

Yet our ability to remove ourselves from reality also brings peril. . . .

The exhibition runs until February 25th of next year. Copies of the catalog are available at the museum’s shop.

How smartphones hijack our minds

So you bought that new iPhone. If you’re like the typical owner, you’ll be pulling your phone out and using it some 80 times a day, according to data Apple collects. That means you’ll be consulting the glossy little rectangle nearly 30,000 times over the coming year. Your new phone, like your old one, will become your constant companion and trusty factotum — your teacher, secretary, confessor, guru. The two of you will be inseparable.

The smartphone is something new in the world. We keep the gadget within reach more or less around the clock, and we use it in countless ways, consulting its apps and checking its messages and heeding its alerts scores of times a day. The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.

We love our phones for good reasons. It’s hard to think of another product that has provided so many useful functions in such a handy form. But while our phones offer convenience and diversion, they also breed anxiety. Their extraordinary usefulness gives them an unprecedented hold on our attention and a vast influence over our thinking and behavior. So what happens to our minds when we allow a single tool such dominion over our perception and cognition?

Scientists have begun exploring that question — and what they’re discovering is both fascinating and troubling. Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.

Adrian Ward, a cognitive psychologist and marketing professor at the University of Texas at Austin, has been studying the way smartphones and the internet affect our thoughts and judgments for a decade. In his own work, as well as that of others, he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.

A 2015 Journal of Experimental Psychology study found that when people’s phones beep or buzz while they’re in the middle of a challenging task, their focus wavers, and their work gets sloppier — whether they check the phone or not. Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.

What the earlier research didn’t make clear is whether smartphones differ from the many other sources of distraction that crowd our lives. Dr. Ward suspected that our attachment to our phones has grown so intense that their mere presence might diminish our intelligence. Two years ago, he and three colleagues — Kristen Duke and Ayelet Gneezy from the University of California, San Diego, and Disney Research behavioral scientist Maarten Bos — began an ingenious experiment to test his hunch.

The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.

The results were striking. In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.

In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.

A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.

In an April article in the Journal of the Association for Consumer Research, Dr. Ward and his colleagues wrote that the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.” Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking. The fact that most of us now habitually keep our phones “nearby and in sight,” the researchers noted, only magnifies the mental toll.

Dr. Ward’s findings are consistent with other recently published research. In a similar but smaller 2014 study in the journal Social Psychology, psychologists at the University of Southern Maine found that people who had their phones in view, albeit turned off, during two demanding tests of attention and cognition made significantly more errors than did a control group whose phones remained out of sight. (The two groups performed about the same on a set of easier tests.)

In another study, published in Applied Cognitive Psychology this year, researchers examined how smartphones affected learning in a lecture class with 160 students at the University of Arkansas at Monticello. They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly. A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.

It isn’t just our reasoning that takes a hit when phones are around. Social skills and relationships seem to suffer as well. Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying. In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.” The downsides were strongest when “a personally meaningful topic” was being discussed. The experiment’s results were validated in a subsequent study by Virginia Tech researchers, published in 2016 in the journal Environment and Behavior.

The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of. But the findings shouldn’t be a surprise. Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.” Media and communications devices, from telephones to TV sets, have always tapped into this instinct. Whether turned on or switched off, they promise an unending supply of information and experiences. By design, they grab and hold our attention in ways natural objects never could.

But even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings. Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.

The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds. Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours, and we thank them by buying millions of the gadgets and downloading billions of the apps every year. Even prominent Silicon Valley insiders, such as Apple design chief Jonathan Ive and veteran venture capitalist Roger McNamee, have begun to voice concerns about the possible ill effects of their creations. Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”

A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple. The way a media device is designed and used exerts at least as much influence over our minds as does the information that the device disgorges.

As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores. In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.

Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it. The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”

Now that our phones have made it so easy to gather information online, our brains are likely off-loading even more of the work of remembering to technology. If the only thing at stake were memories of trivial facts, that might not matter. But, as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.” Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.

This story has a twist. It turns out that we aren’t very good at distinguishing the knowledge we keep in our heads from the information we find on our phones or computers. As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”

That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.

Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains. When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning. Upgrading our gadgets won’t solve the problem. We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.

This essay appeared originally, in a slightly different form, in the Wall Street Journal.