Ear full

Running in the New Republic today is my review of In Pursuit of Silence, George Prochnik’s thoughtful examination of our complicated relationship with noise:

In 1906, Julia Barnett Rice, a wealthy New York physician and philanthropist, founded the Society for the Suppression of Unnecessary Noise. Rice, who lived with her husband and six children in a Manhattan mansion overlooking the Hudson River, had become enraged at the way tugboats would blow their horns incessantly while steaming up and down the busy waterway. During a typical night, the tugs would emit two or three thousand toots, most of which seemed to serve merely as sonic greetings between friendly captains.

Armed with research documenting the health problems caused by the sleep-shattering blasts, Rice launched a relentless lobbying campaign that took her to police stations, health departments, the offices of shipping regulators, and ultimately the halls of Congress …

Continue.

Software that loves too much

We all like friendly, helpful software, but at what point does user-friendliness go too far? Some fascinating studies are beginning to appear that show how software applications can, by usurping personal agency, subvert learning and narrow our field of view. I have a short essay on the topic, focusing on that most solicitous of software companies, Google, in the new issue of the Atlantic:

I type the letter p into Google’s search box, and a list of 10 suggested keywords, starting with pandora and concluding with people magazine, appears just beneath my cursor. I type an r after the p, and the list refreshes itself. Now it begins with priceline and ends with pregnancy calculator. I add an o. The list updates again, going from prom dresses to proxy sites.

Google is reading my mind — or trying to.

Continue.

Steven Pinker and the Internet

As someone who has enjoyed and learned a lot from Steven Pinker’s books about language and cognition, I was disappointed to see the Harvard psychologist write, in Friday’s New York Times, a cursory op-ed column about people’s very real concerns over the Internet’s influence on their minds and their intellectual lives. Pinker seems to dismiss out of hand the evidence indicating that our intensifying use of the Net and related digital media may be reducing the depth and rigor of our thoughts. He goes so far as to assert that such media “are the only things that will keep us smart.” And yet the evidence he offers to support his sweeping claim consists largely of opinions and anecdotes, along with one very good Woody Allen joke.

One thing that didn’t surprise me was Pinker’s attempt to downplay the importance of neuroplasticity. While he acknowledges that our brains adapt to shifts in the environment, including (one infers) our use of media and other tools, he implies that we need not concern ourselves with the effects of those adaptations. Because all sorts of things influence the brain, he oddly argues, we don’t have to care about how any one thing influences the brain. Pinker, it’s important to point out, has an axe to grind here. The growing body of research on the adult brain’s remarkable ability to adapt, even at the cellular level, to changing circumstances and new experiences poses a challenge to Pinker’s faith in evolutionary psychology and behavioral genetics. The more adaptable the brain is, the less we’re merely playing out ancient patterns of behavior imposed on us by our genetic heritage.

In Adapting Minds, his epic critique of the popular brand of evolutionary psychology espoused by Pinker and others, David J. Buller argues that evolution “has not designed a brain that consists of numerous prefabricated adaptations,” as Pinker has suggested, but rather one that is able “to adapt to local environmental demands throughout the lifetime of an individual, and sometimes within a period of days, by forming specialized structures to deal with those demands.” To understand the development of human thought, and the influence of external forces on that thought, we need to take into account both the fundamental genetic wiring of the brain – what Pinker calls its “basic information-processing capacities” – and the way our genetic makeup allows for ongoing changes in that wiring.

On the topic of neuroplasticity, Pinker claims to speak for all brain scientists. When confronted with suggestions that “experience can change the brain,” he writes, “cognitive neuroscientists roll their eyes.” I’m wary when any scientist suggests that his view of a controversial matter is shared by all his colleagues. I also wonder if Pinker read the reports on the Net’s cognitive effects published in the Times last week, in which several leading brain researchers offer views that conflict with his own. A few examples:

“The technology is rewiring our brains,” said Nora Volkow, director of the National Institute on Drug Abuse and one of the world’s leading brain scientists …

The nonstop interactivity is one of the most significant shifts ever in the human environment, said Adam Gazzaley, a neuroscientist at the University of California, San Francisco. “We are exposing our brains to an environment and asking them to do things we weren’t necessarily evolved to do,” he said. “We know already there are consequences” …

[Stanford professor Clifford] Nass says the Stanford studies [of media multitasking] are important because they show multitasking’s lingering [cognitive] effects: “The scary part for guys like Kord is, they can’t shut off their multitasking tendencies when they’re not multitasking.”

In a brief essay published last week on the Times website, Russell A. Poldrack, the director of the Imaging Research Center and professor of psychology and neurobiology at the University of Texas at Austin, wrote: “Our research has shown that multitasking can have an insidious effect on learning, changing the brain systems that are involved so that even if one can learn while multitasking, the nature of that learning is altered to be less flexible. This effect is of particular concern given the increasing use of devices by children during studying.”

Other scholars of the mind also believe, or at least worry, that our use of digital media is having a deep, and not necessarily beneficial, influence on our ways of thinking. The distinguished neuroscientist Michael Merzenich, who has been studying the adaptability of primate brains since the late 1960s, believes that human brains are being significantly “remodeled” by our use of the Net and other modern media. Maryanne Wolf, a developmental psychologist at Tufts, fears that the shift from immersive page-based reading to distracted screen-based reading may impede the development of the specialized neural circuits that make deep, richly interpretive reading possible. We may turn into mere “decoders” of text.

Pinker may well disagree with all these views, but to pretend they don’t exist is misleading.

Pinker also pokes at straw men. Instead of grappling with the arguments of others, he reduces them to caricatures in order to dismiss them. He writes, for example, that “the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.” Who exactly does Pinker believe is proposing such an idea – John Locke? I haven’t seen anyone suggest that the brain is a shapeless blob of clay. What they are saying is that the brain, while obviously as much a product of evolution as any other part of the body, is not genetically locked into rigid modes of thought and behavior. Changes in our habits of thought echo through our neural pathways, for better and for worse.

In other cases, Pinker uses overstatement to gloss over subtleties. He writes at one point, “If electronic media were hazardous to intelligence, the quality of science would be plummeting.” Human intelligence takes many forms. Electronic media may enhance some aspects of our intelligence (the ability to spot patterns in arrays of visual data, for example, or to discover pertinent facts quickly or to collaborate at a distance) while at the same time eroding others (the ability to reflect on our experiences, say, or to express ourselves in subtle language or to read complex narratives critically). To claim that “intelligence” can be gauged by a single measure is to obfuscate rather than illuminate.

Pinker notes that “the decades of television, transistor radios and rock videos were also decades in which I.Q. scores rose continuously.” Actually, as the political scientist James Flynn first documented, general IQ scores have been rising at a steady clip since the beginning of the 1900s, so we should be wary about linking this long-term trend to the recent popularity of any particular technology or medium. Moreover, as Flynn himself has been careful to point out, the improvements in IQ scores are largely attributable to increases in measures of visual acuity and abstract problem-solving, such as the mental rotation of geometric forms, the identification of similarities between disparate objects, and the arrangement of shapes into logical sequences. These skills are certainly very important, but measures of other components of intelligence, including verbal skill, vocabulary, basic arithmetic, memorization, critical reading, and general knowledge, have been stagnant or declining. In warning against drawing overly broad conclusions about our intelligence from the rise in IQ scores, Flynn wrote, in his book What Is Intelligence?, “How can people get more intelligent and have no larger vocabularies, no larger stores of general information, no greater ability to solve arithmetical problems?”

Drifting briefly from science to the humanities, Pinker implies that our cultural life is richer than ever, a consequence, apparently, of the bounties of digital media. As evidence, he points to the number of stories appearing on the website Arts & Letters Daily. Suffice it to say that other indicators of the depth and richness of cultural life point in different directions.

Pinker also makes several observations that, while accurate, undercut the main thrust of his argument. He writes, for example, that “the effects of experience are highly specific to the experiences themselves. If you train people to do one thing (recognize shapes, solve math puzzles, find hidden words), they get better at doing that thing, but almost nothing else.” Well, yes, and that’s why some of us are deeply concerned about society’s ever-increasing devotion to the Net and other screen-based media. (The average American now spends more than eight hours a day peering into screens, while devoting only about 20 minutes to reading books and other printed works.) It’s hard not to conclude, or at least suspect, that we are narrowing the scope of our intellectual experiences. We’re training ourselves, through repetition, to be facile skimmers, scanners, and message-processors – important skills, to be sure – but, perpetually distracted and interrupted, we’re not training ourselves in the quieter, more attentive modes of thought: contemplation, reflection, introspection, deep reading, and so forth.

And there’s this: “Genuine multitasking, too, has been exposed as a myth, not just by laboratory studies but by the familiar sight of an S.U.V. undulating between lanes as the driver cuts deals on his cellphone.” Precisely so. Which is one of the reasons that many experts on multitasking are concerned about its increasing prevalence. People may think, as they juggle emails, texts, tweets, updates, Google searches, glances at web pages, and various other media tasks, that they’re adeptly doing a lot of stuff all at once, but what they’re really doing is switching constantly between different tasks, and suffering the cognitive costs that accompany such switching. As Steven Yantis, a professor of psychological and brain sciences at Johns Hopkins, told the Times:

In addition to the switch cost, each time you switch away from a task and back again, you have to recall where you were in that task, what you were thinking about. If the tasks are complex, you may well forget some aspect of what you were thinking about before you switched away, which may require you to revisit some aspect of the task you had already solved (for example, you may have to re-read the last paragraph you’d been reading). Deep thinking about a complex topic can become nearly impossible.

The fact that people who fiddle with cell phones drive poorly shouldn’t make us less concerned about the cognitive effects of media distractions; it should make us more concerned.

And then there’s this: “It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people.” Exactly. And that’s another cause for concern. Our most valuable mental habits – the habits of deep and focused thought – must be learned, and the way we learn them is by practicing them, regularly and attentively. And that’s what our continuously connected, constantly distracted lives are stealing from us: the encouragement and the opportunity to practice reflection, introspection, and other contemplative modes of thought. Even formal research is increasingly taking the form of “power browsing,” according to a 2008 University College London study, rather than attentive and thorough study. Patricia Greenfield, a professor of developmental psychology at UCLA, warned in a Science article last year that our growing use of screen-based media appears to be weakening our “higher-order cognitive processes,” including “abstract vocabulary, mindfulness, reflection, inductive problem solving, critical thinking, and imagination.”

We should all celebrate, along with Pinker, the many benefits that the Net and related media have brought us. I have certainly enjoyed those benefits myself over the last two decades. And we should heed his advice to look for “strategies of self-control” to ameliorate the distracting and addictive qualities of those media. But we should not share Pinker’s complacency when it comes to the Net’s ill effects, and we should certainly not ignore the mounting evidence of those effects.

Net effects

I had a couple of fairly lengthy discussions on the themes of The Shallows this week:

On the NPR program On Point, I talked about the book with Tom Ashbrook. New York Times blogger Nick Bilton provided a dissenting view. Listen.

And I discussed the book with Jerry Brito, of the Mercatus Center at George Mason University, as part of his excellent Surprisingly Free podcast series. Listen.

Finally, if you’d like to get a sense of the scope of the book’s content, I’d recommend taking a look at Michael Agger’s review at Slate.

Nowhere fast

Monday’s New York Times features a series of articles on the theme “Your Brain on Computers”:

Hooked on Gadgets, and Paying a Mental Price, by Matt Richtel

An Ugly Toll of Technology – Impatience and Forgetfulness, by Tara Parker-Pope

More Americans Sense a Downside to an Always Plugged-In Existence, by Marjorie Connelly

Maybe we’re ready to stop and think.

UPDATE: As part of this series, the Times is also looking for volunteers who would be willing to “unplug temporarily and tell us about their experience.”

Losing our bearings

All technologies have unintended side effects, and the most useful and popular technologies tend to have the largest unintended side effects. (Witness the automobile.) Our eager embrace of GPS systems and other computer mapping tools will be no different, I suggest in a column in today’s Washington Post. (Someone please tell me that the headline they’re using online is not the same one they’re using in the paper.) The column was inspired by the recent news that a woman, Lauren Rosenberg, is suing Google because, she claims, one of its walking maps led her into the path of a speeding car:

Blaming Google seems like a stretch. Using any kind of map requires caution, and on its site the company warns people about the dangers inherent in walking near traffic (though it’s not clear whether the warning appeared on Rosenberg’s BlackBerry). Google, a multibillion-dollar company, is a big target, and Rosenberg’s suit may prove frivolous.

But her experience should nevertheless give us pause. It highlights a remarkable shift in the way people get around these days. We may not all be wandering across highways in the dark, but most of us have become dependent on computer-generated maps of one sort or another …

More.

For a much fuller discussion of the subject, I highly recommend an essay by Alex Hutchinson that appeared in The Walrus last year.

Self-linking behavior

Has anyone written a good essay about the soul-sapping power of ego feeds? If not, I’m going to have to give it a shot, as I’m rapidly becoming an expert on the matter.

One thing I’ve learned about myself is that I’m better at writing than talking. So I always cringe when I read, or watch or listen to, an interview I’ve done. But I’m fairly pleased with my interview with Benjamin Carlson over at the Atlantic’s site. I seem to have been more or less cogent in my replies – or else the piece has just been well edited. (They may want to fix the typo in the headline, though.)

Tomorrow’s New York Times Book Review has a review of The Shallows by Jonah Lehrer. I’m flattered to be reviewed by Lehrer in the Times – I’m a fan of his blog, The Frontal Cortex – but I was startled to find him claim that “the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind.” I think that’s incorrect, even while I’m happy to acknowledge that brain studies are imprecise and can be interpreted in different ways (and that the definition of what’s “good for the mind” will vary from person to person). For a balanced and expert review of the literature, I would refer readers to a paper by Patricia M. Greenfield, the distinguished UCLA developmental psychologist, that appeared in the journal Science last year. As I point out in The Shallows, the Internet and related technologies have definitely been associated with gains in certain types of cognition, but the preponderance of evidence indicates that, as Greenfield writes, what we are sacrificing is our capability for “deep processing: mindful knowledge acquisition, inductive analysis, critical thinking, imagination, and reflection.” (I’m not suggesting that Greenfield’s paper is the definitive or final word on the subject – there are plenty more studies and reviews – but it’s a good starting point.)

Finally, today’s Wall Street Journal features dueling essays by me and Clay Shirky. They’re dueling, though technically speaking I’m not sure the viewpoints are mutually exclusive.