Zero tolerance for print

Politicians are usually sticks in the mud, technology-wise, but that certainly wasn’t the case down in Tallahassee this week. Florida legislators closed their eyes, clicked their heels, and took a giant leap forward into the Information Age, passing a budget measure that bans printed textbooks from schools starting in the 2015-16 school year. That’s right: four years from now it will be against the law to give a kid a printed book in a Florida school. One lawmaker said the bill was intended to “meet the students where they are in their learning styles,” which means nothing but sounds warm and fuzzy.

I reported last week on a new study indicating that e-textbooks, despite some real advantages, aren’t very good at supporting the variety of “learning styles” that students actually employ in their studies, particularly when compared to printed editions. That research won’t be the last word on the subject, but it does show that we’re still a long way from understanding exactly what’s gained and lost when you shift from printed books to digital ones. Yet, as the moronic Florida bill shows, perception often matters more than reason when it comes to injecting new technologies into schools. E-textbooks are so obviously superior to printed ones – they’re digital, for crying out loud – that waiting for a rigorous evaluation would seem like a pathetic act of Ludditism.

But remember: When print is outlawed only outlaws will have print.

Virtual books on virtual shelves for virtual readers

As part of its Ideas Market speaker series, the Wall Street Journal is hosting a discussion on that venerable topic “The Future of the Book” at the New York Public Library on Tuesday evening. I’ll be one of the panelists, along with tech scribe Steven Levy and Random House e-strategist Liisa McCloy-Kelley. Moderating will be the Journal’s Alexandra Alter. The event is free and open to the public, but seats are limited and need to be reserved in advance, by sending an email with your name to ReviewSeries@wsj.com. More details here.

E-textbooks flunk an early test

When it comes to buzzy new computer technologies, schools have long had a tendency to buy first and ask questions later. That seems to be the case once again with e-readers and other tablet-style computers, which many educators, all the way down to the kindergarten level, are lusting after, not least because the gadgets promise to speed the replacement of old-style printed textbooks with newfangled digital ones. In theory, the benefits of e-textbooks seem clear and compelling. They can be updated quickly with new information. They promise cost savings, at least over the long haul. They reduce paper and photocopier use. They can incorporate all manner of digital tools. And they’re lightweight, freeing students from the torso-straining load of book-filled backpacks.

But schools may want to pause before jumping on the e-textbook bandwagon. This morning, at the ACM Conference on Human Factors in Computing Systems in Vancouver, a team of researchers from the University of Washington, led by doctoral student Alex Thayer, is presenting the results of a year-long study of student reading, and the findings suggest that e-readers may be deeply flawed as replacements for traditional textbooks. Students find the devices cumbersome to use, ill-suited to their study routines, and generally underwhelming. Paper textbooks, it seems, may not be quite as obsolete as they appear.

In the fall of 2009, seven U.S. universities, including the University of Washington, launched pilot programs to evaluate how well Amazon’s Kindle DX, a large-format version of the popular e-reader, fulfills the needs of students. At the University of Washington, 39 graduate students were given Kindles, and their use of the device was monitored through diary entries and interviews. By the end of the school year, nearly two-thirds of the students had abandoned the Kindle or were using it only infrequently. Of those who continued to use it regularly, the researchers write, “some attempted to augment e-readers with paper or computers, others became less diligent about completing their reading tasks, and still others switched to a different and usually less desirable reading technique.”

One of the key themes emerging from the study, as well as from earlier research into reading behavior, is that people in general and students in particular read in a variety of ways. Sometimes they immerse themselves in a text, reading without interruption. Sometimes they skim a text to get a quick sense of the content or the argument. Sometimes they search a text for a particular piece of information or a particular topic. Sometimes they skip back and forth between two or more sections of a text, making comparisons. And sometimes they take notes, make marginal annotations, or highlight passages as they read. Reading is, moreover, a deeply personal, highly idiosyncratic activity, subject to all kinds of individual quirks. Every reader is unique.

Because we’ve come to take printed books for granted, we tend to overlook their enormous flexibility as reading instruments. It’s easy to flip through the pages of a physical book, forward and backward. It’s easy to jump quickly between widely separated sections, marking your place with your thumb or a stray bit of paper or even a hair plucked from your head (yes, I believe I’ve done that). You can write anywhere and in any form on any page of a book, using pen or pencil or highlighter or the tip of a burnt match (ditto). You can dog-ear pages or fold them in half or rip them out. You can keep many different books open simultaneously, dipping in and out of them to gather related information. And when you just want to read, the tranquility of a printed book provides a natural shield against distraction. Despite being low-tech – or maybe because of it – printed books and other paper documents support all sorts of reading techniques, they make it easy to shift seamlessly between those techniques, and they’re amenable to personal idiosyncrasies and eccentricities.

E-books are much more rigid. Refreshing discrete pages of text on a fixed screen is a far different, and far less flexible, process than flipping through pliant pages of fixed text. By necessity, a screen-based, software-powered reading device imposes navigational protocols and routines on the user, allowing certain patterns of use but preventing or hindering others. All sorts of modes of navigation and reading that are easy with printed books become more difficult with electronic books – and even a small degree of added difficulty will quickly frustrate a reader. Whereas a printed book adapts readily to whoever is holding it, an e-book requires the reader to adapt to it.

Some of the problems the University of Washington students had with the Kindle – hard-to-read charts, lack of support for color illustrations, inability to write notes directly on the text – are fairly easy to fix. (Indeed, touchscreen tablets like the iPad, together with apps like Inkling, have already fixed some of them.) But a more fundamental problem for the students was the e-reader’s unsuitability for certain modes of reading and for shifting quickly between different modes. And because that problem is intrinsic to the nature of a screen-based reading device, it is going to be very difficult, if not impossible, to overcome entirely.

The researchers point out that, in addition to supporting various styles of navigation, a printed book provides many subtle cues about its structure and contents. We make a “cognitive map” of a physical book as we read it:

When we read, we unconsciously note the physical location of information within a text and its spatial relationship to our location in the text as a whole … These mental images and representations do more than just help us recall where ideas are located in a given text. We use cognitive maps to retain and recall textual information more effectively, making them useful tools for students who are reading academic texts to satisfy specific goals.

E-readers “strip away some of these kinesthetic cues,” and that’s another reason why so many students ended up frustrated with the Kindle. When students “have no cognitive maps on which to rely,” the researchers write, “the process of locating information takes longer, they have less mental energy for other tasks, and their ability to maintain their desired levels of productivity suffers.” It’s certainly possible to provide on-screen tools, such as scroll bars and progress meters, that can aid in the creation of cognitive maps for e-books, but it’s unlikely that a digital book will ever provide the rich and intuitive set of physical cues that a printed book offers.

The researchers provide an illuminating case study showing how important cognitive mapping can be:

[One student] used kinesthetic cues such as folded page corners and the tangible weight of the printed book to help him locate content quickly. He told us that “after I’ve spent some time with the physical book, I know … exactly how to open it to the right page. … I kind of visually can see where I am in the book.” His physical experience with the text changed dramatically when he began using his Kindle DX: He lost these kinesthetic cues and spent much more time hunting for information than he had previously done. He stopped using the Kindle DX for his assigned academic readings because he wanted to remain as productive and efficient as he was before he received his Kindle DX.

None of this is to say that e-readers and tablets won’t find a place – an important place, probably – in schools. Students already do a great deal of reading and research on computer screens, after all, and there are many things that digital documents can do that printed pages can’t. What this study does tell us, though, is that it’s naive to assume that e-textbooks are a perfect substitute for printed textbooks. The printed page continues to be a remarkably robust reading tool, offering an array of unique advantages, and it seems to be particularly well suited to textual studies. Traditional textbooks may be heavy, but they’re heavy in a good way.

“The Shallows” is a Pulitzer finalist

The 2011 Pulitzer Prizes were announced today, and I’m thrilled to report that my book The Shallows: What the Internet Is Doing to Our Brains was named a finalist in the General Nonfiction category. The prize winner in the category was Siddhartha Mukherjee’s The Emperor of All Maladies: A Biography of Cancer. The other finalist was S. C. Gwynne’s Empire of the Summer Moon: Quanah Parker and the Rise and Fall of the Comanches, the Most Powerful Indian Tribe in American History.

Is Facebook geared to dullards?

Are you ashamed that you find Facebook boring? Are you angst-ridden by your weak social-networking skills? Do you look with envy on those whose friend-count dwarfs your own? Buck up, my friend. The traits you consider signs of failure may actually be marks of intellectual vigor, according to a new study appearing in the May issue of Computers in Human Behavior.

The study, by Bu Zhong and Marie Hardin at Penn State and Tao Sun at the University of Vermont, is one of the first to examine the personalities of social networkers. The researchers looked in particular at connections between social-network use and the personality trait that psychologists refer to as “need for cognition,” or NFC. NFC, as Professor Zhong explained in an email to me, “is a recognized indicator for deep or shallow thinking.” People who like to challenge their minds have high NFC, while those who avoid deep thinking have low NFC. Whereas, according to the authors, “high NFC individuals possess an intrinsic motivation to think, having a natural motivation to seek knowledge,” those with low NFC don’t like to grapple with complexity and tend to content themselves with superficial assessments, particularly when faced with difficult intellectual challenges.

The researchers surveyed 436 college students during 2010. Each participant completed a standard psychological assessment measuring NFC as well as a questionnaire measuring social network use. (Given what we know about college students’ social networking in 2010, it can be assumed that the bulk of the activity consisted of Facebook use.) The study revealed a significant negative correlation between social network site (SNS) activity and NFC scores. “The key finding,” the authors write, “is that NFC played an important role in SNS use. Specifically, high NFC individuals tended to use SNS less often than low NFC people, suggesting that effortful thinking may be associated with less social networking among young people.” Moreover, “high NFC participants were significantly less likely to add new friends to their SNS accounts than low or medium NFC individuals.”
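
The headline result is a correlation, so a minimal sketch may help make it concrete. The Python below computes a Pearson coefficient of the kind survey studies like this one commonly report, using invented NFC scores and social-networking hours; none of the numbers or variable names come from the Penn State dataset:

```python
# Toy illustration only: invented NFC scores and weekly hours on social
# network sites, NOT data from the Zhong, Hardin, and Sun study.
from statistics import mean, stdev


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))


# Made-up sample: higher need-for-cognition scores paired with fewer hours
# of social networking, mimicking the direction of the reported finding.
nfc_scores = [2.1, 2.8, 3.0, 3.4, 3.9, 4.2, 4.6]
sns_hours = [14, 12, 10, 9, 7, 5, 4]

print(f"r = {pearson_r(nfc_scores, sns_hours):.2f}")  # negative: more NFC, less SNS
```

On toy numbers like these the coefficient lands near -1, which overstates things; real survey correlations are far weaker. The sign, not the size, is what the finding turns on.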

To put it in layman’s terms, the study suggests that if you want to be a big success on Facebook, it helps to be a dullard.

To hold infinity in the palm of your hand

Alice Gregory writes:

Shteyngart says the first thing that happened when he bought an iPhone “was that New York fell away . . . It disappeared. Poof.” That’s the first thing I noticed too: the city disappeared, along with any will to experience. New York, so densely populated and supposedly sleepless, must be the most efficient place to hone observational powers. But those powers are now dulled in me. I find myself preferring the blogs of remote strangers to my own observations of present ones. Gone are the tacit alliances with fellow subway riders, the brief evolution of sympathy with pedestrians. That predictable progress of unspoken affinity is now interrupted by an impulse to either refresh a page or to take a website-worthy photo. I have the nervous hand-tics of a junkie. For someone whose interest in other people’s private lives was once endless, I sure do ignore them a lot now.

Via Doc Searls and, with rueful irony, William Blake.

Grand Theft Attention: video games and the brain

Having recently come off a Red Dead Redemption jag, I decided, as an act of penance, to review the latest studies on the cognitive effects of video games. Because videogaming has become such a popular pastime so quickly, it has, like television before it, drawn the scrutiny of psychologists and neuroscientists. The research has, on balance, tempered fears that video games would turn players into bug-eyed, bloody-minded droogs intent on ultraviolence. The evidence suggests that spending a lot of time playing action games – the ones in which you run around killing things before they kill you (there are lots of variations on that theme) – actually improves certain cognitive functions, such as hand-eye coordination and visual acuity, and can speed up reaction times. In retrospect, these findings shouldn’t have come as a surprise. As anyone who has ever played an action game knows, the more you play it, the better you get at it, and getting better at it requires improvements in hand-eye coordination and visual acuity. If scientists had done the same sort of studies on pinball players 50 years ago, they would probably have seen fairly similar results.

But these studies have also come to be interpreted in broader terms. Some popular-science writers draw on them as evidence that the heavy use of digital media – not just video games, but web-surfing, texting, online multitasking, and so forth – actually makes us “smarter.” The ur-text here is Steven Johnson’s 2005 book Everything Bad Is Good for You. Johnson draws on an important 2003 study, published as a letter to Nature magazine, by University of Rochester researchers Shawn Green and Daphne Bavelier, which demonstrated that “10 days of training on an action game is sufficient to increase the capacity of visual attention, its spatial distribution and its temporal resolution.” In other words, playing an action game can help you keep track of more visual stimuli more quickly and across a broader field, and these gains may persist even after you walk away from the gaming console. Other studies, carried out both before and after the Green and Bavelier research, generally back up these findings. In his book, Johnson concluded, sweepingly, that video games “were literally making [players] perceive the world more clearly,” and he suggested that gaming research “showed no evidence of reduced attention spans compared to non-gamers.”

More recently, the New York Times blogger Nick Bilton, in his 2010 book I Live in the Future, also suggested that videogaming improves attentiveness as well as visual acuity and concluded that “the findings argue for more game playing.” The science writer Jonah Lehrer last year argued that videogaming leads to “significant improvements in performance on various cognitive tasks,” including not only “visual perception” but also “sustained attention” and even “memory.” In her forthcoming book Now You See It, Cathy N. Davidson, an English professor at Duke, devotes a chapter to video game research, celebrating a wide array of apparent cognitive benefits, particularly in the area of attentiveness. Quoting Green and Bavelier, Davidson notes, for example, that “game playing greatly increases ‘the efficiency with which attention is divided.'”

The message is clear and, for those of us with a fondness for games, reassuring: Fire up the Xbox, grab the controller, and give the old gray matter a workout. The more you play, the smarter you’ll get.

If only it were so. The fact is, such broad claims about the cognitive benefits of video games, and by extension other digital media, have always been dubious. They stretch the truth. The mental faculties of attention and memory have many different facets – neuroscientists are still a long way from hashing them out – and to the extent that past gaming studies demonstrate improvements in these areas, they relate to gains in the kinds of attention and memory used in the fast-paced processing of a welter of visual stimuli. If you improve your ability to keep track of lots of images flying across a screen, for instance, that improvement can be described as an improvement in a type of attentiveness. And if you get better at remembering where you are in a complex fantasy world, that improvement can be described as an improvement in a sort of memory. The improvements may well be real – and that’s good news – but they’re narrow, and they come with costs. The fact that video games seem to make us more efficient at dividing our attention is great, as long as you’re doing a task that requires divided attention (like playing a video game). But if you’re actually trying to do something that demands undivided attention, you may find yourself impaired. As UCLA developmental psychologist Patricia Greenfield, one of the earliest researchers on video games, has pointed out, using media that train your brain to be good at dividing your attention appears to make you less able to carry out the kinds of deep thinking that require a calm, focused mind. Optimizing for divided attention means suboptimizing for concentrated attention.

Recent studies back up this point. They paint a darker picture of the consequences of heavy video-gaming, particularly when it comes to attentiveness. Far from making us smarter, heavy gaming seems to be associated with attention disorders in the young and, more generally, with a greater tendency toward distractedness and a reduced aptitude for maintaining one’s focus and concentration. Playing lots of video games, these studies suggest, does not improve a player’s capacity for “sustained attention,” as Lehrer and others argue. It weakens it.

In a 2010 paper published in the journal Pediatrics, Edward L. Swing and a team of Iowa State University psychologists reported on a 13-month study of the media habits of some 1,500 kids and young adults. It found that “[the] amount of time spent playing video games is associated with greater attention problems in childhood and on into adulthood.” The findings indicate that the correlation between videogaming and attention disorders is at least equal to and probably greater than the correlation between TV-viewing and those disorders. Importantly, the design of the study “rules out the possibility that the association between screen media use and attention problems is merely the result of children with attention problems being especially attracted to screen media.”

A 2009 study by a different group of Iowa State researchers, published in Psychophysiology, investigated the effects of videogaming on cognitive control, through experiments with 51 young men, both heavy gamers and light gamers. The study indicated that videogaming has little effect on “reactive” cognitive control – the ability to respond to some event after it happens. But when it comes to “proactive” cognitive control – the ability to plan and adjust one’s behavior in advance of an event or stimulus – videogaming has a significant negative effect. “The negative association between video game experience and proactive cognitive control,” the researchers write, “is interesting in the context of recent evidence demonstrating a similar correlation between video game experience and self-reported measures of attention deficits and hyperactivity. Together, these data may indicate that the video game experience is associated with a decrease in the efficiency of proactive cognitive control that supports one’s ability to maintain goal-directed action when the environment is not intrinsically engaging.” Videogamers, in other words, seem to have a difficult time staying focused on a task that doesn’t involve constant incoming stimuli. Their attention wavers.

These findings are consistent with more general studies of media multitasking. In a much-cited 2009 paper in Proceedings of the National Academy of Sciences, for example, Stanford’s Eyal Ophir, Clifford Nass, and Anthony D. Wagner show that heavy media multitaskers demonstrate significantly less cognitive control than light multitaskers. The heavy multitaskers “have greater difficulty filtering out irrelevant stimuli from their environment” and are also less able to suppress irrelevant memories from intruding on their work. The heavy multitaskers were actually less efficient at switching between tasks – in other words, they were worse at multitasking.
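
The phrase “less efficient at switching between tasks” is usually quantified as a switch cost: the extra response time on trials where the task changes compared with trials where it repeats. Here is a minimal sketch of that arithmetic, with invented task labels and timings rather than the Stanford measurements:

```python
# Toy illustration only: an invented reaction-time log, NOT the Stanford data.
# Each trial records which task was performed and the response time in ms.
trials = [
    ("classify_letter", 620), ("classify_letter", 605), ("classify_number", 780),
    ("classify_number", 640), ("classify_letter", 795), ("classify_letter", 615),
]


def switch_cost(trials):
    """Mean RT on task-switch trials minus mean RT on task-repeat trials."""
    switch_rts, repeat_rts = [], []
    for (prev_task, _), (task, rt) in zip(trials, trials[1:]):
        (switch_rts if task != prev_task else repeat_rts).append(rt)
    return sum(switch_rts) / len(switch_rts) - sum(repeat_rts) / len(repeat_rts)


print(f"switch cost: {switch_cost(trials):.0f} ms")  # bigger cost = worse at switching
```

A larger cost means a bigger penalty every time attention has to be redirected, which is roughly what “worse at multitasking” cashes out to in the paragraph above.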

So should people be prevented from playing video games? Not at all (though parents should monitor and restrict young kids’ use of the games). Moderate game-playing probably isn’t going to have any significant long-term cognitive consequences, either good or bad. Video-gaming is fun and relaxing, and those are good things. Besides, people engage in all sorts of pleasant, diverting pursuits that carry risks, from rock-climbing to beer-drinking (don’t mix those two), and if we banned all of them, we’d die of boredom.

What the evidence does show is that while videogaming might make you a little better at certain jobs that demand visual acuity under stress, like piloting a jet fighter or being a surgeon, it’s not going to make you generally smarter. And if you do a whole lot of it, it may well make you more distracted and less able to sustain your attention on a single task, particularly a difficult one. More broadly, we should be highly skeptical of anyone who draws on video game studies to argue that spending a lot of time in front of a computer screen strengthens our attentiveness or our memory or even our ability to multitask. Taken as a whole, the evidence, including the videogaming evidence, suggests it has the opposite effect.