Minds like sieves

“As gravity holds matter from flying off into space, so memory gives stability to knowledge; it is the cohesion which keeps things from falling into a lump, or flowing in waves.” – Emerson

There’s a fascinating – and, to me, disquieting – study on the internet’s effects on memory that’s just come out in Science.* It provides more evidence of how quickly and flexibly our minds adapt to the tools we use to think with, for better or for worse.

The study, “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips,” was conducted by three psychologists: Betsy Sparrow, of Columbia University; Jenny Liu, of the University of Wisconsin at Madison; and Daniel Wegner, of Harvard. They ran a series of four experiments aimed at answering this question: Does our awareness of our ability to use Google to quickly find any fact or other bit of information influence the way our brains form memories? The answer, they discovered, is yes: “when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it.” The findings suggest, the researchers write, “that processes of human memory are adapting to the advent of new computing and communication technology.”

In the first experiment, people were asked a series of trivia questions. They were then given a test in which they were shown different corporate brand names, some from search engines (eg, Google) and some from other familiar companies (eg, Nike), in different colors and asked to identify the color. In this kind of test, called a Stroop task, a greater delay in naming the color indicates a greater interest in, and cognitive focus on, the word itself. As the researchers explain: “People who have been disposed to think about a certain topic typically show slowed reaction times for naming the color of the word when the word itself is of interest and is more [cognitively] accessible, because the word captures attention and interferes with the fastest possible color naming.” The experiment revealed that after people are asked a question to which they don’t know the answer, they take significantly longer to identify the color of a search-related brand name than a non-search-related one. The upshot: “It seems that when we are faced with a gap in our knowledge, we are primed to turn to the computer to rectify the situation.” There was even a delay, though a lesser one, in identifying the color of an internet brand name when people had been asked questions that they did know the answer to, suggesting that “the computer may be primed when the concept of knowledge in general is activated.” In other words, we seem to have trained our brains to immediately think of using a computer when we’re called on to answer a question or otherwise provide some bit of knowledge.

In the second experiment, people read forty factual statements of the kind you’d tend to look up with a search engine (eg, “an ostrich’s eye is bigger than its brain”) and then typed the statements into a computer. Half the participants were told the computer would save what they typed, and half were told that what they typed would be erased. Afterwards, the participants were asked to write down as many of the statements as they could remember. The experiment revealed that people who believed the information would be stored in the computer had a weaker memory of the information than those who assumed that the information would not be available in the computer. The researchers conclude: “Participants apparently did not make the effort to remember when they thought they could later look up the trivia statements they had read. Since search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”

The third experiment was a variation on the second, which again showed that people were less likely to remember a fact if they believed they would be able to find it on a computer and more likely to remember it if they believed it would not be available on a computer. The experiment further revealed that when people were asked whether a fact had been saved or erased, they displayed a better recall for the act of saving than erasing. “Thus,” the researchers explain, “it appears that believing that one won’t have access to the information in the future enhances memory for the information itself, whereas believing the information was saved externally enhances memory for the fact that the information could be accessed, at least in general.”

In the fourth experiment, people again read a series of factual statements and typed them into a computer. They were told that the statements would be stored in a specific folder with a generic name (eg, “facts” or “data”). They were then given ten minutes to write down as many statements as they could remember. Finally, they were asked to name the folder in which a particular statement was stored (eg, “What folder was the statement about the ostrich saved in?”). It was discovered that people were better able to remember the folder names than the facts themselves. “These results seem remarkable on the surface, given the memorable nature of the statements and the unmemorable nature of the folder names,” the researchers write. The experiment provides “preliminary evidence that when people expect information to remain continuously available (such as we expect with Internet access), we are more likely to remember where to find it than we are to remember the details of the item.”

Human beings, of course, have always had external, or “transactive,” information stores to supplement their biological memory. These stores can reside in the brains of other people we know (if your friend John is an expert on sports, then you know you can use John’s knowledge of sports facts to supplement your own memory) or in storage or media technologies such as maps and books and microfilm. But we’ve never had an “external memory” so capacious, so available and so easily searched as the web. If, as this study suggests, the way we form (or fail to form) memories is deeply influenced by the mere existence of external information stores, then we may be entering an era in history in which we will store fewer and fewer memories inside our own brains.

If a fact stored externally were the same as a memory of that fact stored in our mind, then the loss of internal memory wouldn’t much matter. But external storage and biological memory are not the same thing. When we form, or “consolidate,” a personal memory, we also form associations between that memory and other memories that are unique to ourselves and also indispensable to the development of deep, conceptual knowledge. The associations, moreover, continue to change with time, as we learn more and experience more. As Emerson understood, the essence of personal memory is not the discrete facts or experiences we store in our mind but “the cohesion” which ties all those facts and experiences together. What is the self but the unique pattern of that cohesion?

The researchers seem fairly sanguine about the results of their study. “We are becoming symbiotic with our computer tools,” they conclude, “growing into interconnected systems that remember less by knowing information than by knowing where the information can be found.” Although we don’t yet understand the possible “disadvantages of being constantly ‘wired,’” we have nevertheless “become dependent” on our gadgets. “We must remain plugged in to know what Google knows.” But as memory shifts from the individual mind to the machine’s shared database, what happens to that unique “cohesion” that is the self?

The see-through world (revisited)

Rough Type’s summer retro blitz continues with the recycling of this post, originally published on January 31, 2008.

As GPS receivers have become common accessories in cars, the benefits have been manifold. Millions of us have been relieved of the nuisance of getting lost or, even worse, the shame of having to ask a passerby for directions.

But, as with all popular technologies, those dashboard maps are having some unintended consequences. In many cases, the shortest route between two points turns out to run through once-quiet neighborhoods and formerly out-of-the-way hamlets.

Scores of villages have been overrun by cars and lorries whose drivers robotically follow the instructions dispensed by their satellite navigation systems. The International Herald Tribune reports that the parish council of Barrow Gurney in southwestern England has even requested, fruitlessly, that the town be erased from the maps used by the makers of navigation devices.

A research group in the Netherlands last month issued a study documenting the phenomenon and the resulting risk of accidents. It went so far as to say that GPS systems can turn drivers into “kid killers.”

Now, a new generation of sat-nav devices is on the horizon. They’ll be connected directly to the internet, providing drivers with a steady stream of real-time information about traffic congestion, accidents, and road construction. The debut of one of the new systems, called Dash Express, at this month’s Consumer Electronics Show in Las Vegas led to claims that the new technology might “spell the end of traffic jams forever.”

That would be nice, but I have my doubts. When we all have equally precise, equally up-to-the-second information on traffic conditions, the odds are that we’ll all respond in similar ways. As we all act in unison to avoid one bottleneck, we’ll just create a new bottleneck. We may come to look back fondly on the days when information was less uniformly distributed.

That’s the problem with the so-called transparency that’s resulting from instantly available digital information. When we all know what everyone else knows, it becomes ever harder to escape the pack.

Just ask the hardcore surfers who dedicate themselves to finding the best waves. It used to be that they could keep their favorite beaches secret, riding their boards in relative solitude. But in recent months people have begun putting up dozens of video cameras, known as “surf cams,” along remote shorelines and streaming the video over the net.

Thanks to the cameras, once secluded waters are now crowded with hordes of novice surfers. That’s led to an outbreak of “surf cam rage,” according to a report last weekend in the New York Times. Die-hard surfers are smashing any cameras they find in the hope that they might be able to turn the tide of transparency.

But the vandalism is in vain. For every surf cam broken, a few more go up in its place.

There is, of course, much to be said for the easy access to information that the internet is allowing. Information that was once reserved for the rich, the well-connected, and the powerful is becoming accessible to all. That helps level the playing field, spreading economic and social opportunities more widely and fairly.

At the same time, though, transparency is erasing the advantages that once went to the intrepid, the dogged, and the resourceful. The surfer who through pluck and persistence found the perfect wave off an undiscovered stretch of beach is being elbowed out by the lazy masses who can discover the same wave with just a few mouse clicks. The commuter who pored over printed maps to find a short cut to work finds herself stuck in a jam with the GPS-enabled multitudes.

You have to wonder whether, as what was once opaque is made transparent, the bolder among us will lose the incentive to strike out for undiscovered territory. What’s the point when every secret becomes, in a real-time instant, common knowledge?

A see-through world may not be all that it’s cracked up to be. We may find that as we come to know everything about everything, we all end up in the same mess together.

News to me

Over at the Economist site, I’m debating the proposition “the internet is making journalism better, not worse” with Jay Rosen. He’s pro, I’m con.

Here’s my opening statement:

Journalism and the internet are both hot buttons, and when you combine the two you get plenty of opinions. But there are facts as well, and what the facts show is that the internet boom has done great damage to the journalism profession.

According to a 2010 review by the U.S. Congressional Research Service, newsroom staffing at American newspapers plunged by more than 25 percent between 2001 and 2009, and large-scale layoffs of reporters continued through 2010. A 2009 study commissioned by the Columbia Journalism Review concluded that newspaper editorial jobs dropped from more than 60,000 in 1992 to about 40,000 in 2009. Scores of newspapers, both large and small, have stopped publishing, and many others have scaled back the scope of their reporting. The picture appears similarly bleak in the U.K., where the number of working journalists fell by between 27 and 33 percent over the past decade, according to an analysis by the School of Journalism, Media & Communication at the University of Central Lancashire.

The decline in journalism jobs has been particularly severe at the local level, where reporters were scarce to begin with. A 400-page report issued last month by the Federal Communications Commission documents the consequences in distressing detail. The number of reporters covering state governments has dropped by a third since 2003, and more than 50 news organizations have discontinued statehouse reporting altogether. Cutbacks in reporting on city governments have been even steeper, and there have been significant declines in the number of journalists assigned to judicial, education, environment, and business beats as well as investigative reporting. “In many communities, we now face a shortage of local, professional, accountability reporting,” the FCC report concludes. “This is likely to lead to the kinds of problems that are, not surprisingly, associated with a lack of accountability—more government waste, more local corruption, less effective schools, and other serious community problems.”

The damage is not limited to newspapers. Newsmagazines, local commercial radio stations, and television networks have also slashed their newsgathering staffs since the 1980s, in some cases by 50 percent or more. The bottom line: Far fewer journalists are at work today than when the world wide web made its debut. The shrinking of the reporting corps not only constrains coverage; it also reduces quality, as remaining reporters become stretched thin even as they’re required to meet the relentless deadlines of online publishing. According to a 2010 survey by the Pew Research Center’s Project for Excellence in Journalism, 65 percent of news editors believe that the internet has led to a “loosening of standards” in journalism, with declines in accuracy and fact-checking and increases in unsourced reporting.

The problems can’t be blamed entirely on the net, of course. Like other industries, the press has suffered greatly from the recent recession, and mismanagement has also played a role in the travails of news organizations. But it is the shift of readers and advertisers from print media to online media that has been the major force reshaping the economics of the news business. The massive losses in print revenues, resulting from sharp declines in ads, subscriptions, and newsstand sales, have dwarfed the meager gains in online revenues. As the FCC report explains, “each print dollar [has been] replaced by four digital pennies.”

If we can agree that the internet, by altering the underlying economics of the news business, has thinned the ranks of professional journalists, then the next question is straightforward: Has the net created other modes of reporting to fill the gap? The answer, alas, is equally straightforward: No.

Certainly, the net has made it easier for ordinary citizens to be involved in journalism in all sorts of ways. Blogs and other online publishing and commenting tools allow people to share their opinions with a broad audience. Social networking services like Twitter and Facebook enable people to report breaking news, offer eyewitness accounts, and circulate links to stories. Groups of online volunteers have proven capable of digging newsworthy nuggets from large troves of raw data, whether it’s the expense reports of British politicians or the emails of Sarah Palin.

Such capabilities can be immensely valuable, but it’s important to recognize that they supplement rigorous, tenacious, in-depth reporting; they don’t replace it. And while there have been many noble attempts to create new kinds of net-based newsgathering organizations—some staffed by paid workers, others by volunteers; some for-profit, others not-for-profit—their successes so far have been modest and often fleeting. They have not come anywhere close to filling the gap left by the widespread loss of newspapers and reporters. As the Pew Center put it in its 2010 State of the News Media report, “the scale of these new efforts still amounts to a small fraction of what has been lost.”

The future may be sunnier. Professional news organizations may find ways to make more money online, and they may begin hiring again. Citizen journalism initiatives may begin to flourish on a large scale. Innovations in social networking may unlock entirely new ways to report and edit the news. But for the moment that’s all wishful thinking. What’s clear is that, up to now, the net has harmed journalism more than it’s helped it.

Here’s the debate site.

Semidelinkification, Shirky-style

Call me a nostalgist, but sometimes I like to plop my hoary frame down in front of the old desktop and surf the world wide web – the way we used to do back in the pre-Facebook days of my boyhood, when the internet was still tragically undermonetized. I was in fact on a little surfin’ safari this morning when I careened into a new post from Clay Shirky about – you guessed it – the future of the news biz.* It was totally longform, ie, interfrigginminable. But I did manage to read a sizable chunk of it before clicking the Instapaper “Read Later” button (a terrific way to avoid reading long stuff without having to feel guilty about it). It was a solid piece, as you’d expect from Shirky, if marred a bit by an unappealing new-media elitism (apparently the great unwashed never made it past the sports pages). But what interests me at the moment is not the content of Shirky’s post but its form, particularly the form of its linkage.

It’s been a while since I wrote about delinkification, but it’s still an issue I struggle with: How does one hang on to the benefits of having hyperlinks in online text while minimizing the distractions links cause to readers? Some people have taken to putting a list of sources, with links, at the foot of an online article or post, while leaving the main text unmolested. That works pretty well, but it strikes me as kind of cumbersome, and it also creates more work for the writer (which for a lazy s.o.b. like yours truly is a fatal flaw). You could also just dispense with links altogether – anyone who can’t by now Google a citation in two shakes is a moron – but for those of us who maintain a sentimental attachment to the idea of links as the coin of the internet realm (even while recognizing that the currency has been debased to near worthlessness), throwing in the towel on links seems like a moral failing.

But I like Shirky’s solution. He puts an asterisk at the end of a citation, and uses the asterisk as the link. I don’t know that it’s the best of all possible worlds, but it’s a nice mashup of the sedate footnote and the propulsive hyperlink. It’s much easier to tune out asterisks or other footnote marks than it is to tune out underscored, color-highlighted, in-your-face anchor text. And if you want to check out the cited document you still get the speed of the link. Click! Zoom! And you still make your little payment to the author of the cited work.
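
For the technically curious, here’s roughly what that kind of semidelinkification could look like if you automated it. This is just a toy sketch of my own, assuming the draft is written in Markdown with standard [anchor text](url) links; the shirkify name, the regex, and the example URL are hypothetical conveniences, not anything Shirky or this blog actually uses.

```python
import re

# A toy sketch of "shirkification": shrink each visible Markdown link
# down to a single asterisk while leaving the prose as plain text.
LINK = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")  # matches [text](url)

def shirkify(markdown_text: str) -> str:
    # Keep the anchor text as ordinary prose and hang the hyperlink
    # on a trailing asterisk instead.
    return LINK.sub(r"\1[*](\2)", markdown_text)

if __name__ == "__main__":
    sample = "a new post from [Clay Shirky](https://example.com/news-biz) about the news biz"
    print(shirkify(sample))
    # -> a new post from Clay Shirky[*](https://example.com/news-biz) about the news biz
```

Run over a draft, something like this leaves the sentences readable as plain prose and tucks each citation behind a single asterisk – which is the whole point.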

There was a time, many years ago, when having a crapload of links in a post or other piece of online prose was a sign that you were au courant – that you were down with this whole web thing. That time’s long gone. Arriving at a page covered with dribs and drabs of blue link type is tiresome. (The equivalent today is using a Twitter hashtag to add a cute little ironic or sardonic comment at the end of a tweet. A year ago, the hashtag witticism was the mark of a hip tweetin’ dude. Now, it’s the mark of a dweeb.) It’s permissible these days – advisable, in fact – to offer a calmer reading experience to brain-addled netizens. Chill those pixels.

Given the revolting popularity of self-linking as a means to ratchet up page- and ad-views, I know that Shirky style and other forms of semidelinkification are unlikely to revolutionize the appearance of the web. So be it. I’m still going to go ahead and adopt Shirky style for my more discursive posts. For posts that exist purely to point to something interesting elsewhere on the net, I’ll continue to use trad text links. And I may change my mind and take a different direction in the future.

For the moment, though, Rough Type is officially shirkified.

Exile from realtime

I’ve got a bad case of the shakes today, and it has nothing to do with the M-80s and bottle rockets going off into the wee hours last night. No, over the long weekend I was cast out of realtime. I had no warning, no time to prepare for my reentry into the drab old chronological order. I feel like a refugee living in a crappy tent in a muddy field on the outskirts of some godforsaken country. I know exactly how T. S. Eliot felt when he wrote “Ridiculous the waste sad time / Stretching before and after.”

What happened is that Google turned off its spigot of realtime results. I still see the “Realtime” option in the drop-down list of search options, but when I click on it, it returns nothing. Just a horrifying whiteness, like a marble tombstone before the letters are carved. And the “Latest” option for arranging results that used to appear in the lefthand column of search tools has been replaced by “Past hour.” Past hour? Are you kidding me? Why not just say “Eternity”? I freaking lived in “Latest,” with its single page of perpetually updated results, punctuated by pithy little tweets from all manner of avatarial life. It was pure pre-algorithmic democracy, visceral as raw beef.

Now, the stream is dry.

Apparently this all stems from some tiff between Google and Twitter. The two Internet Goliaths – okay, one Goliath and one mini-Goliath – had a pact that allowed Google to stream tweets in its results, but that agreement went kaput on Saturday. So on Sunday morning Google put a cork in the firehose. And left me an exile from realtime.

Time itself is contingent on the vagaries of online competition. As flies to wanton boys are we to the gods of the Net.

I take some solace from a statement that came out of the Plex on Sunday: “We’ve temporarily disabled google.com/realtime. We’re exploring how to incorporate our recently launched Google+ project into this functionality going forward, so stay tuned.”

Living in realtime is all about staying tuned. Staying tuned is the way we live today. Rest assured, Googlers, that I will keep hitting Refresh until “this functionality” returns. The alternative is too distressing to ponder. I need my Now.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.

Another study points to advantages of printed textbooks

Even as administrators and legislators push schools to dump printed books in favor of electronic ones, evidence mounts that paper books have important advantages as tools for learning. Last month, I reported on a study out of the University of Washington which showed that students find printed books more flexible than e-books in supporting a wide range of reading and learning styles. Now comes a major study from the University of California system showing that students continue to prefer printed books to e-books and that many undergraduates complain that they have trouble “learning, retaining, and concentrating” when reading from screens.

The University of California Libraries began a large e-textbook pilot program in 2008. In late 2010, more than 2,500 students and faculty members were surveyed to assess the results of the program. Overall, 58% of the respondents said they used e-books for their academic work, with the percentage varying from 55% for undergraduates to 57% for faculty to 67% for graduate students. The respondents who used e-books were then asked whether they preferred e-books or printed books for their studies. Overall, 44% said they preferred printed books and 35% said they preferred e-books, with the remainder expressing no preference. The preference for print was strongest among undergraduates, 53% of whom preferred printed books, with only 27% preferring e-books. Graduate students preferred printed books by 45% to 35%, and faculty preferred printed books by 43% to 33%.

The most illuminating part of the survey came when respondents were asked to explain their preferences. The answers suggest that while students prefer e-books when they need to search through a book quickly to find a particular fact or passage, they prefer printed books for deep, attentive reading. “E-books divide my attention,” said one undergraduate. “Paper … keeps me focused and away from distractions that may arise from computer usage,” said another. “I have some difficulty paying careful attention to long passages on my computer,” said another. “Reading on the computer makes it harder for me to understand the information,” said another. Commented a graduate student: “I am a better reader when I have the print copy in front of me.”

Another graduate student, in the social sciences, explained the different strengths of printed books and e-books:

I answered that I prefer print books, generally. However, the better answer would be that print books are better in some situations, while e-books are better in others. Each have their role – e-books are great for assessing the book, relatively quick searches, like encyclopedias or fact checking, checking bibliography for citations, and reading selected chapters or the introduction. If I want to read the entire book, I prefer print. If I want to interact extensively with the text, I would buy the book to mark up with my annotations; if I want to read for background (not as intensively) I will check out a print book from the library if possible. All options have their place. I am in humanities/social sciences, so print is still very much a part of my research life at this point.

Several respondents noted that they often used both electronic and print versions of the same book, “utilizing digital copies of a title for search and discovery tasks, and moving to corresponding paper copies for reading, note taking, text comparison, and deep study.” Two-thirds of undergraduates said it was important to them to have access to print copies of books even when electronic versions were available.

Two years ago, then-California Governor Arnold Schwarzenegger dismissed printed textbooks as outdated. “Our kids get their information from the internet, downloaded onto their iPods, and in Twitter feeds to their cell phones,” he said. “Basically, kids are feeling as comfortable with their electronic devices as I was with my pencils and crayons. So why are California’s school students still forced to lug around antiquated, heavy, expensive textbooks?” Many school administrators and government bureaucrats make similar assumptions, with little or no evidence to back them up. Maybe if they went out and looked at how students actually read, study, and learn, they’d see that paper books and electronic books are different tools and that the printed page remains superior to the screen in many cases.

United States vs. Google (revisited)

Summer is a good time to pick lazily at the archives. Here’s a post that originally appeared on Rough Type on October 12, 2006. Given last week’s news that the Federal Trade Commission has launched a formal antitrust investigation of Google, it seems timely to repost it now.

Every era of computing has its defining antitrust case. In 1969, at the height of the mainframe age’s go-go years, the Justice Department filed its United States vs. IBM lawsuit, claiming that Big Blue had an unfair monopoly over the computer industry. At the time, IBM held a 70 percent share of the mainframe market (including services and software as well as machines).

In 1994, with the PC age in full flower, the Justice Department threatened Microsoft with an antitrust suit over the company’s practice of bundling products into its ubiquitous Windows operating system. Three years later, when Microsoft tightened the integration of its Internet Explorer browser into Windows, the government acted, filing its United States vs. Microsoft suit.

With Google this week taking over YouTube, it seems like an opportune time to look forward to the prospect – entirely speculative, of course – of what could be the defining antitrust case of the Internet era: United States vs. Google.

That may seem far-fetched at this point. In contrast to IBM and Microsoft, whose fierce competitiveness made them good villains, Google seems an unlikely monopolist. It’s a happy-face company, childlike even, which has gone out of its way to portray itself as the Good Witch to Microsoft’s Bad Witch, as the Silicon Valley Skywalker to the Redmond Vader. And yet, however pure its intentions, Google already has managed to seize a remarkable degree of control over the Internet. According to recent ComScore figures, it already holds a dominant 44 percent share of the web search market, more than its next two competitors, Yahoo and Microsoft, combined, and its share rises to 50 percent if you include AOL searches, which are subcontracted to Google. An RBC Capital Markets analyst recently predicted that Google’s share will reach 70 percent. “The question, really,” he wrote, “comes down to, ‘How long could it take?’”

Google’s AdWords ad-serving system, tightly integrated with the search engine, is even more dominant. It accounts for 62 percent of the market for search-based ads. That gives the company substantial control over the money flows throughout the vast non-retailing sector of the commercial internet.

With the YouTube buy, Google seizes a commanding 43 percent share of the web’s crowded and burgeoning video market. In a recent interview, YouTube CEO Chad Hurley said that his business enjoys a “natural network effect” that should allow its share to continue to rise strongly. “We have the most content because we have the largest audience and that’s going to continue to drive each other,” he said. “Both sides, both the content coming in and the audience we’re creating. And it’s very similar again to the eBay issue where they had an auction product that gained critical mass.”

Google has been less successful in building up its own content and services businesses, but it’s a fabulously profitable company, thanks to its AdWords money-printing machine, and it can easily afford to acquire other attractive content and services companies. It can also afford, following the lead of Microsoft in the formative years of the PC market, to launch a slew of products across many different categories and let them chip away at their respective markets – which is exactly what it’s been doing. Moreover, its dominance in ad-serving enables it to cut exclusive advertising and search deals with major sites like MySpace, expanding its influence over users and hamstringing the competition.

Google’s corporate pronouncements are carefully, and, by all accounts, sincerely, aimed at countering fears that it is building a competition- and innovation-squelching empire. But its actions often belie its rhetoric. Its founders said they had no interest in launching an internet portal, but then they launched an internet portal. They said they wanted customers to leap off Google’s property as quickly as possible, but then they began cranking out more and more applications and sites aimed at keeping customers on Google’s property as long as possible. The company’s heart may be in the right place, but its economic interests lie elsewhere. And public companies aren’t known for being led by their hearts.

Nothing’s written in stone, of course. Someone could come up with a new and more attractive method of navigating the web that would quickly undermine the foundation of Google’s entire business. But it’s useful to remember that the commercial internet, and particularly Web 2.0, is all about scale, and right now scale is very much on Google’s side. Should Google’s dominance and power continue to grow, it would inevitably have a chilling effect on innovation and hence competition, and the public would suffer. At that point, the big unasked question would start being asked: should companies be able to compete in both the search/ad business and the content/services business, or should competition in those businesses be kept separate? If there is ultimately a defining antitrust case in the internet era, it is that question that will likely be at its core.