Google’s recipe for recipes

Q: How do people cook these days?

A: They cook with Google.

When you’re looking for a good recipe today, you probably don’t reach for Joy of Cooking or Fannie Farmer or some other trusty, soup-stained volume on your cookbook shelf. You probably grab your laptop or tablet and enter the name of a dish or an ingredient or two into the search box. And that makes Google very important in the world of eating. Very, very important. I’ll let Amanda Hesser, noted food writer, cookbook author, and web entrepreneur, explain:

The entity with the greatest influence on what Americans cook is not Costco or Trader Joe’s. It’s not the Food Network or The New York Times. It’s Google. Every month about a billion of its searches are for recipes. The dishes that its search engine turns up, particularly those on the first page of results, have a huge impact on what Americans cook.

Once upon a time, Google didn’t distinguish recipe search results from any other sort of search result. You typed in, say, “cassoulet,” and that keyword ran like any other keyword through the old Google link-counting algorithm. Recipes that had earned a lot of links from a lot of good sites appeared at the top of the list of results. But then, about a month ago – on February 24, 2011, to be precise – Google rolled out a special algorithm for finding recipes. And it added a “Recipe” button to the list of specialized search options that run down the left side of its search results pages. And it allowed searchers to refine results by ingredient, calories, or cooking time.

On the surface, all these changes seemed to be good news for cooks. What’s not to like about a specialized recipe search engine? Beneath the surface, though, some funny things were going on, and not all of them were salubrious. In fact, the changes illustrate how, as search engines refine their algorithms, their results become more biased. In particular, the changes reveal how a powerful search engine like Google has come to reward professional sites that are able to spend a lot on search engine optimization, or SEO, and penalize amateurs who are simply looking to share their thoughts with the world. Originally celebrated for leveling the media playing field, the Web has come to re-tilt that field to the benefit of deep-pocketed corporations.

Let’s look at the actual effects that Google’s changes have had on the kind of sites that show up in recipe search results. I’ll let Meathead Goldwyn, proprietor of a barbecue website and self-described “hedonism evangelist,” take up the story:

When one enters “ribs” in Google, my website AmazingRibs.com is #1. [But] if you search for “ribs” and then click on the new “Recipes” option in the column on the left on most browsers, the results are limited to only those that Google is sure are recipes and not articles about some football player with broken ribs. My ribs recipes are nowhere in sight. How does Google know a recipe when it sees one? The authors have included code that tells Google “this is a recipe.” … Handy for consumers, but a pain for food bloggers like me. I’m getting smashed because I did not get around to installing the new recipe codes when Google announced them in April 2010 because the instructions were too confusing. Now the top slots are all occupied by the big-time corporate food sites, Foodnetwork.com, Epicurious.com, About.com, AllRecipes.com, etc.

If you’re publishing recipes online and you want them to rank highly in Google’s recipe results, it’s no longer enough simply to publish really good dishes and get lots of people to link to them. Now, you have to be adept at (or hire someone who’s adept at) SEO in order to code your pages in ways suited to Google’s increasingly complex algorithm. If you want to get a sense of how complicated this is, you can check out this page at Google’s Webmaster Central, which describes how the publisher of a food site needs not only to tag a page as a recipe but to put various “microdata,” “microformats,” and “RDFa” tags into the source code of their pages. As Meathead notes, the page “was obviously written by engineers for engineers.” Here’s an eye-boggling sample that Google provides for a recipe called Grandma’s Holiday Apple Pie:

[Image: Google’s sample microdata markup for “Grandma’s Holiday Apple Pie”]

It may be Grandma’s apple pie, but I don’t think Grandma is going to be able to crank out that kind of coding. And I don’t think Google’s explanation of how the coding works is going to be much help to the old gal:

  • On the first line, <itemscope itemtype="http://www.data-vocabulary.org/Recipe"> indicates that the HTML enclosed in the <div> represents a Recipe. itemscope indicates that the content of the <div> describes an item, and itemtype="http://www.data-vocabulary.org/Recipe" indicates that the item is a Recipe.
  • The sample describes properties of the recipe, such as its author, ingredients, and preparation time. To label recipe properties, each element containing one of these properties (such as <div> or <span>) is assigned an itemprop attribute indicating a property. For example, <span itemprop="author">.
  • A property can consist of another item (in other words, an item can include other items). For example, the recipe above includes a Review-aggregate item (itemtype="http://www.data-vocabulary.org/Review-aggregate") with the properties rating and count, and a Recipe-ingredient item (ingredient), which in turn has the properties amount and name.
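
To make that explanation a bit more concrete, here is a minimal sketch of the kind of microdata markup Google’s instructions call for. The Recipe and Review-aggregate item types and the property names author, ingredient, amount, rating, and count come straight from the explanation above; the surrounding HTML, the use of name for the recipe title, the RecipeIngredient item-type URL, the prepTime property, and the pie’s details are illustrative assumptions of mine, not Google’s actual sample:

    <!-- A hypothetical recipe page marked up with data-vocabulary.org microdata -->
    <div itemscope itemtype="http://www.data-vocabulary.org/Recipe">
      <h1 itemprop="name">Grandma's Holiday Apple Pie</h1>
      <p>By <span itemprop="author">Grandma</span></p>

      <!-- A nested Review-aggregate item carrying the rating and count properties -->
      <div itemprop="review" itemscope
           itemtype="http://www.data-vocabulary.org/Review-aggregate">
        Rated <span itemprop="rating">4.5</span> stars by
        <span itemprop="count">35</span> reviewers.
      </div>

      <!-- A nested ingredient item with amount and name properties
           (the item-type URL is assumed here for illustration) -->
      <p itemprop="ingredient" itemscope
         itemtype="http://www.data-vocabulary.org/RecipeIngredient">
        <span itemprop="amount">7</span>
        <span itemprop="name">thinly sliced apples</span>
      </p>

      <!-- Preparation time; the prepTime property and ISO 8601 duration are assumptions -->
      <p>Ready in <time itemprop="prepTime" datetime="PT30M">30 minutes</time>.</p>

      <div itemprop="instructions">
        Toss the apples with sugar and cinnamon, fill the crust, and bake until golden.
      </div>
    </div>

None of that changes what a visitor sees on the page; it exists solely so that Google’s crawler can tell a recipe from an article about broken ribs. Multiply it by the calorie counts, cook times, and photos on every page of a large site, and you can see why the work tends to fall to SEO professionals rather than to Grandma.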

No, Grandma is out of luck.

And that’s the point. As Google’s army of codesmiths – with the best of intentions, I’m sure – makes the company’s search algorithms ever more complex, ever more “refined,” the art of creating pages that will rank highly becomes ever more a job for professionals, for SEOers who spend all their time analyzing Google’s arcane instructions and mastering the esoteric codes those instructions demand. Amateurs and small-timers, like Grandma and Meathead, have little chance to compete with the big corporate sites, which can afford to spend big bucks on SEO. Once antagonists, Google and the SEO industry have developed a tight, mutually beneficial symbiosis. The folks who lose out are the little guys.

Here’s Amanda Hesser again:

Google has, in effect, taken sides in the food war. Unfortunately, it’s taken the wrong one … Imagine the blogger who has excellent recipes but has to compete against companies with staff devoted entirely to S.E.O. And who now must go back and figure out the calorie counts of all of his recipes, and then add those numbers, along with other metadata. That’s not going to happen. So the chance that that blogger’s recipes will appear anywhere near the first page of results is vanishingly small. What this means is that Google’s search engine gives vast advantage to the largest recipe websites with the resources to input all this metadata.

But that’s not all. Other biases – these having to do with Google’s idea of what people should be cooking and eating – are also at work. In setting up parameters for refining results based on cooking time and calories, Google explicitly, if subtly, gives privilege to low-calorie recipes that can be cooked quickly, as shown in the options it allows for refining a recipe search:

[Image: Google’s options for refining a recipe search]

Those choices may seem innocuous, but they have important consequences, as Hesser describes:

Google unwittingly – but damagingly – promotes a cooking culture focused on speed and diets.

Take, for instance, a recent search for “cassoulet.” The top search result is a recipe from Epicurious, one of the larger and better sites. But if you refine by time, your choices are “less than 15 min,” “less than 30 min,” or “less than 60 min.” There is no option for more than 60 minutes. In truth, a classic cassoulet takes at least 4 hours to make, if not several days (the Epicurious recipe takes 4 hours and 30 minutes); yet there in the results are recipes under each of these three time classes. One from Tablespoon goes so far as to claim to take just 1 minute. (It’s made with kidney beans, canned mushrooms, and beef, so it’s not long on authenticity.) … Refining recipe search by time doesn’t result in better recipes rising to the top; rather, the new winners are recipes packaged for the American eating and cooking disorder.

The proof is no longer in the pudding. It’s in the search results. And baked into those results are the biases, ideologies, and business interests of the people running the search engines. The code is not neutral.

A message to you, Larry

Just two weeks before he retakes the reins as Google’s CEO, Larry Page has been pulled down from his high horse, hauled off to the woodshed, and given a good paddling by a federal judge. The matter and tone of Judge Denny Chin’s rejection of the proposed Google Books settlement are generally circumspect and measured – James Grimmelmann provides a lawyerly rundown – but when it comes to passing judgment on Google’s actual behavior to date, Chin is blunt and scathing:

The [settlement agreement] would grant Google control over the digital commercialization of millions of books, including orphan books and other unclaimed works. And it would do so even though Google engaged in wholesale, blatant copying, without first obtaining copyright permissions. While its competitors went through the “painstaking” and “costly” process of obtaining permissions before scanning copyrighted books, “Google by comparison took a shortcut by copying anything and everything regardless of copyright status.” As one objector put it: “Google pursued its copyright project in calculated disregard of authors’ rights. Its business plan was: ‘So, sue me.'”

Google’s scanner-in-chief is, of course, Larry Page. It was Page who, in 2002, photocopied the very first of the many millions of books that Google has run through its scanners, and since then it has been Page who has been the prime mover behind the company’s “so, sue me” scanning strategy. Even today, after years of courtroom wrangling, Page seems absolutely convinced of the righteousness of his cause, dismissing the arguments of critics as little more than negotiating tactics. “Do you really want the whole world not to have access to human knowledge as contained in books?” he recently said to the writer Steven Levy, adopting the messianic tone that characterizes the proclamations handed down from the upper reaches of the Googleplex. “You’ve just got to think about that from a societal point of view.”

Society, you see, has a single point of view on this complicated matter, and it’s the Page view.

I’m no fan of current U.S. copyright law. For years, legislators have given too much weight to private interests and too little weight to public interests in setting ever more onerous copyright restrictions. But nobody elected Larry Page to unilaterally rewrite copyright rules, and by now it should be clear that Google’s interests are not the public’s interests. Yes, it would be nice to share Google’s view of itself as a selfless, righteous defender of the public’s right to unbridled creativity, but the fact that the company just grabbed a frivolous patent on the use of logo doodles makes it clear that its governing point of view is not societal but commercial. It defends its own intellectual capital, to sometimes ludicrous extremes, even as it plays Robin Hood with the property of others.

The big question is, will Page learn anything from Judge Chin’s slap? Will he be ever so slightly chastened, a bit more willing to take seriously views that conflict with his own, or will he view the ruling as just another example of the benightedness of those who don’t share his perspective? The answer may well determine whether he succeeds as Google’s new chief.

Sanctuary

For the past seven years, the Belgian photographer Sebastian Schutyser has spent his winters wandering through northern Spain taking pictures of small, ancient churches and chapels – hermitages, or “ermitas,” which stand isolated and sometimes ruined in the surrounding landscape. To capture the “aura” of the buildings, Schutyser found it necessary to abandon his “expensive cameras and sophisticated lenses” and instead use a simple pinhole camera. The resulting images are mysterious, beautiful, and moving, and I find that they also carry considerable metaphorical weight. Schutyser has given me permission to share a few of the photographs with Rough Type’s readers; many more of the images can be seen here. Following the images is a brief essay by Schutyser in which he describes his “ermita” series.

[Eight photographs by Sebastian Schutyser appear here; they are identified, from the top, in the caption below.]

[From the top: Nuestra Señora de los Dolores; Mare de Déu de las Neus; Mare de Déu de la Pertusa; Nuestra Señora del Barrio; San Esteban de Viguera; San Pedro de la Nave; Santissima Trinidad de Iturgoyen; Virgen de la Lagunas. Photographs by Sebastian Schutyser. All rights reserved. Displayed with the permission of the photographer.]

The Spanish word ermita [English: hermitage] has a similar structure and meaning in all languages derived from Latin. It always refers to an uninhabited or isolated place, a location for spiritual retreat. In Romance languages it comes from the Latin word eremus, tracing back to the Greek eremos, which means deserted.

In Spain, where the hermitages presented in this preview have been photographed, their use has shifted throughout the centuries, but they have always been isolated sanctuaries or chapels. Hermits inhabited them in seclusion or, at other times, in small groups. Other hermitages were built by pilgrims, who tried to invoke divine protection on their journeys. Finally, some hermitages were erected for pastoral cults, or to house religious brotherhoods. At present many still have the cult of a saint celebrated in them once a year.

Underneath lies a history of cultural blending and fierce struggle between Christians and Muslims. The Moorish invasion in 711 abruptly severed the early artistic development of the Visigoths on the Iberian Peninsula. Only a few Visigothic sanctuaries have survived, but they are remarkable for their restrained beauty. The Reconquista, when Christian kingdoms fought to eliminate Islamic rule in Spain, began soon after and would take more than seven centuries to be completed. During that time a new impulse in religious architecture became known as Mozarabic. This was the name given to Christians living under Moorish rule. Under rising Muslim pressure they increasingly migrated north to reconquered territories. Their skills were strongly influenced by the Islamic arts and culture, and they built some of the most refined hermitages in Spain. From the eleventh century the triumphant birth of Romanesque architecture would leave a far more monumental mark, and become an architectural reflection of a conquering church. Hermitages often served as links in military strategies, or as vectors of repopulation in captured territories.

Today what we hold to be a hermitage is an idealized concept. Most of us still think of it as a hermit’s dwelling, and even imagine a saintly and bearded man living in it. Of the 575 ermitas I have visited and photographed, only one was actually occupied by a present-day hermit: a young fellow wearing an “I love New York” T-shirt, and yes, a modest beard. Many of the structures depicted in this work were not intended as hermits’ cells at the time they were erected. The truth is more plain. Flourishing medieval communities built their churches according to their numbers and economic strength; decaying ones ended in abandonment. The rural exodus in modern Spain has left thousands of ghost villages behind. Very often only the church – built to last – remained while the other buildings crumbled to nothing. These churches de facto became ermitas in the proper sense of the word.

It seems that what really has withstood the erosion of time is a mental construction: the hermitage as the ultimate refuge from modern madness. In an era in which people feel isolated if the battery of their cell phone has died, the notion of spending years without anything but incidental contact with the world is staggering. In Spain this idea has crystallized into an unequalled number of isolated sanctuaries. Vast stretches of thinly populated land and the ruggedness of the terrain have prevented their urban absorption. In the last few decades too many of them have dropped into a terrible state of abandonment or, worse, become subject to destructive theft and vandalism. For all that, the spiritual resonance of these ermitas has survived.

These millennium-old constructions have been the waypoints of my quest for a contemporary view of the concept ermita. For seven winters I have roamed the innards of rural Northern Spain in a basic camper. The solitude and harsh weather conditions taught me more about a hermit’s life than the bibliographical research for this project did. It was a profound experience, allowing me to understand much better the transformations these architectural volumes in desolate landscapes undergo by winter light. The absence of company and all other diversions sharpened the senses…

The transcription of the word ermita into a visual language is the axis around which this work has evolved. It forced me to keep it simple, and discard anything redundant. My expensive cameras and sophisticated lenses were the first to go. There is no need for analytical sharpness while trying to make an abstraction perceptible. Instead I used the most primitive form of a photographic device, the camera obscura or pinhole camera. The choice of this tool, in fact no more than a wooden box with a tiny hole in it, was essential to finding the right visual texture. The resulting photographs have the sort of clarity that comes from staring at an object for a long time, while also slightly blurring and distorting the world around the edges—an almost hallucinatory feel that one might get from long periods of isolation.

The hermitages in these photographs were all chosen for their aura rather than their cultural-historical importance. Some of them are cracked open by time, but still remain hermetic and austere. The almost complete absence of windows gives the buildings a sepulchral quality which is often at odds with the open nature of the land. They seem both alien and organic at the same time. Others are overgrown with vegetation, retreating even more into their surroundings. They become a fusion between nature and the spiritual footprint of man. As they blend with the empty landscapes in which they stand, the original meaning of the word ermita emerges.

But the real key to the success of this work was the light. Nothing is as plain as a sunny day under a blue sky. Something more complex was needed to express the emotional charge of the subject. It had to reflect desolation but not desperation. A wide palette of meteorological conditions was at my disposal. Northern Spain is a cold place in winter! Just as tough physical conditions led hermits to a higher spiritual awareness, photographing with long exposures during rain or snow spells made a certain radiance apparent in the images. While infinite grey skies set the canvas of their isolation, this luminescence brings out the true spirit of these humble sanctuaries.

I have often wondered what I was hoping to achieve, while I was balancing in icy wet winds on a small ladder holding on to my tripod with one hand and clutching an umbrella over my equipment with the other. The answer lies of course in these photographs, but I believe it also matters in what manner they were created. The use of a pinhole camera was not merely a technical option, but also a philosophical choice. Indeed, a poor man’s camera for a poor man’s church.

Nothing much happened

“If you look at the history of the world, up until 1700 nothing much happened.” That’s what Karl Marx said to Friedrich Engels when the two first met, at a cafe in Paris, in 1844. No, I’m kidding. The guy who actually spoke those words is Hal Varian, Google’s chief economist, and he spoke them just a few days ago. What roused history from its millennia-long stupor – what finally made things happen – was, in Varian’s view, the steam engine, the technology that jump-started “the wonderful Industrial Revolution,” which in turn began to lift “GDP growth per capita.” History, in the Googley view, isn’t about what people do; it’s about what they output with the help of machines. Before 1700 you could see history everywhere except in the productivity statistics.

The Google folks are Marxists in their historical materialism, but they seem blind to class-related phenomena such as the rapidly growing divide in wealth. Theirs is a happy, trickle-down world. “What rich people have now,” Varian says, “middle class people will have in twenty years.” What’s he talking about? Private jets? Ranches in Montana? Income growth?

What the steam engine did for industrial output, Varian implies, the Google search engine is doing for intellectual output:

There’s a recent study out of the University of Michigan, where they had a team of students find answers to a set of questions using materials in the campus library. Then another team had to answer the same set of questions using Google. It took them 7 minutes to answer the questions on Google and 22 minutes to answer them in the library. Think about all the time saved! Thirty years ago, getting answers was really expensive, so we asked very few questions. Now getting answers is cheap, so we ask billions of questions a day, like “what is Jennifer Aniston having for breakfast?” We would have never asked that 30 years ago.

Lord knows it’s great that we can answer well-defined questions a lot more quickly today than we could 20 years ago, and that that allows us to ask more, and more-trivial, questions in the course of a day than we could before, but Varian’s desire to apply measures of productivity to the life of the mind also testifies to the narrowness of Google’s view. It values the measurable over the nonmeasurable, and what it values most of all are those measurable variables that are increasing thanks to recent technological advances. In other words, it stacks history’s deck. How did the University of Michigan researchers come up with the questions that they had their subjects find answers to? They “obtained a random sample of 2515 queries from a major search engine.” Ha!

Maybe the question we should be asking, not of Google but of ourselves, is what types of questions the Net is encouraging us to ask. Should human thought be gauged by its output or by its quality? That question might actually propel one into the musty depths of a library, where “time saved” is not always the primary concern.

A few years after Marx kicked the bucket, Engels observed, “Marx and I are ourselves partly to blame for the fact that the younger people sometimes lay more stress on the economic side than is due to it.” But it was Shakespeare, in one of those empty years before 1700, who made the point more eloquently when he had Hamlet say, to his university buddy, “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.”

Situational overload and ambient overload

This post, along with seventy-eight others, is collected in the book Utopia Is Creepy.

“It’s not information overload. It’s filter failure.” That was the main theme of a thoughtful and influential talk that Clay Shirky gave at a technology conference back in 2008. It’s an idea that’s easy to like both because it feels intuitively correct and because it’s reassuring: better filters will help reduce information overload, and better filters are things we can actually build. Information overload isn’t an inevitable side effect of information abundance. It’s a problem that has a solution. So let’s roll up our sleeves and start coding.

There was one thing that bugged me, though, about Shirky’s idea, and it was this paradox: The quality and speed of our information filters have been improving steadily for a few centuries, and have been improving extraordinarily quickly for the last two decades, and yet our sense of being overloaded with information is stronger than ever. If, as Shirky argues, improved filters will reduce overload, then why haven’t they done so up until now? Why don’t we feel that information overload is subsiding as a problem rather than getting worse? The reason, I’ve come to believe, is that Shirky’s formulation gets it precisely backwards. Better filters don’t mitigate information overload; they intensify it. It would be more accurate to say: “It’s not information overload. It’s filter success.”

But let me back up a little, because it’s actually more complicated than that. One of the traps we fall into when we talk about information overload is that we’re usually talking about two very different things as if they were one thing. Information overload actually takes two forms, which I’ll call situational overload and ambient overload, and they need to be treated separately.

Situational overload is the needle-in-the-haystack problem: You need a particular piece of information – in order to answer a question of one sort or another – and that piece of information is buried in a bunch of other pieces of information. The challenge is to pinpoint the required information, to extract the needle from the haystack, and to do it as quickly as possible. Filters have always been pretty effective at solving the problem of situational overload. The introduction of indexes and concordances – made possible by the earlier invention of alphabetization – helped solve the problem with books. Card catalogues and the Dewey decimal system helped solve the problem with libraries. Train and boat schedules helped solve the problem with transport. The Readers’ Guide to Periodical Literature helped solve the problem with magazines. And search engines and other computerized navigational and organizational tools have helped solve the problem with online databases.

Whenever a new information medium comes along, we tend to quickly develop good filtering tools that enable us to sort and search the contents of the medium. That’s as true today as it’s ever been. In general, I think you could make a strong case that, even though the amount of information available to us has exploded in recent years, the problem of situational overload has continued to abate. Yes, there are still frustrating moments when our filters give us the hay instead of the needle, but for most questions most of the time, search engines and other digital filters, or software-based, human-powered filters like email or Twitter, are able to serve up good answers in an eyeblink or two.

Situational overload is not the problem. When we complain about information overload, what we’re usually complaining about is ambient overload. This is an altogether different beast. Ambient overload doesn’t involve needles in haystacks. It involves haystack-sized piles of needles. We experience ambient overload when we’re surrounded by so much information that is of immediate interest to us that we feel overwhelmed by the neverending pressure of trying to keep up with it all. We keep clicking links, keep hitting the refresh key, keep opening new tabs, keep checking email in-boxes and RSS feeds, keep scanning Amazon and Netflix recommendations – and yet the pile of interesting information never shrinks.

The cause of situational overload is too much noise. The cause of ambient overload is too much signal.

The great power of modern digital filters lies in their ability to make information that is of inherent interest to us immediately visible to us. The information may take the form of personal messages or updates from friends or colleagues, broadcast messages from experts or celebrities whose opinions or observations we value, headlines and stories from writers or publications we like, alerts about the availability of various other sorts of content on favorite subjects, or suggestions from recommendation engines – but it all shares the quality of being tailored to our particular interests. It’s all needles. And modern filters don’t just organize that information for us; they push the information at us as alerts, updates, streams. We tend to point to spam as an example of information overload. But spam is just an annoyance. The real source of information overload, at least of the ambient sort, is the stuff we like, the stuff we want. And as filters get better, that’s exactly the stuff we get more of.

It’s a mistake, in short, to assume that as filters improve they have the effect of reducing the information we have to look at. As today’s filters improve, they expand the information we feel compelled to take notice of. Yes, they winnow out the uninteresting stuff (imperfectly), but they deliver a vastly greater supply of interesting stuff. And precisely because the information is of interest to us, we feel pressure to attend to it. As a result, our sense of overload increases. This is not an indictment of modern filters. They’re doing precisely what we want them to do: find interesting information and make it visible to us. But it does mean that if we believe that improving the workings of filters will save us from information overload, we’re going to be very disappointed. The technology that creates the problem is not going to make the problem go away. If you really want a respite from information overload, pray for filter failure.

Bottom line: When the amount of information available to be filtered is effectively unlimited, as is the case on the Net, then every improvement in the quality of filters will make information overload worse.

Killing Mnemosyne

It was, in retrospect, inevitable that once we began referring to the data stores of computers as “memory,” we would begin to confuse machine memory with the biological memory inside our minds. At the moment, though, there seems to be a renewed interest in the remarkable, and not at all machinelike, workings of biological memory, due at least in part to the popularity of Joshua Foer’s new book Moonwalking with Einstein. When I was writing The Shallows, the research that was most fascinating and enlightening to me came when I looked into what we know (and don’t know) about human memory and its role in our thinking and the development of our sense of self. (An excellent book on the science of memory is Eric Kandel’s In Search of Memory.) Here’s an excerpt from the start of “Search, Memory,” the chapter of The Shallows devoted to this subject.

* * *

Socrates was right. As people grew accustomed to writing down their thoughts and reading the thoughts others had written down, they became less dependent on the contents of their own memory. What once had to be stored in the head could instead be stored on tablets and scrolls or between the covers of codices. People began, as the great orator had predicted, to call things to mind not “from within themselves, but by means of external marks.” The reliance on personal memory diminished further with the spread of the letterpress and the attendant expansion of publishing and literacy. Books and journals, at hand in libraries or on the shelves in private homes, became supplements to the brain’s biological storehouse. People didn’t have to memorize everything anymore. They could look it up.

But that wasn’t the whole story. The proliferation of printed pages had another effect, which Socrates didn’t foresee but may well have welcomed. Books provided people with a far greater and more diverse supply of facts, opinions, ideas, and stories than had been available before, and both the method and the culture of deep reading encouraged the commitment of printed information to memory. In the seventh century, Isidore, the bishop of Seville, remarked how reading “the sayings” of thinkers in books “render[ed] their escape from memory less easy.” Because every person was free to chart his own course of reading, to define his own syllabus, individual memory became less of a socially determined construct and more the foundation of a distinctive perspective and personality. Inspired by the book, people began to see themselves as the authors of their own memories. Shakespeare has Hamlet call his memory “the book and volume of my brain.”

In worrying that writing would enfeeble memory, Socrates was, as the Italian novelist and scholar Umberto Eco says, expressing “an eternal fear: the fear that a new technological achievement could abolish or destroy something that we consider precious, fruitful, something that represents for us a value in itself, and a deeply spiritual one.” The fear in this case turned out to be misplaced. Books provide a supplement to memory, but they also, as Eco puts it, “challenge and improve memory; they do not narcotize it.”

The Dutch humanist Desiderius Erasmus, in his 1512 textbook De Copia, stressed the connection between memory and reading. He urged students to annotate their books, using “an appropriate little sign” to mark “occurrences of striking words, archaic or novel diction, brilliant flashes of style, adages, examples, and pithy remarks worth memorizing.” He also suggested that every student and teacher keep a notebook, organized by subject, “so that whenever he lights on anything worth noting down, he may write it in the appropriate section.” Transcribing the excerpts in longhand, and rehearsing them regularly, would help ensure that they remained fixed in the mind. The passages were to be viewed as “kinds of flowers,” which, plucked from the pages of books, could be preserved in the pages of memory.

Erasmus, who as a schoolboy had memorized great swathes of classical literature, including the complete works of the poet Horace and the playwright Terence, was not recommending memorization for memorization’s sake or as a rote exercise for retaining facts. To him, memorizing was far more than a means of storage. It was the first step in a process of synthesis, a process that led to a deeper and more personal understanding of one’s reading. He believed, as the classical historian Erika Rummel explains, that a person should “digest or internalize what he learns and reflect rather than slavishly reproduce the desirable qualities of the model author.” Far from being a mechanical, mindless process, Erasmus’s brand of memorization engaged the mind fully. It required, Rummel writes, “creativeness and judgment.”

Erasmus’s advice echoed that of the Roman Seneca, who also used a botanical metaphor to describe the essential role that memory plays in reading and in thinking. “We should imitate bees,” Seneca wrote, “and we should keep in separate compartments whatever we have collected from our diverse reading, for things conserved separately keep better. Then, diligently applying all the resources of our native talent, we should mingle all the various nectars we have tasted, and then turn them into a single sweet substance, in such a way that, even if it is apparent where it originated, it appears quite different from what it was in its original state.” Memory, for Seneca as for Erasmus, was as much a crucible as a container. It was more than the sum of things remembered. It was something newly made, the essence of a unique self.

Erasmus’s recommendation that every reader keep a notebook of memorable quotations was widely and enthusiastically followed. Such notebooks, which came to be called “commonplace books,” or just “commonplaces,” became fixtures of Renaissance schooling. Every student kept one. By the seventeenth century, their use had spread beyond the schoolhouse. Commonplaces were viewed as necessary tools for the cultivation of an educated mind. In 1623, Francis Bacon observed that “there can hardly be anything more useful” as “a sound help for the memory” than “a good and learned Digest of Common Places.” By aiding the recording of written works in memory, he wrote, a well-maintained commonplace “supplies matter to invention.” Through the eighteenth century, according to American University linguistics professor Naomi Baron, “a gentleman’s commonplace book” served “both as a vehicle for and a chronicle of his intellectual development.”

The popularity of commonplace books ebbed as the pace of life quickened in the nineteenth century, and by the middle of the twentieth century memorization itself had begun to fall from favor. Progressive educators banished the practice from classrooms, dismissing it as a vestige of a less enlightened time. What had long been viewed as a stimulus for personal insight and creativity came to be seen as a barrier to imagination and then simply as a waste of mental energy. The introduction of new storage and recording media throughout the last century—audiotapes, videotapes, microfilm and microfiche, photocopiers, calculators, computer drives—greatly expanded the scope and availability of “artificial memory.” Committing information to one’s own mind seemed ever less essential. The arrival of the limitless and easily searchable data banks of the Internet brought a further shift, not just in the way we view memorization but in the way we view memory itself. The Net quickly came to be seen as a replacement for, rather than just a supplement to, personal memory. Today, people routinely talk about artificial memory as though it’s indistinguishable from biological memory.

Clive Thompson, the Wired writer, refers to the Net as an “outboard brain” that is taking over the role previously played by inner memory. “I’ve almost given up making an effort to remember anything,” he says, “because I can instantly retrieve the information online.” He suggests that “by offloading data onto silicon, we free our own gray matter for more germanely ‘human’ tasks like brainstorming and daydreaming.” David Brooks, the popular New York Times columnist, makes a similar point. “I had thought that the magic of the information age was that it allowed us to know more,” he writes, “but then I realized the magic of the information age is that it allows us to know less. It provides us with external cognitive servants—silicon memory systems, collaborative online filters, consumer preference algorithms and networked knowledge. We can burden these servants and liberate ourselves.”

Peter Suderman, who writes for the American Scene, argues that, with our more or less permanent connections to the Internet, “it’s no longer terribly efficient to use our brains to store information.” Memory, he says, should now function like a simple index, pointing us to places on the Web where we can locate the information we need at the moment we need it: “Why memorize the content of a single book when you could be using your brain to hold a quick guide to an entire library? Rather than memorize information, we now store it digitally and just remember what we stored.” As the Web “teaches us to think like it does,” he says, we’ll end up keeping “rather little deep knowledge” in our own heads. Don Tapscott, the technology writer, puts it more bluntly. Now that we can look up anything “with a click on Google,” he says, “memorizing long passages or historical facts” is obsolete. Memorization is “a waste of time.”

Our embrace of the idea that computer databases provide an effective and even superior substitute for personal memory is not particularly surprising. It culminates a century-long shift in the popular view of the mind. As the machines we use to store data have become more voluminous, flexible, and responsive, we’ve grown accustomed to the blurring of artificial and biological memory. But it’s an extraordinary development nonetheless. The notion that memory can be “outsourced,” as Brooks puts it, would have been unthinkable at any earlier moment in our history. For the Ancient Greeks, memory was a goddess: Mnemosyne, mother of the Muses. To Augustine, it was “a vast and infinite profundity,” a reflection of the power of God in man. The classical view remained the common view through the Middle Ages, the Renaissance, and the Enlightenment—up to, in fact, the close of the nineteenth century. When, in an 1892 lecture before a group of teachers, William James declared that “the art of remembering is the art of thinking,” he was stating the obvious. Now, his words seem old-fashioned. Not only has memory lost its divinity; it’s well on its way to losing its humanness. Mnemosyne has become a machine.

The shift in our view of memory is yet another manifestation of our acceptance of the metaphor that portrays the brain as a computer. If biological memory functions like a hard drive, storing bits of data in fixed locations and serving them up as inputs to the brain’s calculations, then offloading that storage capacity to the Web is not just possible but, as Thompson and Brooks argue, liberating. It provides us with a much more capacious memory while clearing out space in our brains for more valuable and even “more human” computations. The analogy has a simplicity that makes it compelling, and it certainly seems more “scientific” than the suggestion that our memory is like a book of pressed flowers or the honey in a beehive’s comb. But there’s a problem with our new, post-Internet conception of human memory. It’s wrong.

Distractions and decisions

In a new article, Sharon Begley, Newsweek’s science writer, surveys the growing body of evidence indicating that an overabundance of information handicaps our ability to make smart decisions. One of the main causes of the problem seems to be that our conscious mind, which has trouble handling an onslaught of incoming information, seizes up when overloaded:

Angelika Dimoka, director of the Center for Neural Decision Making at Temple University, … recruited volunteers to try their hand at combinatorial auctions, and as they did she measured their brain activity with fMRI. As the information load increased, she found, so did activity in the dorsolateral prefrontal cortex, a region behind the forehead that is responsible for decision making and control of emotions. But as the researchers gave the bidders more and more information, activity in the dorsolateral PFC suddenly fell off, as if a circuit breaker had popped. “The bidders reach cognitive and information overload,” says Dimoka. They start making stupid mistakes and bad choices because the brain region responsible for smart decision making has essentially left the premises. For the same reason, their frustration and anxiety soar: the brain’s emotion regions—previously held in check by the dorsolateral PFC—run as wild as toddlers on a sugar high. The two effects build on one another. “With too much information,” says Dimoka, “people’s decisions make less and less sense.”

Another reason is that our minds are inclined to give more importance to recent information than to older information, even when the older is actually more important:

The brain is wired to notice change over stasis. An arriving email that pops to the top of your BlackBerry qualifies as a change; so does a new Facebook post. We are conditioned to give greater weight in our decision-making machinery to what is latest, not what is more important or more interesting. “There is a powerful ‘recency’ effect in decision making,” says behavioral economist George Loewenstein of Carnegie Mellon University. “We pay a lot of attention to the most recent information, discounting what came earlier.” Getting 30 texts per hour up to the moment when you make a decision means that most of them make all the impression of a feather on a brick wall, whereas Nos. 29 and 30 assume outsize importance, regardless of their validity. “We’re fooled by immediacy and quantity and think it’s quality,” says Eric Kessler, a management expert at Pace University’s Lubin School of Business. “What starts driving decisions is the urgent rather than the important.”

The fact that we think less clearly when we’re distracted shouldn’t be a big surprise, but perhaps the hard evidence Begley reviews will give pause to those who labor under the misapprehension that, when it comes to information, more is always better.