Monthly Archives: January 2011

Tools of the mind

One of the things I try to do in The Shallows is to place the Internet into the long history of technologies that have shaped human thought – what I term “intellectual technologies.” In this clip from an interview I did recently with Big Think in New York, I discuss three of those technologies: the map, the mechanical clock, and the printed book.

Moderating abundance

Every year, Edge.org poses a question to a bunch of folks and then publishes the answers. This year’s question is (in so many words): What scientific concept would have big practical benefits if it became more broadly known? Here’s my answer:

Cognitive load

You’re sprawled on the couch in your living room, watching a new episode of Justified on the tube, when you think of something you need to do in the kitchen. You get up, take ten quick steps across the carpet, and then, just as you reach the kitchen door – poof! – you realize you’ve already forgotten what it was you got up to do. You stand befuddled for a moment, then shrug your shoulders and head back to the couch.

Such memory lapses happen so often that we don’t pay them much heed. We write them off as “absentmindedness” or, if we’re getting older, “senior moments.” But the incidents reveal a fundamental limitation of our minds: the tiny capacity of our working memory. Working memory is what brain scientists call the short-term store of information where we hold the contents of our consciousness at any given moment – all the impressions and thoughts that flow into our mind as we go through a day. In the 1950s, the psychologist George Miller famously argued that our brains can hold only about seven pieces of information simultaneously. Even that figure may be too high. Some brain researchers now believe that working memory has a maximum capacity of just three or four elements.

The amount of information entering our consciousness at any instant is referred to as our cognitive load. When our cognitive load exceeds the capacity of our working memory, our intellectual abilities take a hit. Information zips into and out of our mind so quickly that we never gain a good mental grip on it. (Which is why you can’t remember what you went to the kitchen to do.) The information vanishes before we’ve had an opportunity to transfer it into our long-term memory and weave it into knowledge. We remember less, and our ability to think critically and conceptually weakens. An overloaded working memory also tends to increase our distractedness. After all, as the neuroscientist Torkel Klingberg has pointed out, “we have to remember what it is we are to concentrate on.” Lose your hold on that, and you’ll find “distractions more distracting.”

Developmental psychologists and educational researchers have long used the concept of cognitive load in designing and evaluating pedagogical techniques. When you give a student too much information too quickly, they know, comprehension degrades and learning suffers. But now that all of us – thanks to the incredible speed and volume of modern digital communication networks and gadgets – are inundated with more bits and pieces of information than ever before, everyone would benefit from having an understanding of cognitive load and how it influences memory and thinking. The more aware we are of how small and fragile our working memory is, the more we’ll be able to monitor and manage our cognitive load. We’ll become more adept at controlling the flow of the information coming at us.

There are times when you want to be awash in messages and other info-bits. The resulting sense of connectedness and stimulation can be exciting and pleasurable. But it’s important to remember that, when it comes to the way your brain works, information overload is not just a metaphor; it’s a physical state. When you’re engaged in a particularly important or complicated intellectual task, or when you simply want to savor an experience or a conversation, it’s best to turn the information faucet down to a trickle.

Short is the new long

“The general point is this,” writes economist Tyler Cowen, the infovore’s infovore, in his 2009 book Create Your Own Economy:

When access [to information] is easy, we tend to favor the short, the sweet, and the bitty. When access is difficult, we tend to look for large-scale productions, extravaganzas, and masterpieces. Through this mechanism, costs of access influence our interior lives. There are usually both “small bits” and “large bits” of culture within our grasp. High costs of access shut out the small bits – they’re not worthwhile – and therefore shunt us toward the large bits. Low costs of access give us a diverse mix of small and large bits, but in relative terms, it is pretty easy to enjoy the small bits.

The current trend – as it has been running for decades – is that a lot of our culture is coming in shorter and smaller bits … To be sure, not everything is shorter and to the point. The same wealth that encourages brevity also enables very long performances and spectacles … There is an increasing diversity of length, but when it comes to what is culturally central, shortness is the basic trend.

I think Cowen’s analysis is essentially correct, and he’s certainly right to point out how the cost of information influences the consumption of information. (There’s also neurological evidence suggesting that, when confronted with a diversity of easily available information, our brains will prefer to sample lots of small bits of new information rather than focus for a long time on something more substantial.) If you look at the statistics of information consumption, you see considerable evidence of this decades-long trend toward ever bittier degrees of bittiness. Measures of the average length of pretty much any cultural product – magazine and newspaper articles, TV news segments and soundbites, books, personal correspondence, commercials, motion pictures – reveal a steady and often cumulatively dramatic compression in size. Studies of reading and research behavior also suggest that we are spending less and less time with each object of our attention. A survey by library science professor Ziming Liu, published in the Journal of Documentation, found, for example, that between 1993 and 2003 – a period characterized by a rapid shift from print reading to screen reading – people’s reading habits changed substantially, with a rapid increase in “browsing and scanning” and a falloff in “in-depth reading.”

More recently, we’ve seen a particularly dramatic compression in the average length of correspondence and other personal messages, as the production and consumption of Facebook updates, text messages, and tweets have exploded. This phenomenon, it would seem natural to assume, is further accelerating the bittiness trend.

But that’s not how the technology writer Clive Thompson sees it. As he describes in a new Wired column, he has a hunch, or at least an inkling, that the rise of Facebook and Twitter is actually increasing our appetite for longer stuff and, more surprising still, making us more contemplative. Even as we ratchet up our intake of “short takes,” he argues, we’re also increasing our intake of “long takes,” and the only thing we’re consuming less of is “middle takes.” “I think,” he writes, that “the torrent of short-form thinking is actually a catalyst for more long-form meditation.” Thompson never describes precisely how or why this catalytic action, through which the swirl of info-bits deepens our engagement with longer-form material, plays out, but it seems to involve a change in how society makes sense of events:

When something newsworthy happens today—Brett Favre losing to the Jets, news of a new iPhone, a Brazilian election runoff—you get a sudden blizzard of status updates. These are just short takes, and they’re often half-baked or gossipy and may not even be entirely true. But that’s OK; they’re not intended to be carefully constructed. Society is just chewing over what happened, forming a quick impression of What It All Means.

The long take is the opposite: It’s a deeply considered report and analysis, and it often takes weeks, months, or years to produce. It used to be that only traditional media, like magazines or documentaries or books, delivered the long take. But now, some of the most in-depth stuff I read comes from academics or businesspeople penning big blog essays, Dexter fans writing 5,000-word exegeses of the show, and nonprofits like the Pew Charitable Trusts producing exhaustively researched reports on American life.

The logic here seems murky to me. Pointing to a few examples of how some new sources of long-form writing have emerged online says nothing about trends in consumption. As Tyler Cowen suggests, it’s a fallacy to assume that the availability of long-form works means that our reading and viewing of long-form works are increasing. Reducing the cost of information production, Cowen notes, has increased the diversity of the forms of information available (across the entire spectrum of length, from the micro to the jumbo), but we have gravitated to the shorter forms, not the longer ones. Even on the production side, Thompson is probably overstating the case for length by highlighting new sources of long-form writing (e.g., the blogs of Dexter fans) while ignoring the whittling away of many traditional sources of long-form content (e.g., popular magazines).

None of this means that Thompson’s optimistic hunch is necessarily wrong – I personally hope he’s right – but it does mean that, in the absence of real evidence supporting his case, we probably shouldn’t take his hunch as anything more than a hunch. Up to now, the evidence has pointed pretty strongly in the opposite direction, and it remains difficult for me to see how the recent explosion of micro-messages will catalyze a reversal of the long-term trend toward bittiness.

Thompson ends his column – itself a “middle take” – by pointing to the recent development of online reading tools, like Instapaper and Readability, that, by isolating digital text from the web’s cacophony of distractions, encourage deeper, more attentive reading. I agree with him that the appearance of these tools is a welcome sign. At the very least, they reveal a growing awareness that the web, in its traditional form, is deeply flawed as a reading medium, and they suggest a yearning to escape what Cory Doctorow has termed our “ecosystem of interruption technologies.” What remains to be seen is how broadly and intensively these tools will actually be used. Will they really mark a change in our habits, or will they, like home exercise machines, stand as monuments to wishful thinking? (To return to Cowen’s point, availability does not necessarily imply use.) My sense right now is that they remain peripheral technologies, particularly when compared to tools of bittiness like Facebook or texting, but it’s not impossible that they’ll become more popular.

In the course of his argument, I should note, Thompson does offer one piece of seemingly hard evidence to support his case. It concerns the length of blog posts: “One survey found that the most popular blog posts today are the longest ones, 1,600 words on average.” As it turns out, though, this “survey” – you can read it here – is pretty much worthless. It consisted of a guy asking four bloggers to list their five “most linked to” posts and then calculating the mean length of those 20 posts (1,600 words). This exercise tells us next to nothing about online reading habits, and it’s a stretch even to suggest that it shows that “the most popular blog posts today are the longest ones.” Indeed, if you look at some of the long posts highlighted in the study, they actually take the form not of “deeply considered” long takes but of cursory lists of short takes (representative title: “101 Ways to Build Link Popularity”).

What was most interesting to me about Thompson’s reference to this survey was the implication that he considers a 1,600-word article to qualify as a “long take.” Perhaps what Thompson is actually picking up on, and helping to propel forward, is a general downward trend in our expectations about the length of content. We’re shrinking our definition of long-form writing to fit the limits of our ever more distracted reading habits. What would have once been considered a remark is now considered a “short take”; what would once have been considered a “short take” is now a “middle take”; and what once would have been considered a “middle take” is now seen as a “long take.” As long as we take this path, we’ll always be able to reassure ourselves that long takes haven’t gone out of fashion.

The quality of allusion is not google

Last Saturday, Adam Kirsch, the talented TNR penman, accomplished a rare feat. His cherry-scented byline marked the pages of both the Wall Street Journal and the New York Times. In the Times piece, he tied the Congressional whitewashing of the Constitution to the latest attempt to give poor old Huck Finn a thorough scrubbing. Upshot: “To believe that American institutions were ever perfect makes it too easy to believe that they are perfect now. Both assumptions, one might say, are sins against the true spirit of the Constitution.” Yes, one might very well say that. One might even say “one might say,” if one wanted, say, to allude to one’s own words as if they were another’s.

Which brings us to the Journal column, titled, promisingly, “Literary Allusion in the Age of Google.” Here, one not only might but must say, Kirsch goes agley.

The piece begins well, as things that go agley so often do. Kirsch describes how the art of allusion has waned along with the reading of the classics and the Bible. As one’s personal store of literary knowledge shrinks, so too does one’s capacity for allusiveness. But Kirsch also believes that, as our shared cultural kitty has come to resemble Mother Hubbard’s cupboard, the making of a literary allusion has turned into an exercise in elitism. Rather than connecting writer and reader, it places distance between them. It’s downright undemocratic. Says Kirsch:

it is almost impossible to be confident that your audience knows the same books you do. It doesn’t matter whether you slip in “April is the cruelest month,” or “To be or not to be,” or even “The Lord is my shepherd”—there’s a good chance that at least some readers won’t know what you’re quoting, or that you’re quoting at all. What this means is that, in our fragmented literary culture, allusion is a high-risk, high-reward rhetorical strategy. The more recondite your allusion, the more gratifying it will be to those who recognize it, and the more alienating it will be to those who don’t.

No need to fret, though. The search engine is making the world safe again for literary allusions:

In the last decade or so, however, a major new factor has changed this calculus. That is the rise of Google, which levels the playing field for all readers. Now any quotation in any language, no matter how obscure, can be identified in a fraction of a second. When T.S. Eliot dropped outlandish Sanskrit and French and Latin allusions into “The Waste Land,” he had to include notes to the poem, to help readers track them down. Today, no poet could outwit any reader who has an Internet connection. As a result, allusion has become more democratic and more generous.

Reader, rejoice! Literature, like the world, has been flattened.

It’s a dicey proposition, these days, to take issue with a cultural democratizer, a leveler of playing fields, but there are big problems with Kirsch’s analysis, and they stem from his desire to see “allusion” as being synonymous with “citation” or “quotation.” An allusion is not a citation. It’s not a direct quotation. It’s not a pointer. It’s not a shout-out. And it most certainly is not a hyperlink. An allusion is a hint, a suggestion, a tease, a wink. The reference it contains is implicit rather than explicit. Its essential quality is playfulness; the word allusion derives from the Latin verb alludere, meaning “to play with” or “to joke around with.”

The lovely fuzziness of a literary allusion – the way it blurs the line between speaker and source – is the essence of its art. It’s also what makes the literary allusion an endangered species in the Age of Google. A computerized search engine like Google can swiftly parse explicit connections like citations, quotations, and hyperlinks – it feeds on them as a whale feeds on plankton – but it has little sensitivity to more ethereal connections, to the implicit, the playful, the covert, the fuzzy. Search engines are literal-minded, not literary-minded. Google’s overarching goal is to make culture machine-readable. We’ve all benefited enormously from its pursuit of that goal, but it’s important to remember that Google’s vast field of vision has a very large blind spot. Much of what’s most subtle and valuable in culture – and the allusions of artists fall into this category – is too blurry to be read by machines.

Kirsch says that T. S. Eliot “had to include notes” to “The Waste Land” in order to enable readers to “track down” its many allusions. The truth is fuzzier. The first publications of the poem, in the magazines The Criterion and The Dial, lacked the notes. The notes only appeared when the poem was published as a book, and Eliot later expressed regret that he had included them. The notes, he wrote, “stimulated the wrong kind of interest among the seekers of sources … I regret having sent so many enquirers off on a wild goose chase after Tarot cards and the Holy Grail.” By turning his allusions into mere citations, the notes led readers to see his poem as an intricate intellectual puzzle rather than a profound expression of personal emotion – a confusion that continues to haunt, and hamper, readings of the poem to this day. The beauty of “The Waste Land” lies not in its sources but in its music, which is in large measure the music of allusion, of fragments of distant melodies woven into something new. The more you google “The Waste Land,” Eliot would have warned, the less of it you’ll hear.

Let’s say, to bring in another poet, you’re reading Yeats’s “Easter 1916,” and you reach the lines

And what if excess of love
Bewildered them till they died?

It’s true that you might find the poem even more meaningful, even more moving, if you catch the allusion to Shelley’s “Alastor” (“His strong heart sunk and sickened with excess/Of love …”), but the allusion deepens and enriches Yeats’s poem whether or not you pick up on it. What matters is not that you know “Alastor” but that Yeats knows it, and that his reading of the earlier work, and his emotional connection with it, resonates through his own lyric. And since Yeats provides no clue that he’s alluding to another work, Google would be no help in “tracking down” the source of that allusion. A reader who doesn’t already have an intimate knowledge of “Alastor” would have no reason to Google the lines.

Indeed, for the lines to be Google-friendly, the allusion would have to be transformed into a quotation:

And what if “excess of love”
Bewildered them till they died?

or, worse yet, a hyperlink:

And what if <a href="…">excess of love</a>
Bewildered them till they died?

As soon as an allusion is turned into an explicit citation in this way – as soon as it’s made fit for the Age of Google – it ceases to be an allusion, and it loses much of its emotional resonance. Distance is inserted between speaker and source. The lovely fuzziness is cleaned up, the music lost.

In making an allusion, a writer (or a filmmaker, or a painter, or a composer) is not trying to “outwit” the reader (or viewer, or listener), as Kirsch suggests. Art is not a parlor game. Nor is the artist trying to create a secret elitist code that will alienate readers or viewers. An allusion, when well made, is a profound act of generosity through which an artist shares with the audience a deep emotional attachment with an earlier work or influence. If you see an allusion merely as something to be tracked down, to be Googled, you miss its point and its power. You murder to dissect. An allusion doesn’t become more generous when it’s “democratized”; it simply becomes less of an allusion.

My intent here is not to knock Google, which has unlocked great stores of valuable information for many, many people. My intent is simply to point out that there are many ways to view the world, and that Google offers only one view, and a limited one at that. One of the great dangers we face as we adapt to the Age of Google is that we will all come to see the world through Google goggles, and when I read an article like Kirsch’s, with its redefinition of “allusion” into Google-friendly terms, I sense the increasing hegemony of the Google view. It’s already becoming common for journalists to tailor headlines and stories to fit the limits of search engines. Should writers and other artists yield to the temptation to make their allusions a little more explicit, a little more understandable to literal-minded machines, before we know it allusiveness will have been redefined out of existence.

UPDATE:

Alan Jacobs corrects an overstatement I made in this post:

I think it’s clearly wrong to say that “what matters is not that you know ‘Alastor’ but that Yeats knows it.” It does matter that Yeats knows it — Yeats’s encounter with Shelley strengthens and deepens his verse — but it also matters if the reader does, because if I hear that echo of Shelley I understand better the conversation that Yeats is participating in, and that enriches my experience of his poem and also of Shelley’s. And not incidentally, the enriching power of our knowledge of intellectual tradition is one of Eliot’s key emphases.

I should have written “what matters most” rather than “what matters.” Jacobs is right that allusions draw on and reflect “the enriching power of our knowledge of intellectual tradition,” which is enriching for both writer and reader. But, in the context of a particular poem (or other work of art), the power of an allusion also derives from how deeply the artist has made the earlier work his or her own and hence how seamlessly it becomes part of his or her own work (and emotional and intellectual repertoire). I think this is one of the things Eliot meant when he remarked that mature poets steal while immature poets merely borrow. The pressure of Google, I believe, pushes us to be borrowers rather than thieves, to prize the explicit connection over the implicit one.

Media’s medium

The New Republic is today running my review of Douglas Coupland’s biography Marshall McLuhan: You Know Nothing of My Work! Here’s the start:

One of my favorite YouTube videos is a clip from a Canadian television show in 1968 featuring a debate between Norman Mailer and Marshall McLuhan. The two men, both heroes of the 60s, could hardly be more different. Leaning forward in his chair, Mailer is pugnacious, animated, engaged. McLuhan, abstracted and smiling wanly, seems to be on autopilot. He speaks in canned riddles. “The planet is no longer nature,” he declares, to Mailer’s uncomprehending stare; “it’s now the content of an art work.”

Watching McLuhan, you can’t quite decide whether he was a genius or just had a screw loose. Both impressions, it turns out, are valid. As Douglas Coupland argues in his pithy new biography, McLuhan’s mind was probably situated at the mild end of the autism spectrum. He also suffered from a couple of major cerebral traumas. In 1960, he had a stroke so severe that he was given his last rites. In 1967, just a few months before the Mailer debate, surgeons removed a tumor the size of an apple from the base of his brain. A later procedure revealed that McLuhan had an extra artery pumping blood into his cranium.

Read on.

Bonus: the YouTube clip:

The internet changes everything/nothing

In an essay at Berfrois, Justin E. H. Smith gets at the weird technological totalitarianism that makes the Net so unusual in the history of tools:

The Internet has concentrated once widely dispersed aspects of a human life into one and the same little machine: work, friendship, commerce, creativity, eros. As someone sharply put it a few years ago in an article in Slate or something like that: our work machines and our porn machines are now the same machines. This is, in short, an exceptional moment in history, next to which 19th-century anxieties about the railroad or the automated loom seem frivolous. Looms and cotton gins and similar apparatuses each only did one thing; the Internet does everything.

It is the nuclear option for human culture, unleashed, evidently, without any reflection upon its long-term consequences. I am one of its victims, caught in the initial blast wave. Nothing is the same anymore, not reading, not friendship, not thinking, not love. In my symptoms, however, I resemble more the casualty of an opium war than of a nuclear war: I sit in my dark den and hit the ‘refresh’ button all day and night. When I go out, I take a portable dose in my pocket, in the form of a pocket-sized screen. You might see me hitting ‘refresh’ as I’m crossing the street. You might feel an urge to honk.

And yet perhaps all the Net does is make what was always implicitly virtual explicitly virtual:

If then there is a certain respect in which it makes sense to say that the Internet does not change everything, it is that human social reality was always virtual anyway. I do not mean this in some obfuscating Baudrillardian sense, but rather as a corollary to a thoroughgoing naturalism: human institutions only exist because they appear to humans to exist; nature is entirely indifferent to them. And tools and vehicles only are what they are because people make the uses of them that they do.

Consider the institution of friendship. Every time I hear someone say that Facebook ‘friendship’ should be understood in scare quotes, or that Facebook interaction is not real social interaction, I feel like asking in reply: What makes you think real-world friendships are real? Have you not often felt some sort of amical rapport with a person with whom you interact face-to-face, only to find that in the long run it comes to nothing? How exactly was that fleeting sensation any more real than the discovery and exploration of shared interests and sensibilities with a ‘friend’ one knows only through the mediation of a social-networking site? …

One would do better to trace [the Net] back far further, to holy scripture, to runes and oracle bones, to the discovery of the possibility of reproducing the world through manipulation of signs.

If human culture has always been artificial, isn’t it frivolous to worry about it becoming more artificial?

I’m going to have to mull that over.

The “Like” bribe

Yesterday, I was one of the recipients of an amusing mass email from the long-time tech pundit Guy Kawasaki. He sent it out to promote a new book he’s written, as well as the Facebook fan page for that book. Under the subject line “Free copy of Guy’s first book,” it went as follows:

A long time ago (1987 exactly), I published my first book, The Macintosh Way. I wrote it because I was bursting with idealistic and pure notions about how a company can change the world, and I wanted to spread the gospel …

I recently re-acquired the rights for this book, and I’m making it freely available from the fan page of my upcoming book, Enchantment: The Art of Changing Hearts, Minds, and Actions. To download The Macintosh Way:

1. Go to the fan page.

2. “Like” the page.

3. Click on The Macintosh Way book cover to download the PDF.

Yes, that’s right. The pure-hearted, Apple-cheeked idealism of youth has given way to the crass cynicism of using virtual swag as a bribe to get you to click a Like button. Marketing corrupts, and Facebook marketing corrupts absolutely.


Here, by the way, is how Kawasaki describes his new tome: “The book explains when and why enchantment is necessary and then the pillars of enchantment: likability, trustworthiness, and a great cause.” That’s “likability” in the purely transactional sense, I assume.

Back in elementary school, there was this distinctly unlikable kid who, if you agreed to act like his friend for a day, would let you swim in his family’s swimming pool. Little did we know that he was a cultural pioneer.