Sex, math, code

One topic that book writers and publishers don’t much like to talk about is the recent explosion of bootleg copies of popular books online. And I’m not going to talk about it either. But I am going to point to GalleyCat’s current bestseller list for pirated books, which provides a remarkably clear view of what savvy media pirates spend their time thinking about:

1. 1000 Photoshop Tips and Tricks

2. Advanced Sex: Explicit Positions for Explosive Lovemaking

3. What Did We Use Before Toilet Paper?: 200 Curious Questions

4. Photoshop CS5 All-in-One For Dummies

5. What Rich People Know & Desperately Want to Keep a Secret

6. 101 Short Cuts in Maths Any One Can Do

7. Touch Me There!: A Hands-On Guide to Your Orgasmic Hot Spots

8. How to Blow Her Mind in Bed

9. 1001 Math Problems

10. How To Make People Like You In 90 Seconds Or Less

I’m going to title my next book “The Code of Sex: Ten Secrets for Using Math to Keep Her Satisfied and Hungry for More.” I promise you that it’s going to be the most pirated book of all time.

Cities of the page

The rapid spread of the printing press, after its invention by Gutenberg around 1450, still stands as one of history’s most remarkable examples of technological transformation. Jeremiah Dittmar, an American University professor who has been studying the economic consequences of the early diffusion of printing technology, provides a striking visual representation of the print explosion, showing how, over just 50 years, printing presses spread from a single city – Gutenberg’s Mainz – to more than 200 cities throughout Europe:

[Image: map showing the spread of printing presses from Mainz to more than 200 European cities, 1450–1500]

What makes the diffusion of printing so remarkable is not just that it began in the late Middle Ages, when news, ideas, and people moved exceedingly slowly (by today’s standards), but also that printing encompassed a complex system of devices and processes – not only the press itself but metallurgy, the design and casting of typographical symbols of standardized size, the creation of new oil-based inks, the expertise required to set type and work the press, and so forth – and that the inventions were very much treated as trade secrets. It seems clear that the desire for the products of the press was overwhelming.

There has, up to now, been a lot of uncertainty and controversy surrounding the economic ramifications of the printing press. Dittmar’s new paper, Information Technology and Economic Change: The Impact of the Printing Press, sheds some new light on the question. He studied the relative growth of the cities that were the sites of early presses. He found that the cities that had print shops by the end of the 15th century “grew at least 20 percentage points – and as much as 78 percentage points – more than similar cities” over the course of the next century. That suggests that “the impact of printing accounted for at least 18 and as much as 68 percent of European city growth between 1500 and 1600.” The printing press appears to have had profound economic and demographic effects as well as cultural ones.

Inside out, outside in

Adam Gopnik surveys a year’s worth of books about the Internet in the new New Yorker. “A series of books explaining why books no longer matter is a paradox that Chesterton would have found implausible,” he says, and goes on from there. Like other such New Yorker surveys, reading this one feels something like taking a walk through the woods with a charming, clever, and jaded nature guide – “The squirrel is renowned as an industrious creature, but let’s not forget that it is also a flighty one” – but toward the end Gopnik makes a particularly penetrating point:

What we live in is not the age of the extended mind but the age of the inverted self. … A social network is crucially different from a social circle, since the function of a social circle is to curb our appetites and of a network to extend them. Everything once inside is outside, a click away; much that used to be outside is inside, experienced in solitude. And so the peacefulness, the serenity that we feel away from the Internet, and which all the Better-Nevers [that’s my clan!] rightly testify to, has less to do with being no longer harried by others than with being less oppressed by the force of your own inner life. Shut off your computer, and your self stops raging quite as much or quite as loud.

The idea that social networks have the effect of turning up rather than turning down the volume of our self-consciousness seems to me precisely right. The Net turns the social instinct inward, which ends up fencing in rather than freeing the self.

Tools of the mind

One of the things I try to do in The Shallows is to place the Internet into the long history of technologies that have shaped human thought – what I term “intellectual technologies.” In this clip from an interview I did recently with Big Think in New York, I discuss three of those technologies: the map, the mechanical clock, and the printed book.

Moderating abundance

Every year, Edge.org poses a question to a bunch of folks and then publishes the answers. This year’s question is (in so many words): What scientific concept would have big practical benefits if it became more broadly known? Here’s my answer:

Cognitive load

You’re sprawled on the couch in your living room, watching a new episode of Justified on the tube, when you think of something you need to do in the kitchen. You get up, take ten quick steps across the carpet, and then, just as you reach the kitchen door – poof! – you realize you’ve already forgotten what it was you got up to do. You stand befuddled for a moment, then shrug your shoulders and head back to the couch.

Such memory lapses happen so often that we don’t pay them much heed. We write them off as “absentmindedness” or, if we’re getting older, “senior moments.” But the incidents reveal a fundamental limitation of our minds: the tiny capacity of our working memory. Working memory is what brain scientists call the short-term store of information where we hold the contents of our consciousness at any given moment – all the impressions and thoughts that flow into our mind as we go through a day. In the 1950s, Princeton psychologist George Miller famously argued that our brains can hold only about seven pieces of information simultaneously. Even that figure may be too high. Some brain researchers now believe that working memory has a maximum capacity of just three or four elements.

The amount of information entering our consciousness at any instant is referred to as our cognitive load. When our cognitive load exceeds the capacity of our working memory, our intellectual abilities take a hit. Information zips into and out of our mind so quickly that we never gain a good mental grip on it. (Which is why you can’t remember what you went to the kitchen to do.) The information vanishes before we’ve had an opportunity to transfer it into our long-term memory and weave it into knowledge. We remember less, and our ability to think critically and conceptually weakens. An overloaded working memory also tends to increase our distractedness. After all, as the neuroscientist Torkel Klingberg has pointed out, “we have to remember what it is we are to concentrate on.” Lose your hold on that, and you’ll find “distractions more distracting.”

Developmental psychologists and educational researchers have long used the concept of cognitive load in designing and evaluating pedagogical techniques. When you give a student too much information too quickly, they know, comprehension degrades and learning suffers. But now that all of us – thanks to the incredible speed and volume of modern digital communication networks and gadgets – are inundated with more bits and pieces of information than ever before, everyone would benefit from having an understanding of cognitive load and how it influences memory and thinking. The more aware we are of how small and fragile our working memory is, the more we’ll be able to monitor and manage our cognitive load. We’ll become more adept at controlling the flow of the information coming at us.

There are times when you want to be awash in messages and other info-bits. The resulting sense of connectedness and stimulation can be exciting and pleasurable. But it’s important to remember that, when it comes to the way your brain works, information overload is not just a metaphor; it’s a physical state. When you’re engaged in a particularly important or complicated intellectual task, or when you simply want to savor an experience or a conversation, it’s best to turn the information faucet down to a trickle.

Short is the new long

“The general point is this,” writes economist Tyler Cowen, the infovore’s infovore, in his 2009 book Create Your Own Economy:

When access [to information] is easy, we tend to favor the short, the sweet, and the bitty. When access is difficult, we tend to look for large-scale productions, extravaganzas, and masterpieces. Through this mechanism, costs of access influence our interior lives. There are usually both “small bits” and “large bits” of culture within our grasp. High costs of access shut out the small bits – they’re not worthwhile – and therefore shunt us toward the large bits. Low costs of access give us a diverse mix of small and large bits, but in relative terms, it is pretty easy to enjoy the small bits.

The current trend – as it has been running for decades – is that a lot of our culture is coming in shorter and smaller bits … To be sure, not everything is shorter and to the point. The same wealth that encourages brevity also enables very long performances and spectacles … There is an increasing diversity of length, but when it comes to what is culturally central, shortness is the basic trend.

I think Cowen’s analysis is essentially correct, and he’s certainly right to point out how the cost of information influences the consumption of information. (There’s also neurological evidence suggesting that, when confronted with a diversity of easily available information, our brains will prefer to sample lots of small bits of new information rather than focus for a long time on something more substantial.) If you look at the statistics of information consumption, you see considerable evidence of this decades-long trend toward ever bittier degrees of bittiness. Measures of the average length of pretty much any cultural product – magazine and newspaper articles, TV news segments and soundbites, books, personal correspondence, commercials, motion pictures – reveal a steady and often cumulatively dramatic compression in size. Studies of reading and research behavior also suggest that we are spending less time with each passing object of our attention. A survey by library sciences professor Ziming Liu, published in the Journal of Documentation, found, for example, that between 1993 and 2003 – a period characterized by a rapid shift from print reading to screen reading – people’s reading habits changed substantially, with a rapid increase in “browsing and scanning” and a falloff in “in-depth reading.”

More recently, we’ve seen a particularly dramatic compression in the average length of correspondence and other personal messages, as the production and consumption of Facebook updates, text messages, and tweets have exploded. This phenomenon, it would seem natural to assume, is further accelerating the bittiness trend.

But that’s not how the technology writer Clive Thompson sees it. As he describes in a new Wired column, he has a hunch, or at least an inkling, that the rise of Facebook and Twitter is actually increasing our appetite for longer stuff and, more surprising still, making us more contemplative. Even as we ratchet up our intake of “short takes,” he argues, we’re also increasing our intake of “long takes,” and the only thing we’re consuming less of is “middle takes.” “I think,” he writes, that “the torrent of short-form thinking is actually a catalyst for more long-form meditation.” Thompson never describes precisely how or why this catalytic action, through which the swirl of info-bits deepens our engagement with longer-form material, plays out, but it seems to involve a change in how society makes sense of events:

When something newsworthy happens today—Brett Favre losing to the Jets, news of a new iPhone, a Brazilian election runoff—you get a sudden blizzard of status updates. These are just short takes, and they’re often half-baked or gossipy and may not even be entirely true. But that’s OK; they’re not intended to be carefully constructed. Society is just chewing over what happened, forming a quick impression of What It All Means.

The long take is the opposite: It’s a deeply considered report and analysis, and it often takes weeks, months, or years to produce. It used to be that only traditional media, like magazines or documentaries or books, delivered the long take. But now, some of the most in-depth stuff I read comes from academics or businesspeople penning big blog essays, Dexter fans writing 5,000-word exegeses of the show, and nonprofits like the Pew Charitable Trusts producing exhaustively researched reports on American life.

The logic here seems murky to me. Pointing to a few examples of how some new sources of long-form writing have emerged online says nothing about trends in consumption. As Tyler Cowen suggests, it’s a fallacy to assume that the availability of long-form works means that our reading and viewing of long-form works are increasing. As Cowen points out, reducing the cost of information production has increased the diversity of the forms of information available (across the entire spectrum of length, from the micro to the jumbo), but we have gravitated to the shorter forms, not the longer ones. Even on the production side, Thompson is probably overstating the case for length by highlighting new sources of long-form writing (e.g., the blogs of Dexter fans) but ignoring the whittling away of many traditional sources of long-form content (e.g., popular magazines).

None of this means that Thompson’s optimistic hunch is necessarily wrong – I personally hope he’s right – but it does mean that, in the absence of real evidence supporting his case, we probably shouldn’t take his hunch as anything more than a hunch. Up to now, the evidence has pointed pretty strongly in the opposite direction, and it remains difficult for me to see how the recent explosion of micro-messages will catalyze a reversal of the long-term trend toward bittiness.

Thompson ends his column – itself a “middle take” – by pointing to the recent development of online reading tools, like Instapaper and Readability, that, by isolating digital text from the web’s cacophony of distractions, encourage deeper, more attentive reading. I agree with him that the appearance of these tools is a welcome sign. At the very least, they reveal a growing awareness that the web, in its traditional form, is deeply flawed as a reading medium, and they suggest a yearning to escape what Cory Doctorow has termed our “ecosystem of interruption technologies.” What remains to be seen is how broadly and intensively these tools will actually be used. Will they really mark a change in our habits, or will they, like home exercise machines, stand as monuments to wishful thinking? (To return to Cowen’s point, availability does not necessarily imply use.) My sense right now is that they remain peripheral technologies, particularly when compared to tools of bittiness like Facebook or texting, but it’s not impossible that they’ll become more popular.

In the course of his argument, I should note, Thompson does offer one piece of seemingly hard evidence to support his case. It concerns the length of blog posts: “One survey found that the most popular blog posts today are the longest ones, 1,600 words on average.” As it turns out, though, this “survey” – you can read it here – is pretty much worthless. It consisted of a guy asking four bloggers to list their five “most linked to” posts and then calculating the mean length of those 20 posts (1,600 words). This exercise tells us next to nothing about online reading habits, and it’s a stretch even to suggest that it shows that “the most popular blog posts today are the longest ones.” Indeed, if you look at some of the long posts highlighted in the study, they actually take the form not of “deeply considered” long takes but of cursory lists of short takes (representative title: “101 Ways to Build Link Popularity”).

What was most interesting to me about Thompson’s reference to this survey was the implication that he considers a 1,600-word article to qualify as a “long take.” Perhaps what Thompson is actually picking up on, and helping to propel forward, is a general downward trend in our expectations about the length of content. We’re shrinking our definition of long-form writing to fit the limits of our ever more distracted reading habits. What would have once been considered a remark is now considered a “short take”; what would once have been considered a “short take” is now a “middle take”; and what once would have been considered a “middle take” is now seen as a “long take.” As long as we take this path, we’ll always be able to reassure ourselves that long takes haven’t gone out of fashion.

The quality of allusion is not google

Last Saturday, Adam Kirsch, the talented TNR penman, accomplished a rare feat. His cherry-scented byline marked the pages of both the Wall Street Journal and the New York Times. In the Times piece, he tied the Congressional whitewashing of the Constitution to the latest attempt to give poor old Huck Finn a thorough scrubbing. Upshot: “To believe that American institutions were ever perfect makes it too easy to believe that they are perfect now. Both assumptions, one might say, are sins against the true spirit of the Constitution.” Yes, one might very well say that. One might even say “one might say,” if one wanted, say, to allude to one’s own words as if they were another’s.

Which brings us to the Journal column, titled, promisingly, “Literary Allusion in the Age of Google.” Here, one not only might but must say, Kirsch goes agley.

The piece begins well, as things that go agley so often do. Kirsch describes how the art of allusion has waned along with the reading of the classics and the Bible. As one’s personal store of literary knowledge shrinks, so too does one’s capacity for allusiveness. But Kirsch also believes that, as our shared cultural kitty has come to resemble Mother Hubbard’s cupboard, the making of a literary allusion has turned into an exercise in elitism. Rather than connecting writer and reader, it places distance between them. It’s downright undemocratic. Says Kirsch:

it is almost impossible to be confident that your audience knows the same books you do. It doesn’t matter whether you slip in “April is the cruelest month,” or “To be or not to be,” or even “The Lord is my shepherd”—there’s a good chance that at least some readers won’t know what you’re quoting, or that you’re quoting at all. What this means is that, in our fragmented literary culture, allusion is a high-risk, high-reward rhetorical strategy. The more recondite your allusion, the more gratifying it will be to those who recognize it, and the more alienating it will be to those who don’t.

No need to fret, though. The search engine is making the world safe again for literary allusions:

In the last decade or so, however, a major new factor has changed this calculus. That is the rise of Google, which levels the playing field for all readers. Now any quotation in any language, no matter how obscure, can be identified in a fraction of a second. When T.S. Eliot dropped outlandish Sanskrit and French and Latin allusions into “The Waste Land,” he had to include notes to the poem, to help readers track them down. Today, no poet could outwit any reader who has an Internet connection. As a result, allusion has become more democratic and more generous.

Reader, rejoice! Literature, like the world, has been flattened.

It’s a dicey proposition, these days, to take issue with a cultural democratizer, a leveler of playing fields, but there are big problems with Kirsch’s analysis, and they stem from his desire to see “allusion” as being synonymous with “citation” or “quotation.” An allusion is not a citation. It’s not a direct quotation. It’s not a pointer. It’s not a shout-out. And it most certainly is not a hyperlink. An allusion is a hint, a suggestion, a tease, a wink. The reference it contains is implicit rather than explicit. Its essential quality is playfulness; the word allusion derives from the Latin verb alludere, meaning “to play with” or “to joke around with.”

The lovely fuzziness of a literary allusion – the way it blurs the line between speaker and source – is the essence of its art. It’s also what makes the literary allusion an endangered species in the Age of Google. A computerized search engine like Google can swiftly parse explicit connections like citations, quotations, and hyperlinks – it feeds on them as a whale feeds on plankton – but it has little sensitivity to more ethereal connections, to the implicit, the playful, the covert, the fuzzy. Search engines are literal-minded, not literary-minded. Google’s overarching goal is to make culture machine-readable. We’ve all benefited enormously from its pursuit of that goal, but it’s important to remember that Google’s vast field of vision has a very large blind spot. Much of what’s most subtle and valuable in culture – and the allusions of artists fall into this category – is too blurry to be read by machines.

Kirsch says that T. S. Eliot “had to include notes” to “The Waste Land” in order to enable readers to “track down” its many allusions. The truth is fuzzier. The first publications of the poem, in the magazines The Criterion and The Dial, lacked the notes. The notes only appeared when the poem was published as a book, and Eliot later expressed regret that he had included them. The notes, he wrote, “stimulated the wrong kind of interest among the seekers of sources … I regret having sent so many enquirers off on a wild goose chase after Tarot cards and the Holy Grail.” By turning his allusions into mere citations, the notes led readers to see his poem as an intricate intellectual puzzle rather than a profound expression of personal emotion – a confusion that continues to haunt, and hamper, readings of the poem to this day. The beauty of “The Waste Land” lies not in its sources but in its music, which is in large measure the music of allusion, of fragments of distant melodies woven into something new. The more you google “The Waste Land,” Eliot would have warned, the less of it you’ll hear.

Let’s say, to bring in another poet, you’re reading Yeats’s “Easter 1916,” and you reach the lines

And what if excess of love
Bewildered them till they died?

It’s true that you might find the poem even more meaningful, even more moving, if you catch the allusion to Shelley’s “Alastor” (“His strong heart sunk and sickened with excess/Of love …”), but the allusion deepens and enriches Yeats’s poem whether or not you pick up on it. What matters is not that you know “Alastor” but that Yeats knows it, and that his reading of the earlier work, and his emotional connection with it, resonates through his own lyric. And since Yeats provides no clue that he’s alluding to another work, Google would be no help in “tracking down” the source of that allusion. A reader who doesn’t already have an intimate knowledge of “Alastor” would have no reason to Google the lines.

Indeed, for the lines to be Google-friendly, the allusion would have to be transformed into a quotation:

And what if “excess of love”
Bewildered them till they died?

or, worse yet, a hyperlink:

And what if excess of love
Bewildered them till they died?

As soon as an allusion is turned into an explicit citation in this way – as soon as it’s made fit for the Age of Google – it ceases to be an allusion, and it loses much of its emotional resonance. Distance is inserted between speaker and source. The lovely fuzziness is cleaned up, the music lost.

In making an allusion, a writer (or a filmmaker, or a painter, or a composer) is not trying to “outwit” the reader (or viewer, or listener), as Kirsch suggests. Art is not a parlor game. Nor is the artist trying to create a secret elitist code that will alienate readers or viewers. An allusion, when well made, is a profound act of generosity through which an artist shares with the audience a deep emotional attachment with an earlier work or influence. If you see an allusion merely as something to be tracked down, to be Googled, you miss its point and its power. You murder to dissect. An allusion doesn’t become more generous when it’s “democratized”; it simply becomes less of an allusion.

My intent here is not to knock Google, which has unlocked great stores of valuable information to many, many people. My intent is simply to point out that there are many ways to view the world, and that Google offers only one view, and a limited one at that. One of the great dangers we face as we adapt to the Age of Google is that we will all come to see the world through Google goggles, and when I read an article like Kirsch’s, with its redefinition of “allusion” into Google-friendly terms, I sense the increasing hegemony of the Google view. It’s already becoming common for journalists to tailor headlines and stories to fit the limits of search engines. Should writers and other artists be tempted to make their allusions a little more explicit, a little more understandable to literal-minded machines, before we know it allusiveness will have been redefined out of existence.

UPDATE:

Alan Jacobs corrects an overstatement I made in this post:

I think it’s clearly wrong to say that “what matters is not that you know ‘Alastor’ but that Yeats knows it.” It does matter that Yeats knows it — Yeats’s encounter with Shelley strengthens and deepens his verse — but it also matters if the reader does, because if I hear that echo of Shelley I understand better the conversation that Yeats is participating in, and that enriches my experience of his poem and also of Shelley’s. And not incidentally, the enriching power of our knowledge of intellectual tradition is one of Eliot’s key emphases.

I should have written “what matters most” rather than “what matters.” Jacobs is right that allusions draw on and reflect “the enriching power of our knowledge of intellectual tradition,” which is enriching for both writer and reader. But, in the context of a particular poem (or other work of art), the power of an allusion also derives from how deeply the artist has made the earlier work his or her own and hence how seamlessly it becomes part of his or her own work (and emotional and intellectual repertoire). I think this is one of the things Eliot meant when he remarked that mature poets steal while immature poets merely borrow. The pressure of Google, I believe, pushes us to be borrowers rather than thieves, to prize the explicit connection over the implicit one.