The quality of allusion is not google

Last Saturday, Adam Kirsch, the talented TNR penman, accomplished a rare feat. His byline appeared in the pages of both the Wall Street Journal and the New York Times. In the Times piece, he tied the Congressional whitewashing of the Constitution to the latest attempt to give poor old Huck Finn a thorough scrubbing. Upshot: “To believe that American institutions were ever perfect makes it too easy to believe that they are perfect now. Both assumptions, one might say, are sins against the true spirit of the Constitution.” Yes, one might very well say that. One might even say “one might say,” if one wanted, say, to allude to one’s own words as if they were another’s.

Which brings us to the Journal column, titled, promisingly, “Literary Allusion in the Age of Google.” Here, one not only might but must say, Kirsch goes agley.

The piece begins well, as things that go agley so often do. Kirsch describes how the art of allusion has waned along with the reading of the classics and the Bible. As one’s personal store of literary knowledge shrinks, so too does one’s capacity for allusiveness. But Kirsch also believes that, as our shared cultural kitty has come to resemble Mother Hubbard’s cupboard, the making of a literary allusion has turned into an exercise in elitism. Rather than connecting writer and reader, it places distance between them. It’s downright undemocratic. Says Kirsch:

it is almost impossible to be confident that your audience knows the same books you do. It doesn’t matter whether you slip in “April is the cruelest month,” or “To be or not to be,” or even “The Lord is my shepherd”—there’s a good chance that at least some readers won’t know what you’re quoting, or that you’re quoting at all. What this means is that, in our fragmented literary culture, allusion is a high-risk, high-reward rhetorical strategy. The more recondite your allusion, the more gratifying it will be to those who recognize it, and the more alienating it will be to those who don’t.

No need to fret, though. The search engine is making the world safe again for literary allusions:

In the last decade or so, however, a major new factor has changed this calculus. That is the rise of Google, which levels the playing field for all readers. Now any quotation in any language, no matter how obscure, can be identified in a fraction of a second. When T.S. Eliot dropped outlandish Sanskrit and French and Latin allusions into “The Waste Land,” he had to include notes to the poem, to help readers track them down. Today, no poet could outwit any reader who has an Internet connection. As a result, allusion has become more democratic and more generous.

Reader, rejoice! Literature, like the world, has been flattened.

It’s a dicey proposition, these days, to take issue with a cultural democratizer, a leveler of playing fields, but there are big problems with Kirsch’s analysis, and they stem from his desire to see “allusion” as being synonymous with “citation” or “quotation.” An allusion is not a citation. It’s not a direct quotation. It’s not a pointer. It’s not a shout-out. And it most certainly is not a hyperlink. An allusion is a hint, a suggestion, a tease, a wink. The reference it contains is implicit rather than explicit. Its essential quality is playfulness; the word allusion derives from the Latin verb alludere, meaning “to play with” or “to joke around with.”

The lovely fuzziness of a literary allusion – the way it blurs the line between speaker and source – is the essence of its art. It’s also what makes the literary allusion an endangered species in the Age of Google. A computerized search engine like Google can swiftly parse explicit connections like citations, quotations, and hyperlinks – it feeds on them as a whale feeds on plankton – but it has little sensitivity to more ethereal connections, to the implicit, the playful, the covert, the fuzzy. Search engines are literal-minded, not literary-minded. Google’s overarching goal is to make culture machine-readable. We’ve all benefited enormously from its pursuit of that goal, but it’s important to remember that Google’s vast field of vision has a very large blind spot. Much of what’s most subtle and valuable in culture – and the allusions of artists fall into this category – is too blurry to be read by machines.

Kirsch says that T. S. Eliot “had to include notes” to “The Waste Land” in order to enable readers to “track down” its many allusions. The truth is fuzzier. The first publications of the poem, in the magazines The Criterion and The Dial, lacked the notes. The notes only appeared when the poem was published as a book, and Eliot later expressed regret that he had included them. The notes, he wrote, “stimulated the wrong kind of interest among the seekers of sources … I regret having sent so many enquirers off on a wild goose chase after Tarot cards and the Holy Grail.” By turning his allusions into mere citations, the notes led readers to see his poem as an intricate intellectual puzzle rather than a profound expression of personal emotion – a confusion that continues to haunt, and hamper, readings of the poem to this day. The beauty of “The Waste Land” lies not in its sources but in its music, which is in large measure the music of allusion, of fragments of distant melodies woven into something new. The more you google “The Waste Land,” Eliot would have warned, the less of it you’ll hear.

Let’s say, to bring in another poet, you’re reading Yeats’s “Easter 1916,” and you reach the lines

And what if excess of love
Bewildered them till they died?

It’s true that you might find the poem even more meaningful, even more moving, if you catch the allusion to Shelley’s “Alastor” (“His strong heart sunk and sickened with excess/Of love …”), but the allusion deepens and enriches Yeats’s poem whether or not you pick up on it. What matters is not that you know “Alastor” but that Yeats knows it, and that his reading of the earlier work, and his emotional connection with it, resonate through his own lyric. And since Yeats provides no clue that he’s alluding to another work, Google would be no help in “tracking down” the source of that allusion. A reader who doesn’t already have an intimate knowledge of “Alastor” would have no reason to Google the lines.

Indeed, for the lines to be Google-friendly, the allusion would have to be transformed into a quotation:

And what if “excess of love”
Bewildered them till they died?

or, worse yet, a hyperlink:

And what if excess of love
Bewildered them till they died?

As soon as an allusion is turned into an explicit citation in this way – as soon as it’s made fit for the Age of Google – it ceases to be an allusion, and it loses much of its emotional resonance. Distance is inserted between speaker and source. The lovely fuzziness is cleaned up, the music lost.

In making an allusion, a writer (or a filmmaker, or a painter, or a composer) is not trying to “outwit” the reader (or viewer, or listener), as Kirsch suggests. Art is not a parlor game. Nor is the artist trying to create a secret elitist code that will alienate readers or viewers. An allusion, when well made, is a profound act of generosity through which an artist shares with the audience a deep emotional attachment to an earlier work or influence. If you see an allusion merely as something to be tracked down, to be Googled, you miss its point and its power. You murder to dissect. An allusion doesn’t become more generous when it’s “democratized”; it simply becomes less of an allusion.

My intent here is not to knock Google, which has unlocked great stores of valuable information for many, many people. My intent is simply to point out that there are many ways to view the world, and that Google offers only one view, and a limited one at that. One of the great dangers we face as we adapt to the Age of Google is that we will all come to see the world through Google goggles, and when I read an article like Kirsch’s, with its redefinition of “allusion” into Google-friendly terms, I sense the increasing hegemony of the Google view. It’s already becoming common for journalists to tailor headlines and stories to fit the limits of search engines. Should writers and other artists give in to the temptation to make their allusions a little more explicit, a little more understandable to literal-minded machines, allusiveness will, before we know it, have been redefined out of existence.

UPDATE:

Alan Jacobs corrects an overstatement I made in this post:

I think it’s clearly wrong to say that “what matters is not that you know ‘Alastor’ but that Yeats knows it.” It does matter that Yeats knows it — Yeats’s encounter with Shelley strengthens and deepens his verse — but it also matters if the reader does, because if I hear that echo of Shelley I understand better the conversation that Yeats is participating in, and that enriches my experience of his poem and also of Shelley’s. And not incidentally, the enriching power of our knowledge of intellectual tradition is one of Eliot’s key emphases.

I should have written “what matters most” rather than “what matters.” Jacobs is right that allusions draw on and reflect “the enriching power of our knowledge of intellectual tradition,” which is enriching for both writer and reader. But, in the context of a particular poem (or other work of art), the power of an allusion also derives from how deeply the artist has made the earlier work his or her own and hence how seamlessly it becomes part of his or her own work (and emotional and intellectual repertoire). I think this is one of the things Eliot meant when he remarked that mature poets steal while immature poets merely borrow. The pressure of Google, I believe, pushes us to be borrowers rather than thieves, to prize the explicit connection over the implicit one.

Media’s medium

The New Republic is today running my review of Douglas Coupland’s biography Marshall McLuhan: You Know Nothing of My Work! Here’s the start:

One of my favorite YouTube videos is a clip from a Canadian television show in 1968 featuring a debate between Norman Mailer and Marshall McLuhan. The two men, both heroes of the 60s, could hardly be more different. Leaning forward in his chair, Mailer is pugnacious, animated, engaged. McLuhan, abstracted and smiling wanly, seems to be on autopilot. He speaks in canned riddles. “The planet is no longer nature,” he declares, to Mailer’s uncomprehending stare; “it’s now the content of an art work.”

Watching McLuhan, you can’t quite decide whether he was a genius or just had a screw loose. Both impressions, it turns out, are valid. As Douglas Coupland argues in his pithy new biography, McLuhan’s mind was probably situated at the mild end of the autism spectrum. He also suffered from a couple of major cerebral traumas. In 1960, he had a stroke so severe that he was given his last rites. In 1967, just a few months before the Mailer debate, surgeons removed a tumor the size of an apple from the base of his brain. A later procedure revealed that McLuhan had an extra artery pumping blood into his cranium.

Read on.


The internet changes everything/nothing

In an essay at Berfrois, Justin E. H. Smith gets at the weird technological totalitarianism that makes the Net so unusual in the history of tools:

The Internet has concentrated once widely dispersed aspects of a human life into one and the same little machine: work, friendship, commerce, creativity, eros. As someone sharply put it a few years ago in an article in Slate or something like that: our work machines and our porn machines are now the same machines. This is, in short, an exceptional moment in history, next to which 19th-century anxieties about the railroad or the automated loom seem frivolous. Looms and cotton gins and similar apparatuses each only did one thing; the Internet does everything.

It is the nuclear option for human culture, unleashed, evidently, without any reflection upon its long-term consequences. I am one of its victims, caught in the initial blast wave. Nothing is the same anymore, not reading, not friendship, not thinking, not love. In my symptoms, however, I resemble more the casualty of an opium war than of a nuclear war: I sit in my dark den and hit the ‘refresh’ button all day and night. When I go out, I take a portable dose in my pocket, in the form of a pocket-sized screen. You might see me hitting ‘refresh’ as I’m crossing the street. You might feel an urge to honk.

And yet perhaps all the Net does is make what was always implicitly virtual explicitly virtual:

If then there is a certain respect in which it makes sense to say that the Internet does not change everything, it is that human social reality was always virtual anyway. I do not mean this in some obfuscating Baudrillardian sense, but rather as a corollary to a thoroughgoing naturalism: human institutions only exist because they appear to humans to exist; nature is entirely indifferent to them. And tools and vehicles only are what they are because people make the uses of them that they do.

Consider the institution of friendship. Every time I hear someone say that Facebook ‘friendship’ should be understood in scare quotes, or that Facebook interaction is not real social interaction, I feel like asking in reply: What makes you think real-world friendships are real? Have you not often felt some sort of amical rapport with a person with whom you interact face-to-face, only to find that in the long run it comes to nothing? How exactly was that fleeting sensation any more real than the discovery and exploration of shared interests and sensibilities with a ‘friend’ one knows only through the mediation of a social-networking site? …

One would do better to trace [the Net] back far further, to holy scripture, to runes and oracle bones, to the discovery of the possibility of reproducing the world through manipulation of signs.

If human culture has always been artificial, isn’t it frivolous to worry about it becoming more artificial?

I’m going to have to mull that over.

The “Like” bribe

Yesterday, I was one of the recipients of an amusing mass email from the long-time tech pundit Guy Kawasaki. He sent it out to promote a new book he’s written as well as the Facebook fan page for that book. Under the subject line “Free copy of Guy’s first book,” it went as follows:

A long time ago (1987 exactly), I published my first book, The Macintosh Way. I wrote it because I was bursting with idealistic and pure notions about how a company can change the world, and I wanted to spread the gospel …

I recently re-acquired the rights for this book, and I’m making it freely available from the fan page of my upcoming book, Enchantment: The Art of Changing Hearts, Minds, and Actions. To download The Macintosh Way:

1. Go to the fan page.

2. “Like” the page.

3. Click on The Macintosh Way book cover to download the PDF.

Yes, that’s right. The pure-hearted, Apple-cheeked idealism of youth has given way to the crass cynicism of using virtual swag as a bribe to get you to click a Like button. Marketing corrupts, and Facebook marketing corrupts absolutely.


Here, by the way, is how Kawasaki describes his new tome: “The book explains when and why enchantment is necessary and then the pillars of enchantment: likability, trustworthiness, and a great cause.” That’s “likability” in the purely transactional sense, I assume.

Back in elementary school, there was this distinctly unlikable kid who, if you agreed to act like his friend for a day, would let you swim in his family’s swimming pool. Little did we know that he was a cultural pioneer.

Same shit, different medium

The internet changes nothing, argues Marshall Poe, whose ambitious new book, A History of Communications, has just been published:

We knew the revolution wouldn’t be televised, but many of us really hoped it might be on the Internet. Now we know these hopes were false. There was no Internet Revolution and there will be no Internet Revolution. We will stumble on in more or less exactly the way we did before massive computer networks infiltrated our daily lives …

Before the Web we were already used to sitting in front of electronic boxes for hour upon hour. The boxes have now changed, but they are still boxes. Of course the things we do on the Internet are different from those we did (and do) in front of the TV. But it’s important to remember that they are only different; they are not new. Think for a moment about what you do on the Internet. Not what you could do, but what you actually do. You email people you know. In an effort to broaden your horizons, you could send email to strangers in, say, China, but you don’t. You read the news. You could read newspapers from distant lands so as to broaden your horizons, but you usually don’t. You watch videos. There are a lot of high-minded educational videos available, but you probably prefer the ones featuring, say, snoring cats. You buy things. Every store in the world has a website, so you could buy all manner of exotic goods. As a rule, however, you buy the things you have always bought from the people who have always sold them. You play games. There are many kinds of games on the Internet, but those we seem to like best all fall into two categories: the ones where we can kill things and the ones where we can cast spells. You look things up. The Web is like a bottomless well of information. You can find the answer to almost any question if you’re willing to look. But you generally don’t like to look, so you get your answers from Wikipedia. Last, you do things you know you shouldn’t. The Internet is great for indulging bad habits. It offers endless opportunities to steal electronic goods, look at dirty pictures, and lose your money playing poker. Moreover, it’s anonymous. On the Web, you can get what you want and be pretty sure you won’t get caught getting it. That’s terrifically useful.

But what exactly is new here? Not very much. Email is still mail. Online newspapers are still newspapers. YouTube videos are still videos. Virtual stores are still stores. MMORPGs are still variations on D&D. A user-built encyclopedia is still a reference book. Stealing mp3s is still theft. Cyber-porn is still porn. Internet poker is still gambling. In terms of content, the Internet gives us almost nothing that the much maligned “traditional media” did not. It’s not much of an exaggeration to say that the Internet is a post office, newsstand, video store, shopping mall, game arcade, reference room, record outlet, adult book shop and casino rolled into one. Let’s be honest: that’s amazing. But it’s amazing in the same way a dishwasher is amazing—it enables you to do something you have always done a little easier than before.

What you see depends on where you stand, and from one viewpoint – a high one – Poe is absolutely correct. He puts his finger on a tragicomic fundamental of human existence: Whenever we come upon a wild new frontier, we jump up and down and say we’re going to restart history, and then we proceed to do exactly what we always do: build houses, shops, brothels, bars, gaming emporiums, churches. And then more shops. Modern electronic media, from this view, simply allow us to do all the same stuff with less physical effort. Lots of big boxes collapse into one small box, but the contents of the box remain the same.

The problem with a high vantage point is that you can’t see the details, and if you stand there long enough you begin to believe that the details don’t matter. But the details do matter. The texture of our lives is determined not only by what we do but by how we do it. And that’s where media play such an important part: they change the how. Which is what Poe misses. Just as the dishwasher (along with the washing machine, the vacuum cleaner, and all manner of other electrified household appliance) altered in profound ways the rhythms and roles of home life during the last century, so the internet changes, in ways small and large, everything it subsumes. The same shit, when routed through a different medium, becomes new shit.

Angst floods social networks

No sooner does Time magazine place its fabled curse on the head of the Star Child than the fanboys begin to sidle toward the exits. “I’ve started to take one step back from the digital world,” tweets Nick Bilton, the New York Times’ chief tech blogger and resident future-dweller. He cops to the fact that “over the last few months, my wife and I have started to make a conscious effort to limit the use of our mobile phones during dinner or while spending time with family.” Bilton is not alone in giving in to the denetworking urge. Wired columnist Clive Thompson confesses that he has begun “to completely ignore his e-mail ‘from Friday night to Monday morning,’ so he doesn’t accidentally get involved in work and pulled away from his family.” Gizmodo reporter Joe Johnson has also begun pocketing his gizmo, at least when dining out with his girlfriend: “The two allocate a few moments to check-in on Foursquare or snap a quick picture, but then put their phones away.” Johnson’s boss, Brian Lam, muses that “an obsession with technology can ‘dilute the quality time we should spend with the people closest to us.’” Former Digg CEO Jay Adelson worries about “the increasingly damaging and fatiguing Twitter lifestyle.” All this neoluddite handwringing comes amid word, from TechCrunch, that Twitter’s US growth seems to be flatlining, with nary an uptick since the summer. Bilton senses a meme emerging. He wonders: “Is society as a whole retreating a bit from using technology in our personal relationships?”

Interactive storytelling: an oxymoron

Craig Mod is psyched about the future of literary storytelling. “With digital media,” he writes in “The Digital Death of the Author,” an article that’s part of New Scientist’s “Storytelling 2.0” series, “the once sacred nature of text is sacred no longer. Instead, we can change it continuously and in real time.” E-storytelling is to storytelling, he says, as Wikipedia is to a printed encyclopedia. And that’s a good thing:

The biggest change is not in the form stories take but in the writing process. Digital media changes books by changing the nature of authorship. Stories no longer have to arrive fully actualised … [Ultimately,] authorship becomes a collaboration between writers and readers. Readers can edit and update stories, either passively in comments on blogs or actively via wiki-style interfaces.

Sound familiar? It should. In the 1980s and early 1990s, when personal computers were new and their screens appeared to literary theorists as virgin canvases, there was enormous excitement over the possibilities for digital media to revolutionize storytelling. The enthusiasm back then centered on hypertext and multimedia, rather than on Internet collaboration tools, but the idea was the same, as was the “death of the author” rhetoric. By “freeing” text from the page, digital media would blur the line between reader and writer, spurring a profusion of new, interactive forms of literary expression and storytelling. As George Landow and Paul Delany wrote in their introduction to the influential 1991 compendium Hypermedia and Literary Studies, “So long as the text was married to a physical media, readers and writers took for granted three crucial attributes: that the text was linear, bounded, and fixed.” The computer would break this static structure, allowing text to become more like “a network, a tree diagram, a nest of Chinese boxes, or a web.” That in turn would shift “the boundaries between individual works as well as those between author and reader,” overthrowing “certain notions of authorial property, authorial uniqueness, and a physically isolated text.”

Then, as now, the celebration of the idea of interactive writing was founded more on a popular ideology of cultural emancipation than on a critical assessment of artistic expression. It reflected a yearning for a radical sort of cultural democratization, which required that “the author” be pulled down from his pedestal and revealed to be a historical accident, a now dispensable byproduct of the technology of the printing press, which had served to fix type, and hence stories, on the page. The author was the father who had to be slain before culture could be liberated from its elitist, patriarchal shackles.

The ability to write communally and interactively with computers is nothing new, in other words. Digital tools for collaborative writing date back twenty or thirty years. And yet interactive storytelling has never taken off. The hypertext novel in particular turned out to be a total flop. When we read stories, we still read ones written by authors. The reason for the failure of interactive storytelling has nothing to do with technology and everything to do with stories. Interactive storytelling hasn’t become popular – and will never become popular – because it produces crappy stories that no one wants to read. That’s not just a result of the writing-by-committee problem (I would have liked to have a link here to the gruesome product of Penguin Books’ 2007 wiki-novel experiment, but, mercifully, it’s been removed from the web). The act of reading a story, it turns out, is very different from, and ultimately incompatible with, the act of writing a story. The state of the story-reader is not one of passivity, as is often, and sillily, suggested, but one of repose. To enter a story, to achieve the kind of immersion that produces enjoyment and emotional engagement, a reader has to give up not only control but the desire to impose control. Readership and authorship are different, if mutually necessary, states: yin and yang. As soon as the reader begins to fiddle with the narrative – to take an authorial role – the spell of the story is broken. The story ceases to be a story and becomes a contraption.

What we actually value most about stories, as readers, is what Mod terms, disparagingly, “full actualization” – the meticulous crafting of an intriguing plot, believable characters and dialogue, and settings and actions that feel true (even if they’re fantastical), all stitched together seamlessly with felicitous prose. More than a single author may be involved in this act of artistic creation – a good editor or other collaborator may make crucial contributions, for instance – but it must come to the reader as a harmonious whole (even if it comes in installments).

I agree with Mod that the shift of books from pages to screens will change the way we read books and hence, in time, the way writers write them, but I think his assessment of how those changes will play out is wrongheaded. (See also Alan Jacobs’s take, which questions another of Mod’s assumptions.) A usable encyclopedia article can, as Wikipedia has shown us, be constructed, “continuously and in real time,” by a dispersed group of writers and editors with various talents. But it’s a fallacy to believe that what works for an encyclopedia will also work for a novel or a tale. We read and evaluate encyclopedia articles in a completely different way from how we read and evaluate stories. An encyclopedia article can be “good enough”; a story has to be good.