Questioning Accidentalism

I’ve noticed over the last couple of years the rise of an interesting new theory of human history, which I’ll call Accidentalism. Although it may have broader implications, it has tended to be applied mainly to the history of media. In short, the theory posits, or, more typically, takes as a given, that the media of the past developed as a result of a series of accidents. Technological accidents begot economic accidents, which begot accidents of production and consumption, and human beings tumbled around in all those accidents like socks in a dryer.

Let me point to two examples I’ve come across in the last couple of days. In an email exchange about “the future of publishing” between Sean Cranbury and Hugh McGuire, McGuire argues that the printed book, as well as our attachment to it, is a fluke. We’ve been deceived by randomness. “Our notion of a book as such a fixed thing,” he says, “is an accident of history, an accident of technology.” This technological accident led to other accidents in production and distribution, such as “the fixed model of the publishing house.” Cranbury replies: “I love a number of things about your last message, but invoking … the notion of ‘accidents’ — of history or technology or whatever — that become adopted standards which we are reluctant to relinquish, are especially invigorating ideas.”

In a speech at Harvard earlier this week, Clay Shirky also gave voice to the “accident” theme, this time in regard to printed newspapers rather than printed books: “Some time between the rise of the penny press and the end of the Second World War, we had a very unusual circumstance — and I think especially in the United States — where we had commercial entities producing critical public goods … Now, it’s unusual to have that degree of focus on essentially both missions — both making a profit and producing this kind of public value. But that was the historic circumstance, and it lasted for decades. But it was an accident.” Shirky, in this particular speech, doesn’t explain the nature of this “accident.” But he did lay out his view more fully in an earlier essay about newspapers, in which he portrayed the traditional newspaper business as an accidental result of the technology of printing:

The expense of printing created an environment where Wal-Mart was willing to subsidize the Baghdad bureau. This wasn’t because of any deep link between advertising and reporting, nor was it about any real desire on the part of Wal-Mart to have their marketing budget go to international correspondents. It was just an accident … That the relationship between advertisers, publishers, and journalists has been ratified by a century of cultural practice doesn’t make it any less accidental … For a century, the imperatives to strengthen journalism and to strengthen newspapers have been so tightly wound as to be indistinguishable. That’s been a fine accident to have, but when that accident stops, as it is stopping before our eyes, we’re going to need lots of other ways to strengthen journalism instead.

Now, I understand what the Accidentalists are getting at: Technology builds on technology, and at any given time in human history only certain technologies are in the realm of the possible and of those only a subset will actually be developed and put to use. Those technologies will in turn influence the means of production and the modes of consumption both directly (through the characteristics of the technologies themselves) and indirectly (through the economic tradeoffs inherent in using the technologies). Every technology, every means of production, every mode of consumption is hence provisional. Something better or at least cheaper or more convenient may come along tomorrow and displace what we depend on today.

All that’s true. But is it really accurate to describe the process as fundamentally accidental? Does the word “accidental” accurately reflect the complexities of technological and economic development? I don’t think it does. In fact, I think it’s difficult to imagine a poorer choice of word. When you describe an event or a thing as an accident, what you are doing is draining it of all human content. You are saying that human intention and will and desire played no part in its occurrence. A volcano is an accident in human history (if not natural history), and if it’s a big enough one it may well influence the course of that history. But the book, the printing press, the publishing house, the newspaper, and the newspaper company are not volcanoes. Their development was guided not just by blind circumstance but by human intent and desire. They represent, not just in the abstract but in their concrete forms, something that people wanted and that people consciously brought into being, for human purposes.

Take the technology of the book. Far from bursting forth suddenly from Gutenberg’s press in the fifteenth century, the development of the book, as Eric Havelock, among others, has explained, began with the invention of symbols to represent human language many millennia ago. In the eighth century BC, or thereabouts, the Greeks created a particularly refined version of these symbols, distilling the entirety of spoken language into an alphabet consisting of a mere 24 phonetic symbols – a brilliant system that much of the world continues to use today, in one form or another. The words and sentences formed from the letters of the alphabet, and other writing systems, have been inscribed on a series of writing media – the clay tablet, the papyrus and then the parchment scroll, the wax tablet, the series of wax tablets bound with string, the codex of pages of parchment or paper bound together to form the scribal book, and the printed book. Each of those general media technologies, in turn, was subtly or dramatically reshaped by all manner of technological refinement throughout its individual history. The scribal book, for instance, advanced enormously for well over 1,000 years before Gutenberg invented his movable type press. Indeed, Gutenberg’s Bible was a meticulous copy of the form of a scribal book. And, in the 550 years since Gutenberg, we’ve seen enormous refinements to both the printing press and its products.

Is the printed book, then, “an accident of history, an accident of technology”? Of course not. The myriad refinements in the technology of the written and published word over the last few millennia were not mere byproducts of a mechanical and inhuman process of blind technological advance. They were shaped by human intent. The wax tablet was invented not because man suddenly had the technologies of the wooden frame and of wax at his disposal, but because society wanted a cheap, informal writing medium that would be easy for individuals, and particularly students, to use and reuse to take notes and jot down ideas. The wax tablet, in other words, reflected a human desire to make writing more personal than had been possible with tablets and scrolls. The wax tablet was anything but an “accident.”

The crucial refinements in the form of the scribal book – punctuation marks, paragraph divisions, chapter divisions, tables of contents, etc. – were all expressions of human desire and need, not the results of random technological or economic accidents. Yes, when Gutenberg invented his press, he was working within the limits of technological possibility. But his inspiration came from his desire, which was also a broad desire on the part of society as a whole, to widen the availability of the codex and other written works. The printing press did not create people’s desire to read books; people’s desire to read books created the printing press. To say that the printed book was an accident is not only profoundly cynical; it’s profoundly foolish.

The same goes for newspapers and the newspaper business. The form of both was indeed heavily influenced by the cost of buying and running a press and transporting bundles of paper – circumstances matter a great deal – but that doesn’t mean that their forms were “just an accident.” Unlike the book, the newspaper only becomes possible when the mass production of the written word becomes possible. And it was not long after Gutenberg’s invention of the press that the first broadsheets appeared in cities. Because people naturally desire, for practical and intellectual reasons, to know what’s going on around them – a desire that more than 1,500 years earlier had led Julius Caesar to have the news of the day posted on public billboards across the Roman empire – the broadsheets proved very popular. In response to that expression of human desire – a desire strong enough that people were willing to pay to have it satisfied – entrepreneurs began to set up newspaper businesses and hire reporters and organize systems of distribution.

Of course, once people started getting the news, they naturally wanted more of it. But there were, alas, strict limits on the amount of money an average citizen could pay for a newspaper. So even as their circulation grew, it remained difficult for newspapers to fulfill people’s desire for more, and more varied, news. Expansion required more capital and more employees than they could afford. Fortunately for both the newspaper reader and the newspaper publisher, merchants were looking for ways to publicize their goods to a wider clientele, and newspaper advertisements provided a perfect fit for their needs. So in the early years of the 1700s, newspapers began to run ads, providing a new source of income that allowed them to better fulfill the desires of readers, by, for example, paying foreign correspondents and, later, photographers, while also making more money themselves. Some of the new advertising money also went to the hiring of copy editors, proofreaders, and fact checkers, again in response to people’s desires, in this case for accurate, clearly written reports.

This was, to be sure, a system of economic subsidization but it was, equally, a system of human symbiosis, in which the various desires of publishers, journalists, advertisers, and citizens came into a happy and mutually supportive balance. To see the arrangement only in economic terms is to miss much of the story, which is a story not of accident but of conscious, purposeful action undertaken within the constraints of technological and economic possibility.

I’m going to resist the temptation to get into a discussion of today’s news business, or the broader question of whether it’s all that unusual for public goods to be produced in commercial systems, but I will point out that as the web unravels the system of economic subsidization on which newspapers depended, it is also unraveling the system of symbiosis that brought great benefits to the newspaper reader.

There is some truth in Accidentalism, but it is only a small part of the whole truth, which is much messier and much more interesting. The Accidentalists are promoting a simplistic and distorted version of media history. They are also ignoring an important implication of their own theory. If we suspend our disbelief and accept the Accidentalist view that both the media of the past and the means of their production were accidents, then we have to also view the media of today and the means of their production as accidents. If the book is a historical accident, then the web is a historical accident. If the newspaper publisher is a historical accident, then the blogger is a historical accident. To think otherwise – to think that all mankind’s past blundering has brought us suddenly to a perfected state, that the long chain of accidents has been broken in (surprise!) our very own lifetime – is to abandon any pretense of a consistent and rational view of history and leap into the realm of quasi-religious faith. We were lost, and now we’re found!

And there’s the rub, I think. Accidentalism is a theory of convenience. It is, it seems to me, a fantasy version of history conjured up to support a popular and largely faith-based ideology, an ideology built on the belief that our new digital media landscape represents a great human advance over all that’s come before. Accidentalism provides an easy way to denigrate and dismiss the past: Oh, our poor, benighted forebears: they never even realized that all they held dear was merely accidental. “Accident,” I hardly need point out, is a word with negative connotations. Those to whom accidents happen are victims. Every time we pick up a printed book or newspaper, the Accidentalists imply, we turn ourselves into victims of technological accidents.

Accidentalism, in other words, provides the perfect backdrop for the liberation mythology promoted by many of the web’s most ardent proponents, which is built on the idea that old technology put us in chains and new technology is breaking those chains. In order to underscore (and place beyond debate) the societal and personal benefits of the web, they feel compelled to paint a weirdly dark caricature of the past, portraying those human beings who had the misfortune to live before, say, 1990 as passive and enervated, victims of an (accidental!) media complex that circumscribed and diminished their lives and thoughts. One need not be a fan of old-school mass media to see that this picture is a clumsily rendered fake.

“We should always be suspicious of the contempt that flows beneath the surface of idealization,” wrote Paul Duguid in an essay collected in the 1996 volume The Future of the Book. “And we should note how often the characterization of ‘them’ is in fact a self-aggrandizement of ‘us.’” Duguid goes on to describe how the “language of liberty” has been wrapped around the new digital technologies of information creation and transmission:

Where once we had ghosts in machines, now we have information in objects like books. Technology is thus called upon to do for information what theology sought to do for the soul … The book, no longer its incarnation, has been reduced to the incarceration of the word. But a technological Prospero seems at last to be at hand to free the informational Ariel from the cleft pine (or wood products) in which he has been trapped … [The liberationist view] is both corrupting and misleading. As with so much optimistic futurology, it woos us to jump by highlighting the frying pan and hiding the fire. In the face of such arguments, we do better to remember … how Ariel quickly discovered that the same magic that liberated him from the tree indentured him to Prospero.

Any theory of the future that requires a distortion of the past should be greeted not with applause but suspicion.

Anthologized

I’m happy to report that my essay “Is Google Making Us Stupid?,” which appeared last year in The Atlantic, has been selected for inclusion in three anthologies: The Best American Science and Nature Writing 2009, edited by Elizabeth Kolbert; The Best Technology Writing 2009, edited by Steven Johnson; and The Best Spiritual Writing 2010, edited by Philip Zaleski. The first two anthologies are available now; the third will be published early next year.

“Is Google Making Us Stupid?” also appears in the new edition of the popular textbook Writing Logically, Thinking Critically.

Close down the schools!

The headline on Steve Lohr’s Bits post sounds pretty definitive: “Study Finds That Online Education Beats the Classroom.” And the quote that Lohr gets from the study’s lead author, Barbara Means, sounds equally definitive: “The study’s major significance lies in demonstrating that online learning today is not just better than nothing — it actually tends to be better than conventional instruction.”

Predictably, Lohr’s post is now inspiring even more extreme summaries of the study. “Want to learn better?” screams the headline at the misnamed SmartPlanet.com. “Crack open your laptop (and ditch the classroom).” Chimes in Podcasting News: “Why Go to School? Study Finds Students Do Better Online.”

But the study itself, which was conducted by SRI International for the US Department of Education, is considerably less definitive than the coverage would have you believe. Before school boards start firing teachers and shuttering classrooms, they might want to read the actual report.

The SRI study is a “meta-analysis,” or a “study of studies.” The authors reviewed research on online education dating back to 1996. Of the 1,132 studies they found, they focused their analysis on just 51 “experimental and quasi-experimental studies” that provided sufficient data to directly compare the “learning outcomes” of online courses with those of traditional “face-to-face” courses. In many cases, however, the “online courses” also involved face-to-face instruction; in other words, the online instruction supplemented (rather than replaced) classroom instruction. Only about half of the 51 analyzed studies involved purely online courses. Also, 19 of the 51 studies involved courses that lasted less than a month. And, as the authors note, “many of the studies suffered from weaknesses such as small sample sizes; failure to report retention rates for students in the conditions being contrasted; and, in many cases, potential bias stemming from the authors’ dual roles as experimenters and instructors.”

With those limitations in mind, the authors’ overall conclusion is as follows: “The corpus of 51 [experiments] was sufficient to demonstrate that in recent applications, online learning has been modestly more effective, on average, than the traditional face-to-face instruction with which it has been compared.”

But the authors also provide some important additional caveats:

First, while the intent of this meta-analysis is “to provide policy-makers, administrators and educators with research-based guidance about how to implement online learning for K–12 education and teacher preparation,” the study doesn’t actually look at K-12 education. In fact, of the 51 studies analyzed, only five were drawn from K-12 education – and none of those involved purely online instruction. The bulk of the studies involved instruction in college or graduate school or in professional training courses, particularly instruction in “medicine or health care” but also in “computer science, teacher education, social science, mathematics, languages, science and business.” In fact, when the five K-12 studies were examined in isolation, they revealed no statistically meaningful benefit from online learning (see page xvii of the report). As the authors emphasize, it would be rash to conclude that the study results will apply to K-12 schools.

Second, the finding that online or online-supplemented courses have slightly better learning outcomes may not have anything to do with the fact that they were online. It may simply be that they involved more instructional time than the strictly classroom courses. This is a point that the authors make repeatedly in their report. For example: “Despite what appears to be strong support for online learning applications, the studies in this meta-analysis do not demonstrate that online learning is superior as a medium. In many of the studies showing an advantage for online learning, the online and classroom conditions differed in terms of time spent, curriculum and pedagogy. It was the combination of elements in the treatment conditions (which was likely to have included additional learning time and materials as well as additional opportunities for collaboration) that produced the observed learning advantages.” The findings of the meta-analysis, the authors later reiterate, “should not be construed as demonstrating that online learning is superior as a medium.”

Finally, it should be noted that, of the 51 experiments studied, only 11 actually showed a statistically significant advantage to online instruction. (The authors don’t specify how many of those 11 involved purely online instruction versus classroom/online hybrids. Nor do they specify how many involved vocational instruction.) Two of the experiments showed a statistically significant advantage to face-to-face instruction. The vast majority of the studies showed no meaningful differences in outcomes.

Sometimes, the caveats in a study speak louder than the findings. I think that’s the case here. A suggestion that classroom instruction supplemented by online exercises can lead to more learning than classroom instruction alone would hardly come as a surprise. But that’s about the only firm conclusion I draw from this meta-analysis.

Rock-by-number

Man, this looks good:

Those avatars can really swing. It’s like you’re in a wax museum and all of a sudden the wax figures come to life and you’re like jamming with them on a wax guitar.

Seriously, though, the release next month of The Beatles™: Rock Band™ is shaping up to be the cultural event of the year, if not the millennium to date. The making of the game was the subject of an epic article, by Daniel Radosh, in the Sunday New York Times, which featured comments from Paul and Ringo as well as John’s widow, George’s widow, and George Martin’s son. Apple Corps, reports Radosh, hopes the game “will be the most deeply immersive way ever of experiencing the music and the mythology of the Beatles.” The CEO of Harmonix Music Systems, the company engineering the game, says, “We’re on the precipice of a culture shift around how the mass market experiences music.” Adds Radosh, hopefully: “Playing music games requires an intense focus on the separate elements of a song, which leads to a greater intuitive knowledge of musical composition.”

Rob Horning, PopMatters’ resident stick-in-the-mud, isn’t ready to join in the celebration. To him, Rock Band is a means of distancing rather than immersion. It’s yet another sign of the commercialization of the intimate, the replacement of real personal experience with a purchased, preprogrammed replica of experience. Hold that guitar/controller in your arms, yeah, you can feel its disease:

The mix of social, cerebral and sensuous elements in my response to music is most satisfying when it seems immediate and fused, a kind of physico-cognitive dance that occurs spontaneously with the sound … I don’t want a game to mediate music to me when music is already mediating other, more profound experiences—memories, dreams, secret pathways into the hearts of friends or imagined strangers, sheer abandon to sensory stimuli. These are enough to hold my attention; I wouldn’t want those experiences endangered or compromised or supplanted by the discipline enforced by a game that measures your attention. That seems to me like covert industrial training.

In the 1950s, paint-by-number kits were all the rage. Everyone became an artist, diligently filling in the numbered areas on prefabricated canvases with specified colors. Today, even as we celebrate the contrivances of Rock Band, we look down our noses at those kitschy paint-by-number kits. Yet I’m sure that somebody back in the fifties wrote about how paint-by-number “requires an intense focus on the separate elements of a painting, which leads to a greater intuitive knowledge of artistic composition.” I’m sure it was thought that paint-by-number liberated people from being passive observers of art, that it allowed them to participate, in a deeply immersive way, in the act of painting.

We shouldn’t be too harsh on our fads. After all, the reason they become fads is that they’re fun. Still, a fad will always tell us something important about the times in which it occurs. “Paint-by-number,” wrote Brennen Jensen in a 2001 article, “is all about conformity. Indeed, there is perhaps no greater metaphor for America’s Leave It to Beaver, I Like Ike, Man in the Gray Flannel Suit 1950s than this ‘digital art’ craze that roared through the decade, promising neophyte brush-wielders ‘a beautiful oil painting the first time you try!’ … Painting, ostensibly one of the most creative and individualistic endeavors, is rendered rote – a matter of manual dexterity, not inspiration.”

Rock Band is the aural equivalent of paint-by-number. It’s musicianship-by-number. It’s also a fad. Ten or so years from now, we’ll look back on the game with a mixture of nostalgia and embarrassment.

But, like paint-by-number, Rock Band is also a metaphor. As even a cursory glance at our cultural touchstones will tell you, we live in an Age of Vampires, and The Beatles™: Rock Band™ is nothing if not vampiric. Take another gander at that YouTube trailer. What’s creepy about the game isn’t the faux guitar necks with the color-coded digital frets (that’s just rock-by-number). It isn’t even the waxworks avatars (though they are certainly ghoulish). No, what’s creepy about it is its cynical, paint-by-number rendering of sixties counterculture, from, progressively, the Ed Sullivan go-go soundstage to the trippy mindscapes of psychedelia to the flowerchild fields of the hippies.

Given that our culture is fundamentally consumerist, every countercultural movement is by definition anti-consumerist, a quixotic attempt to create an imaginary space that exists outside of and in opposition to the marketplace. Counterculturalism is a doomed attempt to maintain innocence in the face of the market’s all-consuming cynicism. Once the Beatles, and particularly John Lennon, became aware of their power, they dedicated themselves to sustaining the countercultural dream of the sixties, even after the dream had evaporated. The Beatles™: Rock Band™ makes a particularly good vampire. The blood it sucks is the blood of the innocent.

The diminishing returns on data

CNet’s Tom Krazit has posted a brief but very interesting interview with the Berkeley economist Hal Varian, who now serves as one of Google’s big thinkers. Krazit asks Varian whether search scale offers a quality advantage – in other words, does the ability to collect and analyze more data on more searches translate into better search results and better search-linked ads. Here’s the exchange:

Krazit: One thing we’ve been talking about over the last two weeks is scale in search and search advertising. Is there a point at which it doesn’t matter whether you have more market share in looking to make your product better?

Varian: Absolutely. We’re very skeptical about the scale argument, as you might expect. There’s a lot of aspects to this subject that are not very well understood.

On this data issue, people keep talking about how more data gives you a bigger advantage. But when you look at data, there’s a small statistical point that the accuracy with which you can measure things as they go up is the square root of the sample size. So there’s a kind of natural diminishing returns to scale just because of statistics: you have to have four times as big a sample to get twice as good an estimate.

Another point that I think is very important to remember … query traffic is growing at over 40 percent a year. If you have something that is growing at 40 percent a year, that means it doubles in two years.

So the amount of traffic that Yahoo, say, has now is about what Google had two years ago. So where’s this scale business? I mean, this is kind of crazy.

The other thing is, when we do improvements at Google, everything we do essentially is tested on a 1 percent or 0.5 percent experiment to see whether it’s really offering an improvement. So, if you’re half the size, well, you run a 2 percent experiment.

So in all of this stuff, the scale arguments are pretty bogus in our view…

This surprised me because there’s a fairly widespread assumption out there that Google’s search scale is an important source of its competitive advantage. Varian seems to be talking only about the effects of data scale on the quality of results and ads (there are other possible scale advantages, such as the efficiency of the underlying computing infrastructure), but if he’s right that Google long ago hit the point of diminishing returns on data, that’s going to require some rethinking of a few basic orthodoxies about competition on the web.
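Varian’s two bits of arithmetic are easy to check for yourself. The square-root point is just the familiar scaling of the standard error of the mean, and the growth point is simple compounding. Here’s a minimal simulation sketch; the uniform random draws and the sample sizes of 100 and 400 are illustrative stand-ins, not anything about Google’s actual data:

```python
import math
import random

random.seed(42)

def sample_mean_error(n, trials=2000):
    """Empirical standard error of the mean of n uniform(0, 1) draws,
    estimated by repeating the experiment many times."""
    means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / trials)

err_small = sample_mean_error(100)  # baseline sample size
err_big = sample_mean_error(400)    # four times as much data

# Quadrupling the sample only roughly halves the measurement error:
print(err_small / err_big)  # ratio ≈ 2, per the square-root law

# And the growth arithmetic: 40% annual growth compounds to
# roughly a doubling in two years (1.4^2 = 1.96).
print(1.4 ** 2)
```

The first ratio is the whole “diminishing returns” argument in one number: to cut your error in half you need four times the data, to cut it in half again you need sixteen times, and so on.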

I was reminded, in particular, of one of Tim O’Reilly’s fundamental beliefs about the business implications of Web 2.0: that a company’s scale of data aggregation is crucial to its competitive success. As he recently wrote: “Understanding the dynamics of increasing returns on the web is the essence of what I called Web 2.0. Ultimately, on the network, applications win if they get better the more people use them. As I pointed out back in 2005, Google, Amazon, ebay, craigslist, wikipedia, and all other Web 2.0 superstar applications have this in common.” (The italics are O’Reilly’s.)

I had previously taken issue with O’Reilly’s argument that Google’s search business is characterized by a strong network effect, which I think is wrong. But Varian’s argument goes much further than that. He’s saying that the assumption of an increasing returns dynamic in data collection – what O’Reilly calls “the essence” of Web 2.0 – is “pretty bogus.” The benefit from aggregating data is actually subject to decreasing returns, thanks to the laws of statistics.

That doesn’t mean that data scale wasn’t once crucial to the quality of Google’s search results. The company certainly needed a critical mass of data – on links, on user behavior, etc. – to run the analyses necessary to deliver relevant results. It does mean that the advantages of data scale seem to go away pretty quickly – and at that point what determines competitive advantage is smarter algorithms (i.e., better ideas), not more data.

Slanted and enchanted

I’ve been hanging out at the TPM Cafe this week, discussing Bill Wasik’s book And Then There’s This. Here’s my latest post from the discussion:

If, as Amanda Marcotte suggests, the Internet is like the Beach Boys in 1963, then I guess we have a few more years of inspired genius before the psychosis, death, and exploitation set in. Then again, everything goes faster on the Net, so maybe we’re already in the psychosis, death, and exploitation phase.

Like Amanda, I think that Bill Wasik, in his book, glosses over the fact that one of the foundational characteristics (and joys) of popular music has always been its ephemerality, the way new bands buzz in and out of consciousness like beautiful stinging bees. As Stephen Malkmus observed in 1994 (well before MySpace):

Music scene is crazy

Bands start up each and every day

I saw another one just the other day

A special new band

And I don’t think it’s true that, as Bill suggests, “overnight sensations” have “almost always been manufactured by radio, or by big record labels, or by the interplay between the two.” In fact, overnight sensations emerged regularly from the very “local scenes” that Bill contends (accurately and sadly, I think) that the Internet is undermining. Scenes are almost by definition fickle and hungry for the new. If you look, for instance, at the garage rock explosion in California in the late 1960s or the British punk movement of the late 1970s, you see that disposability was actually part of the point (and the excitement).

I was a bit too young for the Sixties, but I can speak from experience about the Seventies. That was before marketers came up with such terms as “indie rock” and “alternative rock.” Back then everything was just “rock,” and it all fell into basically two categories, which we defined for ourselves: “fucking great” and “fucking shit.” At that time, radio and record labels largely concerned themselves with “fucking shit,” and their goal was not to encourage one hit wonders but rather to sustain elephantine franchises like, say, ELO, Yes, and the Eagles. The ephemeral stuff, which also tended to be the good stuff, existed almost entirely outside the radio/label ambit. It existed in the scenes and was promoted, largely, via word of mouth.

So to the extent that the Web encourages “the ecstatic surf from new band to new band, from track to track, from style to style,” it represents as much a continuation of the “scene” ethic as the “corporate” ethic.

The problem with the Web, as I see it, is that it imposes, with its imperialistic iron fist, the “ecstatic surfing” behavior on everything and to the exclusion of other modes of experience (not just for how we listen to music, but for how we interact with all media once they’ve been digitized). In the pre-Web world, we not only enjoyed the thrill of the overnight sensation – the 45 that became the center of your waking hours for a week only to be replaced by the new song – but also the deeper thrill of the favorite band in whose work we deeply immersed ourselves, often following its progression over many records and many years. It wasn’t that long ago that buying an album represented, particularly for your average teenager, a significant investment. You thought a lot about that album before you bought it, and once you bought it you took it seriously – you listened to it. Repeatedly. Today, we’re quick to dismiss those ancient days of “scarcity” and to celebrate our current “abundance,” but scarcity had something going for it: it encouraged a deep engagement in listening to a particular piece of music, across the expanse of an album, and it also encouraged, in the artist, an interest in rewarding that engagement. I would like to get back the money I spent on records in my youth, but I would not give up the experience that money bought me.

It’s the deep, attentive engagement that the Web is draining away, as we fill our iTunes library with tens of thousands of “tracks” at little or no cost. What the Web tells us, over and over again, is that breadth destroys depth. Just hit Shuffle.

Amanda’s retreat to vinyl is, I think, a recognition that we’re trading away something important for the riches of the Web. And while I applaud her retreat, I have to think it’s a rearguard action that is happening a long way away from culture’s front lines. Whether it’s news stories or pop songs, we’re skimmers now. It’s a one-hit-wonder world.