The writing is on the paywall

There has been much interesting speculation about the future of the newspaper business in recent weeks. There was Michael Hirschorn’s pre-obituary for the print edition of the New York Times in The Atlantic. He foresees the Times shrinking into “a bigger, better, and less partisan version of the Huffington Post.” There was the Times’s David Carr running the old micropayments idea up the flagpole. Look to iTunes, he suggested, for a model of how “to perform a cashectomy on users.” In a Time cover story, Walter Isaacson also endorsed the development of “an iTunes-easy method of micropayment [that] will permit impulse purchases of a newspaper, magazine, article, blog or video for a penny, nickel, dime or whatever the creator chooses to charge.” In a memo posted at Poynter Online, Steve Brill argued that newspapers, the Times in particular, need to abandon the practice of giving away their stories online and begin charging for access to their content, either through pay-as-you-go micropayments or through various sorts of subscriptions.

Shadowing the discussion, naturally, have been anti-paper agitators like Clay Shirky and Jeff Jarvis. To them, the renewal of talk about asking folks to – gasp! chuckle! guffaw! – pay for content is yet more evidence of the general cluelessness of the dead-tree crowd, who are simply too dim to realize that publishers have been rendered impotent and it’s the “users” now who call all the shots. “Back in the real world,” says Shirky, “the media business is being turned upside down by our new freedoms and our new roles. We’re not just readers anymore, or listeners or viewers. We’re not customers and we’re certainly not consumers. We’re users. We don’t consume content, we use it, and mostly what we use it for is to support our conversations with one another, because we’re media outlets now too.” Consumers pay; users don’t.

Shirky argues, in particular, that micropayments won’t work. “The essential thing to understand about small payments is that users don’t like being nickel-and-dimed. We have the phrase ‘nickel-and-dimed’ because this dislike is both general and strong.” I think Shirky is right. (He wrote a seminal paper on micropayments some years ago.) But I also think he overstates his case. The clue comes in his misinterpretation of the phrase “nickel-and-dimed.” We say we’re being nickel-and-dimed when a company charges us lots of small, frivolous fees for stuff that has no value to us. The classic example is a bank charging for every check you write or every ATM withdrawal you make. We don’t say we’re being nickel-and-dimed when we buy a product we want for a very low price – a pack of gum, say, or a postage stamp. Spending a nickel or a dime (or a quarter or a dollar) for something you want is not an annoyance. It’s a purchase.

Shirky’s need to see all forms of micropayments as dead ends leads him into a tortured attempt to dismiss Apple’s success at selling songs for less than a buck a pop through iTunes. “People are not paying for music on ITMS because we have decided that fee-per-track is the model we prefer,” he writes, “but because there is no market in which commercial alternatives can be explored.” Huh? Au contraire: a whole lot of people have indeed decided that they don’t mind paying a small fee to purchase a song. There are other music-sales models out there, various forms of subscriptions, most notably, and some, like eMusic, have had some success, while others have failed spectacularly. Nearly all the music for sale at iTunes is also available for free through services that facilitate illicit downloading. A huge amount of music continues to be trafficked that way, but nevertheless Apple’s experience demonstrates that a sizable market exists for purchasing media products piecemeal at small prices. I can pretty much guarantee that if Apple were to start charging 10 cents, or 5 cents, for a track, it would actually sell a lot more of them. Buyers wouldn’t, in other words, run away, screaming “don’t nickel-and-dime me!”, because they find spending such tiny amounts a horrible hassle. They’d buy more. The iTunes store, and Amazon’s music store, demonstrate that consumers can be trained to spend small amounts of money for products and services they desire.

Still, I don’t see micropayments working for news. Most news stories, for one thing, are transitory, disposable things. That makes them very different from songs, which we buy because we want to “own” them, to have the ability to play them over and over again. We don’t want to own news stories; we just want to read them or glance over them. Hawking stories piecemeal is a harder sell than hawking tunes; the hassle factor is more difficult to overcome. Second, news stories are – and I’m speaking very generally here – more fungible than songs. If you want the Kings of Leon’s “Sex on Fire,” you want the Kings of Leon’s “Sex on Fire.” A wimpy Coldplay number just ain’t going to scratch that itch. But while there are certainly differences in quality among news stories on the same subject, sometimes very great differences, they may not matter for people looking for a quick synopsis of the facts, particularly if the alternatives are being given away free. And most news stories also go out of date very, very quickly. The window during which you’d have any chance of selling one is exceedingly brief. Finally, people don’t have any experience buying individual news stories the way they have with buying individual songs (as 45s or cassette singles or CD singles). So the whole concept just seems weird.

Does that mean that a micropayments system absolutely, positively won’t work for newspapers? No. But it does mean it’s a heck of a longshot and not worth pinning one’s hopes on.

So is the idea of getting people to pay for news online an impossible dream? You’d certainly think so reading people like Shirky and Jarvis, who can’t wait for old-time newspaper publishers to be dead and buried so we can get on with some vague, communal “reinvention” of news production and distribution. But the freeniacs are wrong. Charging people for news, even online, is by no means an impossible dream. Yes, it often seems like an impossible dream today, but that’s because the news market is currently, and massively, distorted. But market distortions have a way of sorting themselves out. Indeed, that’s one of the main reasons we have markets.

The essential problem with the newspaper business today is that it is suffering from a huge imbalance between supply and demand. What the Internet has done is broken the geographical constraints on news distribution and flooded the market with stories, with product. Supply so far exceeds demand that the price of the news has dropped to zero. Substitutes are everywhere. To put it another way, the geographical constraints on the distribution of printed news required the fragmentation of production capacity, with large groups of reporters and editors being stationed in myriad local outlets. When the geographical constraints went away, thanks to the Net and the near-zero cost of distributing digital goods anywhere in the world, all that fragmented (and redundant) capacity suddenly merged together into (in effect) a single production pool serving (in effect) a single market. Needless to say, the combined production capacity now far, far exceeds the demand of the combined market.

In this environment, you’re about as likely to be able to charge for an online news story as you are to charge for air. And the overabundance of supply means, as well, an overabundance of advertising inventory. So not only can’t you charge for your product, but you can’t make decent ad revenues either. Bad times.

Now here’s what a lot of people seem to forget: Excess production capacity goes away, particularly when that capacity consists not of capital but of people. Supply and demand, eventually and often painfully, come back into some sort of balance. Newspapers have, with good reason, been pulling their hair out over the demand side of the business, where a lot of their product has, for the time being, lost its monetary value. But the solution to their dilemma actually lies on the production side: particularly, the radical consolidation and radical reduction of capacity. The number of U.S. newspapers is going to collapse (although we may have differently branded papers produced by the same production operation) and the number of reporters, editors, and other production-side employees is going to continue to plummet. And syndication practices, geared to a world of geographic constraints on distribution, will be rethought and, in many cases, abandoned.

As all that happens, market power begins – gasp, chuckle, and guffaw all you want – to move back to the producer. The user no longer gets to call all the shots. Substitutes dry up, the perception of fungibility dissipates, and quality becomes both visible and valuable. The value of news begins, once again, to have a dollar sign beside it.

Shirky claims we’re “in a media environment with low barriers to entry for competition.” But that’s an illusion born of the current supply-demand imbalance. The capital requirements for an online news operation are certainly lower than for a print one, but the labor costs remain high. Reporters, editors, photographers, and other newspaper production workers are skilled professionals who require good and fair pay and benefits and, often, substantial travel allowances. It’s a fantasy to believe that the production of all the kinds of news that people value, particularly hard news, can be shifted over to amateurs or journeymen working for peanuts or some newfangled journo-syndicalist communes. Certainly, amateurs and volunteers can do some of the work that used to be done by professional journalists in professional organizations. Free-floating freelancers can also do some of the work. The journo-syndicalist communes will, I suppose, be able to do some of the work. And that’s all well and good. But they can’t do all of the work, and they certainly can’t do all of the most valuable work. The news business will remain a fundamentally commercial operation. Whatever the Internet dreamers might tell you, it ain’t going to a purely social production model.

Newspapers are certainly guilty of not battening down the spending hatches early enough. But if you look at, say, the New York Times’s emerging “last-man-standing” strategy, as laid out in yesterday’s issue, you see a strategy that makes sense, and that actually is built on a rational view of the future. Make sure you have enough cash to ride out the storm, trim your spending, defend your quality and your brand, expand into the new kinds of products and services that the web makes possible and that serve to expand your reader base. And then sit tight and wait for your weaker competitors to fail. As one analyst, looking toward the future, says in the Times story, “‘there could be dramatically fewer newspapers,’ leaving those that remain in a stronger position to compete for readers and ads. ‘And then the New York Times should be a survivor.’”

Once you radically reduce supply in the industry, the demand picture changes radically as well. Ad inventory goes down, and ad rates go up. And things that seem unthinkable now – online subscription fees – suddenly become feasible. We also, at that point, get disabused of the fantasy that there’s no such thing as news consumers. We see that providing fodder for “conversations” is not the primary value of the news; it’s an important value, but it’s a secondary value. The newspaper industry is in the midst of a fundamental restructuring, and if you think that restructuring is over – that what we see today is the end state – you’re wrong. Markets for valuable goods do not stay disrupted. They evolve to a new and sustainable commercial state. Tomorrow’s reality will be different from today’s.

What I’m laying out here isn’t a pretty scenario. It means lots of lost jobs – good ones – and lots of failed businesses. The blood will run in the streets, as the chipmakers say when production capacity gets way ahead of demand in their industry. It may not even be good news in the long run. We’ll likely end up with a handful of mega-journalistic-entities, probably spanning both text and video, and hence fewer choices. This is what happens on the commercial web: power and money consolidate. But we’ll probably also end up with a supply of good reporting and solid news, and we’ll probably pay for it.

Big Switch giveaway

To mark the publication of the paperback edition of my book The Big Switch, which The Independent last week called “simultaneously lucid and mind-boggling,” I’m giving away five signed copies. I will mail a copy to each of the first five people who correctly answer the following three lucid but mind-boggling questions:

1. What fruit was implicated in the death of Alan Turing?

2. Last week, Google attributed its glitch that labeled the entire Web as hazardous to “human error.” What famous movie character, describing another computer snafu, said, “It can only be attributable to human error”?

3. What flavor of soft drink is mentioned in the third verse of the final track on Werner Vogels’ favorite album of 1969?

The contest is over! Thanks for participating. The answers are:

1. Apple

2. HAL

3. Cherry red

Smackdown

A while back, Clay Shirky argued that watching TV is like being an alky and that the Internet is the 12-step cure. Now, Daniel Markham, in his post Technology Is Heroin, says the cure is worse than the disease. If watching television is like sucking on a bottle of gin, using the Net is like mainlining speedballs with a dirty needle. Both men claim to have history on their side. You be the judge.

The Great Library of Googleplex

From Robert Darnton’s Google and the Future of Books in the New York Review of Books:

Google is not a guild, and it did not set out to create a monopoly. On the contrary, it has pursued a laudable goal: promoting access to information. But the class action character of the [Publishers vs Google] settlement makes Google invulnerable to competition. Most book authors and publishers who own US copyrights are automatically covered by the settlement. They can opt out of it; but whatever they do, no new digitizing enterprise can get off the ground without winning their assent one by one, a practical impossibility, or without becoming mired down in another class action suit. If approved by the court—a process that could take as much as two years—the settlement will give Google control over the digitizing of virtually all books covered by copyright in the United States.

This outcome was not anticipated at the outset. Looking back over the course of digitization from the 1990s, we now can see that we missed a great opportunity. Action by Congress and the Library of Congress or a grand alliance of research libraries supported by a coalition of foundations could have done the job at a feasible cost and designed it in a manner that would have put the public interest first. By spreading the cost in various ways—a rental based on the amount of use of a database or a budget line in the National Endowment for the Humanities or the Library of Congress—we could have provided authors and publishers with a legitimate income, while maintaining an open access repository or one in which access was based on reasonable fees. We could have created a National Digital Library—the twenty-first-century equivalent of the Library of Alexandria. It is too late now. Not only have we failed to realize that possibility, but, even worse, we are allowing a question of public policy—the control of access to information—to be determined by private lawsuit …

As an unintended consequence, Google will enjoy what can only be called a monopoly—a monopoly of a new kind, not of railroads or steel but of access to information.

Never alone

From William Deresiewicz’s article The End of Solitude in the new edition of the Chronicle of Higher Education:

The two emotions, loneliness and boredom, are closely allied. They are also both characteristically modern. The Oxford English Dictionary’s earliest citations of either word, at least in the contemporary sense, date from the 19th century … Loneliness is not the absence of company, it is grief over that absence. The lost sheep is lonely; the shepherd is not lonely. But the Internet is as powerful a machine for the production of loneliness as television is for the manufacture of boredom. If six hours of television a day creates the aptitude for boredom, the inability to sit still, a hundred text messages a day creates the aptitude for loneliness, the inability to be by yourself. Some degree of boredom and loneliness is to be expected, especially among young people, given the way our human environment has been attenuated. But technology amplifies those tendencies. You could call your schoolmates when I was a teenager, but you couldn’t call them 100 times a day. You could get together with your friends when I was in college, but you couldn’t always get together with them when you wanted to, for the simple reason that you couldn’t always find them. If boredom is the great emotion of the TV generation, loneliness is the great emotion of the Web generation. We lost the ability to be still, our capacity for idleness. They have lost the ability to be alone, their capacity for solitude.

Sharing is creepy

A while back, I wrote about the affliction of avatar anxiety, in which one’s self-consciousness about one’s online self amplifies one’s self-consciousness about one’s actual self. Here’s the nub:

Your online self … is entirely self-created, and because it determines your identity and social standing in an internet community, each decision you make about how you portray yourself – about which facts (or falsehoods) to reveal, which photos to upload, which people “to friend,” which bands or movies or books to list as favorites, which words to put in a blog – is fraught, subtly or not, with a kind of existential danger. And you are entirely responsible for the consequences as you navigate that danger. You are, after all, your avatar’s parents; there’s no one else to blame. So leaving the real world to participate in an online community – or a virtual world like Second Life – doesn’t relieve the anxiety of self-consciousness; it magnifies it. You become more, not less, exposed.

So far as I know, avatar anxiety has not yet been declared an actual illness by the American Psychiatric Association, but I have no doubt that it will eventually make the grade, particularly after reading a brief article by Steven Levy, called “The Burden of Twitter,” in the new edition of Wired. Levy says that he “adores” social networking but that at the same time he is consumed with guilt and remorse over the activities of his online self. The guilt comes when he fails to participate – when he doesn’t post to his blog or when he lets his tweetstream go dry. “I worry,” he writes, “that I’m snatching morsels from the information food bank without making any donation.” That’s not so surprising. Much more interesting is the remorse, which he says he feels when he does participate:

As my participation increases, I invariably suffer another psychic downside of social networking: remorse. The more I upload the details of my existence, even in the form of random observations and casual location updates, the more I worry about giving away too much. It’s one thing to share intimacies person-to-person. But with a community? Creepy.

Levy ends by turning his affliction into a knowing little joke: “So now I’m feeling guilty—for being remorseful. Maybe I should complain about it in my next tweet.” The dismissiveness of the joke strikes me as unfortunate, because I think Levy is expressing something important here. I wish, in fact, that the article were longer, that he had spent more time delving into the source of his feeling of remorse and his sense of creepiness (both of which, by the way, I share completely). He does give a hint about that source when he refers to the fact that in the Web 2.0 world we talk intimately, or at least familiarly, not just with people we actually know but with complete strangers (even if they’re sometimes given the designation of “friend”). In describing what it’s like to send tweets to hundreds of faceless followers, Levy writes:

Since I don’t know many in this mob, I try not to be personally revealing. Still, no matter how innocuous your individual tweets, the aggregate ends up being the foundation of a scary-deep self-portrait. It’s like a psychographic version of strip poker—I’m disrobing, 140 characters at a time.

Though he never names it, what Levy is really talking about here is shame. And the shame comes from something deeper than just self-exposure, though that’s certainly part of it. There’s an arrogance to sharing the details of one’s life in public with strangers – it’s the arrogance of power, the assumption that such details somehow deserve to be broadly aired. And as for the people, those strangers, on the receiving end of the disclosures, they suffer, through their desire to hear the details, to hungrily listen in, a kind of debasement. At the risk of going too far, I’d argue that there’s a certain sadomasochistic quality to the exchange (it’s a variation on the exchange that takes place between celebrity and fan). And I’m pretty sure that Levy’s remorse comes from his realization, conscious or not, that he is, in a very subtle but nonetheless real way, displaying an undeserved and unappetizing arrogance while also contributing to the debasement of others.

The power relationships in social networking, and their psychological and social consequences, are a subject that deserves more discussion. I’m glad Levy has focused some attention on the subject.

After I click the publish button for this post, I’m going to go wash my hands.

All hail the information triumvirate!

I was reading an interview today with Jorge Cauz, the president of Encyclopedia Britannica, in which he describes some of the Web 2.0-y tools that the company is preparing to roll out to enable readers to contribute to the encyclopedia’s content. (I’m on Britannica’s board of editorial advisors.) The interview touches, as you’d expect, on the great success that Wikipedia has achieved on the Web and, in particular, on its ever-increasing dominance of Google search results. Cauz calls the tie between Wikipedia and Google “the most symbiotic relationship happening out there” – and I think he’s right.

Cauz’s remark reminded me that it’s been some time since I updated my informal survey of Wikipedia’s ranking on Google. A couple of years ago, I plucked from my brain, in as random a fashion as I could manage, ten topics from a range of knowledge domains: World War II, Israel, George Washington, Genome, Agriculture, Herman Melville, Internet, Magna Carta, Evolution, Epilepsy. I then googled each one to see where Wikipedia’s article on the topic would rank.
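For what it’s worth, the rank-checking step is simple enough to sketch in a few lines of Python. In practice I just eyeballed the Google results pages, so the `wikipedia_rank` function and the sample result list below are illustrative assumptions, not the actual method used:

```python
from urllib.parse import urlparse

def wikipedia_rank(result_urls):
    """Return the 1-based position of the first Wikipedia result
    in an ordered list of search-result URLs, or None if Wikipedia
    doesn't appear at all."""
    for position, url in enumerate(result_urls, start=1):
        host = urlparse(url).hostname or ""
        # Match wikipedia.org and any language subdomain (en., fr., ...)
        if host == "wikipedia.org" or host.endswith(".wikipedia.org"):
            return position
    return None

# Hypothetical result list for the query "Magna Carta"
sample = [
    "https://www.example.com/history/magna-carta",
    "https://en.wikipedia.org/wiki/Magna_Carta",
    "https://www.example.org/magna-carta-text",
]
print(wikipedia_rank(sample))  # → 2
```

Run against each topic’s result list, a function like this would reproduce the rankings tallied below.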

I first did the searches on August 10, 2006. The results showed that Wikipedia did indeed hold a strong position for each of the ten subjects:

World War II: #1

Israel: #1

George Washington: #4

Genome: #9

Agriculture: #6

Herman Melville: #3

Internet: #5

Magna Carta: #2

Evolution: #3

Epilepsy: #6

I next did the searches on December 14, 2007, and found that Wikipedia’s dominance of Google searches had, over the course of just a year and a half, grown dramatically:

World War II: #1

Israel: #1

George Washington: #2

Genome: #1

Agriculture: #1

Herman Melville: #1

Internet: #1

Magna Carta: #1

Evolution: #1

Epilepsy: #3

Today, another year having passed, I did the searches again. And guess what:

World War II: #1

Israel: #1

George Washington: #1

Genome: #1

Agriculture: #1

Herman Melville: #1

Internet: #1

Magna Carta: #1

Evolution: #1

Epilepsy: #1

Yes, it’s a clean sweep for Wikipedia.

The first thing to be said is: Congratulations, Wikipedians. You rule. Seriously, it’s a remarkable achievement. Who would have thought that a rag-tag band of anonymous volunteers could achieve what amounts to hegemony over the results of the most popular search engine, at least when it comes to searches for common topics?

The next thing to be said is: what we seem to have here is evidence of a fundamental failure of the Web as an information-delivery service. Three things have happened, in a blink of history’s eye: (1) a single medium, the Web, has come to dominate the storage and supply of information, (2) a single search engine, Google, has come to dominate the navigation of that medium, and (3) a single information source, Wikipedia, has come to dominate the results served up by that search engine. Even if you adore the Web, Google, and Wikipedia – and I admit there’s much to adore – you have to wonder if the transformation of the Net from a radically heterogeneous information source to a radically homogeneous one is a good thing. Is culture best served by an information triumvirate?

It’s hard to imagine that Wikipedia articles are actually the very best source of information for all of the many thousands of topics on which they now appear as the top Google search result. What’s much more likely is that the Web, through its links, and Google, through its search algorithms, have inadvertently set into motion a very strong feedback loop that amplifies popularity and, in the end, leads us all, lemminglike, down the same well-trod path – the path of least resistance. You might call this the triumph of the wisdom of the crowd. I would suggest that it would be more accurately described as the triumph of the wisdom of the mob. The former sounds benign; the latter, less so.

UPDATE: Interestingly, Britannica and Wikipedia seem to be headed toward a convergence in their editorial rules and regulations. After Wikipedia erroneously declared both Ted Kennedy and Robert Byrd dead on Inauguration Day, the Register noted that an embarrassed Jimmy Wales intensified his push to get the Wikipedians to adopt a policy of Flagged Revisions, which would require edits of sensitive articles, including those on living persons, to be vetted by editors before being incorporated into the Wikipedia site. (In what may be a preview of Wikipedia’s future, the Flagged Revisions policy has already been adopted by the German Wikipedia for all articles.) Such a move would, of course, represent a continuation of Wikipedia’s ongoing tightening of editorial controls over its content.