Monthly Archives: July 2006

A reality check for online news

The idea that we’re in the midst of a dramatic shift away from traditional print and broadcast news media and toward online news media has become a commonplace, particularly in web circles. Newspapers, it’s widely assumed, are doomed to fade away because the young prefer getting their news from the Internet. Our Resident Philistine rehearsed this theme over the weekend, when he argued that the audience for newspapers, like the audience for Woody Allen movies, “is dying off.”

But there’s a little problem with this idea: It’s not true. Yesterday, the Pew Research Center for the People & the Press released its latest study of “news consumption” in the United States. It makes for fascinating reading. The picture it paints of the role of online news media is very different from the one that web pundits have been sketching.

First of all, after a short burst of explosive growth, demand for online news media has flattened. “The growth of the online news audience has slowed considerably since 2000,” Pew reports. Currently, 31% of Americans are going online for news three or more times a week. That’s up only slightly from 29% two years ago. By contrast, 40% of Americans report reading a newspaper daily, a number that’s down sharply from where it was ten years ago, but that’s been stable for the past four years (it was 41% in 2002).

The slowdown in the growth of online news media has been most pronounced, moreover, “among the very young, who are now somewhat less likely to go online for news than are people in their 40s.” Demographically, in other words, online news consumers tend to look a lot like offline news consumers:

As internet news has gone more mainstream, its audience has aged. Since 2000, nearly all of the growth among regular internet news users has occurred among those ages 25-64. By contrast, virtually the same percentage of 18-24 year-olds say they get news online at least three days a week as did so six years ago (30% now, 29% then). Currently, about as many people ages 50 to 64 regularly get news on the internet as do those in their late teens and early 20s.

If the audience for newspapers is dying off, so is the audience for online news.

The study also reveals that “the audience for online news is fairly broad, but not particularly deep”:

People who say they logged on for news yesterday spent 32 minutes, on average, getting the news online. That is significantly less than the average number of minutes that newspaper readers, radio news listeners, and TV news viewers spend with those sources. And while nearly half of all Americans (48%) spend at least 30 minutes getting news on television, just 9% spend that long getting news online.

The upshot is that online news appears to be not a replacement for traditional media but a supplement to it. The people who tend to use online sources are the same people who read newspapers and watch news shows on TV. They take a quick look at headlines online, but they continue to rely on traditional news sources for the details:

The web serves mostly as a supplement to other sources rather than a primary source of news. Those who use the web for news still spend more time getting news from other sources than they do getting news online. In addition, web news consumers emphasize speed and convenience over detail. Of the 23% who got news on the internet yesterday, only a minority visited newspaper websites. Instead, websites that include quick updates of major headlines, such as MSNBC, Yahoo, and CNN, dominate the web-news landscape.

The report is not good news for newspapers, but it does show that the reports of their imminent death have been exaggerated. The real division is not between the audience for online news and the audience for traditional news – they are the same audience. The real division is between the people who are interested in the news and the people who couldn’t care less. In fact, it looks very much like online news media are now merging with traditional news media, as the two come together in a symbiotic relationship to serve the same set of customers. They are not competing with each other so much as they are competing together against nonconsumption.

Benkler on Calacanis’s wallet

A few days ago, I wrote a post about Yochai Benkler’s contention, in his book The Wealth of Networks, that we are today seeing the arrival of large-scale systems of “social production” that “are decentralized but do not rely on either the price system or a managerial structure for coordination.” I argued that the leading social production projects are already adopting management structures – some, indeed, have had them from the start – and that it’s likely they’ll come to embrace the price system as well. I pointed to Jason Calacanis’s offer of payment to “social bookmarkers” as an early example of the emergence of a price-based talent market. My post ended with a question: “Which is mightier – Benkler’s dream or Calacanis’s wallet?”

Benkler posted a comment, focused on the price system side of the question, that is interesting and deserves to be elevated to full post-hood. So here it is (and please note that I have no intention of paying the professor for his contribution to Rough Type, which is run as an anarcho-syndicalist organization of one):

I’m happy to accept this wager as a measure of the quality of my predictions about the long term sustainability of commons-based peer production. The shape of the wager, however, should be clear. We could decide to appoint between one and three people who, on some date certain – let’s say two years from now, on August 1st 2008 – survey the web or blogosphere, and seek out the most influential sites in some major category: for example, relevance and filtration (like Digg); or visual images (like Flickr). And they will then decide whether they are peer production processes or whether they are price-incentivized systems. While it is possible that there will be a price-based player there, I predict that the major systems will be primarily peer-based. Look at what happened to Mojo Nation – which tried to reward participants in a swarm peer file distribution system with “mojo” convertible into goodies – as compared to BitTorrent, which did not. Compare the level of use and success of pay-per-cycle distributed computing sites like Gomez Performance Networks or Capacity Calibration Networks, as compared to the socially engaged platforms like SETI@Home or Folding@Home. It is just too simplistic to think that if you add money, the really good participants will come and do the work as well as, or better than, the parallel social processes.

The reason is that the power of the major sites comes from combining large-scale contributions from heterogeneous participants, with heterogeneous motivations. Pointing to the 80/20 rule on contributions misses the dynamic that comes from being part of a large community and a recognized leader or major contributor in it, for those at the top, and misses the importance of framing this as a non-priced social process. Adding money alters the overall relationship. It makes some people “professionals,” and renders other participants “suckers.” It is not impossible to mix paid and unpaid participants, as we see in free and open source software and even to a very limited extent in Wikipedia. It is just hard, and requires a cultural form that is definitely not “now at long last we can tell who’s worth something and pay them, while everyone else is just worthless.” What Calacanis is doing now with his posts about the top contributors to Digg is trying to alter the cultural interpretation of what they are doing: from leaders in an engaged community, to suckers who are being taken for a ride by Rose. Maybe he will succeed in raining on Digg’s parade, though I doubt it, but that does not mean that he will succeed in building an alternative sustained social process of peer production, or in replacing peer production with a purely paid service. Once you frame the people who aren’t getting paid as poor sods being taken for a ride, for example, the best you can hope for is that some of the “leaders” elsewhere will come and become your low-paid employees (after all, what is $1,000 a month relative to the millions Calacanis would make if his plan in fact succeeds? At that point, the leaders are no longer leaders of a community, and they turn out to be suckers after all, working for a pittance, comparatively speaking.)

There is an abiding skepticism, born of many years in the industrial age, about the sustainability and plausibility of nonmarket-based cooperation and productive collaboration. We have now, on the other hand, almost two decades of literature in experimental economics, game theory, anthropology, political science field studies, that shows that cooperation in fact does happen much more often than the standard economics textbooks predict, and that under certain structural conditions non-price-based production is extraordinarily robust. The same literature also suggests that there is crowding-out, or displacement, between monetary and non-monetary motivations as well as between different institutional systems: social, as opposed to market, as opposed to state. It just is not so easy to assume that because people behave productively in one framework (the social process of peer production that is Wikipedia, free and open source software, or Digg), you can take the same exact behavior, with the same exact set of people, and harness them to your goals by attaching a price to what previously they were doing in a social process. Anyone interested in the basic approach can look at my articles Coase’s Penguin, or Sharing Nicely, which include more of the underlying literature than does the book The Wealth of Networks, although some of the materials are there in chapter 4. The problem is not, in any event, a simple or solved one, and I, among many others, continue to work on it.

On another, less important note, of course it is “too soon to tell for sure.” “Knowing for sure” is the sure sign of religion, not analysis. I just want to point out that the particular example you use, the American Broadcast System, however, is very far from accurate. There is a very brief overview of the history of the displacement of amateurs by the networks over the course of the 1920s in an oldish piece of mine called “Overcoming Agoraphobia”. The short of the story is that the Department of the Navy more or less forced British Marconi to sell its American assets to an American company, thereby creating RCA in partial alliance with GE. GE, RCA, AT&T, and Westinghouse then created a patent pool which divided the market in radio receivers and transmitters in 1920-21, and spent the next five years jockeying within this market to try to prevent amateurs and competitive producers from competing. Throughout this period they maneuvered with Herbert Hoover, then Secretary of Commerce, to regulate the airwaves so as to shunt the amateurs onto what were thought unusable short waves, and to crowd all the nonprofit and almost all the non-patent-pool stations into a single narrow channel, while reserving separate channel allocations for stations that could afford expensive broadcast stations and live performers. Amateurs were prohibited from broadcasting news, or recorded music, etc. To say that this process represents an instance in which “that nonprofessional network was soon displaced by a smaller set of commercial radio stations that were better able to fulfill the desires of the listening public” is, shall we say, not the only way to characterize that story.

Lee Gomes responds to Chris Anderson

Late yesterday, in an email exchange, I asked Lee Gomes what he thought of Chris Anderson’s response to his Wall Street Journal column on the Long Tail. (Which I discussed here.) In particular, I wanted to know whether he thought (as I do) that Anderson made a valid point in explaining why summing up sales in percentage terms might obfuscate the Long Tail effect. Gomes sent me a long and thoughtful reply, which he said I was free to post. In hopes of furthering the debate about the extent of the Long Tail phenomenon, I will do that:

Hi Nick:

Yes indeed, Chris has a very valid point in measuring head and tail the way he does. But so do I. I had that paragraph in my column to tell readers that saying that Amazon gets 25% of its revenue from its tail (everything past the 100,000 best-sellers) is just another way of saying that 2.7% of its titles represent 75% of its sales. (It has 3.7 million books.) I made it clear in the column that this was MY method, and that Chris did not approve of it. (By the way, it is simply NOT true that his economist told me my approach was wrong, as he says on his blog; the exact opposite occurred. I also spoke to the head of the statistics department at a UC college who said my approach was the more “natural” one.) Readers, though, are free to pick the method they like. Please note, though, that I at least clearly laid out the two approaches; nowhere in his book does Chris let on how small, as a percent of sales, his tail really is.
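Gomes’s two framings describe the same distribution from opposite ends, and it may help to see that they are arithmetically equivalent. A minimal sketch, using the round numbers from his column (3.7 million titles, a 100,000-title head, a 25% tail – these figures are his estimates, not Amazon disclosures):

```python
# Two framings of the same (assumed) Amazon numbers from the column.
total_titles = 3_700_000   # books Amazon carries, per Gomes
head_titles = 100_000      # the best-sellers a big store might stock
tail_revenue_share = 0.25  # Anderson's framing: 25% of revenue from the tail

# Anderson's framing: the tail is a quarter of revenue,
# so the head accounts for the remaining three-quarters.
head_revenue_share = 1 - tail_revenue_share            # 0.75

# Gomes's framing: what sliver of the catalog produces that 75%?
head_title_share = head_titles / total_titles          # ~0.027

print(f"{head_title_share:.1%} of titles -> {head_revenue_share:.0%} of sales")
# -> 2.7% of titles -> 75% of sales
```

Neither framing is more “correct”; they simply emphasize different ends of the same curve, which is exactly the dispute between the two writers.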

While I am at it, I’d like to correct an extremely serious misrepresentation Chris made at the end of his blog posting, to the effect that Anita Elberse of Harvard “urged” me not to characterize her work the way I did. This is manifestly false. Chris is either misremembering or deliberately conflating two separate issues. Prof. Elberse did indeed in an email remind me that the data she had for Netflix was under NDA, and I could thus not report it. But the comment had nothing to do with what Chris says it does. Let Prof. Elberse herself describe whether I got it right; below is the full text of an email she sent me after the story ran:

“I just read your article, and just wanted to thank you for being so careful in quoting me. I wish all journalists stayed this close to what was actually said! :-)

“You did beat me ‘to the market’ with your article, but I hope our academic article (which should be ready in a few weeks) will further clarify the long tail phenomenon (or lack thereof).”

She also posted a comment on Chris’ blog implying much the same thing, and correcting Chris on another matter; I urge you to read it.

More broadly, when you attend Mainstream Media Community College, you are taught that if you say something in your lede, you need to back it up. My lede was that “I don’t think things are changing as much as he does,” and I think I backed it up four different times.

1) While Chris seems to have repealed the “98 Percent Rule” in his interviews with me, he didn’t do as much in the book. This is how he begins the book, and any reader, after hearing the “Rule” described as “nearly universal,” would, if nothing else, assume that it was true of all the examples the book describes. Chris defended the fact that it’s not by noting to me that his book wasn’t titled “The 98 Percent Rule;” does this mean that any sentence without “Long Tail” in it can’t be assumed to be accurate? He also complains in his blog comments that I didn’t mention the 95% play rates at Netflix. But I wasn’t trying to show the “Rule” was NEVER true; he is the one who said it was “universal.”

2) The book’s showstopper notion about the possibility of the tail being greater than the head is a fine idea; it is just not happening anywhere in the world, despite the impression the book leaves. (Incidentally, I described the book’s treatment of this possibility in my column in the same future conditional tense as Chris described it in the book; I never said he said it was happening now, as Chris implies on his blog.) And nowhere in the book does Chris say that Netflix and Amazon won’t reach Long Tail nirvana for at least a decade – not by his definition of head and tail, and certainly not by my alternative one. I can’t believe readers wouldn’t want to have been told all this; they weren’t.

3) The notion that “hits matter less,” which is explicit in the book and implicit in the Long Tail idea, was rebutted just about everywhere I looked. The 90/10 rule was pretty constant. And iTunes looks like Billboard, not some paradise of niches.

4) Other economists are reaching very different conclusions from Chris. After I started asking Chris questions about his findings, he gave me the names of three economists who he said would back him up. One of them I ran out of time to call. One of them was Prof. Elberse. The third was MIT’s Erik Brynjolfsson, who told me that there indeed is a shift occurring as things move online, but it’s on the order of an 80/20 distribution moving to a 70/30 one. Had I had more space, I would have mentioned this as well, for it too is less earth-shattering than what the book leads readers to believe will occur.

Nick, I would never say that the Internet isn’t changing a lot of things, including perhaps consumption patterns. But in case I haven’t made myself clear, I don’t think it’s changing things as much as Chris does.

Lee

PS Feel free to post any or all of this.

Amusements

The Onion is all over Wikipedia’s celebration of America’s 750th birthday.

The first episode of the long-awaited Resignation Gang miniseries goes live, and Blogebrity provides a synopsis for those who find podcasts incredibly tedious.

John Gruber talks to the Magic 8-Ball about Zune.

Is video the new fiber?

It’s become something of a truism that running a web business these days doesn’t require much capital. Computing is cheap, storage is cheap, bandwidth is cheap – and, to boot, you can get your content for free (unless you’re Jason Calacanis).

Like most truisms, this one’s pretty true. But it’s not entirely true. There are exceptions – web businesses that require a whole lot of capital – and the exceptions are interesting because they mark the stress points in the entrepreneurial economy, the places where you can most clearly see the current balance of fear and greed. Back in the dot-com days, the big stress point was fiber-optic cable. Huge amounts of money were dumped into the ground, as companies built high-speed networks designed to handle an expected tidal wave of demand. But the surf didn’t come in – at least not quickly enough – and all that cash stayed in the ground. The overhang of overcapacity sank a lot of companies, ultimately leading to the repricing, at pennies on the dollar, of that capacity, which in turn is fueling, in part, the current resurgence of investment in web businesses. All’s well that ends well – unless it was your money that got buried.

Today, video may be emerging as the new stress point – the new fiber. Running a big online video business is anything but inexpensive. Storage and bandwidth may be relatively cheap, but if you consume enough of them, as video businesses do, they start to get very expensive. Yet we’re now seeing all sorts of companies make those investments, from deep-pocketed big guys like Google and Microsoft and Apple to not-quite-so-deep-pocketed big guys like Amazon to startups like YouTube, Grouper, Motionbox and their brethren. What’s not known is how much profit is going to be pumped out of this business, in its various forms, in the short to medium term. What is known is that there’s a heck of a lot of redundant capacity being built up, and unless you believe that all the companies rushing into this market are going to be successful in it, you have to assume that there’s going to end up being more capacity than demand and, hence, a painful accounting for some of the investors. How painful? We don’t know. Maybe it’ll be a toothache, maybe it’ll be a leg run over by a lawnmower. Faith-based investing is always risky, though, and it gets really risky when the business is a capital-intensive one.

How large is the long tail?

In his column in the Wall Street Journal today, Lee Gomes tries to debunk Chris Anderson’s Long Tail theory, and on his Long Tail blog today, Anderson tries to debunk Gomes’s debunking. It’s an interesting – and important – debate, and I find myself agreeing with both gentlemen.

Gomes’s main point is that the Long Tail has been oversold – that it’s not as long or as important as it’s been made out to be. He writes:

In the book’s main sections, Mr. Anderson writes that as things move online, sales of misses will increase – so much so that they can equal or exceed the sales of hits. The latter is the book’s showstopper proposition; it’s mentioned twice on the book’s jacket.

I was thus a little surprised when Mr. Anderson told me that he didn’t have any examples of this actually occurring. At Netflix and Amazon, two of his biggest case studies, misses won’t outsell hits for at least another decade, he said. None of these qualifications are in the book.

Anderson responds:

First, the book doesn’t claim that there are any cases where sales of products not available in the dominant bricks-and-mortar retailer in a sector (my definition of “tail”) are larger than the sales of products that are available in that retailer (“head”).

What it does say is that the current data at Rhapsody, Netflix and Amazon show that the tail amounts to between 21% and 40% of the market, with the head accounting for the rest. Although I don’t discuss this in detail in the book, in the case of Rhapsody, the trend data suggests that the tail (as defined above) actually will equal the head within five years.

I have no doubt that the Internet has created a Long Tail effect, making it easier for customers to find and buy rare or specialized products. Anderson’s book provides pretty compelling evidence that that’s true. And it’s important. But I’m still not quite sure if it’s really important or just mildly important. Some of my doubts stem from a crucial statistical change, relating to Amazon’s sales, that’s happened between the publication of Anderson’s original Long Tail article in Wired in October 2004 and the publication of the book. In the Wired article, Anderson wrote:

What’s really amazing about the Long Tail is the sheer size of it. Combine enough nonhits on the Long Tail and you’ve got a market bigger than the hits. Take books: The average Barnes & Noble carries 130,000 titles. Yet more than half of Amazon’s book sales come from outside its top 130,000 titles. Consider the implication: If the Amazon statistics are any guide, the market for books that are not even sold in the average bookstore is larger than the market for those that are. In other words, the potential book market may be twice as big as it appears to be, if only we can get over the economics of scarcity. Venture capitalist and former music industry consultant Kevin Laws puts it this way: “The biggest money is in the smallest sales.”

That claim becomes much more modest in the book:

What’s truly amazing about the Long Tail is the sheer size of it. Again, if you combine enough of the non-hits, you’ve actually established a market that rivals the hits. Take books: The average Borders carries around 100,000 titles. Yet about a quarter of Amazon’s book sales come from outside its top 100,000 titles. Consider the implication: If the Amazon statistics are any guide, the market for books that are not even sold in the average bookstore is already a third the size of the existing market – and what’s more, it’s growing quickly. If these growth trends continue, the potential book market may be half again as big as it appears to be, if only we can get over the economics of scarcity. Venture capitalist and former music industry consultant Kevin Laws puts it this way: “The biggest money is in the smallest sales.” [pp. 22-23]

There’s a very big difference between “more than half” of sales lying outside the top 130,000 sellers and “about a quarter” of sales lying outside the top 100,000 sellers. (Kevin Laws was right in the context of the article, but he’s wrong in the context of the book.) Anyone who’s followed Anderson’s blog knows the story of this change. To his credit, he carefully documented the statistical issues in posts like this one. The overestimation of the size of the Long Tail in the article was due to an error in an academic study of Amazon’s sales that Anderson relied on. (Amazon does not disclose detailed breakdowns of its sales.) The error’s been corrected in the book – with the estimated size of the current Long Tail scaled back – but it still influences perceptions of the Long Tail’s impact, and is even, as Gomes points out, echoed in the book jacket copy. (Please note that I don’t hold authors responsible for what’s on a book jacket.)
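The difference between the two figures is easy to quantify. If a quarter of Amazon’s sales fall outside the in-store catalog, that tail is 25/75 – a third – the size of the head; under the article’s “more than half” figure, the tail would have outsold the head outright. A quick sketch of that conversion (the 0.55 stand-in for “more than half” is my illustrative assumption):

```python
def tail_vs_head(tail_share):
    """Given the tail's share of total sales, return its size
    relative to the head (the titles carried in a big bookstore)."""
    return tail_share / (1 - tail_share)

# Book's figure: about 25% of sales outside the top 100,000 titles.
print(tail_vs_head(0.25))   # -> 0.333...: the tail is a third the size of the head

# Wired article's (erroneous) figure: "more than half" of sales in the tail.
print(tail_vs_head(0.55))   # -> ~1.22: the tail is larger than the head
```

That is why the book’s revised statistic doesn’t just shrink the Long Tail a bit; it moves it from “bigger than the hits” to a substantial-but-secondary share of the market.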

But the 25% is not the whole story, either. To get a clear sense of the impact of the Net on the Long Tail, you’d need another statistic: Before the Internet came along, what percentage of total book sales lay outside the 100,000 titles stocked in a typical large bookstore? There have always been specialized bookstores, selling everything from religious and spiritual books to textbooks to foreign-language books to used and out-of-print books to poetry books (though their ranks have been pruned by Amazon and other online sellers). And there have always been small presses – literary, academic and technical – selling books directly, through the mail. And you’ve always been able to go to a bookstore and order a book that it didn’t carry on its shelves. How much of the Long Tail of books represents old demand moving through a new channel, and how much represents new demand? Only by knowing how big the old Long Tail was can you understand how much larger it’s grown with the Internet.

My guess – and it’s only a guess – is that the Internet Long Tail is substantially larger than the pre-Internet Long Tail, but that, in its current form, it amounts to something less than a monumental change in the market. The important question, then, is this: Is the Long Tail going to get a lot bigger, or has most of the growth already happened? Although Anderson and Gomes probably have very different views on that question – Anderson sees evidence that the tail is expanding, while Gomes sees evidence that it isn’t – they would both, I think, agree on its fundamental importance. Anderson’s Long Tail theory explains a lot about how the Internet has influenced markets, but the true extent of the Long Tail and its impact remains to be seen.

JotSpot’s suite dreams

As the company’s name implies, JotSpot used to provide a fairly simple product: a web-based wiki application. Now, though, JotSpot is getting more ambitious. It’s transforming its wiki tool into a wiki platform – a Swiss Army Knife of office applications that run inside wiki pages. There’s word processing, spreadsheets, calendars, personal directories, even a photo gallery. JotSpot seems to be pinning its hopes on being a web version of Microsoft Office.

“With the new version of Jotspot,” CEO Joe Kraus tells Dan Farber, “we are bringing the metaphor of wikis to the productivity functions of an office [suite], sharing access, permissions and version control, and letting user[s] organize a site the way the[y] want.” Richard MacManus thinks it’s a good idea: “JotSpot is doing the right thing morphing their wiki application products into office tools, because this is tapping into a growing market for web-based office tools and will also push the boundaries of what office tools can be in the Web Office era.”

I’m not so sure. I agree that more office tools will come to reside on the web, but I’m not convinced that the best way to ensure the success of those tools is to turn them into mini-Offices. For one thing, it muddies what is at the moment the strongest selling point for web services: simplicity. Community-managed wikis can actually get pretty confusing pretty fast, as JotSpot admits on its blog: “When pages can be created almost anywhere, even the most meticulously gardened wikis can become confusing and hard to navigate, particularly for new users.” Adding in a welter of additional applications raises the complexity quotient significantly, undermining the appeal of the service. A confused user is a non-user.

But the bigger issue is a strategic one: Can mini-Offices survive in an Office world? To see the challenge that a company like JotSpot faces, just listen to how it’s positioning its new suite. “It has some of the familiarity and functionality of Office,” Kraus tells MacManus; “it’s wikis meets Microsoft Office.” On the JotSpot site, the company says its word processor is “just like Microsoft Word.” It says its spreadsheet application “feels just like Microsoft Excel but on the web!” All of which leads to a simple question: Why do I need stuff that’s like Microsoft Office when I already have Office?

JotSpot is, in other words, jumping into a market that is Microsoft’s to lose. As Microsoft continues to expand its own web functionality, adding a web services layer to Office and incorporating wiki functionality as well as other collaboration tools, it will have an enormous advantage. Its web services will be integrated from the get-go with the business world’s default productivity suite, Microsoft Office. It’s going to be awfully hard to compete head-on with Microsoft if your marketing pitch is that your product “has some of the familiarity and functionality of Office.”

JotSpot may end up wishing that it had focused on providing a simple but useful tool that complements Office rather than trying to compete directly with the beast. It’s going to take a heck of a lot more than a wiki metaphor to kill Office.

UPDATE: Richard MacManus suggests, in a response to this post, that JotSpot’s suite is in fact designed to be a complement to Microsoft Office, extending rather than imitating its capabilities. (And he provides a good quote from Joe Kraus along those lines.) Still, when I played around with JotSpot today, it felt to me more like an attempt to mimic Office functionality within a wiki than to extend Office functionality into a wiki (I hope that makes sense) – but I may well have missed something. As I wrote about Google Spreadsheets, I think the complement strategy is the way to go in this market, but that becomes harder when you’re positioning your product as a complete suite rather than a narrowly defined but valuable add-on. (I think, for business customers, it will be more compelling to have a wiki embedded in Office than to have Office embedded in a wiki.)

Also, MacManus quotes Joe Wilcox as arguing that Microsoft won’t launch a hosted Web version of Office unless forced to by a viable competitor. I don’t think Microsoft will launch a Web “version” of Office (i.e., a replacement for the desktop product) any time soon, but I’m pretty sure it will offer a set of hosted Web tools that form a kind of front-end for the desktop product (and require the purchase of the desktop product to use, or at least use fully). For some time, in other words, Office will be a hybrid, offering (assuming Microsoft pulls it off) rich Web functionality that’s tightly integrated with the traditional program. The ability to provide that integration is the great advantage Microsoft holds right now, and I would expect them to use it aggressively in competing with the Googles and JotSpots of the world.

Microsoft is vulnerable here – very much so – but I think its vulnerability lies in the potential long-run erosion of its ability to make a lot of money from Office, rather than in seeing Office displaced by a direct competitor. I largely agree with Dennis Howlett’s contention that many of the big traditional IT suppliers “will be rendered irrelevant in their current form” – even if he doesn’t think I agree with him.