The YouTube elite

As Om Malik reports, YouTube is splitting its much vaunted “community” into two tiers: a handful of stars who get paid for their work, and a great mass of unpaid volunteers. Malik quotes YouTube executive Jamie Byrne: “A select group of content creators will get promotion on the YouTube platform, and we will help them monetize their content … We want to ensure that these talented people can start making a living off their efforts.”

“A select group”? “These talented people”? So much for the myth of the social collective.

Last July, I entered into a wager with Yochai Benkler, the Yale law professor who wrote The Wealth of Networks. Benkler argues that the Internet is enabling a new “social production” system that does not “rely on either the price system or a managerial structure for coordination.” The shift away from paid, professional labor, he says, will bring “a quite basic transformation in the world around us, and how we act, alone and in concert with others, to shape our own understanding of the world we occupy and that of others with whom we share it.”

I argued that “the reason ‘social media’ has existed outside the price system up until now is simply that a market hadn’t yet emerged for this new kind of labor. We weren’t yet able to assign a value – in monetary terms – to what these workers were doing; we weren’t even able to draw distinctions between what they were contributing. We couldn’t see the talent for the crowd. Now, though, the amateurs are being sorted according to their individual skills, calculations as to the monetary value of those skills are starting to be made, and a market appears to be taking shape. As buyers and sellers come into this market, we’ll see whether large-scale social media can in fact survive outside the price system, or whether it’s fated to be subsumed into professional media.”

In a comment, Benkler wrote:

I predict that the major systems [of Internet production] will be primarily peer-based … It is just too simplistic to think that if you add money, the really good participants will come and do the work as well as, or better than, the parallel social processes. The reason is that the power of the major sites comes from combining large-scale contributions from heterogeneous participants, with heterogeneous motivations. Pointing to the 80/20 rule on contributions misses the dynamic that comes from being part of a large community and a recognized leader or major contributors in it, for those at the top, and misses the importance of framing this as a non-priced social process. Adding money alters the overall relationship. It makes some people “professionals,” and renders other participants, “suckers.” It is not impossible to mix paid and unpaid participants, as we see in free and open source software and even to a very limited extent in Wikipedia. It is just hard, and requires a cultural form that is definitely not “now at long last we can tell who’s worth something and pay them, while everyone else is just worthless.”

With YouTube’s move, we have a good opportunity to see whether “the really good participants” are motivated by fellow-feeling and prefer to operate in a “non-priced social process” or whether, in fact, they’re more than happy to enter “the price system” and earn some scratch.

YouTube itself doesn’t seem to be under any illusion that its community operates outside the price system. In announcing that it would begin rewarding its “most popular and prolific original content creators” with a bit of the green stuff, it happily dangled the carrot of compensation in front of the rest of its contributors: “So now that you’ve read this, you’re probably wondering, ‘How can I get in on the action?’ This is only available to the initial participants. But if you create original content, have built and maintained an audience on YouTube, and think you might qualify for this program based on what’s above, you can express interest on our partnership lead form. We hope that this program inspires people to keep creating original videos, building audiences and engaging with the YouTube community.” Translation: money talks.

Needless to say, I’m pretty sure that “talented people” will demand compensation (particularly when they see that a site owner – Google, in YouTube’s case – is making good money off their work). That doesn’t mean that there won’t be a lot of people who contribute their work for free (or for a pittance) to gain attention or feel part of a community or whatever. It just means that the price system will in most cases win, and that the exceptions – Wikipedia, notably – will be exceptions. Indeed, in the vast majority of cases even the masses of unpaid volunteers will work within the price system. While the stars make good money, the masses will simply donate the economic value of their work to the site owner. They’ll do that because, in isolation, their contributions have little economic value. For the successful site owner, however, all those tiny contributions, once aggregated, can turn into a large pile of cash.

Microsoft eyeing Yahoo

Microsoft has hired Goldman Sachs to help it acquire Yahoo, the New York Post reports today. Writes the paper:

Stung by the loss of Internet advertising firm DoubleClick to Google last month, Microsoft has intensified its pursuit of a deal with Yahoo!, asking the company to re-enter formal negotiations … The new approach follows an offer Microsoft made to acquire Yahoo! a few months ago, sources said. But Yahoo! spurned the advances of the Redmond, Wash.-based software giant. Wall Street sources put a roughly $50 billion price tag on Yahoo!.

Buying Yahoo would represent a hugely risky bet for Microsoft, as both companies have been struggling with the management of their internet businesses, losing search and advertising share to Google. Combining two weak performers rarely produces a strong performer.

Henry Blodget suggests that, should the companies merge, Microsoft should immediately bundle up its Yahoo-MSN web properties and spin them off. That not only seems unlikely (as Blodget notes), but would also undermine the long-term rationale of a merger. For players like Google, Microsoft, and Yahoo, content and services – the media business and the software business – are becoming inextricably intertwined, all the way from the underlying data-center infrastructure to the point of consumption. Microsoft has come to believe, for instance, that advertising will be central to the software business in the future. It’s not going to spin off its ad networks or search functions.

Nevertheless, the odds against such a merger paying off are high. But maybe Microsoft, despite downplaying its rivalry with Google, is starting to feel desperate. And maybe Yahoo is, too.

UPDATE: The Wall Street Journal, following up on the Post report, has more details. It notes the organizational upheavals that might follow a merger:

Top Yahoo executives could be a big obstacle to any deal. Co-founder Jerry Yang, for one, has a reputation for disliking Microsoft and avoids using Microsoft products, says one person familiar with the matter. Top Yahoo staff might leave if Microsoft acquired the company and triggered a vesting of their Yahoo options.

UPDATE: Scott Rosenberg posits that a Microsoft-Yahoo merger would be a replay of the AOL-Time Warner deal, signaling, among other things, an impending market top. Things do seem to be getting frothy out there.

Rough Type gains geek cred

Computerworld has named Rough Type one of the “top 15 geek blog sites,” which I’m pretty sure is a compliment. Here’s the full list:

1. Lifehacker
2. IT Toolbox Blogs
3. Valleywag
4. Kotaku
5. Danger Room
6. Gizmodo
7. O’Reilly Radar
8. Techdirt
9. Groklaw
10. Hack a Day
11. Engadget
12. Feedster
13. Forever Geek
14. Rough Type
15. Smorgasbord

Elegy for the photojournalist

Andrew Brown has an article in today’s Guardian about how three technological developments – newspapers’ shift from using black-and-white photographs to using color ones, the rise of digital photography, and the arrival of online amateur photo-sharing services like Flickr – have conspired to rob many professional photographers of their livelihoods. The phenomenon Brown describes is a good example of the interplay between technology and economics and how it can influence both the labor market and what might be called culture creation.

I was reminded of a blog post that Dan Gillmor wrote a few months back about the “inevitable” shift in the economics of photography – for penny-pinching publishers, the allure of free, serviceable amateur photos is irresistible – that is leading to the decline, if not the demise, of professional photojournalism. While Gillmor voiced some regret about the trend, he was in general happy about the way amateurs are elbowing the pros out of their jobs:

Is it so sad that the professionals will have more trouble making a living this way in coming years? To them, it must be — and I have friends in the business, which makes this painful to write in some ways. To the rest of us, as long as we get the trustworthy news we need, the trend is more positive … The photojournalist’s job may be history before long. But photojournalism has never been more important, or more widespread.

I would agree with Gillmor that this trend seems inevitable, but I’m not so sanguine about its effects. It’s not that I have anything against amateur photographers (being one myself); it’s that I think we’ll find – are finding already, in fact – that while amateur work may be an adequate economic substitute for professional work, there are things that pros can accomplish that amateurs cannot. We see in the decline of professional photojournalism how the Internet’s “abundance” can end up constricting our choices as well as expanding them.

Amazon re-prices S3

For any utility, profitability hinges on using your capital assets – your installed capacity – as efficiently as possible, and the way you do that is through sophisticated pricing schedules. In essence, you want to reward those customers whose usage patterns allow you to use your installed capacity efficiently (by cutting their prices) while penalizing those customers whose usage patterns undermine your ability to use your installed capacity efficiently (by raising their prices). If you do this effectively, you get the best possible return on every dollar of capital you invest in infrastructure and, as you grow, you get more profitable. If you do it poorly, you become less profitable as you grow and, ultimately, you croak.

By all accounts, Amazon Web Services’ path-breaking computing utilities, particularly its S3 storage utility, are fabulously popular. But they don’t yet seem to be profitable, and as Amazon CEO Jeff Bezos recently disclosed, they are now capacity-constrained. In other words, to continue to grow Amazon is having to expand its installed capacity through investments in data centers, drives, processors, bandwidth, and other plant and equipment. In order to become profitable as it makes those capital investments, it has to begin to more aggressively shape the way its services are used by customers. It can no longer treat all customers as equals.

In this light, Amazon’s original flat-rate pricing for its utility services, while having the advantage of simplicity, becomes unsustainable. Electric utilities, to take an earlier example, started off with flat-rate pricing, but they only became hugely successful when they began to customize their pricing schedules to the usage patterns of individual customers. So it’s no surprise that Amazon has announced that it will abandon its flat-rate pricing schedule for S3 on June 1 and introduce a more complex pricing schedule with tiered fees for bandwidth usage and a new fee for the number of requests made on the system. (Storage fees themselves will remain fixed – for the time being.) Amazon puts a customer-friendly spin on the change:

With Amazon S3 recently celebrating its one year birthday, we took an in-depth look at how developers were using the service, and explored whether there were opportunities to further lower costs for our customers. The primary area our customers had asked us to investigate was whether we could charge less for bandwidth.

There are two primary costs associated with uploading and downloading files: the cost of the bandwidth itself, and the fixed cost of processing a request. Consistent with our cost-following pricing philosophy, we determined that the best solution for our customers, overall, is to equitably charge for the resources being used – and therefore disaggregate request costs from bandwidth costs.

Making this change will allow us to offer lower bandwidth rates for all of our customers. In addition, we’re implementing volume pricing for bandwidth, so that as our customers’ businesses grow and help us achieve further economies of scale, they benefit by receiving even lower bandwidth rates. Finally, this means that we will be introducing a small request-based charge for each time a request is made to the service.

The end result is an overall price reduction for the vast majority of our customers. If this new pricing had been applied to customers’ March 2007 usage, 75% of customers would have seen their bill decrease, while an additional 11% would have seen an increase of less than 10%. Only 14% of customers would have experienced an increase of greater than 10%.

That’s all well and good, but the real reason for the pricing change is to shift usage patterns to Amazon’s benefit. It’s true, though, that ultimately the shift in usage patterns will generate broad benefits to customers, because the more efficiently Amazon uses its installed capacity, the lower it will be able to push its prices. (When a utility is in its early growth phase, it makes more money by cutting prices than by raising them – if it’s well managed.)

In the immediate term, though, some users, particularly those using S3 to store large quantities of fairly large files, will make out well, while other users, particularly those using the service as a web server for lots of small files, will get a whack to the wallet. You can see the contrast by reading, on the one hand, the response of SmugMug CEO Don MacAskill (a beneficiary) and, on the other, the posts on Amazon Web Services’ developer site from users who will suffer.
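The divergence between those two kinds of users is easy to see with a back-of-the-envelope calculation. The sketch below models a bill under a tiered bandwidth schedule plus a per-request fee; the rates are illustrative assumptions, not Amazon’s published prices. Two customers move the same amount of data, but one serves a few large files and the other serves millions of tiny ones:

```python
# Hypothetical S3-style bill: tiered bandwidth plus a per-request fee.
# All rates here are illustrative assumptions, not Amazon's actual prices.

TIERS = [          # (GB in tier, price per GB) - rates fall as volume grows
    (10_000, 0.18),
    (40_000, 0.16),
    (float("inf"), 0.13),
]
PER_REQUEST = 0.00001   # assumed flat fee charged on every request

def bandwidth_cost(gb):
    """Walk down the tiers, charging each slice of usage at its own rate."""
    cost, remaining = 0.0, gb
    for tier_gb, rate in TIERS:
        slice_gb = min(remaining, tier_gb)
        cost += slice_gb * rate
        remaining -= slice_gb
        if remaining <= 0:
            break
    return cost

def monthly_bill(gb_transferred, requests):
    return bandwidth_cost(gb_transferred) + requests * PER_REQUEST

# Two customers transferring the same 1,000 GB per month:
photo_archive = monthly_bill(1_000, requests=20_000)        # a few large files
thumbnail_host = monthly_bill(1_000, requests=50_000_000)   # millions of tiny files
```

Under these assumed numbers, the archive customer’s bill is almost entirely bandwidth, while the thumbnail host pays a hefty surcharge for its tens of millions of requests – exactly the kind of load that taxes a utility’s installed capacity out of proportion to the bytes moved.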

The big picture is simple: Amazon’s success in utility computing is forcing it to become more sophisticated as a utility operator and that inevitably means drawing distinctions between users and reflecting those distinctions in variable pricing. S3 may only be a year old, but it’s already growing up.

Google builds in Benelux

Google’s capital spending spree is not, needless to say, limited to the U.S. The company operates dozens of data centers around the world, many in secret locations, and it is expanding overseas as aggressively as it is at home. Google already reportedly runs a big server farm in the town of Groningen in the Netherlands, and yesterday it announced that it is expanding its Benelux footprint by building a new center in Saint-Ghislain, Belgium. An article in the Belgian paper Le Soir says the company will invest between 250 and 300 million euros in the facility. Construction is expected to begin at the start of the summer and be completed next year. Two of the attractions of the site, according to a Google executive, are the close proximity of a canal (a source of water for cooling servers) and the availability of a rich internet connection.

Le Soir, by the way, was one of the newspapers that sued Google for including its content in the Belgian version of Google News, leading a Belgian court to rule earlier this year that Google violated copyright laws. Clearly, Google doesn’t hold a grudge against the country – at least not when it comes to infrastructure.

Microsoft is dead in theory

A few days ago, Paul Graham proclaimed, “Microsoft is dead.” He later explained that what he really meant was, “Microsoft doesn’t matter.”

But whether it’s dead or just irrelevant, the company remains an extraordinarily healthy economic organism. Its latest quarterly results, released late yesterday, blew past analysts’ expectations, thanks in large part to unexpectedly strong demand for the much-maligned Vista operating system as well as the new version of Office. The stock popped in response to the numbers.

To put Microsoft’s results in context, it’s useful to compare them to those of the juggernaut that is Apple Inc. Apple’s sales in the last quarter soared 21% over year-earlier levels, rising from $4.4 billion to $5.3 billion. It was, as headline writers put it, a “blowout” quarter. But Apple’s growth pales in comparison to Microsoft’s. Microsoft, a much larger company than Apple, increased its sales by 32% in the quarter, from $10.9 billion to $14.4 billion. It’s true that Microsoft’s first-quarter numbers were goosed by deferred revenue from pre-sales of Vista in the prior quarter, but nevertheless Microsoft is growing at a remarkably robust pace for a company its size.

Of course, even Microsoft’s growth pales in comparison to Google’s, which posted a 66% rise in sales in the quarter, from $1.5 billion to $2.5 billion. But Google is still, of course, a much smaller business, and it’s worth noting that the $1 billion that it added to its sales is a fraction of the $3.5 billion that Microsoft added. To put it another way, the increase in Microsoft’s sales during the quarter is greater than Google’s total sales – by far ($3.5 billion vs. $2.5 billion).

Many writers, including this one, have argued that Microsoft is facing its biggest challenge ever, as the importance of its stronghold – the PC desktop – diminishes. And many writers, including this one, have also argued that the company’s continuing success may prove the biggest obstacle to its ability to adapt to the changing world of computers and software. But let’s face it: these are theories about an unknowable future. Right now, Microsoft remains a formidable company and competitor, with a whole lot of cash at its disposal. Take it for dead, or irrelevant, at your own risk.