Dealing with Google

It’s hard to get much insight into how Google goes about choosing locations for its data centers and negotiating deals. Usually, the company seals the lips of everyone involved with non-disclosure agreements. But in the wake of the search giant’s most recent deal to build a center in Pryor, Oklahoma – for which Google’s chief of global operations, Lloyd Taylor, received a rather blingy medallion from Oklahoma governor Brad Henry (see photo) – the head of the MidAmerica industrial park in which the data center complex will be constructed let slip a few details about the process. In an interview with the Tulsa Free Press, Sanders Mitchell described, among other things, how Google refused to disclose its identity until after the final contract was signed:

It was March 11, 2006 and Mitchell was basking in the glow of having just landed a major client in Gatorade when the call from Google came. After an initial period of disbelief, the MidAmerica people received a delegation from their mysterious suitor, a suitor that had some very specific needs but was light on such details as just who they were.

“For quite some time,” Mitchell recalls, “I had no confirmation as to who I was dealing with. I had it narrowed down quite a bit, but until the final contract was signed they (Google) wouldn’t admit who they were. It was an interesting time” …

“We never,” says a Google representative, “comment on who we’re talking to, who we’ve considered, who we’ve rejected. We feel that when we come to an agreement, that’s the time to make an announcement.”

So what were the lures that made Pryor so enticing? “We had all the things they needed,” says Mitchell. “We had 800 acres of prime land that they could use both for their initial data center and for whatever expansion they plan to make in the future. We have plenty of electricity, and we have plenty of water to cool their equipment if they have to generate their own electricity as sometimes happens when we get a power outage in Oklahoma.

“They haven’t really discussed this, but I think one of the things that made us attractive to Google was that we were ready to move on the spur of the moment. We had an 86,000 square foot building already in place, which we had built on speculation. That means they are going to be able to be up and running in a fraction of the time of any other place.

“We also had a lack of red tape I think they found a huge advantage. In other locations so many agencies have to sign off on a project that it can be a year or more before any real movement can be made. I don’t think the team at Google was willing to wait that long.”

So how did Google discover the industrial park in Pryor, Oklahoma? According to Lloyd Taylor, it found it by searching the Net.

What thoughts should I think?

The Financial Times reports on some revealing comments that Google CEO Eric Schmidt made to the press in London:

Asked how Google might look in five years’ time, Mr Schmidt said: “We are very early in the total information we have within Google. The algorithms will get better and we will get better at personalisation. The goal is to enable Google users to be able to ask the question such as ‘What shall I do tomorrow?’ and ‘What job shall I take?’ ”

That should make life a lot easier for all of us.

Happy Birthday, Cathedral & Bazaar

Yesterday, Tim O’Reilly noted that Eric Raymond’s book The Cathedral & the Bazaar, which O’Reilly’s firm published in 1999, has been listed as a favorite business book in a special section of U.S. News & World Report. What I haven’t seen anybody note, though, is that today happens to mark the tenth anniversary of the day Eric Raymond first presented his original “Cathedral & Bazaar” paper, at the 1997 International Linux Kongress in Würzburg, Germany.

So let me be the first (maybe) to say: Happy Birthday, Cathedral & Bazaar, and congratulations to Eric Raymond for writing such an influential work.

I discuss one aspect of the legacy of Raymond’s paper in an article in the new issue of Strategy & Business. I look in particular at how Raymond’s cathedral-and-bazaar metaphor has been widely applied beyond the world of software and how, in related fashion, the idea of “open source” has become a metaphor used to describe pretty much any sort of communal or peer-production means of creating goods or services.

O’Reilly’s post about Raymond’s book is a good example of the phenomenon. He writes:

People should give more thought to the straight line that connects open source and Web 2.0 … Open source developers were merely the canaries in the coal mine, the alpha geeks who told us something about what happens when a community adapts itself to the principles that drive the internet. Open source wasn’t about licensing or even about software. It was about viral distribution and marketing, network-enabled collaboration, low barriers to cooperation, and the wisdom of crowds.

I don’t have a problem with stretching metaphors, and I do think that Raymond’s cathedral-and-bazaar analogy is helpful in thinking about a lot of things, but it seems to me that claiming that open source “wasn’t about licensing or even about software” ends up throwing more darkness than light. I mean, open source was about software, wasn’t it? When we lose sight of its origins, the concept starts to become very fuzzy very fast. As many others have pointed out, software production has unique characteristics that make it particularly well suited to the peer-production model. When you apply the model elsewhere, you don’t get the same results. It’s important to maintain a distinction between the metaphor and the thing itself.

Google uncovers a million malware sites

Google’s program to identify internet sites that distribute computer viruses and other “malware” has so far uncovered one million web pages that are infecting the computers of unsuspecting surfers, report Panayiotis Mavrommatis and Niels Provos of the search company’s Anti-Malware Team. Most of the sites engaging in what the researchers term “drive-by downloads” are ordinary sites whose owners “are often unaware that their web servers have been compromised.”

Based on an analysis of a sample of sites, the Google researchers estimate that one in every thousand websites may be “malicious.”

The Google researchers also looked at the sources of the malware that is being distributed through drive-by downloads. They found that four countries seem to be responsible for “the majority of malware activity”: China, the United States, Germany, and Russia.

The findings were reported today on Google’s newly launched Online Security Blog, another sign of the company’s growing concern over the threat posed by the proliferation of malware and the compromised sites that distribute it.

Free AdSense

To competitors like Microsoft and Yahoo, Google must seem like a greased pig. You can see the damned thing running amok in your garden, but you can’t figure out a good way to get hold of it. The grease that Google has slathered on itself is mainly, I think, pricing, particularly the use of AdWords and other auctions to set the price of advertisements. In a rare but little-noticed moment of extreme candor during a November 2005 interview with Fred Vogelstein (which was only recently published), Google CEO Eric Schmidt noted the great competitive importance of the company’s auction pricing:

Schmidt: Another example of Sergey [Brin]’s observations is that our advertising network is very powerful because it’s quite resistant to certain competitive attacks.

Vogelstein: Such as?

Schmidt: Because it’s an auction market you cannot under-price it. This point is lost on many, many people.

The barrier that an auction presents to a price war is, as Schmidt implies, crucial to Google’s strategy – and doubtless a big source of frustration to rivals, particularly Microsoft. Through superior technology, superior foresight, and a generous helping of luck, Google has built up a dominant position in the extremely lucrative market for serving search, or contextual, ads – and that’s providing the beachhead and the cash for its aggressive expansion. One of the best ways to attack a competitor, particularly in a market like automated ad serving where the marginal costs of executing a transaction are basically zero, is by undercutting the competitor’s price. That strategy can be particularly effective if the competitor is much more dependent on the line of business in question than you are – the precise situation that currently obtains between Google and Microsoft in ad-serving. (Slash the price of ads, and Google suffers greatly, whereas Microsoft suffers hardly any immediate material damage.) Needless to say, Microsoft has used precisely this strategy with devastating effectiveness in the past.

But the auction model effectively removes this option because there are no fixed prices to undercut. Every price is set dynamically by the market and is hence outside the supplier’s control. You could subsidize customers’ purchases, by, for instance, providing them with credits to use in an auction, but that would simply distort the auction market, artificially inflate prices, and drive buyers away. The inability to cut prices makes it very hard – not impossible, but very hard – to unseat a dominant rival like Google. You have to fight with one arm, probably your best arm, tied behind your back.
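To see the mechanics behind Schmidt’s point, here is a minimal sketch of a sealed-bid second-price auction, the family of mechanisms Google’s keyword auctions are generally described as belonging to. The single-slot simplification, the advertiser names, and the bid amounts are all invented for illustration; this is not Google’s actual system.

```python
# A toy single-slot, sealed-bid second-price auction.
# Illustrative only: names, bids, and the single-slot setup are invented.

def run_second_price_auction(bids, reserve=0.0):
    """Return (winner, price) for one ad slot.

    The highest bidder wins, but the price paid is the second-highest
    bid (or the reserve if there is no competing bid): the competing
    bidders, not the seller, set the clearing price.
    """
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else reserve
    return winner, price

# Three hypothetical advertisers bidding (in dollars) on one keyword.
bids = {"advertiser_a": 2.50, "advertiser_b": 1.75, "advertiser_c": 0.90}
winner, price = run_second_price_auction(bids)
print(winner, price)  # advertiser_a wins and pays 1.75, set by advertiser_b
```

The competitive implication falls out directly: because the price the winner pays is another bidder’s bid rather than a figure the seller posts, there is no list price for a rival to undercut.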

But if Google isn’t vulnerable to a direct pricing attack on its core AdWords service, it is vulnerable to a pricing attack on its AdSense service. AdSense’s customers aren’t the buyers of ads but rather the sellers of ad space – publishers and other site owners (including, in its own trivial way, this blog). Google doesn’t directly charge publishers a price for running ads on their sites, but it does take a price, in the form of a percentage of the revenues that the ads it serves generate. Except when it negotiates a special deal with big publishers, Google doesn’t tell its AdSense customers what cut it’s taking, but it’s a substantial one, probably running somewhere around 30 or 40 percent.

So here’s what a Yahoo or a Microsoft or any other competitor could do: Introduce a free version of AdSense. By free, I mean that you tell publishers that if they run your ads on their sites, they get to keep all the advertising revenues. 100 percent. You won’t take any cut. Immediately, you put a lot of pricing pressure on an important source of revenues and profits for Google.
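To make the arithmetic concrete, here is a toy comparison of what a publisher keeps under the two arrangements. The 35 percent cut is an assumption drawn from the 30-to-40-percent range estimated above, and the revenue figure is made up.

```python
# Toy illustration of the revenue-share pressure described above.
# The 35% cut is an assumption within the post's 30-40% estimate;
# the gross revenue figure is hypothetical.

monthly_ad_revenue = 10_000.00   # gross ad revenue a publisher's pages generate

google_cut = 0.35                # assumed share kept by Google under AdSense
publisher_take_google = monthly_ad_revenue * (1 - google_cut)

rival_cut = 0.0                  # a "free AdSense": the rival keeps nothing
publisher_take_rival = monthly_ad_revenue * (1 - rival_cut)

print(f"With Google's assumed cut: publisher keeps ${publisher_take_google:,.2f}")
print(f"With a free rival:         publisher keeps ${publisher_take_rival:,.2f}")
```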

Now, practically speaking, this would be a tough strategy for a Yahoo to undertake, given its own dependence on advertising. But it would not be a tough strategy for a Microsoft to launch, because this is (at the moment) a fairly trivial business for Microsoft. And it would bring two potentially large benefits to the company:

1. It would hurt its competitor a lot more than it would hurt itself. Even allowing for the fact that a chunk of Google’s AdSense revenues come through negotiated arrangements with big publishers (through which Google may already be giving the publishers close to 100 percent of ad revenues), turning the delivery of contextual ads into a free service for publishers would put Google under financial pressure. (Google’s revenue from ads on third-party sites amounted to $1.3 billion in the last quarter, well over a third of its total revenues.) But it wouldn’t cause any harm, of a material nature, to Microsoft.

2. It would help promote Microsoft’s own ad-serving business. In essence what Microsoft would be doing in giving away its product to ad sellers is redefining that product (for itself and its competitors) not as a profit-making business in itself but as a complement to the core ad-serving product that it offers to ad buyers. Expand the presence of your ads on third-party sites, and you make your core ad-serving business more attractive to ad buyers. So Microsoft wouldn’t simply be sacrificing potential revenues; it would be promoting revenues in a complementary (and more important) area of its business.

Now, I’ve made this simpler than it really is, because publishers also have to take into account the productivity of the ads being served, and by all accounts Google continues to maintain a productivity edge. Nevertheless, turning the AdSense market into a free market would help neutralize that edge and generally redefine the competitive dynamic to Microsoft’s benefit. And, anyway, what does it have to lose?

Long player

I started reading David Weinberger’s new book, Everything Is Miscellaneous, this weekend. I’d been looking forward to it. Weinberger has a supple, curious mind and an easy way with words. Even though I rarely agree with his conclusions, he gets the brain moving – and that’s what matters. But I have to say I didn’t get very far in the book, at least not this weekend. In fact, I only reached the bottom of page nine, at which point I crashed into this passage about music:

For decades we’ve been buying albums. We thought it was for artistic reasons, but it was really because the economics of the physical world required it: Bundling songs into long-playing albums lowered the production, marketing, and distribution costs because there were fewer records to make, ship, shelve, categorize, alphabetize, and inventory. As soon as music went digital, we learned that the natural unit of music is the track. Thus was iTunes born, a miscellaneous pile of 3.5 million songs from a thousand record labels. Anyone can offer music there without first having to get the permission of a record executive.

“… the natural unit of music is the track”? Well, roll over, Beethoven, and tell Tchaikovsky the news.

There’s a lot going on in that brief passage, and almost all of it is wrong. Weinberger does do a good job, though, of condensing into a few sentences what might be called the liberation mythology of the internet. This mythology is founded on a sweeping historical revisionism that conjures up an imaginary predigital world – a world of profound physical and economic constraints – from which the web is now liberating us. We were enslaved, and now we are saved. In a bizarrely fanciful twist, the digital world is presented as a “natural” counterpoint to the supposed artificiality of the physical world.

I set the book aside and fell to pondering. Actually, the first thing I did was to sweep the junk off the dust cover of my sadly neglected turntable and pull out an example of one of those old, maligned “long-playing albums” from my shrunken collection of cardboard-sheathed LPs (arrayed alphabetically, by artist, on a shelf in a cabinet). I chose Exile on Main Street. More particularly, I chose the unnatural bundle of tracks to be found on side three of Exile on Main Street. Carefully holding the thin black slab of scratched, slightly warped, but still serviceable vinyl by its edges – you won’t, I trust, begrudge me a pang of nostalgia for the outdated physical world – I eased it onto the spindle and set the platter to spinning at a steady thirty-three-and-a-third revolutions per minute.

Now, if you’re not familiar with Exile on Main Street, or if you know it only in a debauched digital form – whether as a single-sided plastic CD (yuk) or as a pile of miscellaneous undersampled iTunes tracks (yuk squared) – let me explain that side three is the strangest yet the most crucial of the four sides of the Stones’ double-record masterpiece. The side begins, literally, in happiness – or Happyness – and ends, figuratively, in a dark night of the soul. (I realize that, today, it’s hard to imagine Mick Jagger having a dark night of the soul, but at the dawn of the gruesome seventies, with the wounds of Brian Jones’s death, Marianne Faithfull’s overdose, and Altamont’s hippie apocalypse still fresh in his psyche, Mick was, I imagine, suffering from an existential pain that neither a needle and a spoon nor even another girl could fully take away.)

But it’s the middle tracks of the platter that seem most pertinent to me in thinking about Weinberger’s argument. Between Keith’s ecstatic, grinning-at-death “Happy” and Mick’s desperate, shut-the-lights “Let It Loose” come three offhand, wasted-in-the-basement songs – “Turd on the Run,” “Ventilator Blues,” and “I Just Want to See His Face” – that sound, in isolation, like throwaways. If you unbundled Exile and tossed these tracks onto the miscellaneous iTunes pile, they’d sink, probably without a trace. I mean, who’s going to buy “Turd on the Run” as a standalone track? And yet, in the context of the album that is Exile on Main Street, the three songs achieve a remarkable, tortured eloquence. They become necessary. They transcend their identity as tracks, and they become part of something larger. They become art.

Listening to Exile, or to any number of other long-playing bundles – The Velvet Underground & Nico, Revolver, Astral Weeks, Every Picture Tells a Story, Mott, Blood on the Tracks, Station to Station, London Calling, Get Happy!, Murmur, Tim (the list, thankfully, goes on and on) – I could almost convince myself that the 20-minute-or-so side of an LP is not just some ungainly byproduct of the economics of the physical world but rather the “natural unit of music.” As “natural” a unit, anyway, as the individual track.

The long-playing phonograph record, twelve inches in diameter and spinning at a lazy 33 rpm, is, even today, a fairly recent technological development. (In fact, recorded music in general is a fairly recent technological development.) After a few failed attempts to produce a long-player in the early thirties, the modern LP was introduced in 1948 by a record executive named Edward Wallerstein, then the president of Columbia Records, a division of William Paley’s giant Columbia Broadcasting System. At the time, the dominant phonograph record had for about a half century been the 78 – a fragile, ten-inch shellac disk that spun at seventy-eight rpm and could hold only about three or four minutes of music on a side.

Wallerstein, being a record executive, invented the long-player as a way to “bundle” a lot of tracks onto a single disk in order to enhance the economics of the business and force customers to buy a bunch of songs that they didn’t want to get a track or two that they did want. Right? Wrong. Wallerstein in fact invented the long-player because he wanted a format that would do justice to performances of classical works, which, needless to say, didn’t lend themselves all that well to three-minute snippets.

Before his death in 1970, Wallerstein recalled how he pushed a team of talented Columbia engineers to develop the modern record album (as well as a practical system for playing it):

Every two months there were meetings of the Columbia Records people and Bill Paley at CBS. [Jim] Hunter, Columbia’s production director, and I were always there, and the engineering team would present anything that might have developed. Toward the end of 1946, the engineers let Adrian Murphy, who was their technical contact man at CBS, know that they had something to demonstrate. It was a long-playing record that lasted seven or eight minutes, and I immediately said, “Well, that’s not a long-playing record.” They then got it to ten or twelve minutes, and that didn’t make it either. This went on for at least two years.

Mr. Paley, I think, got a little sore at me, because I kept saying, “That’s not a long-playing record,” and he asked, “Well, Ted, what in hell is a long-playing record?” I said, “Give me a week, and I’ll tell you.”

I timed I don’t know how many works in the classical repertory and came up with a figure of seventeen minutes to a side. This would enable about 90% of all classical music to be put on two sides of a record. The engineers went back to their laboratories. When we met in the fall of 1947 the team brought in the seventeen-minute record.

The long-player was not, in other words, a commercial contrivance aimed at bundling together popular songs to the advantage of record companies and the disadvantage of consumers; it was a format specifically designed to provide people with a much better way to listen to recordings of classical works. In fact, in focusing on perfecting a medium for classical performances, Columbia actually sacrificed much of the pop market to its rival RCA, which at the time was developing a competing record format: the seven-inch, forty-five-revolutions-per-minute single. Recalls Wallerstein:

There was a long discussion as to whether we should move right in [to the market with the LP] or first do some development work on better equipment for playing these records or, most important, do some development work on a popular record to match these 12-inch classical discs. Up to now our thinking had been geared completely to the classical market rather than to the two- or three-minute pop disc market.

I was in favor of waiting a year or so to solve these problems and to improve the original product. We could have developed a 6- or 7-inch record and equipment to handle the various sizes for pops. But Paley felt that, since we had put $250,000 into the LP, it should be launched as it was. So we didn’t wait and in consequence lost the pops market to the RCA 45s.

A brief standards war ensued between the LP and the 45 – it was called “the battle of speeds” – which concluded, fortunately, with a technological compromise that allowed both to flourish. Record players were designed to accommodate both 33 rpm albums and 45 rpm singles (and, for a while, anyway, the old 78s as well). The 45 format allowed consumers to buy popular individual songs for a relatively low price, while the LP provided them with the option of buying longer works for a somewhat higher price. Of course, popular music soon moved onto LPs, as musicians and record companies sought to maximize their sales and provide fans with more songs by their favorite artists. The introduction of the pop LP did not force customers to buy more songs than they wanted – they could still cherry-pick individual tracks by buying 45s. The LP expanded people’s choices, giving them more of the music they clamored for.

Indeed, in suggesting that the long-player resulted in a big pile of “natural” tracks being bundled together into artificial albums, Weinberger gets it precisely backwards. It was the arrival of the LP that set off the explosion in the number of popular music tracks available to buyers. It also set off a burst of incredible creativity in popular music, as bands, songwriters, and solo performers began to take advantage of the new, extended format, to turn the longer medium to their own artistic purposes. The result was a great flowering not only of wonderful singles, sold as 45s, but of carefully constructed sets of songs, sold as LPs. Was there also a lot of filler? Of course there was. When hasn’t there been?

Weinberger also gets it backwards in suggesting that the LP was a record industry ploy to constrain the supply of products – in order to have “fewer records to make, ship, shelve, categorize, alphabetize, and inventory.” The album format, combined with the single format, brought a huge increase in the number of records – and, in turn, in the outlets that sold them. It unleashed a flood of recorded music. It’s worth remembering that the major competitor to the record during this time was radio, which of course provided music for free. (The arrival of radio nearly killed off the recorded music industry, in fact.) The best way – the only way – for record companies to compete against radio was to increase the number of records they produced, to give customers far more choices than radio could send over the airwaves. The long-playing album, in sum, not only gave buyers many more products to choose from; it gave artists many more options for expressing themselves, to everyone’s benefit. Far from being a constraint on the market, the physical format of the long-player was a great spur to consumer choice and, even more important, to creativity. Who would unbundle Exile on Main Street or Blonde on Blonde or Tonight’s the Night – or, for that matter, Dirty Mind or Youth and Young Manhood or (Come On Feel the) Illinoise? Only a fool would.

And yet it is the wholesale unbundling of LPs into a “miscellaneous pile” of compressed digital song files that Weinberger would have us welcome as some kind of deliverance from decades of apparent servitude to the long-playing album. One doesn’t have to be an apologist for record executives – who in recent years have done a great job in proving their cynicism and stupidity – to recognize that Weinberger is warping history in an attempt to prove an ideological point. Will the new stress on discrete digital tracks bring a new flowering of creativity in music? I don’t know. Maybe we’ll get a pile of gems, or maybe we’ll get a pile of crap. Probably we’ll get a mix. But I do know that the development of the physical long-playing album, together with the physical single, was a development that we should all be grateful for. We probably shouldn’t rush out to dance on the album’s grave.

As for the individual track being the “natural unit of music,” that’s a fantasy. Natural’s not in it.

The digital dump

Foreign Policy has a striking photoessay on how the third world has become the preferred dumping ground for the millions of tons of electronic waste the developed world pumps out every year. Because the scrap contains a lot of valuable metals like gold and copper – as well as toxic ones like lead and mercury – scavenging digital dumps has become a major industry for the poor.