Silicon Death Valley

The Register’s Ashlee Vance wriggles through a Las Vegas Mantrap to get the goods on the SuperNAP, a vast, superpowered data center being constructed in the Nevada desert by a shadowy hosting firm called Switch Communications. Writes Vance:

This 407,000 square foot computing compound will house servers and storage systems owned by many of the world’s most prominent companies. And, unlike most centers of its kind, the SuperNAP will not rely on raised floors or liquid cooling systems to keep the hardware humming. Instead, it will be fueled by custom designs that allow it to maintain an astonishing 1,500 watts per square foot – or close to three times the industry standard.

That’s a lot of juice. In fact, according to one estimate, the SuperNAP will suck up more power than is used by the Bellagio, the Venetian, and Caesars Palace combined.
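Just for fun, here’s a back-of-the-envelope check of my own (Vance offers no such math). If you generously assume the headline density applies to every one of those 407,000 square feet, the quoted figures imply a peak load of roughly 610 megawatts; in practice the density presumably applies only to the server floor, so treat this as a ceiling rather than Switch’s actual bill.

```python
# Back-of-the-envelope check using only the figures quoted above.
# Assumption (mine, not Vance's): the 1,500 W/sq ft density applies to the
# entire 407,000 sq ft footprint, so this is a ceiling, not the actual load.

floor_area_sqft = 407_000          # compound size, per the article
power_density_w_per_sqft = 1_500   # claimed design density

peak_load_mw = floor_area_sqft * power_density_w_per_sqft / 1_000_000
print(f"Upper-bound peak load: {peak_load_mw:.0f} MW")                                    # ~610 MW
print(f"Implied industry-standard density: ~{power_density_w_per_sqft / 3:.0f} W/sq ft")  # ~500
```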

Switch CEO Rob Roy claims the design of the SuperNAP is far more advanced than that of the massive server farms being erected by Google, Microsoft, and other tech leaders. “What we are building has three or four times the power and cooling of the other guys,” he boasts. According to Roy, the company’s investors are urging him to build 10 more SuperNAPs around the world. “Such an undertaking,” writes Vance, “could strap actual muscle to the cloud and utility computing buzzwords that have become commonplace in the technology industry.” It could also tap out a lot of local power grids.

In the meantime, aficionados of data center porn will get a charge out of the SuperNAP video. It’s totally Vegas.

Cloud may squeeze margins, says Microsoft exec

Microsoft expects corporate customers to accelerate their shift to the cloud computing model over the next five years, bringing changes in the company’s financial model, says Chris Capossela, the senior vice president who manages Microsoft’s Office business. In one of the most forthright statements of Microsoft’s view of the shift from in-house to utility-run software, Capossela said, as summarized by Reuters, that “the company will see more and more companies abandon their own in-house computer systems and shift to ‘cloud computing,’ a less expensive alternative.”

The shift to the cloud will be felt most immediately in Microsoft’s big Exchange business for running corporate email and messaging systems:

“In five years, 50 percent of our Exchange mailboxes will be Exchange Online,” said Capossela, who expects a portion of Exchange Online customers to come from customers switching from International Business Machines’ Lotus Domino system. According to research firm Radicati, Exchange will run about 210 million corporate e-mail accounts in 2008, growing to 319 million mailboxes in 2012.

The shift from software licenses to software subscriptions is likely to squeeze Microsoft’s profit margins in its business division, says Capossela, though he expects it will also bring higher and more consistent sales:

The shift to cloud computing will introduce some changes, according to Capossela, in the earnings model at Microsoft’s business division, which generated revenue of $16.4 billion and operating profit of $10.8 billion in fiscal 2007.

Currently, customers pay Microsoft a licensing fee for the software, then buy their own computer and hire their own technology staff to manage those systems. In a services business, the customer will pay Microsoft a larger fee, since Microsoft also runs and maintains all the hardware. But Microsoft’s profit margins may not be “as high,” Capossela said, even though revenue may be more consistent.
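To make the margin math concrete, here is a toy illustration with entirely invented numbers; nothing below reflects Microsoft’s actual pricing or costs. The hosted fee is larger, but so are the costs Microsoft now has to carry, so the margin percentage shrinks even as revenue grows.

```python
# Toy illustration of the license-vs-hosted margin math; every figure is invented.

# Traditional license: the customer buys the hardware and hires the staff,
# so most of the fee Microsoft collects drops straight to profit.
license_fee = 100.0
license_cost = 15.0        # hypothetical support and distribution costs

# Hosted service: Microsoft charges a larger fee but now carries the
# hardware, power, and operations costs itself.
subscription_fee = 180.0
hosting_cost = 90.0        # hypothetical servers, data centers, ops staff

for name, revenue, cost in [("License", license_fee, license_cost),
                            ("Hosted service", subscription_fee, hosting_cost)]:
    print(f"{name}: revenue {revenue:.0f}, margin {100 * (revenue - cost) / revenue:.0f}%")

# License: revenue 100, margin 85%
# Hosted service: revenue 180, margin 50%  <- more revenue, thinner margin
```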

Capossela’s assumption that Microsoft will be able to charge companies more under the cloud model seems optimistic, given the different economics of providing software as a web service and the aggressive pricing strategies of cloud pioneers like Google, Zoho, and Amazon.

Capossela told Reuters that the key to competing successfully in the cloud will lie in the company’s ability “to run its computer systems as efficiently as possible to reduce hardware costs.” He says that Microsoft is now adding 10,000 servers a month to its cloud-computing data centers – “a staggering amount of computing power” equivalent to the total hardware currently powering Facebook.

Von Ahn’s Gwap

God, I love saying that headline.

Von Ahn’s Gwap.

Will a computer ever experience the kind of pleasure I derive from saying “Von Ahn’s Gwap,” or will that be reserved for humans?

As The Register notes, a new site was launched this week, by Carnegie Mellon’s School of Computer Science, that aims to entice humans into playing simple games that will help computers get smarter. The site, called Gwap (an acronym for “games with a purpose”), is the brainchild of computer scientist Luis von Ahn (who also cofathered the Captcha). “We have games that can help improve Internet image and audio searches, enhance artificial intelligence and teach computers to see,” he explains. “But that shouldn’t matter to the players because it turns out these games are super fun.”

The site includes the already familiar ESP Game, an image-tagging competition that Google previously launched as Google Image Labeler. It’s intended to create computer-readable metadata about pictures to facilitate image searches. And it offers four new multiplayer games:

Matchin, a game in which players judge which of two images is more appealing, is designed to eventually enable image searches to rank images based on which ones look the best

Tag a Tune, in which players describe songs so that computers can search for music by something other than title – happy songs or love songs, for instance

Verbosity, a test of common sense knowledge that will amass facts for use by artificial intelligence programs

Squigl, a game in which players trace the outlines of objects in photographs to help teach computers to more readily recognize objects

One thing the Internet enables, which wasn’t possible before, at least not on anywhere near the same scale, is the transfer of human intelligence into machine intelligence. (Google’s search engine, which aggregates the human intelligence embedded in links, is a great example.) That capability can also, in theory, help train computers to do things that they haven’t been able to do before, such as identify the contents of pictures or make subjective or qualitative distinctions between similar things. If you can get enough people to tag enough photos of mountains as “mountains” in a machine-readable way, then eventually the machine will start to “see” the mountains in images without needing people’s help.
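In code, that aggregation step might look something like the minimal sketch below. The agreement threshold and the sample data are my own assumptions for the sake of illustration, not Gwap’s published pipeline.

```python
# Minimal sketch: turning redundant human tags into machine-usable labels.
# The agreement threshold and sample data are illustrative assumptions,
# not Gwap's actual pipeline.
from collections import Counter, defaultdict

AGREEMENT_THRESHOLD = 3  # independent players who must supply the same tag

# (image_id, tag) pairs as they might arrive from many game sessions
human_tags = [
    ("img_001", "mountain"), ("img_001", "mountain"), ("img_001", "sky"),
    ("img_001", "mountain"), ("img_002", "dog"), ("img_002", "dog"),
]

tag_counts = defaultdict(Counter)
for image_id, tag in human_tags:
    tag_counts[image_id][tag] += 1

# Keep only tags that enough people agreed on; these become the machine-readable
# metadata (or, eventually, training labels for a computer-vision model).
labels = {image_id: [t for t, n in counts.items() if n >= AGREEMENT_THRESHOLD]
          for image_id, counts in tag_counts.items()}

print(labels)  # {'img_001': ['mountain'], 'img_002': []}
```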

The challenge, of course, is to figure out a way to get people to do these kinds of routine chores – to work for the machine. (Tagging pictures gets old fast.) Amazon’s Mechanical Turk uses small payments to get people to contribute their time to extending computer intelligence. Von Ahn’s Gwap uses the pleasure of gaming as a lure. As Von Ahn says:

Unlike computer processors, humans require some incentive to become part of a collective computation. Online games are a seductive method for encouraging people to participate in the process. Such games constitute a general mechanism for using brain power to solve open problems.

In fact, designing such a game is much like designing an algorithm – it must be proven correct, its efficiency can be analyzed, a more efficient version can supersede a less efficient one, and so on. Instead of using a silicon processor, these “algorithms” run on a processor consisting of ordinary humans interacting with computers over the Internet.

In other words, we become part of the processor, part of the machine. In Gwap and similar web-based tools, we see, in admittedly rudimentary form, the next stage in cybernetics, in which it becomes ever more difficult to discern who’s in charge in complex man-machine systems – who’s the controller and who’s the controllee.

“Human computation doesn’t work unless you have people,” says von Ahn. “That’s why we’ve made the games on gwap.com as fun as possible. We need people.” For the time being, anyway.

Beer money

If you think the daily increases in gas prices are painful, just be glad you’re not living in Zimbabwe, where the inflation’s so bad that they just added a half-billion-dollar note to their currency. A contributor to the economics blog Daily Speculations reports on a recent lunch in the country:

During the meal, one of my mates was drinking beer – 750ml bottles of Castle Lager (fondly called bombers). He ordered a 5th one and was advised that the price, which when he ordered his 1st, 2nd, 3rd and 4th ones was 160 million per bottle, had gone up to 340 million per bottle. That’s right: During lunch – there was a price increase…

The price increase is amazing, but the beer intake is pretty impressive, too. If my calculations are right, a 750 ml bottle is a little more than two 12-oz bottles, so the guy had already downed eight beers and was heading for ten when he was so rudely interrupted by the price hike. Pretty strong effort for an economist.
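For the curious, a quick script bears out both the beer count and the lunchtime inflation rate (the milliliters-per-ounce conversion is the only number not in the story):

```python
# Quick sanity check of the beer and inflation arithmetic above.
ML_PER_US_FL_OZ = 29.5735

bombers_in_12oz = 750 / (12 * ML_PER_US_FL_OZ)   # twelve-oz beers per 750 ml bomber
print(f"One bomber is about {bombers_in_12oz:.1f} twelve-oz beers")
print(f"Four bombers: {4 * bombers_in_12oz:.1f}; five: {5 * bombers_in_12oz:.1f}")

old_price, new_price = 160e6, 340e6              # Zimbabwe dollars per bottle, per the report
print(f"Mid-lunch price increase: {100 * (new_price - old_price) / old_price:.1f}%")

# One bomber is about 2.1 twelve-oz beers
# Four bombers: 8.5; five: 10.6
# Mid-lunch price increase: 112.5%
```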

Gilligan’s web

Despite the party-pooperism of the Deletionists, the true glory of Wikipedia continues to lie in the obscure, the arcane, and the ephemeral. Nowhere else will you find such painstakingly detailed descriptions of TV shows, video games, cartoons, obsolete software languages, Canadian train stations, and the workings of machines that exist only in science fiction. When I recently felt an unexpected pang of nostalgia for the animated canine inventor Mr. Peabody and his Wayback machine, I knew exactly what to do: head to Wikipedia. Among the gems I unearthed: the Wayback machine was actually the WABAC machine (“a play on early computers such as UNIVAC and ENIAC”), Sherman was Mr. Peabody’s adopted son, Mr. Peabody was not only a genius but “arguably a polymath,” and Sherman’s “personality was that of a naive but fairly bright, energetic young boy.” Whatever else it may be, Wikipedia is a monument to the obsessive-compulsive fact-mongering of the adolescent male. (Never has sexual sublimation been quite so wordy.)

My favorite example is the Wik’s wonderfully panoramic coverage of the popular sixties sitcom Gilligan’s Island. Not only is there an entry for the show itself, but there are separate articles for each of the castaways – Gilligan, the Skipper, the Professor, Mary Ann, Ginger, Thurston Howell III, and Eunice “Lovey” Howell – as well as the actors that played the roles, the ill-fated SS Minnow, and even the subsequent TV movies that were based on the show, including the 1981 classic The Harlem Globetrotters on Gilligan’s Island. Best of all is the annotated list of all 98 of the episodes in the series, which includes a color-coded guide to “visitors, animals, dreams, and bamboo inventions.” (I need to pause here to point out that some nebbish of a Deletionist – pardon the redundancy – has put at the top of the main entry for Gilligan’s Island a notice saying that “this article resembles a fan site” and calling on wikipedians to “please help improve this article by removing excessive trivia.” Fie on you, you wikifascist! Fie, I say!)

It goes deeper than Wikipedia, though. Gilligan’s Island has been a great motivator of user-generated content across the breadth of the web. Check out this YouTube take on the eternal question “Mary Ann or Ginger?”:

In fact, if I were called in to rename Web 2.0, I think I’d call it Gilligan’s Web, if only to underscore the symbiosis between the pop-culture artifacts of the mass media and so much of the user-generated content found online.

So imagine my bewilderment when, a few days ago, I read a transcript of a recent speech that the new-media scholar Clay Shirky gave to a big Web 2.0 confab in which he argued that Gilligan’s Island and Web 2.0 are actually opposing forces in the grand sweep of human history. Whoa, nelly. Is Professor Shirky surfing a different web than the rest of us?

To Shirky, the TV sitcom, as exemplified by Gilligan’s Island, was “the critical technology for the 20th century.” Why? Because it sucked up all the spare time that people suddenly had on their hands in the decades after the second world war. The sitcom “essentially functioned as a kind of cognitive heat sink, dissipating thinking that might otherwise have built up and caused society to overheat.” I’m not exactly sure what Shirky means when he speaks of society overheating, but, anyway, it wasn’t until the arrival of the World Wide Web and its “architecture of participation” that we suddenly gained the capacity to do something productive with our “cognitive surplus,” like edit Wikipedia articles or play the character of an elf in a World of Warcraft clan. Writes Shirky:

Did you ever see that episode of Gilligan’s Island where they almost get off the island and then Gilligan messes up and then they don’t? I saw that one. I saw that one a lot when I was growing up. And every half-hour that I watched that was a half an hour I wasn’t posting at my blog or editing Wikipedia or contributing to a mailing list. Now I had an ironclad excuse for not doing those things, which is none of those things existed then. I was forced into the channel of media the way it was because it was the only option. Now it’s not, and that’s the big surprise. However lousy it is to sit in your basement and pretend to be an elf, I can tell you from personal experience it’s worse to sit in your basement and try to figure if Ginger or Mary Ann is cuter.

Shirky’s calculus seems to go something like this:

Spending a lot of time watching Gilligan’s Island episodes: bad

Spending a lot of time watching Gilligan’s Island episodes and then spending a lot more time writing about the contents of those episodes on Wikipedia: good

But that’s not quite fair, because Shirky is making a larger argument about society and its development. He’s got bigger fish to fry than Gilligan and his quirky mates. Scott Rosenberg does a nice job of summing up Shirky’s argument:

In brief, he suggests that [during the early years of the Industrial Revolution] the English were so stunned and disoriented by the displacement of their lives from the country to the city that they anesthetized themselves with alcohol until enough time had passed for society to begin to figure out what to do with these new vast human agglomerations — how to organize cities and industrial life such that they were not only more tolerable but actually employed the surpluses they created in socially valuable ways.

This is almost certainly an oversimplification, but a provocative and fun one. It sets up a latter-day parallel in the postwar U.S., where a new level of affluence created a society in which people actually had free time. What could one possibly do with that? Enter television — the gin of the 20th century! We let it sop up all our free time for several decades until new opportunities arose to make better use of our spare brain-cycles — Shirky calls this “the cognitive surplus.” And what we’re finally doing with it, or at least a little bit of it, is making new stuff on the Web.

What Shirky is doing here, in essence, is repackaging the liberation mythology that has long characterized the more utopian writings about the Web. That mythology draws a sharp distinction between our lives before the coming of the Web (BW) and our lives after the Web’s blessed birth (AW). In the dark BW years, we were passive couch potatoes who were, in Shirky’s words, “forced into the channel of media the way it was because it was the only option.” We were driftwood, going with whatever flow “the media” imposed on us. We were all trapped in Shirky’s musty cellar.

The Web, the myth continues, emancipated us. We no longer were forced into the channel of passive consumption. We could “participate.” We could “share.” We could “produce.” When we turned our necks from the TV screen to the computer screen, we were liberated:

Media in the 20th century was run as a single race – consumption. How much can we produce? How much can you consume? Can we produce more and you’ll consume more? And the answer to that question has generally been yes. But media is actually a triathlon, it’s three different events. People like to consume, but they also like to produce, and they like to share. And what’s astonished people who were committed to the structure of the previous society, prior to trying to take this [cognitive] surplus and do something interesting, is that they’re discovering that when you offer people the opportunity to produce and to share, they’ll take you up on that offer.

I think we’d all agree that the Web is changing the structure of media, and that’s going to have many important ramifications. Some will be good, and some will be bad, and the way they will all shake out remains unknown. But what about Shirky’s idea that in the BW years we were unable to do anything “interesting” with our “cognitive surplus” – that the “only option” was watching TV? That, frankly, is bullshit. It may well be that Clay Shirky spent all his time pre-1990 watching sitcoms in his cellar (though I very much doubt it) but I was also alive in those benighted years, and I seem to remember a whole lot more going on.

Did my friends and I watch Gilligan’s Island? You bet your ass we did – and thoroughly enjoyed it (though with a bit more ironic distance than Shirky allows). Watching sitcoms and the other drek served up by the boob tube was certainly part of our lives. But it was not the center of our lives. Most of the people I knew were doing a whole lot of “participating,” “producing,” and “sharing,” and, to boot, they were doing it not only in the symbolic sphere of the media but in the actual physical world as well. They were making 8-millimeter films, playing drums and guitars and saxophones in bands, composing songs, writing poems and stories, painting pictures, making woodblock prints, taking and developing photographs, drawing comics, souping up cars, constructing elaborate model railroads, reading great books and watching great movies and discussing them passionately well into the night, volunteering in political campaigns, protesting for various causes, and on and on and on. I’m sorry, but nobody was stuck, like some pathetic shred of waterborne trash, in a single media-regulated channel.

Tom Slee, in a trenchant review of Shirky’s new book, Here Comes Everybody, strips some of the bright varnish from the Net’s liberation mythology. In the book, Shirky describes, with great intelligence and clarity, the social and economic dynamics of virtual communities. But he also, as Slee notes, indulges his enthusiasm for the Web in a way that draws, once again, an overly bright line between BW and AW:

Clay looks at the Internet and sees lots of groups forming (and things are easy to see on the Internet because even our most casual utterances get stored on someone’s servers for posterity to investigate) and he concludes that the world is alight with a new groupiness, the likes of which we have never seen … While Clay is telling us all about the use of digital technology to spark innovative forms of protest in Belarus, which is a fascinating story, we really need … to ask why, with all these group-forming tools at our disposal and despite the documented disillusionment with the war in Iraq, there is so little coherent protest happening compared to previous wars? Is it really the case that society now is becoming, thanks to the internet, more democratic, more collaborative, and more cooperative than before? I am not convinced.

As Slee suggests, the liberation mythology evaporates when you actually take a hard look at history. It’s worth remembering that Gilligan’s Island originally ran on television from late 1964 to the spring of 1967, a period noteworthy not for its social passivity but for its social activism. These were years not only of great cultural and artistic exploration and inventiveness but also of widespread protest, when people organized into very large – and very real – groups within the civil rights movement, the antiwar movement, the feminist movement, the folk movement, the psychedelic movement, and all sorts of other movements. People weren’t in their basements; they were in the streets.

If everyone was so enervated by Gilligan’s Island, how exactly do you explain 1968? The answer is: you don’t, and you can’t.

Indeed, once you begin contrasting 1968 with 2008, you might even find yourself thinking that, on balance, the Web is not an engine for social activism but an engine for social passivity. You might even suggest that the Web funnels our urges for “participation” and “sharing” into politically and commercially acceptable channels – that it turns us into play-actors, make-believe elves in make-believe clans.

As for the bigger question: Mary Ann.

HP rolls up EDS

Oracle has enjoyed considerable success by rolling up the software side of the now-mature client-server model of corporate computing. With its $13.9 billion acquisition of sluggish outsourcing giant EDS, Hewlett-Packard is playing the same game on the services side. It’s buying vast tracts of data-center space in which run the computers and other IT machinery that power the operations of lots of large companies and government agencies. The addition of EDS more than doubles the size of HP’s services business, giving it a scale closer to that of the leading IT outsourcing company, IBM.

Om Malik argues that the acquisition is a forward-looking move, aimed at building up HP’s cloud-computing infrastructure for the next generation of corporate IT. I would argue it’s backward-looking: an acquisition aimed at boosting profitability through consolidation and cost reduction in a mature business. The transition to the cloud will, for big companies, be a slow one, and there will continue to be much money made in running client-server infrastructures for many years.

Vinnie Mirchandani, noting that EDS’s business is dominated by infrastructure outsourcing, calls the acquisition “a scale play,” and HP CEO Mark Hurd would seem to agree. In a conference call this morning, he highlighted “a leaner cost structure” as one of the major benefits of the merger. “There’s a tremendous leverage you get from scale,” he said. With this buy, Hurd doesn’t have his head in the clouds. His concerns are altogether earthly.

It’s worth noting that cloud computing promises to turn many traditional systems-outsourcing businesses into pure commodity businesses – undifferentiated utility services. But that’s still well out into the future, at least when it comes to enterprise-scale IT. In the meantime, there’s a lot of cash to be made in running client-server systems for big clients, particularly if you can significantly push down your costs by combining accounts, consolidating and automating data centers, and trimming staff. This deal is likely to set off an aggressive period of acquisitions in IT outsourcing – just as we’ve seen in recent years in enterprise software. Make hay before the sun sets.

Is Office the new Netscape?

As Microsoft and Yahoo continue with their interminable modern-dress staging of Hamlet – it’s longer than Branagh’s version! – the transformation of the software business goes on. We have new players with new strategies, or at least interesting new takes on old strategies.

One of the cornerstones of Microsoft’s competitive strategy over the years has been to redefine competitors’ products as features of its own products. Whenever some upstart PC software company started to get traction with a new application – the Netscape browser is the most famous example – Microsoft would incorporate a version of the application into its Office suite or Windows operating system, eroding the market for the application as a standalone product and starving its rival of economic oxygen (i.e., cash). It was an effective strategy as well as a controversial one.

Now, though, the tables may be turning. Google is trying to pull a Microsoft on Microsoft by redefining core personal-productivity applications – calendars, word processing, spreadsheets, etc. – as features embedded in other products. There’s a twist, though. Rather than just incorporating the applications as features in its own products, Google is offering them up to other companies, particularly big IT vendors, to incorporate as features in their products.

We saw this strategy at work in the recent announcement that Google Apps would be incorporated into Salesforce.com’s web applications (as well as the applications being built by others on the Salesforce platform). And we see it, at least in outline, in the tightening partnership between Google and IT behemoth IBM. Eric Schmidt, Google’s CEO, and Sam Palmisano, IBM’s CEO, touted the partnership yesterday in a joint appearance at a big IBM event. “IBM is one of the key planks of our strategy; otherwise we couldn’t reach enterprise customers,” Schmidt said. Dan Farber glosses:

As more companies look for Web-based tools, mashups, and standard applications, such as word processors, Google stands to benefit … While IBM isn’t selling directly for Google in the enterprise, IBM’s software division and business partners are integrating Google applications and widgets into custom software solutions based on IBM’s development framework. The “business context” is the secret of the Google and IBM collaboration, Schmidt said. Embedding Google Gadgets in business applications, that can work on any device, is a common theme for both Google and IBM.

Google’s advantage here doesn’t just lie in the fact that it is ahead of Microsoft in deploying Web-based substitutes for Office applications. Microsoft can – and likely will – neutralize much of that early-mover advantage by offering its own Web-based versions of its Office apps. Its slowness in rolling out full-fledged web apps is deliberate; it doesn’t see Google Apps, or similar online offerings from other companies, as an immediate threat to its Office franchise, and it wants to avoid, for as long as possible, cannibalizing sales of the highly profitable installed versions of Office.

No, Google’s main advantage is simply that it isn’t Microsoft. Microsoft is a much bigger threat to most traditional IT vendors than is Google, so they are much more likely to incorporate Google Apps into their own products than to team up with Microsoft for that purpose. (SAP is an exception, as it has worked with Microsoft, through the Duet initiative, to blend Office applications into its enterprise systems. That program, though, lies well outside the cloud.) Undermining the hegemony of Microsoft Office is a shared goal of many IT suppliers, and they are happy to team up to further that goal. As Salesforce CEO Marc Benioff pithily put it in announcing the Google Apps tie-up, “The enemy of my enemy is my friend, so that makes Google my best friend.”

Like Microsoft, Google is patient in pursuing its strategy. (That’s what very high levels of profitability will do for you.) It knows that, should traditional personal-productivity apps become commonplace features of the cloud, supplied free or at a very low price, the economic oxygen will slowly be sucked out of the Office business. That doesn’t necessarily mean that customers will abandon Microsoft’s apps; it just means that Microsoft won’t be able to make much money from them anymore. Microsoft may eventually win the battle for online Office applications, but the victory is likely to be a pyrrhic one.

Of course, there are some long-run risks for other IT vendors in promoting Google Apps, particularly for IBM. A shift to cheap Web apps for messaging and collaboration poses a threat to IBM’s Notes franchise as well as to Microsoft’s Office franchise. “The enemy of my enemy is my friend.” If I remember correctly, that’s what the US government used to say about Saddam Hussein.