
Showdown in the trailer park II

March 27, 2007

Greg Linden points to an excellent paper by Microsoft's James Hamilton that puts trailer park computing into the broader context of the evolution of the data center in the emerging age of utility computing. Hamilton presented his paper at the Biennial Conference on Innovative Data Systems Research in January (PowerPoint slides also available). Hamilton's ideas provide a complement and counterpoint to a presentation on the future of computing demand that Sun's Greg Papadopoulos gave in February (pdf of slides).

Both Hamilton and Papadopoulos see a similar set of trends - notably, the growing popularity of software as a service, the growing demand for supercomputing in certain industries like pharmaceuticals and financial services, and the expansion of computer-intensive "Web 2.0" applications aimed at consumers - spurring demand for extremely powerful, extremely reliable, and extremely efficient data centers. The big question is: How will this demand be filled? What will turn out to be the default architecture for the utility data center? Will it be constructed out of cheap commodity components (à la Google and Amazon), or will it be constructed out of specialized utility-class machinery supplied by companies like Sun, Rackable, and 3PAR? (Google itself is rumored to have its own fleet of homemade containerized data centers.)

Papadopoulos, true to his Sun heritage, argues for the specialized model, while Hamilton, true to his Microsoft heritage, argues for the commodity model. But for the commodity model to work, contends Hamilton, the commodity components will need to be combined into a meticulously engineered system that achieves high power, reliability, and efficiency - a data center in a box, or, more precisely, a data center in a container. The portability of a containerized data center, Hamilton further argues, will allow for a utility operation to be based on a network of smaller data centers, deployed in many different places, which he believes will offer social and political advantages over large, monolithic centers. As he puts it:

Commodity systems substantially reduce the cost of server-side computing. However, they bring new design challenges, some technical and some not. The technical issues include power consumption, heat density limits, communications latencies, multi-thousand-system administration costs, and efficient failure management with large server counts. The natural tendency is towards building a small number of very large data centers and this is where some of the non-technical issues come to play. These include social and political constraints on data center location, in addition to taxation and economic factors. All diminish the appeal of the large, central data center model. Multiple smaller data centers, regionally located, could prove to be a competitive advantage.

To address these technical and non-technical challenges, we recommend a different granule of system purchase, deployment, and management ... we propose using a fully-populated shipping container as the data-center capitalization, management, and growth unit. We argue that this fundamental change in system packaging can drive order-of-magnitude cost reductions, and allow faster, more nimble deployments and upgrades.

As Hamilton notes, the shift to viewing an entire data center as a "unit" of computing would be consistent with the ongoing shift from buying individual servers to buying racks of servers. He makes a good case that the standard shipping container provides the ideal form factor for this new computing unit:

Shipping containers are ideal for the proposed solution: they are relatively inexpensive and environmentally robust. They can be purchased new for $1,950 each, while remanufactured units range around $1,500. The units are designed to successfully transport delicate goods in extreme conditions and routinely spend weeks at a time on the decks of cargo ships in rough ocean conditions and survive severe storms. More importantly, they are recognized world-wide: every intermodal dispatch center has appropriate cargo handling equipment, a heavy duty fork lift is able to safely move them, and they can be placed and fit in almost anywhere. The container can be cost-effectively shipped over the highway or across oceans. Final delivery to a data center is simple using rail or commercial trucking. The container can be placed in any secure location with network capacity, chilled water, and power and it will run without hardware maintenance for its service life ...

At the end of its service life, the container is returned to the supplier for recycling. This service life will typically be 3 years, although we suspect that this may stretch out in some cases to 5 years since little motivation exists to be on the latest technology.
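Hamilton's figures invite a quick back-of-envelope check. The container prices ($1,950 new, $1,500 remanufactured) and the three-year service life come from his paper; the number of servers per container below is purely an illustrative assumption, not a figure from the text. Even so, the sketch shows why the container shell itself is a rounding error in the economics:

```python
# Back-of-envelope amortization of the container shell cost.
# Prices ($1,950 new, $1,500 remanufactured) and the 3-year
# service life are from Hamilton's paper; the server count per
# container is an assumed, illustrative figure.

def container_cost_per_server_month(container_price, servers, service_life_years=3):
    """Amortized container-shell cost per server per month, in dollars."""
    months = service_life_years * 12
    return container_price / (servers * months)

if __name__ == "__main__":
    # Assume, for illustration only, 1,000 servers per container.
    for price in (1950, 1500):
        cost = container_cost_per_server_month(price, servers=1000)
        print(f"${price} container over 3 years: ${cost:.4f} per server-month")
```

Under that assumption the shell works out to about a nickel per server per month - the interesting costs are all in the servers, power, and cooling, which is exactly where Hamilton focuses his argument.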

Though Papadopoulos and Hamilton take different views of the kinds of components - commoditized or specialized - that will be assembled into these boxes, they seem to be in agreement that what will be important is not the component but the entire system. If they're right, then the computer industry may, as I've suggested before, be going back to more of the mainframe model, where profits lie in high-end engineering rather than low-end assembly.

That remains to be seen. In the meantime, Hamilton's paper is a must-read for anyone interested in the future of computing.

Comments

For those who would prefer to read the paper without leaving their browser, here's a converted version.

Posted by: Sergey Schetinin [TypeKey Profile Page] at March 27, 2007 04:49 PM

In other words, the industry is evolving from selling motorbikes (i.e. laptops) and cars (i.e. desktops) to selling buses and trucks (i.e. datacenter containers).

Posted by: Dragos [TypeKey Profile Page] at March 28, 2007 04:10 AM

Rough Type is:

Written and published by
Nicholas Carr
