Google lifts its skirts

Yesterday was a remarkable day for the small, slightly obsessed band of Google data-center watchers of which I am one. Around each of the company’s sprawling server farms is a high metal fence patrolled by a particularly devoted squad of rent-a-cops, who may or may not be cyborgian in nature. Ordinary humans seeking a peek at the farms have been required to stand at the fence and gaze at the serene exteriors of the buildings, perhaps admiring the way the eponymous clouds of steam rise off the cooling towers in the morning:

steam.jpg

[photo by Toshihiko Katsuda]

Everything inside the buildings was left to the imagination.

No more. Yesterday, without warning, Google lifted its skirts and showed off its crown jewels. (I think you may need to be Scottish to appreciate that rather grotesquely mixed metaphor.) At the company’s Data Center Energy Summit, it showed a video of the computer-packed shipping containers that it confirmed are the building blocks of its centers (proving that Robert X. Cringely was on the money after all), provided all sorts of details about the centers’ operations, and, most shocking of all, showed off one of its legendary homemade servers.

When Rich Miller, of Data Center Knowledge fame, posted a spookily quiet video of the server yesterday – the video looks like a Blair Witch Project outtake – I initially thought it was an April Fools' joke.

But then I saw some sketchy notes about the conference that Amazon data-center whiz James Hamilton had posted on his blog, and it started to become clear that it was no joke:

Containers Based Data Center

· Speaker: Jimmy Clidaras

· 45 containers (222 kW each / max is 250 kW – 780 W/sq ft)

· Showed pictures of containerized data centers

· 300’ × 250’ of container hangar

· 10MW facility

· Water side economizer

· Chiller bypass …

The server pictured in Miller’s video was the real deal – down to the ingeniously bolted-on battery that allows short-term power backup to be distributed among individual servers rather than centralized in big UPS stacks, as is the norm in data-center design.

Now, CNET’s Stephen Shankland provides a further run-down of the Google disclosures, complete with a diagram of the container-based centers and close-up shots of those idiosyncratic servers, the design of which, said Googler Ben Jai, was “our Manhattan Project.”

GoogleServer.jpg

[photo by Stephen Shankland]

I was particularly surprised to learn that Google rented all its data-center space until 2005, when it built its first center. That implies that The Dalles, Oregon, plant (shown in the first photo above) was the company’s first official data smelter. Each of Google’s containers holds 1,160 servers, and the facility’s original server building had 45 containers, which means it was probably running a total of around 52,000 servers. Since The Dalles plant has three server buildings, that means – and here I’m drawing a speculative conclusion – that it might be running around 150,000 servers altogether.
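For the curious, here’s the back-of-the-envelope arithmetic behind that guess, sketched in a few lines of Python. The per-container and per-building figures come from the presentations; the assumption that all three buildings are built out like the first one is mine, and purely speculative:

    # Back-of-the-envelope server count for The Dalles.
    # Assumes each building matches the original 45-container build-out (my assumption).
    servers_per_container = 1160
    containers_per_building = 45
    buildings = 3

    servers_per_building = servers_per_container * containers_per_building  # 52,200
    total_servers = servers_per_building * buildings                        # 156,600

    print(servers_per_building, total_servers)

Which is where the “around 52,000” and “around 150,000” figures come from, give or take some rounding.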

Here are some more details from Rich Miller’s report:

The Google facility features a “container hangar” filled with 45 containers, with some housed on a second-story balcony. Each shipping container can hold up to 1,160 servers, and uses 250 kilowatts of power, giving the container a power density of more than 780 watts per square foot. Google’s design allows the containers to operate at a temperature of 81 degrees in the hot aisle. Those specs are seen in some advanced designs today, but were rare indeed in 2005 when the facility was built.

Google’s design focused on “power above, water below,” according to [Jimmy] Clidaras, and the racks are actually suspended from the ceiling of the container. The below-floor cooling is pumped into the hot aisle through a raised floor, passes through the racks and is returned via a plenum behind the racks. The cooling fans are variable speed and tightly managed, allowing the fans to run at the lowest speed required to cool the rack at that moment …

[Urs] Holzle said today that Google opted for containers from the start, beginning its prototype work in 2003. At the time, Google housed all of its servers in third-party data centers. “Once we saw that the commercial data center market was going to dry up, it was a natural step to ask whether we should build one,” said Holzle.
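One aside on the power-density figure in that report: it checks out if you assume the density is computed over the footprint of a standard 40-foot shipping container, which is an assumption on my part (Google didn’t spell out the container dimensions). A quick sketch:

    # Sanity check on the quoted 780 W/sq ft power density.
    # Assumes a standard 40 ft x 8 ft container footprint (my assumption).
    container_power_watts = 250_000      # 250 kW per container, as quoted
    footprint_sq_ft = 40 * 8             # roughly 320 sq ft

    print(container_power_watts / footprint_sq_ft)  # ~781 W/sq ft

Close enough to the “more than 780 watts per square foot” in the report.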

I have to confess that I suddenly feel kind of empty. One never fully appreciates the pleasure of a good mystery until it’s uncloaked.

UPDATE: In an illuminating follow-up post, James Hamilton notes that both the data-center design and the server that Google showed off at the meeting are likely several generations behind what Google is doing today. So it looks like the mystery remains at least partially cloaked.