In its March issue, Harper’s publishes one section of the official blueprints of the site plan for Google’s giant data center at The Dalles, on the banks of the Columbia River in Oregon. (The project goes by the codename 02 on the plan.) Some stats: the warehouses holding the computers are each 68,680 square feet, while the attached cooling stations are 18,800 square feet. The blueprint also shows an administration building and a sizable “transient employee dormitory.”
In comments on the plan, Harper’s estimates, roughly, that once all three server buildings are operating in 2011 the plant “can be expected to demand about 103 megawatts of electricity – enough to power 82,000 homes, or a city the size of Tacoma, Washington.” The Web, the magazine says, “is no ethereal store of ideas, shimmering over our heads like the aurora borealis. It is a new heavy industry, an energy glutton that is only growing hungrier.”
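A quick sanity check of the article’s own equivalence (103 MW ≈ 82,000 homes), using only the numbers Harper’s supplies:

```python
# Sanity check on Harper's equivalence: 103 MW ~ 82,000 homes.
plant_power_mw = 103
homes = 82_000

# Average continuous draw per home implied by the comparison.
watts_per_home = plant_power_mw * 1e6 / homes
print(f"{watts_per_home:.0f} W per home")

# Equivalent monthly household energy use at that average draw.
kwh_per_month = watts_per_home / 1000 * 24 * 30
print(f"{kwh_per_month:.0f} kWh per month")
```

That works out to roughly 1.26 kW per home, or about 900 kWh per month, which is in the ballpark of typical US residential usage, so the magazine’s comparison is at least internally consistent.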
A few salient points that this article fails miserably to mention:
1. 103 MW is a lot of electricity. However, the better indicator is how much power is consumed over a definite period of time. Is this 103 MW per day? Per hour? Per month? Without the time dimension, the implications of the statement are pretty useless.
2. The Pacific Northwest generates a lot of power because a great deal of it is hydro from our system of dams and rivers, coupled with a growing renewable supply from wind farms. Any implication that carbon emissions or other global-warming tripwires will increase simply because of Google is misguided.
3. Even if the 103 MW load occurs only for short durations, it is simply a replacement for the various aluminum smelters that operated in that region for decades – until the Chinese figured out how to smelt high-quality aluminum and ship it here much cheaper than we could produce it locally.
So the gluttonous aluminum smelters are replaced by Google’s data center. Our energy infrastructure handled that load back then, and will handle Google’s just fine, thank you very much. Just a swap from one now-dead “heavy” industry to another.
And the point here is what again? This appears to be a comparison between Google’s power consumption and that of the homes of Al Gore and John Edwards.
Bob, MW is a measure of power, not energy, so 103 MW is 103 megawatt-hours per hour.
Sergey, you are correct with respect to the difference between power and energy, but there’s a difference between a watt (and kW or MW) and a watt-hour (and kWh, MWh), which is the amount of energy consumed by one watt running for one hour. The watt-hour is the unit of measure that electric companies bill customers on, and, I will admit, it gets a bit confusing.
I don’t understand what you’re criticizing. Watts, as you say, are a measure of power, or energy divided by time.
103 MW is the power (not the energy) that the facility needs: although ‘power’ and ‘energy’ are sometimes used as synonyms in English, this is wrong.
Consider a practical example: when you drive a 200 horsepower car (horsepower is also a measure of power, not energy) at full output for an hour, you consume about 150 kilowatt-hours of energy, but if you leave it in the garage, you consume no energy at all, as you can see by looking at the level of gasoline in the tank.
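The distinction the commenters are drawing can be made concrete. Assuming, purely for illustration, that the facility draws a constant 103 MW, the energy consumed over any period is just power multiplied by time:

```python
# Power (MW) is a rate; energy (MWh) is power integrated over time.
power_mw = 103  # instantaneous demand, per the article

def energy_mwh(power_mw, hours):
    """Energy consumed at constant power over a given duration."""
    return power_mw * hours

print(energy_mwh(power_mw, 1))     # one hour
print(energy_mwh(power_mw, 24))    # one day
print(energy_mwh(power_mw, 8760))  # one year
```

So “103 MW per day” is a category error: the time dimension is already built into the watt (one joule per second), and asking for it again is like asking whether a car’s 60 mph is per hour or per month.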
I think there are a couple of points that the author (and most of those commenting on this topic) are missing. The first is the notion of utilization.
The real metric here isn’t that – as the Harper’s author puts it – “thousands of servers” spring into action on each search query, or that “tens of billions of CPU cycles” are allocated to that task. Since “frivolous” power consumption is the putative consequence here, the real metric is closer to CPU seconds spent on useful tasks versus CPU seconds spent idle, waiting for something to do. Okay, maybe you can argue that the “American Idle” – sic – query example isn’t “useful”, but never mind…
Given this metric, many best practices for server-side application and infrastructure development and deployment are intended to keep the CPUs and other hardware that consume power (and capital dollars) utilized as highly as possible. This means that these CPUs don’t sit idle very long, and can be quickly and efficiently reallocated, on sub-second timescales, as they complete their (subdivided) work for any given search query.
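One common way to express the metric this comment describes is the busy fraction of total CPU time. A minimal sketch, with purely illustrative numbers (nothing here is measured):

```python
# Utilization: fraction of CPU time doing useful work vs. sitting idle.
# All figures below are illustrative assumptions, not measurements.
def utilization(busy_seconds, idle_seconds):
    return busy_seconds / (busy_seconds + idle_seconds)

# A well-packed data-center server: work is subdivided and rescheduled
# on sub-second timescales, so idle gaps stay small.
server = utilization(busy_seconds=50.0, idle_seconds=10.0)

# A desktop PC mostly waiting for the user to click the next link.
desktop = utilization(busy_seconds=2.0, idle_seconds=58.0)

print(f"server ~{server:.0%}, desktop ~{desktop:.0%}")
```

With these assumed numbers, the server runs at roughly 83% utilization against the desktop’s 3%, which is the asymmetry the rest of the comment builds on.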
Yes, it’s true that these data centers are sized for peak loads, which are probably significantly higher than the average ones. But those servers can do something else in the meantime (like serving Gmail or Google Docs or YouTube, which don’t in general share peaks) or can be put into a lower-power, ready-to-use standby state.
This should be contrasted with what happens on the receiving end of these queries. Powerful CPUs in end-user PCs essentially sit idle most of the time, waiting for the human to click the next web link (or even sit down at the keyboard). Multiply this by the tens of millions of PCs out there (even those in low-power standby mode) and the data center’s power consumption is rather dwarfed. That’s why mobile devices and PC power-saving techniques are so important.
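The “dwarfed” claim can be ballparked. Taking the article’s 103 MW as given and assuming, for illustration only, a fleet of 50 million always-on desktops idling at 60 W each (both figures are guesses, not data):

```python
# Back-of-envelope comparison: idle desktop PCs vs. one 103 MW data center.
# The fleet size and per-PC idle draw are illustrative assumptions.
data_center_mw = 103

pcs = 50_000_000          # assumed number of always-on desktop PCs
idle_watts_per_pc = 60    # assumed idle draw per desktop

fleet_mw = pcs * idle_watts_per_pc / 1e6
print(f"PC fleet idle draw: {fleet_mw:.0f} MW")
print(f"Ratio to data center: {fleet_mw / data_center_mw:.0f}x")
```

Under those assumptions the idle fleet draws about 3,000 MW, roughly thirty data centers’ worth, which is the comment’s point: the aggregate waste sits at the edges, not in the plant.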
Finally, there’s economy of scale working for the data center. In Google’s case (I don’t work for Google BTW, but I do use their search and mail products often), they noticed that a significant source of power drain in commodity server hardware is the cheap power supply, which apparently wastes somewhere around 30% of the power as heat before it ever reaches the components downstream – and that waste heat in turn requires additional power for cooling in the data center. So purportedly Google invested in their own power supply designs, which are apparently worth the expense in saved power costs. This is a good example of economies of scale.
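The compounding effect described here can be sketched numerically. The ~30% loss figure comes from the comment; the IT load and the cooling coefficient below are assumptions for illustration:

```python
# How a power-supply conversion loss compounds with cooling load.
# The 70% PSU efficiency echoes the comment's ~30% loss figure; the
# IT load and cooling coefficient are illustrative assumptions.
it_load_mw = 70.0  # assumed useful (downstream) IT load

def facility_power(it_load_mw, psu_efficiency, cooling_per_watt_heat=0.5):
    """Total draw: IT load, PSU conversion loss, and cooling for that loss.

    cooling_per_watt_heat: assumed watts of cooling power per watt of
    PSU waste heat (cooling of the IT load's own heat is omitted here
    for simplicity).
    """
    input_power = it_load_mw / psu_efficiency
    waste_heat = input_power - it_load_mw
    return input_power + waste_heat * cooling_per_watt_heat

cheap = facility_power(it_load_mw, psu_efficiency=0.70)
better = facility_power(it_load_mw, psu_efficiency=0.90)
print(f"cheap PSU: {cheap:.1f} MW, efficient PSU: {better:.1f} MW")
```

With these assumptions, moving from a 70%-efficient to a 90%-efficient supply cuts total draw from about 115 MW to about 82 MW for the same useful load – every watt saved in the supply is saved again in the chillers, which is why a custom PSU design can pay for itself at this scale.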