This past week, Jonathan Koomey, of the Lawrence Berkeley National Laboratory, released an update to his study of the electricity consumed by server computers. The research, funded by AMD, underscores the rapid increases in data-center energy use, showing that the power consumed by servers and related cooling gear doubled over the first five years of this decade, reaching 123 billion kWh in 2005. Koomey expects servers’ energy use to jump by another 76% between 2005 and 2010.
Such figures provide a good backdrop for a new article on server energy efficiency, by Google engineers Luiz André Barroso and Urs Hölzle, that appears in the December issue of IEEE’s Computer magazine. The Google duo argue that achieving a big increase in efficiency will require a new way of thinking about server design.
Current efforts to improve computer efficiency, such as improved power supplies, sleep modes, and chip designs, are important, but don’t go far enough, they write: “Long-term technology trends invariably indicate that higher performance means increased energy usage. As a result, energy efficiency must improve as fast as computing performance to avoid a significant growth in computers’ energy footprint.”
One of the shortcomings of current efforts is that they often focus on the extremes in the usage profile of a computer, “emphasizing high energy efficiency at peak performance levels and in idle mode.” But servers are rarely either idle or operating at peak output. They usually run “at between 10 and 50 percent of their maximum utilization levels,” and at these levels energy efficiency is weak: “We see that peak energy efficiency occurs at peak utilization and drops quickly as utilization decreases. Notably, energy efficiency in the 20 to 30 percent utilization range – the point at which servers spend most of their time – has dropped to less than half the energy efficiency at peak performance. Clearly, such a profile matches poorly with the usage characteristics of server-class applications.”
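The efficiency drop-off the authors describe follows directly from fixed idle power: a server that draws a large fraction of its peak power while doing nothing dilutes its work-per-watt at low utilization. A minimal sketch, using illustrative figures (100 W idle, 200 W peak, linear power curve — these numbers are assumptions, not from the article):

```python
# Hypothetical server power model (illustrative numbers, not from the article):
# 100 W at idle, 200 W at peak, power scaling linearly with utilization,
# and performance proportional to utilization.
P_IDLE = 100.0  # watts at 0% utilization (assumed)
P_PEAK = 200.0  # watts at 100% utilization (assumed)

def power(u):
    """Power draw at utilization u (0.0 to 1.0), linear between idle and peak."""
    return P_IDLE + u * (P_PEAK - P_IDLE)

def efficiency(u):
    """Useful work per watt, normalized so that peak utilization = 1.0."""
    peak_eff = 1.0 / P_PEAK
    return (u / power(u)) / peak_eff

for u in (1.0, 0.5, 0.3, 0.2, 0.1):
    print(f"utilization {u:>4.0%}: relative efficiency {efficiency(u):.2f}")
```

Under this toy model, efficiency at 20–30 percent utilization comes out at roughly a third to under half of peak efficiency, which is consistent with the profile the authors describe.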
Barroso and Hölzle argue that addressing this problem requires a new commitment, on the part of the makers of servers and server components, to create what they call “energy-proportional machines” – servers that operate efficiently at every stage of capacity utilization, rather than just at the extremes. Microprocessors are already doing a pretty good job of managing consumption across the usage spectrum, they write, so the onus for improving efficiency falls mainly on the manufacturers of other subcomponents like memory modules and disk drives.
Energy-proportional servers, write the engineers, would lead to dramatically greener data centers. They “could cut by one-half the energy used in data center operations. They would also lower peak power at the facility level by more than 30 percent, based on simulations of real-world data center workloads. These are dramatic improvements, especially considering that they arise from optimizations that leave peak server power unchanged.” As a first step, they recommend that energy consumption benchmarks be expanded to include readings at different levels of capacity utilization. That would put pressure on component manufacturers to green up their act.
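The "cut by one-half" claim can be sanity-checked with the same kind of toy model: compare a conventional server against an ideal energy-proportional one at the 30 percent utilization where servers spend much of their time. The wattages below are illustrative assumptions, not figures from the article:

```python
# Sketch comparing a conventional server (assumed 100 W idle, 200 W peak,
# linear power curve) with an ideal energy-proportional server (power scales
# linearly from 0 W to the same 200 W peak) at 30% utilization.
P_IDLE, P_PEAK = 100.0, 200.0
u = 0.3

conventional = P_IDLE + u * (P_PEAK - P_IDLE)  # power draw with fixed idle cost
proportional = u * P_PEAK                       # power draw scales with work done
saving = 1 - proportional / conventional

print(f"conventional: {conventional:.0f} W, proportional: {proportional:.0f} W")
print(f"energy saved at 30% utilization: {saving:.0%}")
```

Note that peak power is unchanged in both cases, matching the authors' point that the savings come from the shape of the power curve, not from slower hardware.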
This article reads like some kind of graduate-student report. The concepts of a wide dynamic power range, active low-power modes, etc. have been around a while: Transmeta’s Efficeon and Intel’s Pentium M. Why do we need a couple of engineers from Google to tell us what we already know? It would have been interesting if they had proposed something NEW. For example: superconducting ceramics, or light-based processors that would eliminate the main problem – heat caused by “electron friction”. Come on, guys! You are the best and brightest! Let’s see something new! ;)
@Linuxguru: nowadays, if a couple of google engineers wrote a paper titled “We farted”, businessweek would cover it: “they did it on their 20% time”
123 TWh is the output of a medium-sized nuclear power-plant: not a lot, but just for IT? Wow!
Would it be difficult to consolidate all the computing load onto a subset of processors, so that a smaller fraction runs at close to 100% while the rest sleep?
Bertil,
From the authors’ article:
“Even during periods of low service demand, servers are unlikely to be fully idle. Large-scale services usually require hundreds of servers and distribute the load over these machines. In some cases, it might be possible to completely idle a subset of servers during low-activity periods by, for example, shrinking the number of active front ends. Often, though, this is hard to accomplish because data, not just computation, is distributed among machines. For example, common practice calls for spreading user data across many databases to eliminate the bottleneck that a central database holding all users poses.
“Spreading data across multiple machines improves data availability as well because it reduces the likelihood that a crash will cause data loss. It can also help hasten recovery from crashes by spreading the recovery load across a greater number of nodes, as is done in the Google File System. As a result, all servers must be available, even during low-load periods. In addition, networked servers frequently perform many small background tasks that make it impossible for them to enter a sleep state.”
Nick
Nick,
the issue of greener servers is extremely important and definitely needs to be addressed.
However, the power consumption of data centres can also be reduced in many other simple ways.
I am a director of Cork Internet eXchange, and we designed our data centre to be hyper-energy-efficient from the ground up. Typical data centres operate at 30% energy efficiency (for every 100 kW taken in, 30 kW gets to power the servers).
Our data centre operates at 80% energy efficiency! What is more, we blogged how we did it on the data centre blog (see http://www.cix.ie/air-conditioning-efficiency-at-the-cix-data-centre/).
It’s not often you see a data centre with a blog, is it?
It’s not just about green servers. The web itself is greener, according to ecogeek.
I don’t know if we should take ecogeek’s post seriously, but it sure is interesting.
[…] could cut by one-half the energy used in data center operations. They would also lower peak power at the facility level by more than 30 percent, based on simulations of real-world data center workloads. […] It’s all about awareness. The more managers know about this topic (and only buy green products), the more producers are willing to produce greener products. Our business datacentre will be fully replaced in Q2 with energy-efficient servers. Here is a link to more info about pollution from computers and how we can make a better world together.