Why data center temperatures have moderated

The heat rose, then fell -- and is poised to rise again

In the span of two years -- from 2003 to 2005 -- operating power density in data centers jumped from an average of 25 watts per square foot to 52 watts, says Peter Gross, vice president and general manager at HP Critical Facilities Services. At Industrial Light & Magic, Gary Meyer, systems engineer and project manager, was just finishing up a new data center designed for 200 W per square foot. He wondered openly back in 2005 if the company should have pursued a 400-watt-per-square-foot design instead.
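The scale of that jump is easier to feel with a little arithmetic. A minimal sketch, using the article's per-square-foot figures and a hypothetical 10,000-square-foot room (the floor area is an assumed example value, not from the article):

```python
def room_power_kw(watts_per_sq_ft, floor_area_sq_ft):
    """Total IT load implied by a given power density."""
    return watts_per_sq_ft * floor_area_sq_ft / 1000.0

AREA = 10_000  # hypothetical 10,000 sq ft raised-floor room

load_2003 = room_power_kw(25, AREA)   # 250 kW
load_2005 = room_power_kw(52, AREA)   # 520 kW

print(f"2003: {load_2003:.0f} kW, 2005: {load_2005:.0f} kW "
      f"({load_2005 / load_2003:.1f}x in two years)")
# → 2003: 250 kW, 2005: 520 kW (2.1x in two years)
```

In other words, the same room more than doubled its power draw, and every extra kilowatt of load is roughly another kilowatt of heat the cooling plant has to remove.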

Then the growth curve suddenly slowed as several moderating forces came into play. One was the development and use of better power-management tools. Another was a dramatic improvement in power supplies, whose efficiency rose from as low as 65% to more than 90%, even at low utilization levels. A third was the adoption of variable-speed fans in everything from servers to computer-room air handlers.
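The power-supply improvement alone is substantial. A quick sketch of the efficiency figures above, using an assumed 300 W per-server DC-side load (a hypothetical example, not an article figure):

```python
def wall_power(it_load_watts, efficiency):
    """Power drawn from the wall to deliver a given DC-side load."""
    return it_load_watts / efficiency

LOAD = 300.0  # assumed per-server DC load in watts

old = wall_power(LOAD, 0.65)  # ~461.5 W at 65% efficiency
new = wall_power(LOAD, 0.90)  # ~333.3 W at 90% efficiency
saved = old - new             # ~128 W less drawn, and dumped as heat, per server
```

Everything the supply wastes becomes heat inside the room, so a more efficient supply both cuts the utility bill and lightens the cooling load.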

[Read our related story, "Data center density hits the wall," which includes tips for how to save on energy costs.]

But the most visible reason for the moderation of the power-density growth curve has been chip makers' move away from increased processor clock speeds in favor of multicore designs. "Over the last several generations, we've kept thermal design points consistent and managed power moderation more efficiently within the [processor] itself," says Dylan Larson, director of data center technology initiatives at Intel. But while power consumption per socket has remained relatively flat, density at the system level has continued to rise, although more slowly than before.

"Intel spread out the heat with multicores and contained the heat flux problem at the chip level for a short period of time," says Roger Schmidt, IBM fellow and chief engineer for data center efficiency. "But the power in the rack still seems to be going up because we try to get more out of the box." Vendors say the servers they ship today are more heavily configured, with more heat-generating processors and memory chips than in the past.

Then, too, customers have loaded up on memory, and while memory density per DIMM (dual in-line memory module) continues to rise, power consumption per module isn't going down. Manufacturers have gradually increased the number of DIMM slots on their servers so that users can host more virtual machines on them. IBM's BladeCenter HS22, for example, can hold up to 96GB of memory in 12 DIMM slots. "Memory has been one of our major problems," says Schmidt. "As far as the solution, we're struggling with this. It's a tough one."
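Even modest per-module draw adds up once every slot is populated. A rough sketch; the per-DIMM wattage below is an assumed ballpark for illustration, not a figure from the article or from IBM:

```python
WATTS_PER_DIMM = 5.0  # assumed average draw per populated DIMM (illustrative)
SLOTS = 12            # DIMM slot count of the BladeCenter HS22 cited above

memory_watts = WATTS_PER_DIMM * SLOTS  # 60 W for memory alone in one blade
```

Multiply that across every blade in a chassis and memory becomes a meaningful slice of the rack's heat output, which is why Schmidt singles it out.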

Ironically, server virtualization has contributed to the heat problem. While IT organizations have gained efficiencies and freed up floor space by consolidating physical servers at ratios of 30-to-1 or more, the new platforms that host all of those virtual servers have a higher power density than the systems they replaced.
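The consolidation math above can be sketched with hypothetical numbers (all wattages and rack-unit sizes here are assumed for illustration, not article figures):

```python
def total_power(n_servers, watts_each):
    """Aggregate draw of a fleet of identical servers."""
    return n_servers * watts_each

# 30 older 1U servers replaced by one heavily configured 2U host (30-to-1)
before = total_power(30, 300)  # 9,000 W spread across ~30 rack units
after  = total_power(1, 1200)  # 1,200 W concentrated in one chassis

density_before = 300 / 1    # 300 W per rack unit (1U each)
density_after  = 1200 / 2   # 600 W per rack unit (assuming a 2U host)
```

Under these assumptions the aggregate draw falls sharply, but the watts per rack unit double: the heat that used to be spread across a row of racks now concentrates in a few hot spots the cooling system must handle.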

With virtualization driving such dramatic physical server consolidations, net energy use in the data center should be on the decline across the board. But it's not happening. "Everyone is virtualizing, and I have yet to see any specific reduction in power consumption," Gross says. Why doesn't power drop as the number of physical machines goes down? "It's not clear whether this is because the virtualization scale is still relatively low or because new applications are added," he says.

Copyright © 2010 IDG Communications, Inc.
