Data Center Density Hits the Wall

Why the era of packing more servers into the same space may have to end.

Industrial Light & Magic has been replacing its servers with the hottest new IBM BladeCenters -- literally, the hottest. For every new rack ILM brings in, it cuts the data center's overall power use by 140 kilowatts -- an 84% drop in energy use for the workload each new rack takes over.

But power density in the new racks is higher still: each consumes 28 kW of electricity, versus 24 kW for the previous generation. Every watt of power consumed is transformed into heat that must be removed from each rack -- and from the data center.

The new racks are equipped with 84 server blades, each with two quad-core processors and 32GB of RAM. They are powerful enough to displace seven racks of older BladeCenter servers that the special-effects company purchased about three years ago for its image-processing farm.

To cool each new 42U rack, ILM's air conditioning system must remove more heat than would be produced by nine household ovens running at the highest temperature setting.
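
The arithmetic behind those figures is easy to check: each new 28 kW rack displaces seven older racks drawing roughly 24 kW apiece. The sketch below runs the numbers, assuming a household electric oven draws about 3 kW at its highest setting (an assumed figure, not one quoted above):

```python
# Back-of-the-envelope check of ILM's consolidation figures.
# The ~3 kW oven draw is an assumed figure, used only for illustration.

OLD_RACK_KW = 24      # previous-generation BladeCenter rack
NEW_RACK_KW = 28      # new BladeCenter rack
RACKS_REPLACED = 7    # older racks displaced by each new one
OVEN_KW = 3.0         # assumed draw of a household oven at its highest setting

old_total_kw = OLD_RACK_KW * RACKS_REPLACED       # 168 kW of old hardware
savings_kw = old_total_kw - NEW_RACK_KW           # 140 kW saved per new rack
savings_pct = 100 * savings_kw / old_total_kw     # ~83%, quoted above as 84%

ovens = NEW_RACK_KW / OVEN_KW                     # heat output of roughly 9 ovens

print(f"Saved {savings_kw} kW (~{savings_pct:.0f}%); "
      f"each rack sheds the heat of ~{ovens:.0f} ovens")
```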

These days, most new data centers are designed to support an average density of 100 to 200 watts per square foot, and the typical cabinet draws about 4 kW, says Peter Gross, vice president and general manager of Hewlett-Packard Co.'s Critical Facilities Services. A data center designed for 200 watts per square foot can support an average rack density of about 5 kW. With carefully engineered airflow optimizations, a room air-conditioning system can support some racks at up to 25 kW, he says.
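
Those two figures line up if each cabinet is charged with roughly 25 square feet of floor space, including its share of aisle and clearance area. That allocation is an assumption, not something Gross states, but it makes the conversion explicit:

```python
# How a 200 W/sq ft design density translates into a ~5 kW average rack load.
# The 25 sq ft charged to each rack (cabinet footprint plus its share of
# aisle and clearance space) is an assumed figure, not one quoted by Gross.

WATTS_PER_SQFT = 200   # design density of a new data center
SQFT_PER_RACK = 25     # assumed floor area allocated per cabinet

avg_rack_kw = WATTS_PER_SQFT * SQFT_PER_RACK / 1000
print(f"Average supportable rack load: {avg_rack_kw:.0f} kW")   # ~5 kW
```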

At 28 kW per rack, ILM is at the upper limit of what can be cooled with today's computer room air conditioning systems, says Roger Schmidt, an IBM fellow and chief engineer for data center efficiency. "You're hitting the extreme at 30 kW. It would be a struggle to go a whole lot further," he says.

Is This Sustainable?

The question is, what happens next? "In the future, are watts going up so high that clients can't put that box anywhere in their data centers and cope with the power and cooling? We're wrestling with that now," Schmidt says. High-density computing beyond 30 kW will have to rely on water-based cooling, he says. But other experts say that data center economics may make it cheaper for many organizations to spread out servers rather than concentrate them in racks with ever-higher energy densities.

Kevin Clark, director of information technologies at ILM, likes the gains in processing power and energy efficiency he has achieved with the new BladeCenters, which have followed industry trends to deliver more bang for the buck. According to IDC, the average server price has dropped 18% since 2004, while the cost per core has fallen 70%, to $715.

But Clark wonders whether continually doubling compute density is sustainable. "If you double the density on our current infrastructure, from a cooling perspective, it's going to be difficult to manage," he says.

He's not the only one who's concerned. For more than 40 years, the computer industry's business model has been built on the assumption that Moore's Law will prevail and that compute density will double every two years in perpetuity. Now some engineers and data center designers question whether that's feasible -- and whether a threshold has been reached.

The threshold isn't just about whether chip makers can overcome the technical challenges of packing transistors even more densely, but whether it will be economical to run large numbers of extremely high density server racks in modern data centers.
