Data Center Density Hits the Wall

Why the era of packing more servers into the same space may have to end.

ILM's data center, completed in 2005, was designed to support an average load of 200 watts per square foot. The design has plenty of power and cooling capacity overall. It just doesn't have a way to efficiently cool the high-density racks.

ILM uses a hot aisle/cold aisle design, and the staff has successfully adjusted the number and position of perforated tiles in the cold aisles to optimize airflow around the carefully sealed BladeCenter racks. But to avoid hot spots, the room air conditioning system is cooling the entire 13,500-square-foot raised floor space to a chilly 65 degrees.
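A rough calculation shows why the room has to be overcooled: the design figures leave plenty of headroom on average, but a single dense rack blows past them locally. The 25 kW rack load and roughly 25-square-foot footprint assumed in the sketch below are illustrative, not ILM's actual equipment figures.

```python
# Rough arithmetic on why ample total capacity still can't handle dense racks.
# The design density and floor area come from the article; the per-rack load
# and footprint are illustrative assumptions.

design_density_w_per_sqft = 200        # design average load
floor_area_sqft = 13_500               # raised-floor area
total_capacity_kw = design_density_w_per_sqft * floor_area_sqft / 1_000
print(f"Total design capacity: {total_capacity_kw:.0f} kW")   # 2700 kW overall

# A hypothetical blade rack drawing 25 kW over a ~25 sq ft footprint
# (cabinet plus its share of aisle) is roughly 1,000 W/sq ft locally --
# five times the average the cooling plant was laid out for.
rack_load_kw = 25
rack_footprint_sqft = 25
local_density = rack_load_kw * 1_000 / rack_footprint_sqft
print(f"Local density at the rack: {local_density:.0f} W/sq ft")
```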

Clark knows it's inefficient; today's IT equipment is designed to run at temperatures as high as 81 degrees, so he's looking at a technique called cold-aisle containment.

Other data centers are experimenting with containment -- high-density zones on the floor where doors seal off the ends of either the hot or cold aisles. Barriers may also be placed along the top of each row of cabinets to prevent hot and cold air from mixing near the ceiling. In other cases, cold air may be routed directly into the bottom of each cabinet, pushed up to the top and funneled into the return-air space in the ceiling plenum, creating a closed-loop system that doesn't mix with room air at all.

"The hot/cold aisle approach is traditional but not optimal," says Rocky Bonecutter, manager of data center technology and operations at Accenture PLC. "The move now is to go to containment."

HP's Gross estimates that data centers that use such techniques can support up to about 25 kW per rack with a computer room air conditioning system. "It requires careful segregation of cold and hot, eliminating mixing, optimizing the airflow. These are becoming routine engineering exercises," he says.
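One way to see why containment buys headroom: the better the separation of hot and cold air, the larger the usable supply-to-return temperature rise, and the less air each rack needs. The sketch below uses the standard HVAC rule of thumb that sea-level air carries about 1.08 BTU/hr per CFM per degree Fahrenheit; the delta-T values are illustrative assumptions about how well the aisles are sealed, not figures from HP.

```python
# Rough sizing of the airflow needed to carry away a rack's heat.
# Rule of thumb: BTU/hr = 1.08 * CFM * delta_T_F (sea-level air at
# typical density). The delta-T values below are illustrative.

BTU_PER_WATT_HR = 3.412

def cfm_required(load_watts: float, delta_t_f: float) -> float:
    """Airflow (cubic feet per minute) to remove load_watts at a given
    supply-to-return temperature rise."""
    btu_per_hr = load_watts * BTU_PER_WATT_HR
    return btu_per_hr / (1.08 * delta_t_f)

rack_kw = 25
for delta_t in (15, 20, 30):   # tighter containment -> less mixing -> larger usable delta-T
    print(f"25 kW rack at dT={delta_t} F: {cfm_required(rack_kw * 1000, delta_t):,.0f} CFM")
# Roughly 5,300, 3,900 and 2,600 CFM respectively.
```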

While redesigning data centers to modern standards has helped ease power and cooling problems, the newest blade servers already exceed 25 kW per rack. IT has spent the past five years tightening up racks, cleaning out raised-floor spaces and optimizing airflow; the low-hanging fruit of energy-efficiency gains is gone. If densities continue to rise, containment will be the last gasp for computer-room air cooling.

Time for Liquid Cooling?

Some data centers have already begun to move to liquid cooling to address high-density hot spots. The most common technique, called closely coupled cooling, involves piping chilled liquid, usually water or glycol, into the middle of the raised floor space to supply air-to-water heat exchangers within a row or rack. Kumar estimates that 20% of Gartner's corporate clients use this type of liquid cooling for at least some high-density racks.
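The attraction of piping liquid close to the rack is its heat-carrying capacity. The comparison below uses the standard rule of thumb for water, about 500 BTU/hr per gallon per minute per degree Fahrenheit; the 10-degree coil rise is an illustrative assumption, and glycol mixes carry somewhat less heat per gallon than plain water.

```python
# Why a modest chilled-water loop can do the work of a large air handler:
# rule of thumb for water, BTU/hr = 500 * GPM * delta_T_F
# (500 = 8.33 lb/gal * 60 min/hr * 1 BTU/lb-F). The 10 F rise is an
# illustrative assumption.

BTU_PER_WATT_HR = 3.412

def gpm_required(load_watts: float, delta_t_f: float) -> float:
    """Water flow (gallons per minute) to remove load_watts at a given
    coil temperature rise."""
    return load_watts * BTU_PER_WATT_HR / (500 * delta_t_f)

print(f"{gpm_required(25_000, 10):.0f} GPM of water carries a 25 kW rack's heat")
# About 17 GPM -- versus thousands of CFM of air for the same load.
```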

IBM's Schmidt says data centers with room-based cooling -- especially those that have moved to larger air handlers to cope with higher heat densities -- could save considerable energy by moving to liquid cooling.

But Microsoft's Belady thinks liquid's appeal will be limited to a single niche: high-performance computing. "Once you bring liquid cooling to the chip, costs start going up," he contends. "Sooner or later, someone is going to ask the question: Why am I paying so much more for this approach?"

The best way to take the momentum away from ever-increasing power density is to change the chargeback method for data center use, says Belady. Microsoft changed its cost-allocation strategy and began billing users for their share of the data center's total power draw rather than for floor space and rack utilization. After that, he says, "the whole discussion changed overnight." Power consumption per rack started to dip. "The whole density thing gets less interesting when your costs are allocated based on power consumed," he says.
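A simplified sketch of the two chargeback models shows why the incentive flips. The facility cost, tenant footprints and loads below are invented for illustration and are not Microsoft's figures.

```python
# Sketch of chargeback by floor space versus chargeback by power drawn.
# All figures (monthly facility cost, tenant footprints and loads) are
# invented for illustration.

monthly_facility_cost = 100_000.0   # hypothetical total data-center cost

# Each tenant: (name, floor space in sq ft, average power draw in kW)
tenants = [
    ("dense blade racks", 100, 250),
    ("ordinary 1U racks", 400, 150),
    ("storage and tape",  500, 100),
]

total_sqft = sum(t[1] for t in tenants)
total_kw = sum(t[2] for t in tenants)

print(f"{'tenant':<20}{'by floor space':>16}{'by power drawn':>16}")
for name, sqft, kw in tenants:
    by_space = monthly_facility_cost * sqft / total_sqft
    by_power = monthly_facility_cost * kw / total_kw
    print(f"{name:<20}{by_space:>16,.0f}{by_power:>16,.0f}")
# Billed by floor space, the dense racks carry 10% of the cost;
# billed by power drawn, they carry half of it, so density stops paying off.
```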
