Data center density hits the wall

Why the era of packing more servers into the same space may have to end

Page 4 of 6

Liquid makes its entrance

While redesigning data centers to modern standards has helped reduce power and cooling problems, the newest blade servers are already exceeding 25 kW per rack. IT has spent the past five years tightening up racks, cleaning out raised-floor spaces and optimizing air flows; in terms of energy-efficiency gains, the low-hanging fruit is gone. If densities continue to rise, containment will be the last gasp for computer-room air cooling.

Some data centers have already begun to move to liquid cooling to address high-density "hot spots." The most common technique, called closely coupled cooling, involves piping chilled liquid, usually water or glycol, into the middle of the raised-floor space to supply air-to-water heat exchangers within a row or rack. Kumar estimates that 20% of Gartner's corporate clients use this type of liquid cooling for at least some high-density racks.

These closely coupled cooling devices may be installed in a cabinet in the middle of a row of server racks, as data center vendor APC does with its InRow Chilled Water units, or they can attach directly onto each cabinet, as IBM does with its Rear Door Heat eXchanger.

Closely coupled cooling may work well for addressing a few hot spots, but it is a supplemental solution and doesn't scale well in a distributed computing environment, says Gross. IBM's Rear Door Heat eXchanger, which can remove up to 50,000 BTU/hr of heat -- about 15 kW -- can handle roughly half of the waste heat from ILM's 28-kW racks. But Clark would still need to rely on room air conditioners to remove the rest.
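The arithmetic behind those figures can be checked with the standard conversion of 1 kW ≈ 3,412 BTU/hr; a rough sketch:

```python
# Back-of-the-envelope check of the heat-removal figures above.
# Standard conversion: 1 kW is approximately 3,412 BTU/hr.
BTU_PER_HR_PER_KW = 3412

door_capacity_btu_hr = 50_000   # Rear Door Heat eXchanger rating
rack_load_kw = 28               # ILM's high-density racks

door_capacity_kw = door_capacity_btu_hr / BTU_PER_HR_PER_KW
residual_kw = rack_load_kw - door_capacity_kw

print(f"Door capacity: {door_capacity_kw:.1f} kW")        # ~14.7 kW, i.e. roughly 15 kW
print(f"Left for room air handlers: {residual_kw:.1f} kW")  # ~13.3 kW
```

That residual 13 kW or so per rack is the load Clark's room air conditioners would still have to absorb.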

HP's Peter Gross says most IT managers won't want to pay the extra money needed for super-high densities and will look to distribute servers instead of crunching more into the same amount of space.

Closely coupled cooling also requires building out a new infrastructure. "Water is expensive and adds weight and complexity," Gross says. It's one thing to run water to a few mainframes. But the network of plumbing required to supply chilled water to hundreds of cabinets across a raised floor is something most data center managers would rather avoid. "The general mood out there is, as long as I can stay with conventional cooling using air, I'd rather do that," he says.

"In the distributed model, where they use 1U or 2U servers, the power needed to support thousands of these nodes may not be sustainable," Schmidt says. He thinks data centers will have to scale up the hardware beyond 1U or 2U distributed x86-class servers to a centralized model using virtual servers running on a mainframe or high-performance computing infrastructure.

One way to greatly improve heat-transfer efficiency is through direct-liquid cooling. This involves piping chilled water through specialized cold plates that make direct contact with the processor. This is important because as processor temperatures rise, transistors suffer from an increase in leakage current. Leakage is a phenomenon in which a small amount of current continues to flow through each transistor, even when the transistor is off.
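A common rule of thumb, useful for intuition here though not a figure from the article, is that subthreshold leakage roughly doubles for every 10 °C rise in junction temperature. A minimal sketch under that assumption:

```python
# Illustrative only: assumes the common rule of thumb that leakage
# roughly doubles per ~10 deg C rise in junction temperature.
# The reference temperature below is hypothetical, not a measured value.

def relative_leakage(temp_c, ref_temp_c=60.0, doubling_deg_c=10.0):
    """Leakage power relative to a reference junction temperature."""
    return 2 ** ((temp_c - ref_temp_c) / doubling_deg_c)

for t in (60, 70, 80, 90):
    print(f"{t} C -> {relative_leakage(t):.1f}x leakage")
# 60 C -> 1.0x, 70 C -> 2.0x, 80 C -> 4.0x, 90 C -> 8.0x
```

This is why keeping silicon cooler with cold plates pays off disproportionately: each 10 °C shaved off the junction temperature halves the wasted leakage power, under this model.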

Using cold plates reduces processor leakage problems by keeping the silicon cooler, allowing servers to run faster -- and hotter. In a test of a System p 575 supercomputer, Schmidt says IBM used direct-liquid cooling to improve performance by one-third while keeping an 85-kW cabinet cool. Approximately 70% of the system was water-cooled.
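Reading that 70% figure as the share of heat carried away by water (an assumption; the article doesn't specify), the split for such a cabinet works out as:

```python
# Rough split of the cooling load for the cabinet described above.
# Assumes "70% water-cooled" means 70% of the heat goes to the water loop.
cabinet_kw = 85
water_fraction = 0.70

water_kw = cabinet_kw * water_fraction   # heat to the chilled-water loop
air_kw = cabinet_kw - water_kw           # heat left for room air cooling

print(f"Water: {water_kw:.1f} kW, air: {air_kw:.1f} kW")  # Water: 59.5 kW, air: 25.5 kW
```

Even in a heavily water-cooled system, the remaining air-side load is comparable to several fully loaded conventional racks.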

Few data center managers can envision moving most of their server workloads onto expensive, specialized supercomputers or mainframes.

But IBM's Bradicich says incremental improvements such as low-power chips or variable-speed fans aren't going to solve the problem alone. Architectural improvements to the fundamental x86 server platform will be needed.
