The Liquid Data Center

Liquid cooling is coming to your data center. It’s not a matter of whether you want it or not. A migration from air to direct liquid cooling is simply the only option that can address surging data center energy costs and allow the power densities of servers to continue to increase into the next decade. It will be too expensive not to adopt it. And it’s coming sooner than you might think.

If it were up to engineers, direct liquid cooling would have been here five years ago, says 25-year IBM veteran Roger R. Schmidt, a distinguished engineer with experience designing water-cooled mainframes. He expects distributed systems to follow in the mainframe’s footsteps.

Some data center managers may not fully grasp the problem, because the efficiency numbers look reassuring: over the past eight years, server performance has increased by a factor of 75 while performance per watt has increased only 16 times, according to Hewlett-Packard Co. Put another way, each server now draws roughly five times as much power as it did eight years ago. And data centers aren't using fewer processors; they're using more than ever. Meanwhile, the power density of equipment has increased to the point where power and cooling systems vendor Liebert Corp. is supporting clients with state-of-the-art server racks exceeding 30 kilowatts (kW).

That creates two problems. First, energy costs are spiraling upward. Many data center managers don’t see that today, because their power use isn’t metered separately and isn’t part of the IT budget. As costs rise, that’s likely to change, forcing IT to retrofit data centers to the new reality.

Second, all that energy gets converted to heat. If you want to know what the heat coming off a 30 kW rack feels like, turn your broiler oven on full blast and open the door. That's 3.4 kW. Now imagine jamming nine broiler ovens, all running full tilt, into the confines of a single rack in your data center and trying to maintain the internal temperature at or below 75 degrees Fahrenheit. Dave Kelley, manager of environmental application engineering at Liebert, says current air-cooling technologies can perhaps handle racks in the "mid-30s," meaning around 35 kW. But equipment vendors say that 50 kW racks could be a reality within five years.
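For a rough sense of the arithmetic, here is a minimal back-of-the-envelope sketch in Python, using only the figures cited above (3.4 kW per broiler, 30 kW racks today and a projected 50 kW):

# Back-of-the-envelope check of the broiler comparison above.
# Figures from the article: a 3.4 kW broiler, 30 kW racks today,
# and 50 kW racks projected within five years.

BROILER_KW = 3.4  # heat output of a household broiler running full blast

def broiler_equivalents(rack_kw: float) -> float:
    """How many broilers it takes to match a rack's heat output."""
    return rack_kw / BROILER_KW

for rack_kw in (30, 50):
    print(f"{rack_kw} kW rack is about {broiler_equivalents(rack_kw):.1f} broiler ovens")

# Prints roughly 8.8 broilers for a 30 kW rack and 14.7 for a 50 kW rack.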

Christian Belady, a distinguished engineer at HP, is passionate about educating data center managers about the problem and establishing standards for liquid-cooled data centers. “If you look at the energy costs associated with not driving toward density and taking advantage of these densities, there will be huge penalties from an efficiency standpoint,” Belady says.

But all that heat will have to be removed from the data center, which is one reason why data center infrastructure costs per server have risen. In fact, while the cost of server hardware has remained flat or declined slightly, Belady estimates that the cost of the data center infrastructure needed to support a server over a three-year life span exceeded the hardware cost back in 2003. This year, the cost of energy (power and cooling) required per server, amortized over that same three years, has pulled even with the equipment cost. By 2008, it will surpass it, becoming the single largest component of a server's total cost of ownership.
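Belady's underlying figures aren't broken out here, but the shape of the calculation is simple. The sketch below uses placeholder assumptions (the server price, power draw, cooling overhead and electricity rate are illustrative, not Belady's numbers) to show how three years of energy spending can climb into the same range as the purchase price of the hardware:

# Illustrative sketch of the energy-vs-hardware crossover Belady describes.
# None of these values come from the article; they are placeholder
# assumptions chosen only to show the shape of the calculation.

SERVER_PRICE_USD = 3000   # hypothetical purchase price of one server
SERVER_DRAW_KW = 0.4      # hypothetical average draw per server
COOLING_OVERHEAD = 2.0    # hypothetical: 1 W of cooling for every 1 W of IT load
PRICE_PER_KWH = 0.10      # hypothetical electricity rate, USD
HOURS_PER_YEAR = 24 * 365
LIFESPAN_YEARS = 3        # the three-year life span used in the article

energy_cost = (SERVER_DRAW_KW * COOLING_OVERHEAD * HOURS_PER_YEAR
               * PRICE_PER_KWH * LIFESPAN_YEARS)

print(f"Energy (power plus cooling) over {LIFESPAN_YEARS} years: ${energy_cost:,.0f}")
print(f"Assumed server hardware price: ${SERVER_PRICE_USD:,.0f}")

With these assumptions, three years of power and cooling comes to roughly $2,100 against a $3,000 server, already the same order of magnitude; raise the rack density or the electricity rate and the energy line crosses the hardware line, which is the crossover Belady projects.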

That’s where liquid cooling comes in. Direct cooling of servers by piping liquid refrigerant or chilled water directly to components within racks is far more efficient than using air, and it will become a requirement.

How soon? Kelley says his company has projects under way with IT equipment vendors that he can’t discuss. But he predicts that “within a couple of years, somebody will have something where you can plug [a line containing liquid coolant] directly into a processor.”

More efficient designs could substantially cut cooling costs, which today can account for more than half of data center energy use. Best practices and optimizations of existing infrastructure can bring immediate savings. On racks approaching 30 kW, users are turning to spot-cooling systems that run liquid refrigerant or chilled water to a heat exchanger that blows cool air from directly above or adjacent to server racks. That’s more efficient than room air-conditioning units because the chilled air travels a shorter distance. These designs pipe liquid coolant, already used by computer room air-conditioning units at the outer edges of the data center, up to the racks themselves. It’s not hard to imagine extending those lines into the racks to deliver direct liquid cooling. The heat exchanger goes away, perhaps replaced in an IBM BladeCenter chassis with a hookup that accepts a chilled water or liquid refrigerant feed.

Today, spot-cooling systems typically require ad hoc copper piping overhead or under the floor to reach individual racks. As more and more racks require such cooling, data center managers face a potential mess. What’s worse, since few standards exist, things as basic as liquid coolant specifications and pipe couplings remain proprietary. Belady is pushing for common standards. “If we wait,” he says, “everything is going to be much more proprietary, and when that happens, you lose the opportunity for interoperability.”

Robert L. Mitchell is a Computerworld national correspondent. Contact him at robert_mitchell@computerworld.com.

Copyright © 2007 IDG Communications, Inc.
