Liquid cooling returns to the data center

Liquid cooling is making its way back to the data center, mostly in the form of rack-level gear. Rapidly escalating power demands and rising utility rates are requiring users to take a more proactive approach to cooling.

The shift is already in motion. Servers are arriving preconfigured with liquid-based cooling modules that sit on top of the hottest components in a system, slashing cooling demands in half. The next step is expected to be entire motherboards "hosed down" directly with treated water or refrigerants, eliminating or dramatically reducing the need for the less efficient air conditioning units that populate most data centers today.

For US Internet, a provider of colocation data center services, the use of rack-level cooling systems from Liebert Corp. has become critical. The "creeping" buildup of increasingly dense and hot server equipment inside its Minneapolis data center nearly derailed the fast-growing company, said Travis Carter, co-founder and chief technology officer of US Internet. As customers moved higher-density gear into the data center, temperatures slowly rose, reaching 100 degrees Fahrenheit, and equipment failures began to escalate.

"We didn't even know we had a problem at first," Carter said. "Quite frankly, it snuck up on us over a period of months [in 2005], and we found ourselves with no available space for traditional cooling. It became embarrassing. You can't bring a potential customer into a sauna and expect them to add their gear to the problem."

US Internet redesigned its data center to incorporate Liebert XD cooling systems that pump refrigerant into cabinet racks, and it deployed new air conditioning units and environmental monitoring systems. The data center is now maintained at 70 degrees Fahrenheit, and the customer base is back on its growth curve. The company plans to add a data center later this year that will also incorporate the new cooling equipment.

"As far as I'm concerned, everything we invested in the cooling equipment has been revenue generation right to the bottom line," Carter said. "We would either be out of business today or operating at a much smaller base without that equipment."

Data center energy efficiency is top of mind. Industry roundtables with the U.S. Department of Energy are being held around the country as a potential crisis in the data center has mushroomed over the past few years, and enterprises are struggling to meet demand in facilities that were designed for a different era of computing.

A study published by Gartner Inc. in November projects that by next year, half of all current data centers will have insufficient power and cooling capacity to meet the demands of high-density computing equipment.

Vendors have been scrambling to create a wide range of components, systems and software to try to keep a lid on the bubbling cauldron, and virtualization will buy many enterprises valuable years to develop next-generation cooling strategies. But those new products and technologies, virtualization included, are primarily stopgaps for data center environments that are already overextended, and some can even add to a facility's total energy demand.

In a survey of attendees at Gartner's most recent data center conference, held in November 2006, 80% of respondents said they currently have power and cooling issues on their computer room floors, said Michael Bell, a Gartner analyst. More than a third said they will have to invest in a new data center within the next few years.

In the past few years, energy demands for the average data center have grown from one to three kilowatts per rack to around six kilowatts per rack, with 10- to 12-kilowatt racks becoming increasingly common, Bell said. Deployments of 20- to 30-kilowatt racks early next decade look increasingly plausible.
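Those density figures translate directly into cooling load, since nearly every watt a rack draws must be removed again as heat. The sketch below is a back-of-the-envelope illustration (the conversion factors are standard; the rack sizes are simply the figures Bell cites), not anything from Gartner's study:

```python
# Rough cooling-load arithmetic: 1 kW of IT load is about 3,412 BTU/hr of heat,
# and one "ton" of refrigeration removes 12,000 BTU/hr.
rack_densities_kw = [3, 6, 12, 30]  # legacy, typical today, high-density, projected

for kw in rack_densities_kw:
    btu_per_hr = kw * 3412       # heat to be removed per rack
    tons = btu_per_hr / 12_000   # equivalent tons of cooling per rack
    print(f"{kw:>2} kW rack -> {btu_per_hr:,} BTU/hr (~{tons:.1f} tons of cooling)")
```

At 30 kilowatts, a single rack needs more than eight tons of cooling capacity, which is why room-level air handling alone stops being practical at those densities.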

But not everyone is buying the need for water-cooled data centers; some are trying other options instead. Highmark Inc., a health insurance provider, recently received certification from the U.S. Green Building Council for meeting Leadership in Energy and Environmental Design (LEED) guidelines in its creation of a 28,000-sq.-ft. data center. With adequate floor space, the company relies primarily on a more traditional cooling design that saves energy through targeted airflow and rack-level environmental monitoring systems.

"Everyone knows that data centers are power hogs, and one of our corporate strategies is to reduce energy use," said Mark Wood, Highmark's director of data center infrastructure.

The LEED certification effort included touches such as bike racks to encourage employees to leave their automobiles at home and a 100,000-gallon underground rainwater-collection system that supplies backup water and flushes the facility's toilets.

But many businesses have simply run out of floor space and can no longer rely solely on "hot" and "cold" aisles to spread the heat load adequately. They face the prospect of an expensive overhaul of existing infrastructure or the even more expensive construction of a new facility.

For these customers, vendors including cooling-gear specialists such as Liebert and American Power Conversion Corp., as well as server manufacturers Hewlett-Packard Co., IBM and Sun Microsystems Inc., have introduced systems that move the cooling process into close proximity to the servers. Chilled water or refrigerant is pumped around or inside the server cabinets, while localized fans deliver cold air directly to hot regions. The platforms let companies increase density and target "hot spots," but they can actually add to a facility's total energy demand.

William Dick, executive director for the computational science and engineering program at the University of Illinois, was faced with trying to breathe more life into a 2,000-sq.-ft. data center on campus that had been built in the 1960s and gone through several generations of high-density computer installations. The university wanted to upgrade the computer systems in 2005 but found itself physically unable to expand the building, and it couldn't add any high-density machines without using new cooling techniques.

The university chose a computing array of 1,560 processing nodes based on Apple Inc.'s G5 server systems. It also installed a Liebert XD cooling system in which a refrigerant is pumped to a rack as a liquid, converted to gas within heat exchangers in the platform and then returned to a pumping station, where it is recondensed to liquid. The XD systems can cool heat densities of up to 500 watts per square foot.
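To put that rating in perspective, a quick back-of-the-envelope calculation (a hypothetical illustration, not a figure from the university) shows what 500 watts per square foot would mean if sustained across the room's entire 2,000-sq.-ft. footprint:

```python
# Illustrative sketch only: assumes the whole floor could be loaded and cooled
# at the XD system's rated ceiling of 500 watts per square foot.
floor_sq_ft = 2_000
watts_per_sq_ft = 500

total_heat_w = floor_sq_ft * watts_per_sq_ft
print(f"Supportable IT load at that density: {total_heat_w / 1_000:,.0f} kW "
      f"({total_heat_w / 1_000_000:.1f} MW of heat to remove)")
```

That works out to a full megawatt of heat across the room, far more than traditional room-level air conditioning alone is typically designed to remove.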

The computer array will be replaced again in two to three years, "and cooling at whatever densities we are working with at that point remains my biggest concern," Dick said.

Other strategies work, too. Sustainability Victoria, a government agency in Australia that provides resources to help residents and businesses use energy more efficiently, switched its 100-person staff to notebook computers to practice what it preaches.

Sustainability Victoria merged three operations into a single campus, which has netted energy savings of up to 50%. Part of those savings came from choosing laptop computers that draw 20 watts of electricity, as opposed to desktop PCs with LCD screens that draw 110 watts. The move to notebooks has also allowed the agency to develop telecommuting capabilities.
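The per-seat numbers add up quickly. As a rough, hypothetical illustration (the duty-cycle assumptions below are mine, not the agency's), the client-hardware savings alone look something like this:

```python
# Assumptions for illustration only: all 100 staff switch, and each machine runs
# a nominal 8-hour workday, 250 workdays a year.
staff = 100
desktop_w, laptop_w = 110, 20          # per-seat power draw cited in the article
hours_per_day, days_per_year = 8, 250  # assumed duty cycle

saved_w = (desktop_w - laptop_w) * staff
saved_kwh_per_year = saved_w * hours_per_day * days_per_year / 1_000
print(f"Instantaneous reduction: {saved_w / 1_000:.1f} kW")
print(f"Approximate annual saving: {saved_kwh_per_year:,.0f} kWh")
```

Under those assumptions, the switch trims about 9 kilowatts of continuous daytime load, on the order of 18,000 kilowatt-hours a year, before counting any reduction in cooling.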

"Our mobility effort fits within the whole Green Star-rated building strategy we are creating," O'Brien said. "Mobility, reduction in paper, encouraging new methods of working -- those are all a part of our mission."

(To read more about this topic, see "The Liquid Data Center.")

Darrell Dunn is a freelance reporter based in Fort Worth, Texas, with 20 years of experience covering business technology and enterprise IT. Contact him at darrelldunn@sbcglobal.net.

Copyright © 2007 IDG Communications, Inc.
