
Moving Toward Meltdown

Ultradense 1U and blade server racks are blazingly fast -- and amazingly hot. Keeping them from burning up is more complicated than you might think.

October 6, 2003 12:00 PM ET

Computerworld - John Sawyer manages data centers for corporate clients every day. His company, Johnson Controls Inc., has plenty of experience in data center design and management. Nonetheless, the Milwaukee-based company's carefully designed and planned data center recently experienced overheating problems after installing blade servers.


Many data center managers are just beginning to contemplate large-scale deployments with multiple racks of ultracompact blade servers. These new systems take up far less space than traditional rack-mounted servers, but they dramatically increase heat density. Throwing multiple racks of them into a data center can result in problems ranging from outright failures to unexplained slowdowns and shortened equipment life.


"Today, because of the way the air handlers are configured, we can't handle more than 2 kilowatts per rack," says Sawyer, head of critical facility management services at Johnson Controls. Sawyer says new air-handling equipment can boost that figure into the 3-to-4-kw range. But new blade servers could consume 15 kw or more when fully loaded. That equates to more British thermal units per square foot than a typical household oven and requires a cooling capacity sufficient to air-condition two homes, facilities engineers say. So Sawyer can spread out the racks or partially fill each one to reduce overall wattage per square foot, or he can add localized, spot-cooling systems.


Although most data centers don't have many high-density racks today, data center managers are beginning to replace server racks with more compact designs, some of which accommodate more than 300 servers in a single 42U rack. (1U is 1.75 in.) "You can see a train wreck coming," says Kenneth Brill, executive director at The Uptime Institute Inc. in Santa Fe, N.M.


And while vendors say their systems are designed to run efficiently in fully loaded racks, they don't necessarily take into account the broader impact that large numbers of such racks will have on the rest of the data center.


"We can deal with one or two of these things, but we don't know how to deal with lots of them," says Brill.


The problem is compounded by two facts: Every data center is designed differently, and the industry has yet to agree on a standard for designing data center cooling systems that can handle 15 to 20 kw per rack.


The current guidelines from the American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc. (ASHRAE) are outdated, says Edward Koplin, president of Jack Dale Associates PC, an engineering consulting firm in Baltimore. "Design engineers are using standards from the days of punch cards and water-cooled mainframes," he says. Atlanta-based ASHRAE is working hard on new thermal guidelines, says Don Beaty, chair of the group's High-density Electronic Equipment Facility Cooling Committee. He expects a published standard by year's end.


