Power struggle: How IT managers cope with data center power demands

CIOs say power and cooling are their biggest data center problems

When Tom Roberts oversaw the construction of a 9,000-square-foot data center for Trinity Health, a group of 44 hospitals, he thought the infrastructure would last four or five years. A little more than three years later, he's looking at adding another 3,000 square feet and re-engineering some of the existing space to accommodate rapidly changing power and cooling needs.

Image Credit: Belle Mellor

As in many organizations, Trinity Health's data center faces pressures from two directions. Growth in the business and a trend toward automating more processes as server prices continue to drop have stoked the demand for more servers. Roberts says that as those servers continue to get smaller and more powerful, he can get up to eight times more units in the same space. But the power density of those servers has exploded.

"The equipment just keeps chewing up more and more watts per square foot," says Roberts, director of data center services at Novi, Mich.-based Trinity. That has resulted in challenges meeting power-delivery and cooling needs and has forced some retrofitting.

"It's not just a build-out of space but of the electrical and the HVAC systems that need to cool these very dense pieces of equipment that we can now put in a single rack," Roberts says.

Power-related issues are already a top concern in the largest data centers, says Jerry Murphy, an analyst at Robert Frances Group Inc. in Westport, Conn. In a study his firm conducted in January, 41% of the 50 Fortune 500 IT executives it surveyed identified power and cooling as problems in their data centers, he says.

Murphy also recently visited CIOs at six of the nation's largest financial services companies. "Every single one of them said their No. 1 problem was power," he says. While only the largest data centers experienced significant problems in 2005, Murphy expects more data centers to feel the pain this year as administrators continue to replenish older equipment with newer units that have higher power densities.

In large, multimegawatt data centers, where annual power bills can easily exceed $1 million, more-efficient designs can significantly cut costs. In many data centers, electricity now represents as much as half of operating expenses, says Peter Gross, CEO of EYP Mission Critical Facilities Inc., a New York-based data center designer. Increased efficiency has another benefit: In new designs, more-efficient equipment reduces capital costs by allowing the data center to lower its investment in cooling capacity.
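The cost claim is easy to sanity-check. The sketch below uses assumed figures — a 2 MW continuous IT load and a $0.08/kWh utility rate, neither of which comes from the article — to show how a multimegawatt facility clears a $1 million annual power bill:

```python
# Back-of-the-envelope annual power bill for a multimegawatt data center.
# The 2 MW load and $0.08/kWh rate are illustrative assumptions.
it_load_kw = 2000              # assumed continuous IT load (2 MW)
hours_per_year = 24 * 365      # 8,760 hours
rate_per_kwh = 0.08            # assumed utility rate, $/kWh

annual_kwh = it_load_kw * hours_per_year
annual_cost = annual_kwh * rate_per_kwh
print(f"{annual_kwh:,} kWh/year -> ${annual_cost:,.0f}")
```

Even before counting cooling overhead, the IT load alone pushes well past the $1 million mark Gross describes.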

Pain Points

Trinity's data center isn't enormous, but Roberts is already feeling the pain. His data center houses an IBM z900 mainframe, 75 Unix and Linux systems, 850 x86-class rack-mounted servers, two blade-server farms with hundreds of processors, and a complement of storage-area networks and network switches. Simply getting enough power where it's needed has been a challenge. The original design included two 300-kilowatt uninterruptible power supplies.

"We thought that would be plenty," he says, but Trinity had to install two more units in January. "We're running out of duplicative power," he says, noting that newer equipment is dual-corded and that power density in some areas of the data center has surpassed 250 watts per square foot.

At Industrial Light & Magic's brand-new 13,500-square-foot data center in San Francisco, senior systems engineer Eric Bermender's problem has been getting enough power to ILM's 28 racks of blade servers. The state-of-the-art data center has two-foot raised floors, 21 air handlers delivering more than 600 tons of cooling capacity and the ability to support up to 200 watts per square foot.

Nonetheless, says Bermender, "it was pretty much outdated as soon as it was built." Each rack of blade servers consumes between 18 kW and 19 kW when running at full tilt. The room's design specification called for six racks per row, but ILM is currently able to fill only two cabinets in each because it literally ran out of outlets. The two power-distribution rails under the raised floor are designed to support four plugs per cabinet, but the newer blade-server racks require between five and seven. To fully load the racks, Bermender had to borrow capacity from adjacent cabinets.
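The outlet shortfall follows from circuit-level arithmetic. The sketch below assumes common 208 V, 30 A branch circuits derated to 80% for continuous load — assumptions on our part, since the article doesn't give circuit ratings. On those numbers a fully loaded rack consumes all four provisioned circuits by itself, and dual-corded blade chassis, which need redundant feeds, push the count into the five-to-seven range Bermender describes:

```python
import math

# How many branch circuits a fully loaded blade rack needs.
# The 208 V / 30 A rating and 80% derating are assumed values.
rack_kw = 19
circuit_kw = 208 * 30 * 0.8 / 1000   # ~5 kW usable per circuit

outlets_needed = math.ceil(rack_kw / circuit_kw)
print(outlets_needed)   # single-corded minimum; dual cords add more feeds
```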

The other limiting factor is cooling. At both ILM and Trinity, the equipment with the highest power density is the blade servers. Trinity uses 8-foot-tall racks. "They're like furnaces. They produce 120-degree heat at the very top," Roberts says. Such racks can easily top 20 kW today, and densities could exceed 30 kW in the next few years.

What's more, for every watt of power used by IT equipment in data centers today, another watt or more is typically expended to remove waste heat. A 20 kW rack requires more than 40 kW of power, says Brian Donabedian, an environmental consultant at Hewlett-Packard Co. In systems with dual power supplies, additional power capacity must be provisioned, boosting the power budget even higher. But power-distribution problems are much easier to fix than cooling issues, Donabedian says, and at power densities above 100 watts per square foot, the solutions aren't intuitive.
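Donabedian's rule of thumb translates directly into a provisioning budget. A minimal sketch, using the article's 20 kW rack and an assumed 1:1 cooling overhead:

```python
# Power that must be provisioned for one dense rack, per the article's
# rule of thumb: at least one watt of cooling for every IT watt.
it_kw = 20                      # blade rack IT load (from the article)
cooling_kw = it_kw * 1.0        # assumed 1:1 cooling overhead
total_kw = it_kw + cooling_kw   # 40 kW delivered and removed as heat

# Dual power supplies reserve capacity on both feeds, raising the
# budget further even though only one feed carries the load at a time.
redundant_feed_kw = it_kw * 2
print(total_kw, redundant_feed_kw)
```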

For example, a common mistake data center managers make is to place exhaust fans above the racks. But unless the ceiling is very high, those fans can make the racks run hotter by interfering with the operation of the room's air conditioning system. "Having all of those produces an air curtain from the top of the rack to the ceiling that stops the horizontal airflow back to the AC units," Roberts says.

Trinity addressed the problem by using targeted cooling. "We put in return air ducts for every system, and we can point them to a specific hot aisle in our data center," he says.

ILM spreads the heat load by spacing the blade server racks in each row. That leaves four empty cabinets per row, but Bermender says he has the room to do that right now. He also thinks an alternative way to distribute the load—partially filling each rack—is inefficient. "If I do half a rack, I'm losing power efficiency. The denser the rack, the greater the power savings overall because you have fewer fans," which use a lot of power, he says.

Bermender would also prefer not to use spot cooling systems like IBM's Cool Blue, because they take up floor space and result in extra cooling systems to maintain. "Unified cooling makes a big difference in power," he says.

Ironically, many data centers have more cooling than they need but still can't cool their equipment, says Donabedian. He estimates that by improving the effectiveness of air-distribution systems, data centers can save as much as 35% on power costs.

Before ILM moved, the air conditioning units, which were positioned opposite each other in the room, created dead-air zones under the 12-inch raised floor. Seven years of moves and changes had left a subterranean tangle of hot and abandoned power and network cabling that was blocking airflows. At one point, the staff powered down the entire data center over a holiday weekend, moved out the equipment, pulled up the floor and spent three days removing the unused cabling and reorganizing the rest. "Some areas went from 10 [cubic feet per minute] to 100 cfm just by getting rid of the old cable under the floor," Bermender says.

Even those radical steps provided only temporary relief, because the room was so overloaded with equipment. Had ILM not moved, Bermender says, it would have been forced to move the data center to a collocation facility. Managers of older data centers can expect to run into similar problems, he says.

That suits Marvin Wheeler just fine. The chief operations officer at Terremark Worldwide Inc. manages a 600,000-square-foot collocation facility designed to support 100 watts per square foot.

"There are two issues. One is power consumption, and the other is the ability to get all of that heat out. The cooling issues are the ones that generally become the limiting factor," he says.

With 24-inch floors and 20-foot-high ceilings, Wheeler has plenty of space to manage airflows. Terremark breaks floor space into zones, and airflows are increased or decreased as needed. The company's service-level agreements cover both power and environmental conditions such as temperature and humidity, and it is working to offer customers Web-based access to that information in real time.

Terremark's data center consumes about 6 megawatts of power, but a good portion of that goes to support dual-corded servers. Thanks to redundant power designs, "we have tied up twice as much power capacity for every server," Wheeler says.

Terremark hosts some 200 customers, and the equipment is distributed based on load. "We spread out everything. We use power and load as the determining factors," he says.

But Wheeler is also feeling the heat. Customers are moving to 10- and 12-foot-high racks, in some cases increasing the power density by a factor of three. Right now, Terremark bills based on square footage, but he says collocation companies need a new model to keep up. "Pricing is going to be based more on power consumption than square footage," Wheeler says.

According to EYP's Gross, the average power consumption per server rack has doubled in the past three years. But there's no need to panic—yet, says Donabedian.

"Everyone gets hung up on the dramatic increases in the power requirements for a particular server," he says. But they forget that the overall impact on the data center is much more gradual, because most data centers only replace one-third of their equipment over a two- or three-year period.

Nonetheless, the long-term trend is toward even higher power densities, says Gross. He points out that 10 years ago, mainframes ran so hot that the systems moved to water cooling before a change from bipolar to more efficient CMOS technology bailed them out.

"Now we're going through another ascending growth curve in terms of power," he says. But this time, Gross adds, "there is nothing on the horizon that will drop that power."

[Chart: Where Data Center Power Goes. Source: EYP Mission Critical Facilities Inc., New York]

Big Problem in the Biggest Corporations

[Chart: Do you have a problem with power and cooling in your IT data center? Source: Robert Frances Group Inc., Westport, Conn. Base: 50 Fortune 500 IT executives, January 2006]

How to Spend 450 Watts

Based on a typical dual-processor 450W 2U server, approximately 160W of the 450W (35%) is lost in the power-conversion process.

AC/DC losses: 131W
DC/DC losses: 32W
Fans: 32W
Drives: 72W
PCI cards: 41W
Processors: 86W
Memory: 27W
Chip set: 32W

Source: EYP Mission Critical Facilities Inc.; Intel Corp.
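The sidebar's figures can be checked mechanically. The components below, taken from the sidebar, sum to just over 450 W, and the two conversion-loss entries come to roughly the 35% it cites:

```python
# Sanity check of the "How to Spend 450 Watts" breakdown (values from
# the sidebar). Conversion losses = AC/DC plus DC/DC stages.
budget_w = {
    "AC/DC losses": 131, "DC/DC losses": 32, "Fans": 32,
    "Drives": 72, "PCI cards": 41, "Processors": 86,
    "Memory": 27, "Chip set": 32,
}
total = sum(budget_w.values())
losses = budget_w["AC/DC losses"] + budget_w["DC/DC losses"]
print(f"{total} W total, {losses} W lost ({100 * losses / total:.0f}%)")
```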

Copyright © 2006 IDG Communications, Inc.

  