Seven steps to a green data center


How green is your data center? If you don't care now, you will soon. Most data center managers haven't noticed the steady rise in electricity costs, since they don't usually see those bills. But they do see the symptoms of surging power demands.

High-density servers are creating hot spots in data centers, with power densities surpassing 30 kilowatts per rack for some high-end systems. As a result, some data center managers are finding that they can't distribute enough power to those racks on the floor. Others are finding that they can't get more power to the building at all: they've maxed out the utility's ability to deliver additional capacity to that location.

The problem already has Mallory Forbes' attention. "Every year, as we revise our standards, the power requirements seem to go up," says Forbes, senior vice president and manager of mainframe technology at Regions Financial Corp. in Birmingham, Ala. "It creates a big challenge in managing the data center because you continually have to add power."

Energy efficiency savings can add up. A watt saved in data center power consumption saves at least a watt in cooling. IT managers who take the long view are already paying attention to the return on investment associated with acquiring more energy-efficient equipment. "Energy becomes important in making a business case that goes out five years," says Robert Yale, principal of technical operations at The Vanguard Group Inc. in Valley Forge, Pa. His 60,000-square-foot data center caters mostly to Web-based transactions. While security and availability come first, he says Vanguard is "focusing more on the energy issue than we have in the past."

Green data centers don't just save energy; they also reduce the need for expensive infrastructure upgrades to handle growing power and cooling demands. Some organizations are taking the next step and looking at the entire data center from an environmental perspective. (See "Greening up is about more than just energy.")

Following these steps will keep astute data center managers ahead of the game.

Consolidate your servers, and consolidate some more

Existing data centers can achieve substantial savings by making just a few basic changes, and consolidating servers is a good place to start, says Ken Brill, founder and executive director of The Uptime Institute, a consultancy in Santa Fe, N.M., that has studied this issue for several years. In many data centers, he says, "between 10% and 30% of servers are dead and could be turned off."

Cost savings from removing physical servers can add up quickly -- up to $1,200 in energy costs per server per year, according to one estimate. "For a server, you'll save $300 to $600 each year in direct energy costs. You'll save another $300 to $600 a year in cooling costs," says Mark Bramfitt, senior program manager in customer energy management at PG&E Corp. The San Francisco-based utility offers a "virtualization incentive" program that pays $150 to $300 per server removed from service as a result of a server consolidation project.
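
Using Bramfitt's figures, the arithmetic is easy to check. The short Python calculation below simply combines the numbers quoted above; the 100-server project size is an illustrative assumption, not part of PG&E's program.

# Back-of-the-envelope savings from decommissioning idle servers,
# using the per-server figures quoted by PG&E's Mark Bramfitt.
# The 100-server project size is an illustrative assumption.

SERVERS_REMOVED = 100          # hypothetical consolidation project
DIRECT_ENERGY = (300, 600)     # $/server/year, direct energy
COOLING = (300, 600)           # $/server/year, cooling
INCENTIVE = (150, 300)         # $/server, one-time PG&E rebate

low = SERVERS_REMOVED * (DIRECT_ENERGY[0] + COOLING[0])
high = SERVERS_REMOVED * (DIRECT_ENERGY[1] + COOLING[1])
rebate_low = SERVERS_REMOVED * INCENTIVE[0]
rebate_high = SERVERS_REMOVED * INCENTIVE[1]

print(f"Annual energy + cooling savings: ${low:,} to ${high:,}")
print(f"One-time utility incentive:      ${rebate_low:,} to ${rebate_high:,}")
# -> $60,000 to $120,000 in savings every year, plus a $15,000 to
#    $30,000 rebate, for 100 servers taken out of service.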

Once idle servers have been removed, data center managers should consider moving as many server-based applications as feasible into virtual machines. That allows IT to substantially reduce the number of physical servers required while increasing the utilization levels of remaining servers.

Most physical servers today run at about 10% to 15% utilization. Since an idle server can draw as much as 30% of the power it consumes at peak utilization, you get more bang for your energy dollar by increasing utilization levels, says Bogomil Balkansky, senior director of product marketing at VMware Inc.

To that end, VMware is working on a new feature associated with its Distributed Resource Scheduler that will dynamically allocate workloads across physical servers treated as a single resource pool. Distributed Power Management will "squeeze virtual machines on as few physical machines as possible," Balkansky says, and then automatically power down servers that are not being used. The system makes adjustments dynamically as workloads change: workloads might be consolidated in the evening during off-hours, then reallocated across more physical machines in the morning as activity increases.
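
VMware hasn't detailed the logic behind Distributed Power Management, but the underlying idea is straightforward to sketch. The Python below is a minimal, hypothetical illustration, not VMware's implementation: a greedy first-fit pass packs virtual machine loads onto as few hosts as possible, and the power estimate reuses the figure cited above that an idle server draws roughly 30% of its peak power. The 400 W peak draw and 80% packing ceiling are assumptions.

# Minimal sketch of DPM-style consolidation: greedy first-fit packing
# of VM CPU demands onto as few hosts as possible. Hypothetical model,
# not VMware's actual algorithm.

PEAK_WATTS = 400        # assumed per-server draw at full load
IDLE_FRACTION = 0.30    # idle draw ~30% of peak, per the article
HOST_CAPACITY = 0.80    # assumed ceiling: don't pack hosts past 80%

def power(util):
    """Assumed linear power model between idle and peak draw."""
    return PEAK_WATTS * (IDLE_FRACTION + (1 - IDLE_FRACTION) * util)

def consolidate(vm_loads, capacity=HOST_CAPACITY):
    """First-fit: place each VM on the first host with room left."""
    hosts = []  # each entry is one powered-on host's total utilization
    for load in sorted(vm_loads, reverse=True):
        for i, used in enumerate(hosts):
            if used + load <= capacity:
                hosts[i] += load
                break
        else:
            hosts.append(load)  # no room anywhere: power on a new host
    return hosts

# Ten hosts each running one VM at ~12% utilization ...
before = [0.12] * 10
# ... versus the same work packed onto as few hosts as possible.
after = consolidate(before)

print(f"Hosts powered on: {len(before)} -> {len(after)}")
print(f"Power: {sum(power(u) for u in before):.0f} W -> "
      f"{sum(power(u) for u in after):.0f} W")
# -> 10 hosts drawing ~1536 W become 2 hosts drawing ~576 W.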

Turn on power management

Although power management tools are available, administrators today don't always make use of them. "In a typical data center, the electricity usage hardly varies at all, but the IT load varies by a factor of three or more. That tells you that we're not properly implementing power management," says Amory Lovins, chairman and chief scientist at the Rocky Mountain Institute, an energy and sustainability research firm in Snowmass, Colo.

Just taking full advantage of power management features and turning off unused servers can cut data center energy requirements by about 20%, he adds.
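
Lovins' estimate is easy to reproduce from the article's own numbers. The sketch below assumes an IT load that swings by a factor of three over the day and servers that idle at roughly 30% of peak draw; the hourly load profile itself is an illustrative assumption.

# Rough model of what power management buys, using figures from this
# story: IT load varies 3x over the day; an idle server draws ~30%
# of peak. The hourly load profile is an illustrative assumption.

IDLE_FRACTION = 0.30

# Hypothetical 24-hour utilization profile, swinging from 1/3 of peak
# overnight to full load during business hours.
profile = [1/3] * 8 + [2/3] * 4 + [1.0] * 8 + [2/3] * 4

# Without power management: servers draw full power around the clock.
flat = 1.0 * len(profile)

# With power management: draw scales linearly between idle and peak.
managed = sum(IDLE_FRACTION + (1 - IDLE_FRACTION) * u for u in profile)

print(f"Savings: {100 * (1 - managed / flat):.0f}%")
# -> 23%, a bit over Lovins' roughly-20% estimate.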

That's not happening in many data centers today because administrators focus almost exclusively on uptime and performance, and IT staffers aren't comfortable yet with available power management tools, says Christian Belady, distinguished technologist at Hewlett-Packard Co. He argues that turning on power management can actually increase reliability and uptime by reducing stresses on data center power and cooling systems.

Vendors could also do more to facilitate the use of power management capabilities, says Brent Kerby, Opteron product manager on Advanced Micro Devices Inc.'s server team. While AMD and other chip makers are implementing new power management features, "in Microsoft Windows, support is inherent, but you have to adjust the power scheme to take advantage of it," he says. Kerby says that should be turned on by default. "Power management technology is not leveraged as much as it should be," he adds.

The potential savings of leveraging power management with the latest processors are significant. AMD's newest designs will scale back voltage and clock frequency on a per-core basis and will reduce the power to memory, another rapidly rising power hog. "At 50% CPU utilization, you'll see a 65% savings in power. Even at 80% utilization, you'll see a 25% savings in power," just by turning on power management, says Kerby. Other chip makers are working on similar technologies.

In some cases, power management may cause more problems than it cures, says Jason Williams, chief technology officer at DigiTar, a messaging logistics service provider in Boise, Idaho. He runs Linux on AMD64 servers. "We use a lot of Linux, and [power management] can cause some very screwy behaviors in the operating system," he says. "We've seen random kernel crashes primarily. Some systems seem to run Linux fine with ACPI turned on, and others don't. It's really hard to predict, so we generally turn it and any other power management off."

ACPI, the Advanced Configuration and Power Interface, is a specification co-developed by HP, Intel Corp., Microsoft Corp. and other industry players. On Linux systems, administrators who take Williams' approach typically disable it by passing the acpi=off parameter to the kernel at boot time.

Upgrade to energy-efficient servers

The first generation of multicore chip designs showed a marked decrease in overall power consumption. "Intel's Xeon 5100 delivered twice the performance with 40% less power," says Lori Wigle, director of server technology and initiatives marketing at Intel. Moving to servers based on these designs should increase energy efficiency.

Future gains, however, are likely to be more limited. Sun Microsystems Inc., Intel and AMD all say they expect their servers' power consumption to remain flat in the near term. AMD's current processor offerings range from 89 W to 120 W. "That's where we're holding," says AMD's Kerby. For her part, Wigle also doesn't expect Intel's next-generation products to repeat the efficiency gains of the 5100. "We'll be seeing something slightly more modest in the transition to 45-nanometer products," she says.

Chip makers are also consolidating functions such as I/O and memory controllers onto the processor platform. Sun's Niagara II includes a Peripheral Component Interconnect Express bridge, 10 Gigabit Ethernet and floating-point functions on a single chip. "We've created a true server on a chip," says Rick Hetherington, chief architect and distinguished engineer at Sun.

But that consolidation doesn't necessarily mean lower overall server power consumption at the chip level, says an engineer at IBM's System x platform group who asked not to be identified. Overall, he says, net power consumption will not change. "The gains from integration ... are offset by the newer, faster interconnects, such as PCIe Gen2, CSI or HT3, FBDIMM or DDR3," he says.

Go with high-efficiency power supplies

Power supplies are a prime example of the server market's lack of focus on total cost of ownership: the inefficient units that ship with many servers today waste more energy than any other component in the data center, says Jonathan Koomey, a consulting professor at Stanford University and staff scientist at Lawrence Berkeley National Laboratory, who led an industry effort to develop a server energy management protocol.

Progress in improving designs has been slow. "Power-supply efficiencies have increased at about one half percent a year," says Intel's Wigle. Newer designs are much more efficient, but in the volume server market, they're not universally implemented because they're more expensive.

With the less-efficient power supplies found in many commodity servers, efficiency peaks at 70% to 75% at full load but drops into the 65% range at 20% load -- and the average server runs in the 10% to 15% range, where efficiency falls further still. That means an inefficient power supply can waste nearly half of the power before it ever reaches the IT equipment. The problem is compounded by the fact that every watt the power supply wastes requires roughly another watt of cooling power just to remove the resulting heat from the data center.
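
To put numbers on that, the sketch below tracks where the power goes for a server whose internal components draw 200 W. The efficiencies at full and 20% load come from the figures above; the 200 W draw and the sub-60% efficiency assumed for a 10% to 15% load are illustrative.

# Where the power goes with an inefficient supply. Efficiencies at
# full and 20% load come from the article; the 55% figure for a
# 10%-15% load and the 200 W IT draw are illustrative assumptions.

IT_DRAW = 200.0  # watts actually delivered to the server's components

for load_point, efficiency in [("100% load", 0.72),
                               ("20% load", 0.65),
                               ("10-15% load (assumed)", 0.55)]:
    wall = IT_DRAW / efficiency          # watts pulled from the wall
    psu_waste = wall - IT_DRAW           # lost in conversion as heat
    cooling = psu_waste                  # ~1 W of cooling per W wasted
    overhead = psu_waste + cooling
    print(f"{load_point:>22}: {wall:5.0f} W in, "
          f"{psu_waste:4.0f} W wasted, {overhead:4.0f} W total overhead")
# At low loads, conversion loss plus the cooling needed to remove it
# can rival or exceed the power actually delivered to the equipment.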

Power supplies are available today that attain 80% or higher efficiency -- even at 20% load -- but they cost significantly more. High-efficiency power supplies carry a 15% to 20% premium, says Lakshmi Mandyam, director of marketing at power supply vendor ColdWatt Inc. in Austin.

Still, moving to these more energy-efficient power supplies reduces both operating costs and capital costs. "If they spent $20 on [an energy-efficient] power supply, you would save $100 on the capital cost of cooling and infrastructure equipment," says RMI's Lovins. Any power supply that doesn't deliver 80% efficiency across a range of low load levels should be considered unacceptable, he says.

To make matters worse, Sun's Hetherington says, server manufacturers have traditionally overspecified power needs, opting for a 600 W power supply for a server that really needs only 300 W. "If you're designing a server, you don't want to be close to threatening peak [power] levels. So you find your comfort level above that to specify the supply," he says. "At that level, it may only be consuming 300 W, but you have a 650-W power supply taxed at half output, and it's at its most inefficient operating point. The loss of conversion is huge. That's one of the biggest sinners in terms of energy waste."

All of the major server vendors say they already offer or are phasing in more efficient power supplies in their server offerings.

HP is in the process of standardizing on a single power supply design for its servers. "Power supplies will ship this summer with much higher efficiency," Paul Perez, vice president of storage, network and infrastructure, said at a recent Uptime Institute conference, adding that HP is trying to push efficiency into the "mid-90s." HP's Belady says all of the company's servers use power supplies that are at least 85% efficient.

Smart power management can also increase power supply utilization levels. For example, HP's PowerSaver technology turns off some of the six power supplies in a C-class blade server enclosure when total load drops; this saves energy and increases efficiency.

One resource IT can use when evaluating power-supply efficiency is 80Plus.org. This certification program, initiated by electric utilities, lists power supplies that attain at least 80% efficiency at 20%, 50% and 100% of rated load.

Stanford University's Koomey says that Google Inc. took an innovative approach to improving power-supply efficiency in its server farms. Part of the expense of power-supply designs lies in the fact that you need multiple outputs at different DC voltages. "In doing their custom motherboards ... they went to the power supply people and said, 'We don't need all of those DC outputs. We just need 12 volts.'" By specifying a single 12-volt output, Google saved money on the design, savings that then went toward delivering a higher-efficiency power supply. "That is the kind of thinking that's needed," he says.

Break down internal business barriers
