Seven Steps to a Green Data Center

These tips will help you minimize power consumption, heat, waste and chaos.

How green is your data center? If you don't care now, you will soon. Most data center managers haven't noticed the steady increase in electricity costs, since in most cases they don't see those bills. But they do see the symptoms of surging power demands.

High-density servers are creating hot spots in data centers, with power draw exceeding 30 kilowatts per rack for some high-end systems. As a result, some data center managers are finding that they can't distribute enough power to those racks on the floor. Others have maxed out the utility's ability to deliver additional capacity to their location.

Ken Brill, founder and executive director of The Uptime Institute Inc., sees the beginnings of a potential crisis. "The benefits of [Moore's Law] are eroding as the costs of data centers rise dramatically," he says. Increasing demand for power is the culprit, driven by both higher power densities and strong growth in the number of servers in use. Server electricity consumption in data centers has quietly doubled in the past five years, according to a study sponsored by Advanced Micro Devices Inc. and conducted by Jonathan Koomey, a consulting professor at Stanford University and a staff scientist at Lawrence Berkeley National Laboratory.

Server performance is improving faster than energy efficiency is advancing. "If we're going to get energy efficiency rising faster than the rate of performance increase, we're going to have to do something radically different than what we're doing today," Brill says.

Fortunately, there are many steps that data center managers can take to start reducing power consumption in existing data centers without making a huge investment -- or sacrificing performance or availability.

1. Consolidate, consolidate, consolidate.

Consolidating servers is a good place to start. In many data centers, "between 10% and 30% of servers are dead and could be turned off," Brill says.

Removing one physical server from service saves about $560 annually in electricity costs, assuming a rate of 8 cents per kilowatt-hour, says Bogomil Balkansky, director of product marketing for Virtual Infrastructure 3 at VMware Inc. in Palo Alto, Calif.
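Balkansky's figure is easy to sanity-check. A minimal sketch, assuming a server that draws about 800 W on average including its share of cooling overhead (an assumed number; the article doesn't state the draw behind the $560 estimate):

```python
# Back-of-envelope check on the ~$560/year savings quoted above.
avg_draw_kw = 0.8            # assumed average draw: 800 W (hypothetical)
hours_per_year = 24 * 365    # 8,760 hours
rate_per_kwh = 0.08          # 8 cents per kilowatt-hour, as quoted

annual_cost = avg_draw_kw * hours_per_year * rate_per_kwh
print(f"${annual_cost:.2f}")  # prints $560.64 -- roughly the $560 quoted
```

At higher utility rates, or once cooling load is counted separately, the per-server savings climb accordingly.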

Once idle servers have been removed, data center managers should consider moving as many server-based applications as feasible into virtual machines. That allows IT to substantially reduce the number of physical servers required while increasing the utilization levels of remaining servers.

Most physical servers today run at about 10% to 15% utilization. Since an idle server can consume as much as 30% of the energy it uses at peak utilization, you get more bang for your energy buck by increasing utilization levels, says Balkansky.
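The arithmetic behind that claim can be sketched with a simple linear power model (an assumption for illustration; real servers aren't perfectly linear):

```python
# Assumed model: a server draws 30% of peak power when idle, as cited above,
# and scales linearly to 100% of peak at full load. Work done per unit of
# power improves sharply as utilization rises.
def power_fraction(util, idle_fraction=0.30):
    """Fraction of peak power drawn at a given utilization (0.0 to 1.0)."""
    return idle_fraction + (1 - idle_fraction) * util

for u in (0.10, 0.50, 0.90):
    print(f"{u:.0%} utilization -> {power_fraction(u):.0%} of peak power, "
          f"{u / power_fraction(u):.2f} units of work per unit of power")
```

Under this model, a server at 10% utilization still draws 37% of its peak power, so consolidating that work onto fewer, busier machines wastes far less energy on idle overhead.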

To that end, VMware is working on a new feature associated with its Distributed Resource Scheduler that will dynamically allocate workloads among physical servers in a resource pool to maximize energy efficiency. Distributed Power Management will "squeeze virtual machines on as few physical machines as possible," Balkansky says, and then power down servers that aren't in use. It will make adjustments dynamically as workloads change. Workloads might be consolidated in the evening during off hours, for example, then reallocated across more physical machines in the morning, as activity increases.
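The placement problem Balkansky describes is essentially bin packing. A minimal first-fit-decreasing sketch, with made-up utilization numbers; a real scheduler like DRS also weighs memory, affinity rules, and migration cost:

```python
def consolidate(vm_loads, host_capacity):
    """Pack VM loads onto as few hosts as possible (first-fit-decreasing).

    vm_loads: CPU demand of each VM as a fraction of one host's capacity.
    host_capacity: usable fraction per host (headroom kept below 1.0).
    Returns the number of hosts that must stay powered on.
    """
    hosts = []  # remaining capacity of each powered-on host
    for load in sorted(vm_loads, reverse=True):
        for i, free in enumerate(hosts):
            if load <= free:
                hosts[i] -= load  # place VM on an existing host
                break
        else:
            hosts.append(host_capacity - load)  # power on another host
    return len(hosts)

# Ten VMs at 10%-15% utilization fit on two hosts capped at 70% load:
loads = [0.12, 0.10, 0.15, 0.11, 0.13, 0.10, 0.14, 0.12, 0.10, 0.11]
print(consolidate(loads, 0.70))  # prints 2
```

The remaining eight hosts in this hypothetical pool could be powered down until the morning workload spike, which is the behavior Distributed Power Management automates.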

2. Turn on power management.

Although power management tools are available, administrators don't always use them. "In a typical data center, the electricity usage hardly varies at all, but the IT load varies by a factor of three or more. That tells you that we're not properly implementing power management," says Amory Lovins, chairman and chief scientist at Rocky Mountain Institute in Snowmass, Colo. Just taking full advantage of power management features and turning off unused servers can cut data center energy requirements by about 20%, he adds.

That's not happening in many data centers today because administrators focus almost exclusively on uptime and performance and aren't comfortable with available power management tools, says Christian Belady, distinguished technologist at Hewlett-Packard Co. But turning on power management can actually increase reliability and uptime by reducing stresses on data center power and cooling systems, he says.

Vendors could also do more to facilitate the use of power management capabilities, says Brent Kerby, Opteron product manager on AMD's server team. "Power management technology is not leveraged as much as it should be," Kerby says. "In Microsoft Windows, support is inherent, but you have to adjust the power scheme to take advantage of it." Instead, he says, that should be turned on by default.

You can realize significant savings by leveraging power management in the latest processors. With AMD's newest designs, "at 50% CPU utilization, you'll see a 65% savings in power. Even at 80% utilization, you'll see a 25% savings in power," just by turning on power management, says Kerby. Other chip makers are working on similar technologies.
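Kerby's percentages translate into real money at fleet scale. A back-of-envelope sketch, assuming a hypothetical fleet of 100 servers each drawing 400 W at the stated utilization without power management (both assumed figures) and the article's 8-cents-per-kilowatt-hour rate:

```python
# Fleet-level savings implied by the AMD figures quoted above.
fleet_size = 100        # assumed number of servers (hypothetical)
draw_per_server_kw = 0.4  # assumed 400 W draw without power management
rate_per_kwh = 0.08     # 8 cents/kWh, as quoted earlier in the article

# Quoted power savings from enabling power management, by CPU utilization:
savings_at_util = {0.50: 0.65, 0.80: 0.25}

for util, pct in savings_at_util.items():
    saved_kw = fleet_size * draw_per_server_kw * pct
    annual_dollars = saved_kw * 8760 * rate_per_kwh
    print(f"at {util:.0%} utilization: {saved_kw:.0f} kW saved, "
          f"~${annual_dollars:,.0f}/yr")
```

Even the 25% figure at 80% utilization works out to thousands of dollars a year for a modest fleet, before counting the matching reduction in cooling load.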

But power management can cause more problems than it cures, says Jason Williams, chief technology officer at DigiTar, a messaging logistics service provider in Boise. He runs Linux on Sun T2000 servers with multicore UltraSPARC processors. "We use a lot of Linux, and [power management] can cause some very screwy behaviors in the operating system," he says.

Growing Awareness

[Chart: A Forrester Research Inc. survey of 91 North American and 33 European IT procurement professionals, conducted February to April 2007, asked how important environmental concerns are in planning IT operations and whether green factors are written into evaluation and selection criteria for IT systems and devices. Respondents answered "very important," "somewhat important" or "not important."]

3. Upgrade to energy-efficient servers.
