Reducing data center energy consumption

While the acquisition cost for servers is declining, the total cost of ownership for housing, powering and cooling them has increased by 500% since 2000. Yet research from the Uptime Institute reveals that 60% of the available cooling in a typical computer room is wasted due to airflow losses, also called "bypass airflow."

The upshot: You are likely spending more on energy than necessary because of the inefficiencies created by over-capacity and poorly managed conditioned airflow.

But the good news is that optimizing your airflow represents the greatest opportunity for reducing operating costs and deferring capital costs. In addition, when you manage airflow more effectively, you can increase server density without adding new cooling infrastructure.

To optimize your existing computer room infrastructure, consider the following steps to resolve airflow inefficiencies:

1. Get a computer room cooling efficiency health check

If you have not embraced comprehensive data center monitoring -- documenting conditions such as utility load, equipment intake temperatures, UPS load, redundant cooling capacity and power usage effectiveness calculations -- a computer room airflow efficiency health check may be an appropriate place to start.
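
Of the metrics above, power usage effectiveness (PUE) is the simplest to track: total facility power divided by IT load. A minimal sketch in Python -- the function name and sample figures are illustrative, not from the article:

```python
def power_usage_effectiveness(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power (1.0 is the theoretical ideal)."""
    return total_facility_kw / it_load_kw

# Illustrative figures only: 480 kW measured at the utility feed, 300 kW of UPS (IT) load.
print(round(power_usage_effectiveness(480.0, 300.0), 2))  # -> 1.6
```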

A range of diagnostic assessments is available, and most will identify energy inefficiencies and offer a targeted remediation strategy. If followed, the plan could cut your operating costs immediately, deliver simple payback within a few months, and give you the option of increasing server density sustainably.

At the very least, an expert computer room cooling health check should involve an examination of the following three aspects of data center health: IT equipment air-intake hotspots; percentage of bypass airflow; and cooling capacity factor (CCF), or the margin of installed cooling capacity vs. load.

To do this, the engineer will perform the following:

* Count and measure raised floor openings.

* Measure cabinet air-intake temperatures.

* Measure relative humidity of any identified hotspots.

* Sum cooling unit rated cooling capacity.

* Sum cooling unit rated airflow.

* Sum computing equipment power load in kW.

* Determine the presence of latent cooling and its associated latent cooling penalty.

* Check all return air temperature and relative humidity sensors for calibration.
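
For a rough sense of how these raw measurements roll up into the headline metrics, here is a minimal sketch in Python. It assumes the simplified definitions used above -- CCF as rated cooling capacity divided by heat load, and bypass airflow as the share of cooling-unit airflow the IT equipment never ingests. The 1.08 sensible-heat constant is a standard rule of thumb, and all sample figures are illustrative rather than taken from any assessment:

```python
def cooling_capacity_factor(rated_cooling_kw: float, it_load_kw: float) -> float:
    """CCF: installed (rated) cooling capacity relative to the heat load."""
    return rated_cooling_kw / it_load_kw

def required_it_airflow_cfm(it_load_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow the IT equipment must ingest to reject its heat.
    Sensible heat: BTU/hr = 1.08 * CFM * deltaT(F); 1 kW = 3,412 BTU/hr."""
    return it_load_kw * 3412.0 / (1.08 * delta_t_f)

def bypass_airflow_pct(total_cooling_cfm: float, it_airflow_cfm: float) -> float:
    """Share of conditioned air that never passes through IT equipment intakes."""
    return max(0.0, 1.0 - it_airflow_cfm / total_cooling_cfm) * 100.0

# Illustrative figures only:
load_kw, rated_kw, unit_cfm = 300.0, 600.0, 96_000.0
it_cfm = required_it_airflow_cfm(load_kw)
print(f"CCF: {cooling_capacity_factor(rated_kw, load_kw):.1f}")        # 2.0
print(f"Bypass airflow: {bypass_airflow_pct(unit_cfm, it_cfm):.0f}%")  # ~51%
```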

These assessments will result in a remediation plan that will, among other things, likely advise sealing all cable and IT equipment cabinet openings to properly channel airflow.

Case in point: A company with a 6,996-square-foot data center performed a cooling efficiency health check, measuring bypass airflow and hotspots (cabinet intake-air temperatures that exceed maximums) and collecting the data needed to calculate the CCF and compare it to the critical load.

Implementing the remediation strategy eliminated the hotspots and produced a 60% improvement in bypass airflow, which in turn improved equipment reliability. In addition, because of the improvements in airflow management, the company was able to put two cooling units into inactive standby mode, cutting electricity costs by $27,024 per year ($2,252 per month at $0.08/kWh). Simple payback occurred between the second and third months.
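
As a back-of-envelope check on those figures: the avoided draw works out to roughly 38.6 kW for the two standby units. That wattage is inferred from the published savings, not a number given in the article:

```python
# Back-of-envelope check: the ~38.6 kW of avoided cooling-unit draw is inferred
# from the published savings, not stated in the case study.
avoided_kw = 38.6          # two cooling units placed on inactive standby
rate_per_kwh = 0.08        # $/kWh, as quoted in the case
hours_per_year = 8760

annual_savings = avoided_kw * hours_per_year * rate_per_kwh
print(f"${annual_savings:,.0f}/yr, ${annual_savings / 12:,.0f}/mo")  # ≈ $27,051/yr, $2,254/mo
```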

2. Seal the computer room envelope and the raised floor

Depending on what a health check of your data center reveals, remediation will probably involve sealing up the following areas:

* Openings in the perimeter walls, in particular, cable trays and conduits passing through the perimeter walls. Also, inspect the area around columns to make sure conditioned air is not escaping through column facades to adjacent floors. Look for other openings, including air leaks through entrance doors and elevators; loading dock doors; windows; overhead wall openings where cables pass through; and holes in the perimeter walls above the dropped ceiling.

* Openings in the raised floor that do not deliver conditioned airflow directly to the face, or intakes, of IT equipment. The most common openings that require sealing are cable openings under or behind cabinets. Other openings that should be sealed are holes under power distribution units (PDUs) or for conduit penetrations.

Case in point: In a financial impact study of a 10,000-square-foot data center where 400 special grommets were installed to keep cold air in, simple OpEx payback occurred within the first two months, and annual OpEx energy savings totaled $50,896. The capacity improvements made it possible to turn off 18% of the computer room air conditioning (CRAC) units, at an annual operating cost savings of approximately $5,000 per unit. With the recaptured cooling capacity, the data center managers can increase server density without incurring the capital costs of additional cooling units.
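
The simple-payback arithmetic behind claims like this is straightforward. In the sketch below, the $8,000 installed cost is an assumed figure chosen only to illustrate the math (the study does not publish the grommet cost), while the $50,896 annual savings comes from the case above:

```python
def simple_payback_months(capital_cost: float, annual_savings: float) -> float:
    """Months until cumulative savings cover the upfront spend."""
    return capital_cost / (annual_savings / 12.0)

# $8,000 is an assumed installed cost for the grommets, used only to show the math;
# $50,896/yr is the published annual energy savings.
print(f"{simple_payback_months(8_000, 50_896):.1f} months")  # ~1.9 months
```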

3. Improve the above-the-floor airflow management

Depending on the unique conditions of your data center, remediation measures may include installing internal blanking panels, vertical end-of-row panels, horizontal partitions over rows, and cold aisle or hot aisle containment.

Installing blanking panels in unused rack unit openings prevents rear-to-front recirculation of hot exhaust air from the servers. As equipment load densities continue to increase, hot air circulation into the cold aisle through open spaces in cabinets, as well as around the ends of rack rows and across the top of racks, becomes more significant. Installing blanking panels helps ensure the computer equipment air-intake temperature, especially at the top of racks, stays below the American Society of Heating, Refrigerating and Air-Conditioning Engineers' (ASHRAE) recommended maximum of 80.6°F.
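
A hotspot survey of this kind reduces to comparing each cabinet's intake reading against that 80.6°F ceiling. A minimal sketch, with made-up cabinet names and temperatures:

```python
ASHRAE_RECOMMENDED_MAX_F = 80.6  # top of ASHRAE's recommended inlet-temperature range

# Hypothetical cabinet intake readings (°F), e.g. taken at the top of each rack.
intake_temps_f = {"A01": 72.4, "A02": 79.8, "B07": 83.1, "C03": 85.5}

hotspots = {cab: t for cab, t in intake_temps_f.items() if t > ASHRAE_RECOMMENDED_MAX_F}
for cabinet, temp in sorted(hotspots.items(), key=lambda kv: -kv[1]):
    print(f"{cabinet}: {temp:.1f}°F is {temp - ASHRAE_RECOMMENDED_MAX_F:.1f}°F over the recommended max")
```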

Case in point: Two financial impact case studies, one for a high-density facility and one for a lower-density facility, were performed to demonstrate how installing blanking panels yields cost savings: it allows data center managers to raise computer room temperatures and take advantage of the increased cooling unit capacity that results from higher return air temperatures. This lowers operating costs and defers capital expenditures on cooling. Further, calculations determined that simple payback can be expected within a few months.

For the high-density facility (400 cabinets in a room with 10,000 square feet of raised floor), the total annual cost savings was $137,395 and payback occurred in the second month. Because 29.5% of the CRAC units were placed on inactive standby, there was a 29% reduction in annual operating and maintenance costs. With the recaptured cooling capacity, data center managers can increase server density and defer capital costs, all while reducing operating expenses.

In the lower-density facility, with 12 water-cooled CRAC units and 7.5 hp fan motors, the total annual cost savings was $30,594, and payback occurred in the fourth month. This represents a 15% reduction in the annual operating and maintenance costs of the cooling units.

4. Tune the computer room

After installing the recommended sealing technology, it's critical to re-examine the heat load and all cooling unit settings to ensure you have taken advantage of all the efficiencies afforded by sealing openings that once permitted conditioned air to escape or hot exhaust air to circulate.

The computer room should also be evaluated for other opportunities to increase equipment reliability and further reduce operating cost. This requires an on-site investigation by an engineer who will physically open up the equipment and make detailed performance measurements. The engineer should:

* Determine the heat load by adding together all of the PDU or Remote Power Panel (RPP) outputs or by summing the UPS system(s) outputs.

* Evaluate the configuration of the cooling units on the raised floor by checking temperature and relative humidity set points and sensitivities. Are they at the correct setting and are they consistent throughout the room?

* Check the calibration of the return-air sensors. A key factor is to ensure that the instrument being used to monitor the calibration is properly calibrated.

* Check each cooling unit to verify that it is delivering its rated cooling capacity. Both airflow volume and temperature drop need to be measured to determine the delivered cooling capacity (see the sketch after this checklist).

* Determine the required number of operational cooling units from the heat load data and the cooling capacity information. There should be redundant cooling capacity in every area of the room.

* Determine the proper number and placement of perforated tiles. Their arrangement must be adjusted within the cold aisle based on careful monitoring of IT equipment air-intake temperatures.

* Use an infrared camera to identify airflow circulation patterns and equipment performance issues and the options for improvement.
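
The capacity math behind the last few checklist items can be sketched as follows, using the standard sensible-heat relationship (BTU/hr = 1.08 × CFM × ΔT°F). The airflow, temperature drop and heat load figures are illustrative, and the N+1 redundancy level is an assumption for the example rather than a requirement stated above:

```python
import math

def delivered_capacity_kw(airflow_cfm: float, delta_t_f: float) -> float:
    """Sensible cooling actually delivered: BTU/hr = 1.08 * CFM * deltaT(°F); 3,412 BTU/hr per kW."""
    return 1.08 * airflow_cfm * delta_t_f / 3412.0

def units_required(heat_load_kw: float, per_unit_kw: float, redundant_units: int = 1) -> int:
    """Operational cooling units needed to carry the load, plus N redundant units."""
    return math.ceil(heat_load_kw / per_unit_kw) + redundant_units

# Illustrative figures only: heat load summed from PDU/RPP outputs, one unit measured in the field.
heat_load_kw = 310.0
per_unit = delivered_capacity_kw(airflow_cfm=12_000, delta_t_f=18.0)  # ≈ 68 kW per unit
print(f"{per_unit:.0f} kW per unit; run {units_required(heat_load_kw, per_unit)} units (N+1)")
```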

Driving energy consumption down means costs go down. Expert computer room remediation strategies provide near-instant energy savings, which in turn make it possible to increase server density without adding cooling infrastructure. Recommendations will likely include an investment in sealing technology and ongoing temperature and environmental monitoring, but the money and time spent will pay you back in significant energy savings.

Strong is a professional engineer and senior consultant with Upsite Technologies and has been with the company since its inception in 2001.

This story, "Reducing data center energy consumption," was originally published by Network World.

Copyright © 2009 IDG Communications, Inc.
