Today's data center managers are struggling to juggle the business demands of a more competitive marketplace with the budget limitations imposed by a soft economy. They seek ways to reduce opex (operating expenses), and one of the fastest-growing -- and often biggest -- data center operating expenses is power, consumed largely by servers and cooling systems.
Alas, some of the most effective energy-saving techniques require considerable upfront investment, with paybacks measured in years. But some oft-overlooked techniques cost next to nothing -- they're bypassed because they seem impractical or too radical. The eight power savings approaches here have all been tried and tested in actual data center environments, with demonstrated effectiveness. Some you can put to work immediately with little investment; others may require capital expenditures but offer faster payback than traditional IT capex (capital expenses) ROI.
The holy grail of data center energy efficiency metrics is the PUE (power usage effectiveness) rating, in which lower numbers are better and 1.0 is the ideal. PUE compares total data center electrical consumption to the amount that actually reaches the IT equipment to do useful computing work. A not-uncommon value of 2.0 means that for every two watts coming into the data center, only one reaches a server -- the difference is power turned into heat, which in turn requires still more power to remove via traditional data center cooling systems.
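To make the metric concrete, here's a minimal sketch of the PUE arithmetic in Python; the meter readings are hypothetical:

```python
# Minimal PUE arithmetic (meter readings are hypothetical).
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / power reaching IT equipment; 1.0 is ideal."""
    return total_facility_kw / it_equipment_kw

# 1,200 kW at the utility meter, 600 kW reaching the servers -> PUE of 2.0:
# every watt of computing costs a second watt of overhead, mostly cooling.
print(pue(1200, 600))  # 2.0
```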
As with all simple metrics, you must take PUE for what it is: a measure of electrical efficiency. It doesn't account for other energy sources, such as ambient outside air, geothermal heat exchange, or hydrogen fuel cells, many of which can be exploited to lower total power costs. The techniques that follow may or may not lower your measurable PUE, but you can evaluate their effectiveness more simply by checking your monthly utility bill. That's where it really matters anyhow.
You won't find solar, wind, or hydrogen power in the bag of tricks presented here. These alternative energy sources require considerable investment in advanced technologies, which delays cost savings too much for the current financial crisis. By contrast, none of the following eight techniques requires any technology more complex than fans, ducts, and tubing.
The eight methods follow.
Radical energy savings method 1: Crank up the heat

The simplest path to power savings is one you can implement this afternoon: Turn up the data center thermostat. Conventional wisdom calls for data center temperatures of 68 degrees Fahrenheit or below, the logic being that these temperatures extend equipment life and give you more time to react in the event of a cooling system failure.
Experience does show that server component failures, particularly hard disk failures, increase at higher operating temperatures. But in recent years, IT economics crossed an important threshold: Server operating costs now generally exceed acquisition costs, which may make hardware preservation a lower priority than cutting operating costs.
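A back-of-the-envelope comparison illustrates that threshold; every figure below is an assumption for illustration, not a measurement:

```python
# Back-of-the-envelope: server power cost vs. purchase price (all assumed).
acquisition_cost = 3000.0  # dollars for a commodity 1U server
avg_draw_kw = 0.4          # average draw in kilowatts
pue = 2.0                  # each IT watt costs another watt of overhead
rate = 0.12                # dollars per kWh
years = 4                  # assumed service life

energy_cost = avg_draw_kw * pue * 24 * 365 * years * rate
print(f"${energy_cost:,.0f} to run vs. ${acquisition_cost:,.0f} to buy")
# -> $3,364 to run vs. $3,000 to buy
```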
At last year's GreenNet conference, Google energy czar Bill Weihl cited Google's experience with raising data center temperatures, stating that 80 degrees Fahrenheit can be safely used as a new setpoint, provided a simple prerequisite is met in your data center: separating hot- and cold-air flows as much as possible, using curtains or solid barriers if needed.
Although 80 degrees Fahrenheit is a "safe" new setpoint, Microsoft's experience shows you can go higher. Its Dublin, Ireland, data center operates in "chiller-less" mode, using free outside-air cooling, with server inlet temperatures as high as 95 degrees Fahrenheit. But note there is a point of diminishing returns as you raise the temperature: Server fans must spin faster to compensate, and those fans themselves increase power consumption.
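If you do raise the setpoint, it pays to watch server inlet temperatures closely. Here's a rough monitoring sketch using the standard ipmitool utility; the sensor name, hostnames, and credentials are placeholders, and sensor names vary by vendor:

```python
# Poll server inlet temperatures against a raised setpoint. Assumes ipmitool
# is installed; the sensor name "Inlet Temp", hostnames, and credentials are
# placeholders -- run `ipmitool sdr list` to find your hardware's sensor names.
import subprocess

SETPOINT_F = 80.0

def inlet_temp_f(host: str, user: str, password: str) -> float:
    out = subprocess.check_output(
        ["ipmitool", "-H", host, "-U", user, "-P", password,
         "sdr", "get", "Inlet Temp"], text=True)
    for line in out.splitlines():
        if "Sensor Reading" in line:
            celsius = float(line.split(":")[1].split("(")[0])
            return celsius * 9 / 5 + 32
    raise RuntimeError(f"no inlet sensor reading from {host}")

for host in ("rack1-node1", "rack1-node2"):  # hypothetical hostnames
    temp = inlet_temp_f(host, "admin", "secret")
    if temp > SETPOINT_F:
        print(f"{host}: inlet {temp:.1f}F exceeds {SETPOINT_F}F -- check aisle containment")
```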
Radical energy savings method 2: Power down servers that aren't in use

Virtualization has revealed the energy-saving advantages of spinning down unused processors, disks, and memory. So why not power off entire servers? Is the increased "business agility" of keeping servers ever ready worth the cost of the excess power they consume? If you can find instances where servers can be powered down, you can achieve the lowest power usage of all -- zero -- at least for those servers. But you'll have to counter the objections of naysayers first.
For one, it's commonly believed that power cycling lowers a server's life expectancy due to the stress placed on non-field-swappable components such as motherboard capacitors. That turns out to be a myth: In reality, servers are built from the same classes of components used in devices that routinely undergo frequent power cycling, such as automobiles and medical equipment. No evidence points to any decrease in MTBF (mean time between failures) from the kind of power cycling servers would endure.
A second objection is that servers take too long to power up. However, you can often accelerate server startup by turning off unnecessary boot-time diagnostic checks, booting from snapshots of already-running system images, and exploiting warm-start features available in some hardware.
A third complaint: Users won't wait while a server powers up to accommodate increased load, no matter how fast it boots. In practice, most application architectures don't refuse new users so much as simply process requests more slowly, so users aren't aware they're waiting for servers to spin up. Where applications do hit hard limits on concurrent users, users have shown they're willing to hang in there as long as they're kept informed by a simple "we're starting up more servers to speed your request" message.
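As for the wake-up mechanics, a powered-down server can be brought back with a standard Wake-on-LAN magic packet. The sketch below assumes WoL is enabled in the server's BIOS/NIC settings; the MAC address, threshold, and load metric are hypothetical:

```python
# Wake a powered-down server with a standard Wake-on-LAN magic packet:
# 6 bytes of 0xFF followed by the target MAC repeated 16 times. The MAC,
# threshold, and load metric here are hypothetical placeholders.
import socket

def wake(mac: str, broadcast: str = "255.255.255.255") -> None:
    packet = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, 9))

def current_load_pct() -> float:
    """Stand-in for your own load signal (queue depth, CPU, connections)."""
    return 80.0

if current_load_pct() > 75:    # pool is getting busy...
    wake("00:1a:2b:3c:4d:5e")  # ...so bring a standby node online
```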
Radical energy savings method 3: Use "free" outside-air cooling

Higher data center temperatures make it easier to exploit this power-saving technique: so-called free-air cooling, which uses lower outside air temperatures as a cool-air source and bypasses expensive chillers, as Microsoft does in Ireland. If you're trying to maintain 80 degrees Fahrenheit and the outside air is at 70, you can get all the cooling you need by blowing that air into your data center.
Implementing this takes a bit more labor than method 1's expedient cranking up of the thermostat: You must reroute ducts to bring in outside air and install rudimentary safety measures -- such as air filters, moisture traps, fire dampers, and temperature sensors -- to ensure the great outdoors doesn't damage sensitive electronic gear.
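The control logic itself is simple. Here's a basic decision-loop sketch; read_outside_f(), set_dampers(), and set_chiller() are hypothetical stand-ins for whatever building-management interface you have:

```python
# Simplified free-air cooling decision loop. The sensor and actuator
# functions are hypothetical stand-ins for a building-management system;
# the temperatures mirror the 80-degree target discussed above.
import time

TARGET_F = 80.0
MARGIN_F = 5.0  # outside air must be meaningfully cooler than the target

def read_outside_f() -> float:
    return 68.0  # stand-in for a real outdoor temperature sensor

def set_dampers(open_: bool) -> None:
    print("dampers", "open" if open_ else "closed")

def set_chiller(on: bool) -> None:
    print("chiller", "on" if on else "off")

while True:
    if read_outside_f() <= TARGET_F - MARGIN_F:
        set_dampers(True)   # free cooling: filtered outside air does the work
        set_chiller(False)
    else:
        set_dampers(False)  # too warm outside: fall back to mechanical cooling
        set_chiller(True)
    time.sleep(60)
```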
In a controlled experiment, Intel realized a 74% reduction in power consumption using free-air cooling. Two trailers packed with servers, one cooled by traditional chillers and the other by a combination of chillers and outside air with large-particle filtering, were run for 10 months. The free-air trailer was able to rely on outside air exclusively 91% of the time. Intel also discovered a significant layer of dust inside the free-air-cooled servers, reinforcing the need for effective fine-particle filtration. You'll likely have to change filters frequently, so factor in the cost of cleanable, reusable filters.
Despite the significant dust and wide swings in humidity, Intel found no increase in failure rates in the free-air-cooled trailer. Extrapolated to a data center consuming 10 megawatts, the results translate to nearly $3 million in annual cooling cost savings, along with 76 million fewer gallons of water consumed -- itself an expensive commodity in some regions.
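For the curious, that extrapolation can be roughly reconstructed. The cooling fraction and utility rate below are assumptions on my part, not Intel's published model:

```python
# Rough reconstruction of the 10 MW extrapolation (cooling fraction and
# utility rate are assumed; Intel's actual model may differ).
it_load_mw = 10.0
cooling_fraction = 0.45  # assumed share of facility power spent on cooling
reduction = 0.74         # Intel's measured cut in cooling power
rate_per_kwh = 0.10      # assumed utility rate, dollars

cooling_kw = it_load_mw * 1000 * cooling_fraction
saved = cooling_kw * reduction * 24 * 365 * rate_per_kwh
print(f"${saved:,.0f} per year")  # -> roughly $2.9 million
```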
Radical energy savings method 4: Use data center heat to warm office spaces

You can double your energy savings by using data center BTUs to heat office spaces -- which is the same as saying you'll use relatively cool office air to chill the data center. In cold climes, you could conceivably get all the heat you need to keep people warm and handle any additional cooling requirements with outside air alone.
Unlike with free-air cooling, you may never need your existing heating system again: By definition, when it's warm outside you won't require a people-space furnace. And forget worries about chemical contamination from fumes emanating from server room electronics. Modern RoHS (Restriction of Hazardous Substances)-compliant servers have eliminated environmentally unfriendly substances -- such as cadmium, lead, mercury, and brominated flame retardants -- from their construction.
As with free-air cooling, the only tech you need to pull this off is good old HVAC know-how: fans, ducts, and thermostats. You'll likely find that your data center puts out more than enough therms to replace traditional heating systems. IBM's data center in Uitikon, Switzerland, heats the town pool for free, saving energy equivalent to heating 80 homes. TelecityGroup Paris even uses server waste heat to warm year-round greenhouses for climate change research.
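Sizing the opportunity takes only one conversion factor: a kilowatt of IT load throws off roughly 3,412 BTU per hour. The server-room load and per-home furnace figures below are assumptions for illustration:

```python
# How much heating does a data center throw off? One kW of IT load becomes
# roughly 3,412 BTU/hour of heat. Load and per-home figures are assumed.
it_load_kw = 500           # hypothetical midsize server room
btu_per_hour = it_load_kw * 3412
home_furnace_btu = 60_000  # assumed typical home furnace output
print(f"{btu_per_hour:,} BTU/h ~= {btu_per_hour / home_furnace_btu:.0f} homes")
# -> 1,706,000 BTU/h ~= 28 homes
```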
Reconfiguring your furnace system may entail more than a weekend project, but the costs are likely low enough that you can reap savings in a year or less.
Radical energy savings method 5: Use SSDs for highly active read-only data sets

SSDs have been popular in netbooks, tablets, and laptops due to their speedy access times, low power consumption, and very low heat emissions. They're used in servers, too, but until recently their cost and reliability were barriers to adoption. Fortunately, SSD prices have dropped considerably in the last two years, making them candidates for quick energy savings in the data center -- provided you use them for the right application. Employed correctly, SSDs can knock a fair chunk off the cost of powering and cooling disk arrays, delivering 50% lower electrical consumption and near-zero heat output.
One problem SSDs haven't licked is the limited number of write operations they can endure -- currently around 5 million writes for the single-level-cell (SLC) devices appropriate for server storage. Lower-cost, consumer-grade multilevel-cell (MLC) components offer higher capacities but only one-tenth the endurance of SLC.
The good news is that you can buy plug-compatible SSDs that readily replace your existing power-hungry, heat-spewing spinning disks. For a quick power reduction, move large, primarily read-only data sets -- such as streaming video archives -- onto SSDs. You won't encounter SSD wear-out problems, and you'll gain an instant performance boost in addition to reduced power and cooling costs.
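A quick wear-out estimate shows why read-mostly workloads are safe territory. The figures below are assumptions for illustration; endurance ratings vary widely by vendor and cell type:

```python
# Rough wear-out estimate. With wear leveling, total writable volume is
# roughly capacity x per-cell endurance. All figures are assumptions.
capacity_gb = 200
endurance_cycles = 100_000  # conservative assumption; cited SLC figures run higher
writes_gb_per_day = 50      # modest write volume, e.g. a read-mostly archive

total_writable_gb = capacity_gb * endurance_cycles
years = total_writable_gb / writes_gb_per_day / 365
print(f"~{years:,.0f} years to wear-out at this write rate")  # ~1,096 years
```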
Go for drives specifically designed for server, rather than desktop, use. Such drives typically have multichannel architectures to increase throughput. The most common interface is SATA 2.0, with 3Gbps transfer speeds; higher-end SAS devices, such as the Hitachi/Intel Ultrastar SSD line, achieve 6Gbps speeds with capacities up to 400GB. Although SSDs have had some design flaws, these have primarily afflicted desktop and laptop drives -- involving BIOS passwords and encryption, factors that don't apply to server storage.
Do plan to spend some brain cycles monitoring usage on your SSDs, at least initially. Intel and other SSD makers provide analysis tools that track read and write cycles, as well as write-failure events. SSDs automatically remap writes to even out wear across the device, a process called wear leveling, which can also detect and recover from some errors. When significant write failures begin occurring, it's time to replace the drive.
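One way to automate that monitoring is to poll SMART attributes with the open-source smartmontools package. The sketch below parses the Media_Wearout_Indicator attribute that Intel drives report; other vendors use different attribute names, so treat this as a template:

```python
# Poll SSD wear via smartmontools' smartctl. Attribute names vary by vendor;
# Intel drives expose Media_Wearout_Indicator, which counts down from 100.
import subprocess
from typing import Optional

def wearout_remaining(device: str = "/dev/sda") -> Optional[int]:
    out = subprocess.check_output(["smartctl", "-A", device], text=True)
    for line in out.splitlines():
        if "Media_Wearout_Indicator" in line:
            return int(line.split()[3])  # normalized VALUE column, 100 = new
    return None

pct = wearout_remaining()
if pct is not None and pct < 10:
    print("SSD nearing rated endurance -- schedule replacement")
```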
Radical energy savings method 6: Use direct current in the data center

Yes, direct current is back. This seemingly fickle power-distribution approach enjoys periodic resurgences as electrical technologies ebb and flow. The lure is a simple one: Servers use direct current internally, so feeding them DC power directly should reap savings by eliminating the AC-to-DC conversion performed by each server's internal power supply.
Direct current was popular in the early 2000s because the power supplies in servers of that era had conversion efficiencies as low as 75%. But then power supply efficiencies improved, and data centers shifted to more-efficient 208-volt AC distribution. By 2007, direct current had fallen out of favor; InfoWorld even counted it among the myths in our 2008 article "10 power-saving myths debunked." Then in 2009 direct current bounced back, owing to the introduction of high-voltage DC data center products.
In the earliest data centers, utility-supplied 16,000 VAC (volts of alternating current) electricity was first converted to 440 VAC for routing within the building, then to 220 VAC, and finally to the 110 VAC used by the era's servers. Each conversion wasted power by dint of being less than 100% efficient, with the lost power cast off as heat (which had to be removed by cooling, incurring yet more power expense). The switch to 208 VAC eliminated one conversion, and with in-server power supplies running at 95% efficiency, there was no longer much to gain.
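Because each stage's losses compound, the overall efficiency of a distribution chain is simply the product of the per-stage efficiencies. The figures below are illustrative, not measured:

```python
# Conversion losses compound: overall efficiency is the product of stages.
# Per-stage efficiencies below are illustrative, not measured values.
import math

legacy = [0.96, 0.96, 0.96, 0.75]  # 16kV->440->220->110 VAC, then a 75% PSU
modern = [0.96, 0.98, 0.95]        # fewer conversions, 95%-efficient PSU

print(f"legacy chain: {math.prod(legacy):.0%}")  # ~66% of utility power survives
print(f"modern chain: {math.prod(modern):.0%}")  # ~89% reaches the server
```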