Data center design is undergoing a significant transformation. The fundamentals of the data center -- servers, cooling systems, UPSs -- remain the same, but their implementations are rapidly changing, thanks in large part to the one variable cost in the server room: energy.
Still in its infancy, though growing up fast, server virtualization has become a go-to power-saving measure for enterprises rolling out cost-effective data centers or retrofitting existing ones to cut power costs considerably. What may come as a surprise, however, is that hidden energy costs await those who do not plan the layout of their virtualized data center wisely. And the chief culprit is heat.
Consolidating the workload of a dozen 1-kilowatt servers onto a single 2-kW machine slashes total power draw, but the typical virtualization host packs more heat into each rack unit than the individual servers it replaces. Moreover, collecting several such hosts into a single, high-density rack can create a data center hot spot, causing that rack and adjacent ones to run at significantly higher temperatures than the rest of the room, even when the room is centrally cooled to 68 degrees Fahrenheit.
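To put rough numbers on that density shift, consider a back-of-the-envelope calculation; every figure below is hypothetical, chosen only to illustrate why consolidation concentrates heat even as it cuts the total power bill.

```python
# Back-of-the-envelope sketch (all figures hypothetical): compare heat
# density in a rack before and after consolidation.

def watts_per_rack_unit(draw_watts, rack_units, count):
    """Average heat density for identical servers occupying a block of rack space."""
    return (draw_watts * count) / (rack_units * count)

# A dozen lightly loaded 1-kW servers, each actually drawing ~400 W in 1U
before = watts_per_rack_unit(draw_watts=400, rack_units=1, count=12)

# One heavily loaded 2-kW virtualization host filling a 2U chassis
after = watts_per_rack_unit(draw_watts=1800, rack_units=2, count=1)

print(f"Before consolidation: ~{before:.0f} W per rack unit")
print(f"After consolidation:  ~{after:.0f} W per rack unit")
```

The total power bill shrinks, but every occupied rack unit now has to shed roughly twice the heat, which is exactly how hot spots form.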
Blade servers are notorious for this because they carry high-wattage power supplies and move an enormous amount of air through the chassis. Virtualizing them will indeed significantly reduce data center energy costs, but it won't provide a complete solution for reining in your data center's energy needs. For that, you have to retrofit your thinking about cooling.
Cooling on demand
For the most part, big, beefy air conditioning units that push air through dropped ceilings or raised floors remain regular fixtures in the data center. But for enterprises building out for energy efficiency or seeking to retrofit for added energy relief, localized cooling -- mainly in the form of in-row cooling systems -- is making a splash.
"We originally designed our in-row cooling solutions to address hot spots in the data center, specifically for blade servers. But it's grown far beyond that," says Robert Bunger, director of business development for North America at American Power Conversion Corp. (APC). "They've turned out to be very efficient, due to their proximity to the heat loads."
Bucking the "big air conditioner" paradigm, in-row cooling systems such as APC's are finding their place between racks, pumping out cold air through the front and pulling in hot air from the back. Because cooling is performed by units just inches away from the source rather than indiscriminately through the floor or ceiling, data center hot spots run less hot.
What's more, rather than relying on a central thermostat, these units function autonomously, tapping temperature-monitoring leads placed directly in front of a heat source to ensure that the air remains within a specified temperature range. If a blade chassis starts running hot because of an increased load, the in-row unit ramps up its airflow, dropping the air temperature to compensate.
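Conceptually, that behavior is a simple feedback loop. The sketch below is illustrative only -- it is not APC's or any vendor's firmware, and the sensor setpoints, deadband, and fan-speed steps are all assumptions.

```python
# Illustrative sketch of the feedback loop described above -- not vendor
# firmware. Setpoints, deadband, and fan-speed steps are assumptions.

TARGET_TEMP_F = 77.0   # desired supply-air temperature at the rack face
DEADBAND_F = 2.0       # tolerated swing before the unit reacts

def adjust_fan_speed(current_speed_pct, inlet_temp_f):
    """Ramp airflow up when the rack-face sensor runs hot, down when idle."""
    if inlet_temp_f > TARGET_TEMP_F + DEADBAND_F:
        return min(100, current_speed_pct + 10)   # heat load rising: push more air
    if inlet_temp_f < TARGET_TEMP_F - DEADBAND_F:
        return max(20, current_speed_pct - 10)    # idle period: save energy
    return current_speed_pct                      # within range: hold steady

speed = 40
for reading in (76.5, 81.2, 83.0, 79.5, 74.0):    # simulated sensor readings
    speed = adjust_fan_speed(speed, reading)
    print(f"inlet {reading:.1f}F -> fan at {speed}%")
```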
Moreover, the unit ratchets down its cooling activities during idle times, saving even more money. All told, the cost-cutting benefits of localized cooling are quickly proving convincing, so much so that Gartner Inc. predicts in-rack and in-row cooling will become the predominant cooling method for the data center by 2011.
Modular air conditioning
For enterprises considering localized cooling, APC's in-row units are available in both air- and water-cooled models that provide from 8 kW to 80 kW of cooling output. The smaller APC units -- the ACRC100 and the ACSC100 -- are the same height and depth as a standard 42U rack, but half the width. The company's larger ACRP series retains the full 42U-rack form factor but pushes out far more air than the smaller units do.
Liebert Corp. is another vendor offering localized cooling solutions. Its XD series in-row and spot-cooling systems are similar in form and function to their APC counterparts. Liebert also offers units that mount on top of server racks, drawing hot air up and out. Both APC and Liebert have rear-mounted rack ventilation and cooling units that exhaust hot air into the plenum or cool the air before passing it back into the room.
The modularity of these systems translates to significant start-up savings. Whereas whole-room solutions must be sized for anticipated growth, localized cooling units can be deployed as needed. A large room that starts out only 30% full will require only 30% of projected full-room cooling hardware upon initial deployment.
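As a rough illustration of that start-up math, the sketch below compares day-one purchases against the eventual full build-out; the heat load, occupancy, and per-unit capacity figures are all assumed.

```python
import math

# Rough sketch of the start-up savings from modular cooling.
# The load, occupancy, and per-unit capacity figures are all assumed.

projected_full_room_load_kw = 400   # anticipated heat load at full build-out
initial_occupancy = 0.30            # room starts out 30% full
unit_capacity_kw = 30               # cooling output of one in-row unit

units_day_one = math.ceil(projected_full_room_load_kw * initial_occupancy / unit_capacity_kw)
units_full_room = math.ceil(projected_full_room_load_kw / unit_capacity_kw)

print(f"In-row units needed on day one: {units_day_one} of an eventual {units_full_room}")
# A whole-room system, by contrast, must be sized -- and paid for -- at
# full capacity from the start.
```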
There are downsides to these units, to be sure. The water-cooled systems require much more piping than centralized units do, and the water pipes must be run within the ceiling or floor of the room. The air-cooled units can dump large heat loads into the plenum above the data center, creating airflow and heat-exhaust problems. Moreover, because these solutions are built to provide just enough just-in-time cooling, the failure of a single unit could quickly leave nearby racks running hot.
Either way, before implementing any localized cooling solution -- whether you're rolling out a new energy-efficient data center or retrofitting one already in place -- you need a comprehensive understanding of your building's environmental systems and the expected heat load of the data center itself.
Cool to the core
For some enterprises, individual high-load servers bring the kind of heat worthy of a more granular approach to cooling. For such instances, several vendors are making waves with offerings that bring a chill even closer than nearby racks: in-chassis cooling.
SprayCool Inc.'s M-Series is a water-cooling solution that captures heat directly from the CPUs and directs it through a cooling system built into the rack, where a water loop carries it out of both the rack and the room. Cooligy Inc. is another vendor offering a similar in-chassis water-cooling solution. SprayCool's G-Series takes direct cooling a step further: It functions like a car wash for blade chassis, spraying nonconductive cooling liquid through the server to reduce the heat load.
Enterprises intrigued by in-chassis cooling should keep in mind that these solutions are necessarily more involved than whole-room or in-row cooling units and have very specific server compatibility guidelines.
The high-voltage switch
Virtualization and improved cooling efficiency are not the only ways to bring down the energy bill. One of the latest trends in data center power reduction -- at least here in the U.S. -- is to use 208-volt power rather than the traditional 120-volt power source.
When the U.S. rolled out the first electrical grid, light bulb filaments were quite fragile and burned out fast on 220-volt lines. Dropping the voltage to 110/120 volts increased filament life -- thus, the U.S. standard of 120 volts. By the time Europe and the rest of the world built out their power grids, advances in filament design had largely eliminated the high-voltage problem, hence the 208/220-volt power systems across most of the rest of the globe.
What's important to note is that each time voltage is stepped down, a transformer is used, and power is lost. The loss may be as little as 1% or 2% per transformer. But over time and across a large data center, the penalty for transformer use adds up. By switching to a 208-volt system, you need one less transformer in the chain, thereby reducing wasted energy.
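A quick worked example shows how those seemingly small per-stage losses add up; the IT load, the 2% loss figure, and the number of transformer stages below are assumptions for illustration only.

```python
# Worked example of cumulative transformer losses (all figures assumed).

it_load_kw = 500                 # power actually delivered to the IT gear
loss_per_transformer = 0.02      # 2% lost at each step-down stage

def utility_draw(load_kw, transformer_stages):
    """Power pulled from the utility to deliver load_kw through N step-down stages."""
    draw = load_kw
    for _ in range(transformer_stages):
        draw /= (1 - loss_per_transformer)
    return draw

draw_120v = utility_draw(it_load_kw, transformer_stages=3)  # extra 208-to-120 step
draw_208v = utility_draw(it_load_kw, transformer_stages=2)  # one less transformer

print(f"120-V chain draws ~{draw_120v:.1f} kW; 208-V chain draws ~{draw_208v:.1f} kW")
print(f"Savings: ~{draw_120v - draw_208v:.1f} kW, around the clock")
```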
Moreover, 208/220-volt systems are safer and more efficient: pushing the same wattage through 120 volts requires more current than through 208/220, which increases both the risk of injury and the power lost in transit.
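That current difference falls straight out of the relationship between power, voltage and current (I = P / V); the short sketch below runs the arithmetic for an assumed 1,000-watt load.

```python
# Current needed to deliver the same wattage at two supply voltages.
# The 1,000-W load is an arbitrary example.

load_watts = 1000

for volts in (120, 208):
    amps = load_watts / volts          # I = P / V
    print(f"{volts} V: {amps:.1f} A to deliver {load_watts} W")

# More current at 120 V means more resistive (I^2 * R) loss in the wiring
# for the same delivered power.
```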
For those considering capitalizing on the switch, rest assured that nearly all server, router and switch power supplies can handle 120- or 208-volt power and most are autoswitching, meaning no modifications are necessary to transfer that gear to 208 volts. Of course, the benefits of 208-volt power in the data center are not the kind to cause a sea change. But as energy costs continue to rise, the switch to 208 volts will become increasingly attractive.
This story, "The cool new look in data center design" was originally published by InfoWorld.