Ever-higher energy densities are "not sustainable from an energy use or cost perspective," says Rakesh Kumar, an analyst at Gartner Inc. Fortunately, most enterprises still have a ways to go before they see average per-rack loads in the same range as ILM's. Some 40% of Gartner's enterprise customers are pushing beyond the 8 to 10 kW per rack range, and some are as high as 12 to 15 kW per rack. However, those numbers continue to creep up.
In response, some enterprise data centers and managed services providers such as Terremark Inc. are starting to monitor power use and factor it into what they charge for data center space. "We're moving toward a power model for larger customers," says Ben Stewart, senior vice president of engineering at Terremark. "You tell us how much power, and we'll tell you how much space we'll give you."
But is it realistic to expect customers to know not just how much equipment they need hosted but how much power will be needed for each rack of equipment?
"For some customers, it is very realistic," Stewart says, In fact, Terremark is moving in this direction in response to customer demand. "Many of them are coming to us with a maximum-kilowatt order and let us lay the space out for them," he says. If a customer doesn't know what its energy needs per cabinet will be, Terremark sells power per "whip," or power cable feed to each cabinet.
Containment: The last frontier
IBM's Schmidt thinks further power-density increases are possible, but the methods by which data centers cool those racks will need to change.
ILM's data center, completed in 2005, was designed to support an average load of 200 W per square foot. The design has plenty of power and cooling capacity overall. It just doesn't have a method for efficiently cooling high-density racks.
ILM uses a hot aisle/cold aisle design, and the staff has successfully adjusted the number and position of perforated tiles in the cold aisles to optimize airflow around the carefully sealed BladeCenter racks. But to avoid hot spots, the room air conditioning system is cooling the entire 13,500-square-foot raised floor space to a chilly 65 degrees.
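To see why a floor with "plenty of capacity overall" can still struggle with dense racks, consider a rough back-of-the-envelope calculation using the two figures above. The per-rack footprint below is an assumed illustrative value, not an ILM or Gartner number:

```python
# Rough capacity math for a raised floor rated at 200 W per square foot.
# The rack footprint (rack plus its share of aisle and clearance space) is
# an assumed illustrative value, not a figure from the article.

FLOOR_AREA_SQFT = 13_500          # ILM's raised-floor area (from the article)
DESIGN_DENSITY_W_PER_SQFT = 200   # ILM's design load (from the article)
ASSUMED_SQFT_PER_RACK = 30        # hypothetical: rack + aisle/clearance share

total_capacity_kw = FLOOR_AREA_SQFT * DESIGN_DENSITY_W_PER_SQFT / 1000
avg_rack_budget_kw = DESIGN_DENSITY_W_PER_SQFT * ASSUMED_SQFT_PER_RACK / 1000

print(f"Total design capacity: {total_capacity_kw:,.0f} kW")               # 2,700 kW
print(f"Average power budget per rack footprint: {avg_rack_budget_kw:.1f} kW")  # 6.0 kW
```

Under those assumptions, a single rack drawing 12 to 15 kW consumes two or more footprints' worth of the floor's average power and cooling budget, which is why the room as a whole can be comfortably within its rating while individual high-density racks still create hot spots.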
Clark knows it's inefficient; today's IT equipment is designed to run at temperatures as high as 81 degrees, so he's looking at a technique called cold-aisle containment.
Other data centers are already experimenting with containment -- high-density zones on the floor where doors seal off the ends of either the hot or cold aisles. Barriers may also be placed along the top of each row of cabinets to prevent hot and cold air from mixing near the ceiling. In other cases, cold air may be routed directly into the bottom of each cabinet, pushed up to the top and funneled into the return-air space in the ceiling plenum, creating a closed-loop system that doesn't mix with room air at all. "The hot/cold aisle approach is traditional but not optimal," says Rocky Bonecutter, data center technology and operations manager at Accenture. "The move now is to go to containment."
Using such techniques, HP's Gross estimates that data centers can support up to about 25 kW per rack using a computer room air conditioning system. "It requires careful segregation of cold and hot, eliminating mixing, optimizing the airflow. These are becoming routine engineering exercises," he says.
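A rough sketch of the airflow math suggests why segregating hot and cold air becomes an engineering exercise at those densities. The calculation below uses the standard sensible-heat approximation for air (heat in BTU/hr ≈ 1.08 × airflow in CFM × temperature rise in °F); the 20-degree cold-aisle-to-hot-aisle rise is an assumed figure for illustration, not one from the article:

```python
# Rough airflow estimate for removing a rack's heat load with air cooling,
# using the standard sensible-heat approximation for air:
#   heat (BTU/hr) ≈ 1.08 * airflow (CFM) * delta-T (deg F)
# The 20 deg F temperature rise across the rack is an assumed value.

BTU_PER_HR_PER_KW = 3412.14   # unit conversion, kW to BTU/hr
SENSIBLE_HEAT_FACTOR = 1.08   # BTU/hr per CFM per deg F, standard-density air

def required_cfm(rack_kw: float, delta_t_f: float) -> float:
    """Airflow (cubic feet per minute) needed to carry away rack_kw of heat."""
    return rack_kw * BTU_PER_HR_PER_KW / (SENSIBLE_HEAT_FACTOR * delta_t_f)

for kw in (8, 15, 25):
    print(f"{kw:>2} kW rack at 20 deg F rise: ~{required_cfm(kw, 20):,.0f} CFM")
# ~1,264 CFM, ~2,370 CFM, ~3,949 CFM
```

Delivering several thousand cubic feet of chilled air per minute to every high-density rack, without letting it mix with hot exhaust on the way, is difficult in an open room, which is why containment becomes the practical option as loads approach the 25 kW figure Gross cites.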