Cost, convergence and economies of scale
Like HP and other IT vendors, IBM is working on what Bradicich calls "operational integration" -- a converged infrastructure that combines compute, storage and networking in a single package. While the primary goal of converged infrastructure is to make systems management easier, Bradicich sees power and cooling as part of that package. In IBM's view, the x86 platform will evolve into highly scalable, and perhaps somewhat more proprietary, symmetric multiprocessing systems designed to dramatically increase the workloads supported per server -- and per rack. Such systems would require bringing chilled water to the rack to meet cooling needs.
But HP's Gross says things may be going in the other direction. "Data centers are going bigger in footprint, and people are attempting to distribute them," he says. "Why would anyone spend the kind of money needed to achieve these super-high densities?" he asks -- particularly when they may require special cooling.
IBM's Schmidt says data centers with room-based cooling -- especially those that have moved to larger air handlers to cope with higher heat densities -- could save considerable energy by moving to water-based cooling.
But Microsoft's Belady thinks liquid cooling's appeal will be limited to a single niche: high-performance computing. "Once you bring liquid cooling to the chip, costs start going up," he contends. "Sooner or later, someone is going to ask the question: Why am I paying so much more for this approach?"
He doesn't see liquid cooling as a viable alternative in distributed data centers such as Microsoft's.
The best way to take the momentum away from ever-increasing power density is to change the chargeback method for data center use, says Belady. Microsoft changed its cost allocation strategy and started billing users based on power consumption as a portion of the total power footprint of the data center, rather than basing it on floor space and rack utilization. After that, he says, "the whole discussion changed overnight." Power consumption per rack started to dip. "The whole density thing gets less interesting when your costs are allocated based on power consumed," he says.
Once Microsoft began charging for power, its users' focus changed from getting the most processing power in the smallest possible space to getting the most performance per watt. That may or may not lead to higher-density choices -- it depends on the overall energy efficiency of the proposed solutions. On the other hand, Belady says, "if you're charging for space, the motivation is 100% about density."
Today, vendors design for the highest density, and most users select high-density servers to save on floor space charges. They may even pay a premium for that denser infrastructure, even when performance per watt is lower because of the added power distribution and cooling it requires. But on the back end, 80% of operating costs scale with electricity use -- and with the electromechanical infrastructure needed to deliver power and cool the equipment.
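To see why the incentive flips, consider a simplified cost-allocation sketch. The figures, rates and function names below are invented for illustration and are not Microsoft's actual billing model; the point is only that space-based billing rewards packing load into fewer racks, while power-based billing makes two layouts drawing the same wattage cost the same.

    # Hypothetical illustration of the two chargeback models Belady contrasts.
    # All rates and totals are invented for this example.

    def space_based_charge(racks_used, monthly_cost_per_rack=2000.0):
        """Bill by floor space: each rack position costs the same, regardless of power draw."""
        return racks_used * monthly_cost_per_rack

    def power_based_charge(kw_drawn, total_dc_kw=1000.0, monthly_dc_opex=500000.0):
        """Bill by power: a tenant pays its share of the data center's total power footprint."""
        return (kw_drawn / total_dc_kw) * monthly_dc_opex

    # A dense deployment: 1 rack drawing 20 kW.
    # A sparse deployment: 4 racks drawing 20 kW in total.
    print(space_based_charge(1), space_based_charge(4))   # 2000.0 8000.0  -> density cuts the bill 4x
    print(power_based_charge(20), power_based_charge(20)) # 10000.0 10000.0 -> identical; only watts matter

Under space-based billing, consolidating the same 20 kW into one rack cuts the charge by a factor of four, so density is the whole game. Under power-based billing, the two layouts cost the same, and only improving performance per watt moves the number.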
Run 'em hard, run 'em hot
Belady, who previously worked on server designs as a distinguished engineer at HP, argues that IT equipment should be designed to work reliably at higher operating temperatures. Current equipment is designed to operate at a maximum temperature of 81 degrees Fahrenheit. That's up from 2004, when the official specification, set by ASHRAE Technical Committee 9.9, was 72 degrees.