Data center density hits the wall

Why the era of packing more servers into the same space may have to end

Page 2 of 6

He's not the only one expressing concerns. For more than 40 years, the computer industry's business model has been built on the rock-solid assumption that Moore's Law would continue to double compute density every two years into perpetuity. Now some engineers and data center designers have begun to question whether that's feasible -- and whether a threshold has been reached.

The threshold isn't just about whether chip makers can overcome the technical challenges of packing transistors even more densely than today's 45nm technology allows, but whether it will be economical to run large numbers of extremely high-density server racks in modern data centers. The newest equipment concentrates more power into a smaller footprint on the raised floor, but the electromechanical infrastructure needed to support every square foot of high-density compute space -- from cooling systems to power distribution equipment, UPSs and generators -- is getting proportionally larger.

Data center managers are taking notice. According to a 2009 IDC survey of 1,000 IT sites, 21% ranked power and cooling as their No. 1 data center challenge, 43% reported increased operational costs, and one-third had experienced server downtime as a direct result of power and cooling issues.

Christian Belady is the lead infrastructure architect for Microsoft's Global Foundation Services group, which designed and operates the company's newest data center in Quincy, Wash. He says the cost per square foot of raised floor is too high. In the Quincy data center, he says, those costs accounted for 82% of the total project cost.

"We're beyond the point where more density is better," Belady says. "The minute you double compute density, you double the footprint in the back room."

HP's Gross has designed large data centers for both enterprises and Internet-based businesses such as Google and Yahoo, whose data centers consist of large farms of Web servers and associated equipment. Gross says Belady's costs are about average: electromechanical infrastructure typically makes up about 80% of the cost of a new Tier 4 enterprise data center, regardless of the size of the facility, and 65% to 70% of the cost of an Internet-based data center. Those percentages haven't increased much as power densities have climbed in recent years, he adds.

As compute density per square foot increases, overall electromechanical costs tend to stay about the same, Gross says. But because power density also increases, the ratio of electromechanical floor space needed to support a square foot of high-density compute floor space also goes up.
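The arithmetic behind Gross's point, and Belady's "double the footprint in the back room" remark, can be sketched in a few lines. The sketch below assumes, as the article implies, that electromechanical floor space scales with total IT load in watts while compute floor space shrinks as density rises; the specific figures (a 1,000 kW load, 5 sq ft of support space per kW, the density values) are illustrative assumptions, not numbers from the article.

```python
# Hypothetical illustration: support floor space tracks total IT load,
# so packing the same load into denser racks shrinks the compute floor
# but not the back room, and the support-to-compute ratio climbs.

def support_to_compute_ratio(it_load_kw, density_kw_per_sqft,
                             support_sqft_per_kw=5.0):
    """Ratio of electromechanical floor space to compute floor space."""
    compute_sqft = it_load_kw / density_kw_per_sqft   # raised floor
    support_sqft = it_load_kw * support_sqft_per_kw   # cooling, UPS, etc.
    return support_sqft / compute_sqft

# The same 1,000 kW IT load at successively doubled rack densities:
for density in (0.1, 0.2, 0.4):  # kW per sq ft of raised floor
    ratio = support_to_compute_ratio(1000, density)
    print(f"{density:.1f} kW/sq ft -> {ratio:.1f} sq ft of support "
          f"space per sq ft of compute space")
```

Under these assumptions the ratio is simply density times support space per kW, so doubling compute density doubles the relative back-room footprint, which is exactly the trade-off both men describe.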

IBM's Schmidt says the cost per watt, not the cost per square foot, remains the biggest construction cost for new data centers. "Do you hit a power wall down the road where you can't keep going up this steep slope? The total cost of ownership is the bottom line here," he says. Those costs have for the first time pushed some large data center construction projects past the $1 billion mark. "The C suites that hear these numbers get scared to death because the cost is exorbitant," he says.
