
6 reasons why Microsoft's container-based approach to data centers won't work

Are you listening, Microsoft?

By Eric Lai
May 9, 2008 12:00 PM ET

Computerworld - Microsoft Corp.'s plan to fill its mammoth Chicago data center with servers housed in 40-foot shipping containers has experts wondering whether the strategy will succeed. Under the plan, each container in the still-under-construction facility will hold several thousand servers.

Computerworld queried several outside experts — including the president of a data center construction firm, a data center engineer-turned-CIO, an operations executive for a data center operator and a "green" data center consultant — to get their assessments of the strategy. While they were individually impressed with some parts of Microsoft's plan, they also expressed skepticism that the idea will work in the long term.

Here are some of their objections, along with the responses of Mike Manos, Microsoft's senior director of data center services. Manos talked with Computerworld in an interview after the Data Center World show at which Microsoft's plan was announced.

1. Russian-doll-like nesting (servers, on racks, inside shipping containers) may deliver less of the Lego-style modularity that proponents claim, and more mere ... moreness.

Server-filled containers are "nothing more than a bucket of power with a certain amount of CPU capacity," quipped Manos.

His point is that setting up several thousand servers inside a container in an off-site factory setting makes them nearly plug-and-play once the container arrives at the data center. Because the setup work is shifted to the server vendor or system integrator and then sealed inside a 40-foot metal box, containers become far easier and faster to deploy than individual server racks, which have to be moved and cabled one at a time.

But people like Peter Baker, vice president for information systems and IT at Emcor Facilities Services, argue that in other ways, containers still "add complexity."

"This is simply building infrastructure on top of infrastructure," he said.

One example, says Baker — who worked for many years as an electrical engineer building power systems for data centers before shifting over to IT management — is in the area of power management. Each container, he says, will need to come with some sort of UPS (uninterruptible power supply) that does three things: 1) converts the incoming high-voltage into lower usable DC voltages; 2) cleans up the power to prevent it from spiking and damaging the servers; 3) provides backup power in case of an outage.

The problem is that each UPS, in the process of "conditioning" the power, also creates "harmonics" that bounce back up the supply line and can "crap up power for everyone else," Baker said.

Harmonics is a well-known issue that's been managed in other contexts, so Baker isn't saying the problem is unsolvable. But, he argues, the extra infrastructure needed to alleviate the harmonics generated by 220 UPSs — the number of containers Microsoft thinks it can fit inside the Chicago data center — could easily negate the potential ROI from using containers.
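To put rough numbers on the scale Baker is describing, here is a back-of-the-envelope sketch in Python of how harmonic current from a building full of UPS front ends could add up at the utility feed. Every figure in it (the per-UPS load current and the harmonic percentages) is an illustrative assumption, not data from Microsoft or Baker.

    # Back-of-the-envelope estimate of the harmonic current 220 UPS front ends
    # could push back onto a shared utility feed. Every number here is an
    # illustrative assumption, not a figure from Microsoft or Baker.

    import math

    NUM_UPS = 220          # one UPS per container, per the Chicago plan
    FUND_AMPS = 400.0      # assumed 60 Hz input current per UPS (hypothetical)

    # Assumed per-UPS harmonic injection as a fraction of fundamental current;
    # six-pulse rectifier front ends typically emit the 5th, 7th, 11th and 13th orders.
    HARMONIC_FRACTIONS = {5: 0.18, 7: 0.11, 11: 0.06, 13: 0.04}

    def aggregate_harmonics(num_ups, fund_amps):
        """Return (THD percent, total harmonic amps), assuming worst-case phase alignment."""
        fundamental_total = num_ups * fund_amps
        per_order = [fundamental_total * f for f in HARMONIC_FRACTIONS.values()]
        harmonic_amps = math.sqrt(sum(i ** 2 for i in per_order))  # root-sum-square
        return 100.0 * harmonic_amps / fundamental_total, harmonic_amps

    thd, amps = aggregate_harmonics(NUM_UPS, FUND_AMPS)
    print("Fundamental load:           %.0f A" % (NUM_UPS * FUND_AMPS))
    print("Worst-case THD:             %.1f%%" % thd)
    print("Aggregate harmonic current: %.0f A" % amps)

The distortion percentage stays roughly constant as identical units are added, but the absolute harmonic current that upstream filters and transformers must absorb grows with every container, and that is the extra infrastructure Baker is pointing at.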

Manos' rebuttal: "The harmonics challenges have long been solved [by Microsoft's] very smart electrical and mechanical folks," he said, though he declined to go into specifics. Manos also "challenged the assumption" that Microsoft's solutions are bulky and not cost-effective: "You can be certain that we have explored ROI and costs on this size of investment." And he chided critics for speculation that leans too heavily on the "traditional way of thinking about data centers," again without going into detail.

2. Containers are not as plug-and-play as they seem.

Servers normally get shipped from factory to customer in big cardboard boxes, protected by copious Styrofoam. Setting them up on vibration-prone racks before they travel cross-country by truck is a recipe for broken servers, argues Mark Svenkeson, president of Hypertect Inc., a Roseville, Minn., builder of data centers. At the very least, he said, "verifying the functionality of these systems when they arrive is going to be a huge issue."

But damaged servers haven't been a problem, claimed Manos, since Microsoft began deploying containers at its data centers a year ago.

"Out of tens of deployments, the most servers we've had come DOA is two," he said. Manos also downplayed the labor of testing and verifying each server. "We can know pretty quick if the boxes are up and running with a minimum of people," he said.

He also pointed out that Microsoft plans to make its suppliers liable for any transit-related damage.

So let's say Microsoft really has solved this issue of transporting server-filled containers. But part of what makes the containers so plug-and-play is that they will, more or less, sport a single plug from the container to the "wall" for power, cooling, networking and so forth.

But, Svenkeson pointed out, that also means that an accident such as a kicked cord or severed cable would result in the failure of several thousand servers, not several dozen. It's like those server rooms that go dark because somebody flicks the uncovered emergency "off" switch out of curiosity or spite.
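The arithmetic behind that worry is easy to run using the article's own round numbers (several thousand servers per container, 220 containers) plus an assumed rack size; the short Python sketch below is purely illustrative.

    # Rough failure-domain arithmetic using the article's round numbers plus an
    # assumed rack size; all figures are illustrative.

    SERVERS_PER_CONTAINER = 2500   # "several thousand" per container
    CONTAINERS = 220               # planned count for the Chicago facility
    SERVERS_PER_RACK = 40          # assumed conventional rack

    total_servers = SERVERS_PER_CONTAINER * CONTAINERS

    for label, blast_radius in (("one kicked rack cord", SERVERS_PER_RACK),
                                ("one severed container feed", SERVERS_PER_CONTAINER)):
        pct = 100.0 * blast_radius / total_servers
        print("%s: %d servers offline (%.3f%% of the site)" % (label, blast_radius, pct))

The fraction of the site lost to a single container mishap looks small on paper, but it is a failure domain dozens of times larger than any single rack's, which is exactly Svenkeson's point.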


