6 reasons why Microsoft's container-based approach to data centers won't work

Are you listening, Microsoft?


"If you're plugging all of the communications and power into a container at one point, then you've just identified two single points of failure in the system," Svenkeson said.

While Manos conceded the general point, he also argued that a lot "depends on how you architect the infrastructure inside the container."

Outside the container, Microsoft is locating services worldwide — similar to Google's infrastructure — in order to make them redundant in case of failure. In other words, users accessing a hosted Microsoft application, including Hotmail, Dynamics CRM or Windows Live, may connect to any of the company's data centers worldwide.

That means that "even if I lose a whole data center, I've still got nine others," Manos said. "So I'll just be at 90% serving capacity, not down hard."
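As a quick illustration of that arithmetic, here is a minimal sketch assuming ten equally sized, interchangeable data centers; the site count comes from Manos' remark, and the rest is illustrative rather than Microsoft's actual figures.

```python
# Illustrative only: the capacity arithmetic behind "lose one data center,
# keep serving at 90%" -- assumes ten equally sized, interchangeable sites.
TOTAL_DATACENTERS = 10

def remaining_capacity(failed_sites: int, total_sites: int = TOTAL_DATACENTERS) -> float:
    """Return the fraction of serving capacity left after some sites fail."""
    surviving = total_sites - failed_sites
    return surviving / total_sites

print(f"Capacity after losing 1 of {TOTAL_DATACENTERS} sites: "
      f"{remaining_capacity(1):.0%}")   # -> 90%
```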

Microsoft is so confident its plan will work that it's installing diesel generators in Chicago to provide enough electricity to back up only some, not all, of its servers.

Few data centers dare to make that choice, said Jeff Biggs, senior vice president of operations and engineering for data center operator Peak 10 Inc., even though power uptime in North America averages 99.98%.

"That works out to be about 17 seconds a day," said Biggs, who oversees 12 data centers in southeastern states. "The problem is that you don't get to pick those 17 seconds."

3. Containers leave you less, not more, agile.

Once containers are up and running, Microsoft's system administrators may never go inside them again, even to do a simple hardware fix. Microsoft's research shows that 20% to 50% of system outages are caused by human error. So rather than attempt to fix malfunctioning servers, it's better to let them die off.

To keep sysadmins from being tempted to tinker with dying servers, Microsoft plans to keep its Chicago IT staff to a total of 35. With multiple shifts, that works out to fewer than 10 techs on-site at any given time. That's despite the 440,000 or more servers Microsoft envisions scattering across the equivalent of 12 acres of floor space.
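Dividing the article's own numbers shows just how lean that staffing is; the sketch below takes the full 440,000 servers and the upper bound of 10 on-site technicians per shift, both as stated above.

```python
# Rough staffing ratio implied by the article's figures (illustrative):
# 440,000 servers, fewer than 10 technicians on-site per shift.
servers = 440_000
techs_on_shift = 10   # "fewer than 10" -- upper bound used here

print(f"Servers per on-site tech: at least {servers // techs_on_shift:,}")  # >= 44,000
```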

But where Manos sees lean and mean, others envision potential disaster.

"It seems pretty thin to me," said Svenkeson, who has been building data centers for 20 years. "These are complex systems to operate. To watch them remotely and do a good job of it is not cheap."

As more and more servers go bad inside the container, Microsoft plans to simply ship the entire container back to the supplier for a replacement.

The problem, then, becomes defining the tipping point: as more servers die, the opportunity cost of not replacing the container keeps growing.

"Say 25% of the servers have failed inside a container after a year. You may say you don't need that compute capacity — fine," said Dave Ohara, a data center consultant and blogger. "But what's potentially expensive is that 25% of the power committed to that container is doing nothing. Ideally, you want to use that power for something else.

"Electrical power is my scarce resource, not processing power," Ohara concluded.

Biggs agreed.

"Intel is trying to get more and more power efficient with their chips," Biggs said. "And we'll be switching to solid-state drives for servers in a couple of years. That's going to change the power paradigm altogether."

But replacing a container after a year or two, when only a fraction of the servers are actually broken, "doesn't seem to be a real green approach, when diesel costs $3.70 a gallon," Svenkeson said.

Manos acknowledged that power is somewhat "hard-wired" within the data center, making it difficult to redistribute. But he asserted that if a data center is "architected smartly on the backside, you can get around those challenges by optimizing your power components and your overall design." He declined to elaborate.

If containers need to be swapped out earlier than expected, that cost will be borne by the container vendor, not Microsoft, said Manos.

But he hinted that Microsoft is willing to tolerate a fairly large opportunity cost — that is, hold onto containers even if a large percentage of the servers have failed and are taking up valuable power and real estate as a result. "I don't know too many people who are depreciating server gear over 18 months. Rather, I see pressure to move out to a five-to-six-year cycle."
