Walking the talk: Microsoft builds first major container-based data center
Vendor plans to install up to 220 server-filled shipping containers at Chicago facility
Computerworld - Google Inc. and Sun Microsystems Inc. may both claim to have pioneered the "data center in a box" concept, but Microsoft Corp. appears to be the first company to roll out container-based systems in a major way inside one of its data centers.
At a conference in Las Vegas last week, Michael Manos, Microsoft's senior director of data center services, said in a keynote speech that the first floor of a data center being built by the software vendor in the Chicago area will hold up to 220 shipping containers, each preconfigured to support between 1,000 and 2,000 servers, according to various news reports and blog posts.
That means the $500 million, 550,000-square-foot facility in the Chicago suburb of Northlake, Ill., could have as many as 440,000 Windows servers on the first floor alone — or up to 11 times more than the total of 40,000 to 80,000 servers that conventional data centers of the same size typically can hold, according to Manos. He was quoted as saying that Microsoft also plans to install an undisclosed number of servers on the building's second floor, which will have a traditional raised-floor layout.
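The density figures quoted by Manos can be checked with simple arithmetic. A quick sketch (using only the numbers reported in the speech):

```python
# Back-of-the-envelope check of the server-density figures quoted by Manos.
containers = 220
servers_per_container_max = 2000

max_servers = containers * servers_per_container_max  # first-floor maximum
conventional_low, conventional_high = 40_000, 80_000  # typical same-size facility

print(max_servers)                          # 440000
print(max_servers / conventional_low)       # 11.0  -> "up to 11 times"
print(max_servers / conventional_high)      # 5.5   -> lower bound of the multiple
```

The "up to 11 times" multiple in the article corresponds to comparing against the low end (40,000 servers) of a conventional facility's capacity.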
Microsoft's public relations staff didn't immediately respond to a request for comment today about the speech that Manos gave at the Data Center World conference. But James Hamilton, a technical architect on Microsoft's Windows Live Platform Services team, has posted multiple entries about Manos' speech on his public blog.
Microsoft has said that it plans to begin operations at the Northlake data center by the end of the summer. The company is on a data center building spree aimed at meeting the sharp growth in processing demand that its Windows Live and Office Live online services are expected to generate. Other IT facilities are being built in San Antonio, Dublin and rural Quincy, Wash., the last of which would be Microsoft's largest data center at 1.5 million square feet.
Cooled by the oft-chilly winds blowing off Lake Michigan, Chicago was rated in a study conducted last year as the most energy-efficient U.S. city in which to build a data center. But the density of Microsoft's data center in Northlake is requiring the company to construct three electrical substations that will provide a total of 198 megawatts of electricity for powering and cooling systems, according to a story posted by the Data Center Knowledge online news site.
That's enough electricity to power almost 200,000 homes, and Manos told Data Center Knowledge that about 82% of the $500 million bill for the Northlake data center is going toward the facility's mechanical and electrical infrastructure.
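Those two figures are consistent with common rules of thumb, as a short arithmetic sketch shows (the ~1 kW average household draw is an assumption implied by the article's numbers, not a figure Manos stated):

```python
# Rough arithmetic behind the quoted power and cost figures.
total_mw = 198          # substation capacity for the Northlake facility
homes = 200_000         # "almost 200,000 homes" per the article

# Implied average draw per home, in kilowatts.
kw_per_home = total_mw * 1000 / homes
print(round(kw_per_home, 2))        # 0.99 -> roughly 1 kW per household

# Share of the construction budget going to mechanical/electrical plant.
facility_cost = 500_000_000
mech_elec_cost = facility_cost * 0.82
print(int(mech_elec_cost))          # 410000000 -> $410 million
```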
Long used by the U.S. military, containers filled with preconfigured, ready-to-run servers are being touted as a quicker, more modular way to expand data centers on the fly than installing racks of servers one by one. Google and Sun have both filed patent claims on server-filled containers, although the former isn't thought to be actively deploying them. Besides Sun, other vendors of container-based setups include IBM, Dell Inc. and Rackable Systems Inc.
Despite its huge size and 24-by-7 operations, Microsoft's Northlake data center won't provide much of a lift to the IT job market in the Chicago area. Manos has said that the new facility would employ only about 30 people, including systems administrators as well as building security and janitorial staffers. In contrast, Google has said that a $600 million data center it is building in Council Bluffs, Iowa, will have about 200 employees when it opens next year.
Microsoft's theory, according to a 2007 presentation by Hamilton (download Word document), is that a smaller staff will actually boost the data center's reliability. Hamilton claimed that between 20% and 50% of system outages are caused by "human administrative error," and argued that letting malfunctioning hardware die off is a wiser strategy for a redundantly networked data center than trying to fix the systems and thus potentially risking a larger failure.

"As parts fail, surviving nodes continue to support the load," Hamilton wrote. "In this modified model, the constituent components are never serviced and the entire module just slowly degrades over time as more and more systems suffer nonrecoverable hardware errors." He added that even if 50 of the servers in a 1,000-system module suffered fatal hardware failures, the module would still be "operating with 95% of its original design capacity."
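Hamilton's "never service, let it degrade" model can be illustrated with a minimal sketch. This is a hypothetical illustration of the capacity claim, not Microsoft's actual tooling, and it assumes each server contributes equally and every failure is permanent:

```python
# Capacity of a sealed container module whose failed nodes are never replaced.
def remaining_capacity(total_nodes: int, failed_nodes: int) -> float:
    """Fraction of original design capacity still online, assuming
    equal per-node contribution and nonrecoverable failures."""
    if failed_nodes > total_nodes:
        raise ValueError("more failures than nodes")
    return (total_nodes - failed_nodes) / total_nodes

# Hamilton's example: 50 fatal failures in a 1,000-server module.
print(remaining_capacity(1000, 50))  # 0.95 -> "95% of its original design capacity"
```

The design choice here is to trade hardware utilization for operational simplicity: with enough redundancy across modules, no technician ever opens a container, which is what lets a 550,000-square-foot facility run with a staff of about 30.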