Call it the Data Center Land Grab of 2007. Big-name companies like Microsoft, Google and HSBC have already ponied up hundreds of millions of dollars this year to stake their claim to acres of land across the country, their first step toward building state-of-the-art, next-generation data centers.
These behemoths are not alone.
"Data centers are the fastest growing sector of site selection in the technology industry. I've never seen this much growth," says John Boyd, president of Princeton, N.J.-based The Boyd Co. and a 32-year veteran of the location scouting market.
Boyd says two factors are primarily responsible: an onslaught of compliance mandates requiring better handling and storage of data, and growing government pressure to make data centers more energy efficient.
Microsoft last month announced it would spend $550 million for a 44-acre lot to build a 400,000-square-foot, two-building data center in San Antonio. Meanwhile, Google announced it would build a $600 million facility in Lenoir, N.C., and a $750 million data center in Goose Creek, S.C. For its part, HSBC North America also has big data center plans -- a $166 million project in Buffalo.
Much of the rest of IT is set to follow suit. In a 2005 AFCOM study of the organization's 3,000 data center members, more than 60% of the respondents said they plan to expand the physical footprint of their data centers within 10 years.
Rakesh Kumar, an analyst at Gartner Inc., says he's not surprised by this flurry of activity. "Many large enterprises either are running out of space or they have space that can't accommodate the needs of the newer technology. That means today's data centers are functionally obsolete," he says.
He points to the power and cooling demands of data center darlings like blade servers. "They require more energy and cooling than older data centers can accommodate," he says. Blade racks are designed to hold up to a hundred individual blade servers, but the limits of older data center infrastructure, such as electrical and HVAC systems, often force IT crews to dramatically reduce that number. "Instead, they are putting only 20 blades in a rack," Kumar says.
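Kumar's 20-blade figure is easy to sanity-check with back-of-envelope power math. The per-blade wattage and per-rack power budget below are illustrative assumptions, not figures from the article:

```python
# Rough rack power math (all figures are illustrative assumptions).
BLADE_WATTS = 300          # assumed draw of one mid-2000s blade server
OLD_RACK_BUDGET_W = 6_000  # assumed per-rack power budget in an older facility

blades_per_rack = OLD_RACK_BUDGET_W // BLADE_WATTS
print(blades_per_rack)     # 20 -- roughly the density Kumar describes

full_rack_w = 100 * BLADE_WATTS
print(full_rack_w)         # 30000 W: a fully loaded rack needs about five
                           # times the old budget, and the cooling plant
                           # must reject all of that as heat as well
```

Under these assumptions, packing racks full would quintuple per-rack power and heat load, which is exactly the gap older electrical and HVAC plants can't close.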
Throwing out the old
A big reason for the land grab is the enormous expense of trying to retrofit today's data centers with more efficient facilities infrastructure.
"A lot of organizations are going through cost-driven consolidation to reduce the number of data centers they might have. The one big data center they end up with has to accommodate all the storage, servers, etc.," Kumar says.
But most data centers can't accommodate that type of setup, and it's often much more cost-effective to move than to try to retrofit. "You'll need a high level of voltage coming in to provide a high amount of energy to the rack. You may need to rip out air conditioners and put in chilled liquid cooling. To do this once you've got everything going is incredibly expensive," he says.
It's also highly disruptive to the workflow. "To bring up floors and lay piping while operations are running -- it would take a brave CIO to justify that project," he says.
Robert McFarlane, a principal at Shen Milsom and Wilke, a technology consulting firm in New York, agrees that most data center projects should start from scratch. "The data center has been designed and built using the same techniques and practices that have been used for the past 20 to 25 years. We didn't know any better and there wasn't anything else to use. Now we've got big problems such as tremendous heat and power densities. Just retrofitting these data centers doesn't take advantage of what's available today to deal with these issues," he says.
Location, location, location
Boyd says there are several important factors to consider when relocating your data center: geographic distance from headquarters, proximity to disaster zones, cost efficiencies such as price of power, and the availability of a skilled labor force.
In a recent study, The Boyd Co. found that Sioux Falls, S.D., ranked best among data center sites, with $16.1 million in total annual operating costs. The worst was New York City, with $22.5 million in total annual operating costs.
Boyd says data center sites are often "inordinately large" because of security. "Our clients want a buffer zone around their bricks and mortar," he says. This means they have to scout areas with low-cost land.
Proximity to disaster zones is a critical factor as well. "These data centers have to be up and running 24-by-7, 365 days a year without fail. You don't want them in hurricane or earthquake areas. Nor do you want them anywhere bad weather puts the population at a standstill, as ice storms do in Atlanta," Boyd says.
Often companies choose a site in the country's "midsection" so that the data center is easily accessible from either coast, he says.
It is also important to site your data center near a viable labor pool. "Data centers require highly skilled individuals," he says. He points out that South Dakota has a steady stream of skilled IT graduates coming out of Dakota State University in Madison, which offers a major focused on information assurance for data centers.
Brian Pickett, data center manager at Pasadena, Calif.-based IndyMac Bank, says site selection was a big factor in the company's decision to build a data center in Arizona. The facility, begun in 2004 and completed in 2006, was sited with business continuity, disaster recovery and the cost of energy in mind. "All three of those variables came into play when we moved our facility outside of California," he says.
Pickett says the Arizona locale, which is now the company's primary data center, is far enough from California to be outside the earthquake zone, but close enough that employees can easily get back and forth in the event of an outage or disaster. Power prices are also lower there than in California, he says.
Everything in modules
Surprises for the IndyMac team included higher voltage requirements and greater demands on air conditioning. "We didn't plan for technologies like blade servers when we started this project three years ago," Pickett says.
To help accommodate these changing needs, the team did not do a full buildout. Instead, IndyMac followed the current trend of modular buildouts -- where, even though companies buy a lot of real estate, the technology is rolled out in stages. In fact, the Arizona site is not expected to reach full capacity until 2010 -- four years after its completion.
The idea is to build out one section at a time. That way, if cabling or power needs change, companies need to revamp only some of the data center and not the entire thing.
Another benefit of modularization is savings on power and cooling across the whole facility. "Power, cooling, racks, cabling and all the additional equipment are on an as-needed basis," Pickett says.
Gartner's Kumar says modularization, which requires companies to be OK with a certain amount of unused space, allows data centers greater longevity because they can more easily account for significant technology changes. "Say they have 100,000 square feet of space. They only have to build for 20,000 square feet at a time," he says.
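Kumar's numbers sketch out as a simple phased plan. The square footage matches his example; the five-phase schedule itself is an illustrative assumption:

```python
# Phased (modular) buildout using Kumar's example footage.
# The phase size and schedule are illustrative assumptions.
TOTAL_SQFT = 100_000
PHASE_SQFT = 20_000

fitted = 0
phases = []
while fitted < TOTAL_SQFT:
    fitted += PHASE_SQFT
    phases.append(fitted)
    print(f"Phase {len(phases)}: {fitted:,} of {TOTAL_SQFT:,} sq ft fitted out")

# Power, cooling, racks and cabling are provisioned per phase, so a
# mid-life technology shift forces rework only in phases not yet built.
```

The payoff is in the comments: only the shell is sized up front, while the expensive infrastructure is committed one phase at a time.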
The greatest advantage of modular data centers is that companies can prioritize operations by criticality, creating hot and cold areas. For instance, one module might house business-critical applications with direct links to backup and failover operations. Another might be designated less important, with a longer time to recovery. Designs don't have to be homogeneous, Kumar says.
Befriend your facilities pros
Though efficiently powering and cooling a data center may be a new concern for ITers, McFarlane says, facilities teams have been dealing with it for years. "IT managers aren't educated in the ways of energy consumption because they haven't come out of mechanical and engineering backgrounds," he says.
He recommends that IT teams form close relationships with facilities crews to learn about the demands of air conditioning, power and other infrastructure elements.
McFarlane teaches a class at Marist College in Poughkeepsie, N.Y., that helps IT managers understand facilities requirements. "Even if they are not responsible for facilities they have to understand it," he says. For instance, he says IT folks are used to doing things overnight or within a day -- and that just doesn't happen in the facilities world. "Sometimes the part you need isn't available for weeks. You have to plan things out," he says.
Gartner's Kumar says he sees this communication already happening. "The traditional areas of demarcation are dissolving. Operationally, everyone is getting smarter. In the end, the CIO's best friend is going to be the facilities manager," he says.
Sidebar: Seeing "green"
Another trend in data centers is to make everything as energy efficient as possible. Michael Manos, senior director of data center services at Microsoft, says that in his company's new data centers, "everything from the applications to the hardware to the facilities is taking into account how much energy is consumed. It's really about optimizing efficiency," he says.
Brian Pickett, data center manager at IndyMac Bank, says his company is also worried about the effects of the data center on the environment. "We're starting to look at energy efficiency based on the types of servers and architectures we need. We're starting to design our operations to be green," he says.
This turn toward the environment comes none too soon. Robert McFarlane, a principal at Shen Milsom and Wilke, says the government has been studying data center energy consumption and could hand down federal conservation mandates relatively soon. In fact, the Environmental Protection Agency has already extended its Energy Star certification to computers, and manufacturers such as Hewlett-Packard are introducing wares to fit the bill. Private initiatives such as The Green Grid are also taking shape.
"They're trying to figure out what incentives would convince businesses to use energy-saving technologies," McFarlane says. Today, some power companies offer rebates for EnergyStar-certified product use. "But the paperwork to prove your usage can be so vexatious," he says.
But he says the breaking point is coming. "Going green has to take hold. Businesses are running up against a point where IT can't grow to support the needs of the data center," McFarlane says. In fact, that day may already be here: Partners Healthcare System Inc. is expecting to save over $2 million with software to help shut down client computers when they're not in use.
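The scale of savings from shutting down idle client PCs is easy to estimate. Every input below (fleet size, wattage, hours, utility rate) is an assumption for illustration, not Partners Healthcare's actual data:

```python
# Illustrative estimate of savings from powering down idle client PCs.
# None of these inputs come from the article; all are assumptions.
NUM_PCS = 20_000        # assumed fleet size
WATTS_AVOIDED = 100     # assumed draw eliminated per powered-down PC
HOURS_OFF = 5_000       # assumed off-hours per PC per year (nights, weekends)
DOLLARS_PER_KWH = 0.10  # assumed utility rate

kwh_saved = NUM_PCS * WATTS_AVOIDED * HOURS_OFF / 1_000
annual_savings = kwh_saved * DOLLARS_PER_KWH
print(f"${annual_savings:,.0f}")  # $1,000,000 per year under these inputs
```

Even with conservative inputs, the estimate lands in the same multimillion-dollar territory as the savings the article cites.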
Gittlen is a freelance technology editor in the greater Boston area. She can be reached at email@example.com.