Sal Azzaro, director of facilities for Time Warner Cable Inc., is trying to cram additional power into prime real estate at the company's 22 facilities in New York.
"Its gone wild," says Azzaro "Where we had 20-amp circuits before, we now have 60-amp circuits." And, he says, "there is a much greater need now for a higher level of redundancy and a higher level of fail-safe than ever before."
If Time Warner Cable's network loses power, not only do televisions go black, but businesses can't operate and customers can't communicate over the company's voice-over-IP and broadband connections.
When it comes to the power crunch, Time Warner Cable is in good company. In February, Jonathan Koomey, a staff scientist at Lawrence Berkeley National Laboratory and a consulting professor at Stanford University, published a study showing that in 2005, organizations worldwide spent $7.2 billion to provide their servers and associated cooling and auxiliary equipment with 120 billion kilowatt-hours of electricity. This was double the power used in 2001.
According to Koomey, the growth is occurring among volume servers (those that cost less than $25,000 per unit), with the aggregate power consumption of midrange ($25,000 to $500,000 per unit) and high-end (over $500,000) servers remaining relatively constant.
One way Time Warner Cable is working on this problem is by installing more modular power gear that scales as its needs grow. Oversized power supplies, power distribution units (PDUs) and uninterruptible power supplies (UPSs) tie up capital funds, are inefficient and generate excess heat. Time Warner Cable has started using Liebert Corp.'s new NX modular UPS system, which scales in 20-kilowatt increments, to replace some of its older units.
"The question was how to go forward and rebuild your infrastructures when you have a limited amount of space," Azzaro says.
With the NX units, instead of setting up two large UPSs, he set up five modules -- three live and the other two on hot standby. That way, any two of the five modules could fail or be shut down for service and the system would still operate at 100% load.
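The arithmetic behind that arrangement is straightforward N+2 sizing. The Python sketch below works through it; the 20-kilowatt module size comes from the Liebert NX system mentioned above, but the 60-kilowatt load is only an assumed example, not a Time Warner Cable figure.

```python
# Rough sketch of the N+2 sizing logic described above.
# Module size is from the article (20 kW Liebert NX increments);
# the 60 kW load is an assumed example, not a Time Warner Cable number.

MODULE_KW = 20          # capacity of one UPS module
LOAD_KW = 60            # assumed IT load to protect
REDUNDANT_MODULES = 2   # modules allowed to fail or be taken down for service

# Modules needed just to carry the load, then add the redundant spares.
needed_for_load = -(-LOAD_KW // MODULE_KW)   # ceiling division -> 3
total_modules = needed_for_load + REDUNDANT_MODULES

# Verify that losing any two modules still leaves full capacity.
surviving_capacity = (total_modules - REDUNDANT_MODULES) * MODULE_KW
print(total_modules, surviving_capacity >= LOAD_KW)   # 5 True
```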
Other approaches
Some users are trying innovative approaches. One is a technique called combined heat and power, or cogeneration, which pairs a generator with a specialized chiller that turns the generator's waste heat into a source of chilled water. (See related story.)
Another new approach is to build data centers that operate off DC rather than AC power. In a typical data center, the UPSs convert the AC power coming from the utility's main power supply into DC power, then back into AC again. Then the server power supplies again convert the power to DC for use within the server.
Each time the electricity is switched between AC and DC, some of that power is converted into heat. Converting the AC power to DC power just once, as it comes into the data center, eliminates that waste. Rackable Systems Inc. in Fremont, Calif., has a rack-mounted power supply that converts 220-volt AC power to -48-volt DC power in the cabinet, then distributes the power via a bus bar to the servers.
On a larger scale, last summer the Lawrence Berkeley lab set up an experimental data center, hosted by Sun Microsystems Inc., that converted incoming 480-volt AC power to 380-volt DC power for distribution to the racks, eliminating the use of PDUs altogether. Overall, the test system used 10% to 20% less power than a comparable AC data center.
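A rough comparison shows where those savings come from. The per-stage efficiencies in the sketch below are illustrative assumptions, not figures from the Berkeley/Sun test; the point is simply that every conversion stage skims off a few percent.

```python
# Back-of-the-envelope comparison of conversion chains. The per-stage
# efficiencies are illustrative assumptions, not measurements from the
# Berkeley/Sun test; they show why fewer AC/DC hops waste less power.

def chain_efficiency(stages):
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Conventional path: UPS (AC->DC->AC), then PDU, then server power supply (AC->DC)
ac_chain = chain_efficiency([0.92, 0.97, 0.90])

# DC path: one front-end rectifier, then DC distribution straight to the racks
dc_chain = chain_efficiency([0.96])

print(f"AC chain delivers {ac_chain:.0%} of incoming power")
print(f"DC chain delivers {dc_chain:.0%} of incoming power")
print(f"Input power saved by the DC chain: {1 - ac_chain / dc_chain:.0%}")
```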
For Rick Simpson, president of Belize Communication and Security Ltd. in Belmopan, Belize, power management means using wind and solar energy.
Simpson's company supports wireless data and communications relays in the Central American wilderness for customers including the U.K. Ministry of Defence and the U.S. embassy in Belize. He builds in enough battery power -- 10,000 amp hours -- to run for two weeks before even firing up the generators at the admittedly small facility.
"We have enough power redundancy at hand to make sure that nothing goes down -- ever," Simpson says. So even though the country was hit by Category 4 hurricanes in 2000 and 2001, "we haven't been down in 15 years," he says.
Belize Communication and Security's equipment all runs directly off the batteries and UPSs from Falcon Electric Inc. in Irwindale, Calif. The electric utility's power is used only to charge the batteries.
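A quick back-of-the-envelope check shows how 10,000 amp-hours stretches to roughly two weeks. The capacity figure is Simpson's; the bus voltage, usable depth of discharge and average load in the sketch below are assumptions chosen purely for illustration.

```python
# Quick check of the two-week battery claim. The 10,000 amp-hour capacity is
# from the article; the 48 V bus voltage, usable depth of discharge and
# average site load are assumed values for illustration.

CAPACITY_AH = 10_000      # battery bank capacity (article)
BUS_VOLTAGE = 48          # assumed DC bus voltage
USABLE_FRACTION = 0.8     # assumed usable depth of discharge
AVG_LOAD_W = 1_000        # assumed average site load in watts

usable_wh = CAPACITY_AH * BUS_VOLTAGE * USABLE_FRACTION   # 384,000 Wh
runtime_days = usable_wh / AVG_LOAD_W / 24
print(f"{runtime_days:.1f} days before the generators start")   # ~16 days
```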
Scaling up
While there is a lot of talk lately about building green data centers, and many hardware vendors are touting the efficiency of their products, the primary concern is still just ensuring you have a reliable source of adequate power.
Even though each core on a multicore processor uses less power than it would if it were on its own motherboard, a rack filled with quad-core blades consumes more power than a rack of single-core blades, according to Intel Corp.
"It used to be you would have one power cord coming into the cabinet, then there were dual power cords," says Bob Sullivan, senior consultant at The Uptime Institute in Santa Fe, N.M. "Now with over 10 kilowatts being dissipated in a cabinet, it is not unusual to have four power cords, two A's and two B's."
With electricity consumption rising, data centers are running out of power before they run out of raised floor space. A Gartner Inc. survey last year showed that half of data centers will not have sufficient power for expansion by 2008.
"Power is becoming more of a concern," says Dan Agronow, chief technology officer at The Weather Channel Interactive in Atlanta. "We could put way more servers physically in a cabinet than we have power for those servers."
The real cost, however, is not just in the power being used but in the costs of the infrastructure equipment -- generators, UPSs, PDUs, cabling and cooling systems. For the highest level of redundancy and reliability -- a Tier 4 data center -- the Uptime Institute says that some $22,000 is spent on power and cooling infrastructure for every kilowatt used for processing.
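To put that figure in perspective, consider a hypothetical 500-kilowatt IT load. The load and electricity price in the sketch below are assumed example values, not Uptime Institute data; the comparison simply shows how the one-time infrastructure bill dwarfs the annual energy bill.

```python
# What the $22,000-per-kilowatt figure implies for a modest Tier 4 room.
# The 500 kW IT load and electricity price are assumed example values.

INFRA_COST_PER_KW = 22_000    # Uptime Institute figure cited above
IT_LOAD_KW = 500              # assumed IT (processing) load
PRICE_PER_KWH = 0.10          # assumed electricity price, $/kWh

infrastructure_cost = IT_LOAD_KW * INFRA_COST_PER_KW
annual_energy_cost = IT_LOAD_KW * 24 * 365 * PRICE_PER_KWH

print(f"Infrastructure: ${infrastructure_cost:,.0f}")            # $11,000,000
print(f"Annual power bill for the IT load: ${annual_energy_cost:,.0f}")
```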
Cutting costs and ensuring that there is enough power requires a close look at each component individually; the next step is to figure out how each component affects the data center as a whole. Steve Yellen, vice president of product marketing strategies at Aperture Technologies Inc., a data center software firm in Stamford, Conn., says that managers need to consider four separate elements that contribute to overall data center efficiency -- the chip, the server, the rack and the data center as a whole. Savings in any one of these components yields savings at each level above it.
"The big message is that people have to get away from thinking about pieces of the system," Stanford University's Koomey says. "When you start thinking about the whole system, then spending that $20 extra on a more-efficient power supply will save you money in the aggregate."
Going modular
There are strategies for cutting power in each area Yellen outlined above. For example, multicore processors with lower clock speeds reduce power at the processor level. And server virtualization, better fans and high-efficiency power supplies -- such as those certified by the 80 Plus program -- cut power utilization at the server level.
Five years ago, the average power supply was operating at 60% to 70% efficiency, says Kent Dunn, partnerships director at PC power-management firm Verdiem Corp. in Seattle, Wash., and program manager for 80 Plus. He says that each 80 Plus power supply will save data center operators about 130 to 140 kilowatt-hours of power per year.
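At fleet scale, that per-unit figure adds up. The sketch below assumes 1,000 servers, one 80 Plus supply apiece and 10 cents per kilowatt-hour; everything except Dunn's 130-to-140 kWh figure is an assumed example value.

```python
# Rough fleet-level savings from the per-unit figure quoted above. The
# 130-140 kWh/year number is Dunn's; the server count, one-supply-per-server
# assumption and electricity price are assumed example values.

SERVERS = 1_000                 # assumed number of servers
SAVINGS_KWH_PER_PSU = 135       # midpoint of the quoted 130-140 kWh/year
PRICE_PER_KWH = 0.10            # assumed electricity price, $/kWh

annual_kwh_saved = SERVERS * SAVINGS_KWH_PER_PSU
annual_dollars_saved = annual_kwh_saved * PRICE_PER_KWH
print(f"{annual_kwh_saved:,} kWh and ${annual_dollars_saved:,.0f} saved per year")
```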
Rack-mounted cooling and power supplies such as Liebert's XD CoolFrame and American Power Conversion Corp.'s InfraStruXure cut waste at the rack level. And at the data center level, there are more efficient ways of distributing air flow, using outside air or liquid cooling, and doing computational fluid dynamics modeling of the data center for optimum placement of servers and air ducts.
"We've deployed a strategy within our facility that has hot and cold aisles, so the cold air is where it needs to be and we are not wasting it," says Fred Duball, director of the service management organization for the Virginia state government's IT agency, which just opened a 192,000-square-foot data center in July and will be ramping up the facility over the next year or so. "We are also using automation to control components and keep lights off in areas that don't need lights on."
Finding a fit
There is no single answer that meets the needs of every data center operator.
When Elbert Shaw, a project manager at Science Applications International Corp. in San Diego, consolidated U.S. Army IT operations in Europe from several dozen locations into four data centers, he had to come up with a unique solution for each location. At a new facility, he was able to put in 48-inch floors and run the power and cooling underneath. But one data center being renovated had room for only a 12-inch floor and two feet of space above the ceiling. So instead of bundling the cables, which could have eaten up eight of those 12 inches and blocked most of the airflow, he got permission to unbundle and flatten out the cables. In other instances he used 2-inch underfloor channels rather than the typical 4-inch variety, and at one location he turned to overhead cabling.
"Little tricks that are OK in the 48-inch floor cause problems with the 12-inch floor when you renovate a site," says Shaw.
"These facilities are unique, and each has its own little quirks," Koomey says.
Power Management Tips
Various experts suggest the following ways of getting all you can from your existing power setup:
- Don't oversize. Adopt a modular strategy for power and cooling that grows with your needs instead of buying a monolithic system that will meet your needs years down the road.
- Plan for expansion. Although you don't want to buy the extra equipment yet, install conduits that are large enough to accommodate additional cables to meet future power needs.
- Look at each component. Power-efficient CPUs, power supplies and fans reduce the amount of electricity used by a server. But be sure to look at their impact on other components. For example, quad-core chips use less power than four single-core chips but may require additional memory.
- Widen racks. Use wider racks and run the cables to the side, rather than down the back, where they block the airflow. Air flows in the front of a server, through the box and out the back; there are no inlet or outlet vents on the sides. As with a PC, you could put a piece of plywood along the side of a server without affecting airflow through the machine, but put it along the back and the machine will overheat.
- Install a UPS bypass. This is a power cable that routes around the UPS rather than through it, so that when a UPS device is taken offline for maintenance, electricity still has a path to the equipment.
Robb is a Computerworld contributing writer.