As vendors continue to pack more servers into a smaller footprint, keeping a lid on power requirements -- and keeping server racks cool -- has become a huge challenge. And the lowly AC power supply remains the toughest part of the problem to solve.
A typical power supply, which converts AC power into the various DC voltages required by individual server components, has an efficiency range of just 65% to 85%, vendors say. Just one 1-kilowatt power supply may generate 300 watts of waste heat, and today's blade servers can consume more than 14 kilowatts per rack.
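The arithmetic behind those figures can be sketched quickly. The 70% efficiency used below is an assumed midpoint of the quoted 65%-to-85% range, not a number from any vendor:

```python
# Waste heat from an AC power supply at a given efficiency,
# where efficiency = DC watts delivered / AC watts drawn.

def waste_heat_watts(input_watts: float, efficiency: float) -> float:
    """Watts dissipated as heat for a given AC input and efficiency."""
    return input_watts * (1.0 - efficiency)

# A 1-kilowatt draw at an assumed 70% efficiency (midpoint of the
# 65%-85% range) turns roughly 300 watts into heat.
print(round(waste_heat_watts(1000, 0.70)))  # 300
```

At 14 kilowatts per rack, the same arithmetic puts rack-level waste heat in the multi-kilowatt range, all of which the cooling plant must remove.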
"That's bad," says Scott Tease, product marketing manager for eServer BladeCenter at IBM. "One, I paid for that electricity, and two, I've released the heat into the environment and I have to pay to air-condition it."
To make matters worse, AC power-supply efficiency falls as utilization falls. In servers with redundant power supplies, where the load is shared between them, best-case utilization is below 50%. As a result, power supplies in most servers operate at the low end of the efficiency range, says Ken Baker, data center infrastructure technologist at Hewlett-Packard Co.
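A toy model illustrates the effect. The efficiency curve below is hypothetical (real curves vary by supply), but it has the shape Baker describes: efficiency falls off at light load, and splitting the load across two redundant supplies pushes each one down that curve:

```python
# Hypothetical efficiency-vs-load curve for an AC power supply.
# Real curves differ by model; only the shape (poor at light load,
# peaking near full load) matters for this illustration.
def supply_efficiency(load_fraction: float) -> float:
    # Rises from ~0.65 at 10% load toward ~0.85 at full load.
    return 0.85 - 0.25 * (1.0 - load_fraction) ** 2

rack_load_watts = 800.0

# One 1-kilowatt supply carrying the whole load runs at 80%.
single = supply_efficiency(rack_load_watts / 1000.0)

# A redundant pair sharing the load runs each supply at 40%.
shared = supply_efficiency(rack_load_watts / 2 / 1000.0)

print(f"single supply: {single:.2f}, redundant pair: {shared:.2f}")
```

Under this assumed curve, the redundant pair gives up several points of efficiency purely because each supply is lightly loaded.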
Some data center managers have responded by using DC-based power distribution systems, eliminating the need for AC power supplies for server racks. IBM and HP both offer servers that can accept bulk DC power from a centralized, telecommunications-grade -48-volt DC power distribution unit (PDU) and then step it down to the voltages required at the server level.
Rackable Systems Inc.'s products support both bulk power and an option that moves the AC/DC converter away from individual servers to the top of each rack, where heat can be vented into the air-handling system.
Milpitas, Calif.-based Rackable claims that its DC-powered servers reduce heat by up to 30%. HP makes a more modest claim of a 15% reduction, which can still add up across many racks of servers, Baker says.
Data393 Holdings LLC has made the leap to DC-powered servers. The company, which operates a colocation center in Englewood, Colo., uses a DC power distribution system inherited from a previous tenant to power 140 servers from Rackable. Data393's DC power plant includes rectifiers that convert incoming AC power to DC, charging a bank of uninterruptible power supply batteries and feeding its servers and network equipment.
Chris Leebelt, senior vice president at Data393, says the IT services provider chose DC-powered equipment because it needed to make the most of its available square footage and its ability to cool that space. While the power distribution system must still convert incoming power to DC, that conversion occurs outside the data center.
DC-powered systems from Rackable cost about the same as traditional AC-powered servers while allowing more servers in each rack, according to Leebelt.
DC rectifiers also have a mean time between failures of 7 million hours -- 70 times longer than AC power supplies, says Geoffrey Noer, senior director of product marketing at Rackable.
"Some of our largest customers host almost exclusively in DC-related environments," says Baker. But he also points out that most are telecommunications companies and hosted service providers. "The number is very small in corporate data centers," he says.
So why don't more enterprise data centers use DC PDUs?
Tease claims that the relationship between utilization and efficiency is overstated and that IBM's BladeCenter power supply designs are 90% efficient. The converters required to step down bulk DC power, by comparison, are 93% efficient -- a narrow margin, and one that shrinks once the facility-level AC-to-DC rectification is counted. "Unless the infrastructure is already in place, it just doesn't make sense," he says.
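The end-to-end comparison behind that argument can be made explicit. The 92% rectifier efficiency below is an assumed figure for illustration (the article gives only the 90% and 93% numbers); with it, the chained DC path comes out slightly behind a good AC supply:

```python
# Chained conversion efficiency: each stage multiplies.
def chain_efficiency(*stages: float) -> float:
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# AC path: one server power supply at 90% (IBM's quoted figure).
ac_path = chain_efficiency(0.90)

# DC path: facility rectifier (assumed 92%) feeding the 93%-efficient
# step-down converter at the server.
dc_path = chain_efficiency(0.92, 0.93)

print(f"AC path: {ac_path:.3f}, DC path: {dc_path:.3f}")
```

Under these assumed numbers the DC path nets out near 86%, which is why the case for DC rests on factors beyond raw conversion efficiency, such as where the waste heat is released.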
Baker says inertia and familiarity keep data centers on AC power, and the standards for AC are well established and understood. "It takes specialized talent to manage [DC] correctly," he says.
And because low-voltage DC distribution carries far higher current for the same power, the distribution system requires larger conductors. Neil Rasmussen, chief technical officer at American Power Conversion Corp., a UPS and data center rack system manufacturer in West Kingston, R.I., says that adds to infrastructure costs. "DC wiring at these power levels is too expensive and complex, requiring specialized contractors and design," he says.
But Baker and Rackable's Noer say the costs overall are about the same.
Baker says the adoption of DC as an alternative power source could become a trend, particularly in new data centers where such infrastructure choices are being made. "We have customers that have chosen native DC from the ground up," he says. But Baker adds that the lion's share of enterprise data centers will continue to center on AC power.
Meanwhile, IBM is focusing its power-saving efforts on areas such as the CPU, which accounts for 25% of the power budget in a BladeCenter, Tease says. IBM offers a 2.8-GHz Xeon DP processor that adds $200 to the cost of a dual-processor blade but cuts power from 103 watts to 55 watts.
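Whether that $200 premium pays for itself depends on electricity price and duty cycle; the $0.10-per-kilowatt-hour rate and 24x7 operation below are assumptions for illustration, while the wattage and premium come from the figures above:

```python
# Rough payback estimate for a lower-power CPU option.
# Assumptions: $0.10/kWh electricity and 24x7 operation.
watts_saved_per_cpu = 103 - 55    # per processor, per the figures above
cpus_per_blade = 2
premium_usd = 200.0               # added cost per dual-processor blade

kwh_per_year = watts_saved_per_cpu * cpus_per_blade * 24 * 365 / 1000.0
savings_per_year = kwh_per_year * 0.10   # assumed $0.10/kWh

print(f"{kwh_per_year:.0f} kWh/yr saved, payback in "
      f"{premium_usd / savings_per_year:.1f} years")
```

Under these assumptions the premium pays back in roughly two and a half years on electricity alone, before counting the cooling savings Tease alludes to.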
Noer claims that ultimately, the combination of low-voltage parts and DC power will have the biggest payoff: It can cut power requirements by half.
Rasmussen isn't convinced. "If you need to cut the load 15%, just pull out 15% of the servers and put them somewhere else," he says.
But for Data393, floor space is limited. DC power has enabled Leebelt to fill server racks that would otherwise run too hot for his air-handling systems. "[Vendors] don't tell you that you can't load a full rack of blades because the heat coming off the racks can be very significant," he says.
DC power by itself can't solve the problem of increasing power density in server racks. But the option has provided enough relief to convince Leebelt to migrate Data393's remaining 600 servers. "We're doing consolidation work to get out of AC hardware," he says.