Edgy About Blades

Despite initial enthusiasm, some users are still hesitant about widespread deployment of blade servers. Here's why.

What's not to like about blade servers? The technology is sleek and sexy. With fewer cables to interconnect, blades, which share a common backplane, are easier to manage than other types of servers. They take up less floor space, and vendors are advertising them heavily as an alternative to stand-alone and rack-mounted servers. Yet even as analysts predict growth, some IT organizations are hesitant about authorizing broad deployments. IT professionals cite concerns about heating and power, vendors' proprietary designs, the relative immaturity of the technology and premium prices.

While market research firm IDC predicts strong growth for blades over the next few years, Gartner Inc. projects more modest gains, citing user concerns. "By 2009, only approximately 16% of servers installed worldwide will be in the blade format," says Gartner analyst Jane Wright.

Cooling and power top the list of concerns for Capgemini clients, says John Parkinson, chief technologist at the Chicago-based IT consultancy. He says that in some cases dense blade deployments have required "major upgrades to power... and air handling."

Dealing with power and cooling issues can add significantly to the total cost of ownership, says Umesh Jagannatha, senior manager of technical services at Embarcadero Systems Corp. in Alameda, Calif. Embarcadero uses blades for a port security application but has passed on other uses for now.

Derek Larke is currently testing an IBM BladeCenter but has all but decided to go with 1U (1.75 in. tall) servers. "This thing is cranking out heat like there's no tomorrow. We noticed that the server room temperature has gone up," says Larke, manager of information services at Fun Sun Vacations Ltd. in Edmonton, Alberta.

Tim Dougherty, director of eServer BladeCenter marketing at IBM, says the problem isn't blade-specific but reflects an overall trend toward increasing processor density in data centers. IBM's BladeCenter design won't overheat, he says. But blade-filled server racks can create hot spots in the data center that air conditioning units can't handle, so administrators commonly leave racks partially empty in an effort to distribute the heat more evenly.
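
A rough back-of-the-envelope calculation shows why. The Python sketch below uses entirely hypothetical wattage and cooling figures (only the 14-blade chassis capacity comes from this article) to illustrate how a fully loaded rack can exceed the cooling available at one spot on the floor while a half-loaded rack stays within it.

    # Rough, illustrative heat-load estimate for one rack of blades.
    # All figures are hypothetical assumptions, not vendor specs.
    WATTS_PER_BLADE = 300        # assumed draw per blade
    BLADES_PER_CHASSIS = 14      # BladeCenter capacity cited in the article
    CHASSIS_PER_RACK = 6         # assumed: six 7U chassis fill a 42U rack
    SPOT_COOLING_WATTS = 15_000  # assumed cooling available at this rack

    def rack_heat_load(fill_fraction: float) -> float:
        """Total heat output (watts) at a given chassis fill level."""
        blades = BLADES_PER_CHASSIS * CHASSIS_PER_RACK * fill_fraction
        return blades * WATTS_PER_BLADE

    for fill in (1.0, 0.5):
        load = rack_heat_load(fill)
        verdict = "within" if load <= SPOT_COOLING_WATTS else "exceeds"
        print(f"{fill:.0%} full: {load:,.0f} W ({verdict} assumed capacity)")

With these made-up numbers, a full rack throws off 25,200 W against 15,000 W of spot cooling, while a half-full rack stays under the limit, which is the logic behind leaving racks partially empty.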

[Illustration: Wendy Wahman]

Robert Kreitzer, vice president of the Intel server engineering team at KeyCorp in Cleveland, says he watched IBM representatives demonstrate how they could cool a fully loaded BladeCenter rack, but the real-world advice he received was different. "I asked them full on, 'You don't really recommend filling the rack, do you?'" says Kreitzer, recalling that the representatives acknowledged that they didn't recommend it.

Wright says she regularly fielded calls about overheated blade racks a year ago. That problem has disappeared because vendors no longer fill racks with blades, she says. IBM counters that many of its customers do, in fact, operate with fully loaded racks.

Hesitant to Buy

Wright says blade server technology will evolve rapidly in the next two years to address those problems, so some users have delayed purchase decisions. "IBM openly tells customers it's working on a new way to cool blades, and it may involve water or liquids." That will likely mean a new chassis format. "Customers hear that a massive change is coming by 2006-2007, and they hesitate to buy," she says.

IBM acknowledges that a redesign is in the works. But "we see nothing that's going to take us away from air cooling at the box level through 2007-2008," says Scott Tease, product marketing manager for eServer BladeCenter.

Interoperability concerns derailed Kreitzer's initial assessment of blades. At the time, IBM offered a storage blade that was incompatible with KeyCorp's standard, which was based on products from Brocade Communications Systems Inc. Last fall, IBM opened up the specification for its BladeCenter architecture to allow best-of-breed I/O devices. Since then, Brocade and many other vendors have committed to design BladeCenter-compatible equipment. "What customers told us was, 'If that fabric switch isn't the one we've standardized on... don't talk to me,'" says Dougherty.

Now Kreitzer is taking a second look. But he still has a concern: Because every vendor has a proprietary chassis, third-party I/O blades require a vendor-specific interface. For example, a Brocade switch designed for insertion into IBM's BladeCenter can't be used in an HP blade server chassis or vice versa. But Wright says most users are "resigned to that" and are just happy to have name-brand options for I/O within the server blade chassis.

Leveling the Playing Field

"Customers would love to have interoperability and interchangeability among things like I/O modules," says Kevin Kettler, chief technology officer at Dell Inc. The architectures, he adds, are still "in their late infancy." A late entrant into the blade market, Dell would like to see more standards around bladed architectures to level the playing field. But market leaders IBM and Hewlett-Packard Co. still prefer to develop their own "ecosystems" where third parties can offer vendor-specific implementations of their products.

"IBM opening up its technology is not a significant step. There need to be some standards that handle interoperability," says a global IT manager at a major automaker who asked not to be named. IBM's Dougherty notes that administrators can still install blade server chassis from multiple vendors into a single rack. "The only thing we lock you into is the chassis," he says. A BladeCenter chassis can hold 14 blades.

Dougherty acknowledges that a common standard for add-in I/O devices—but not processor blades—may eventually come to pass. But for now, says John Humphreys, an analyst at IDC, "the Brocades of the world seem to be willing to sign on to provide multiple products in the blade space."

Ed Mulligan, managing director of technology services at The Bank of New York Inc., expects blade server architectures to deliver industry-standard approaches to network and storage connectivity. "Until such time, the risks outweigh the benefits when considering integrating this technology at an enterprise level," he says.

Eventually, common standards could emerge. "There are lots of discussions about the potential for InfiniBand," says IDC analyst Vernon Turner. Jim Pappas, director of initiatives for Intel Corp.'s Digital Enterprise Group, says InfiniBand and Ethernet are the front-runners. But that's still years away. Only after heating problems are resolved will vendors turn their attention to issues such as interoperability and standardization, says Gartner's Wright. And any standard will require the cooperation of both HP and IBM, which together own more than three quarters of the market, according to IDC.

Another inhibitor to broad acceptance is the premium charged for some blade servers. "Price is the No. 1 thing," says Kreitzer. He evaluated blades last November and found 1U servers more attractive. Vendors say prices are competitive with those of rack server offerings, but the bottom-line numbers some users are seeing don't always add up. Wright estimates that blades cost about 10% more per server when the cost of the chassis is factored in.

“At the end of the day, 1U servers were cheaper than blades,” says Larke at Fun Sun. Jagannatha at Embarcadero says the prices he received for individual HP DL360 and DL380 servers were also lower than the price of entry for blades. That comparison isn’t always fair, says Humphreys, because server blades require buying a chassis to hold them, and the per-blade cost typically isn’t competitive unless the chassis is at least half filled. And blades can be cheaper than rack-mounted servers in some situations, such as when a group of blade servers can share a storage-area network interconnect rather than requiring an individual host bus adapter for each server, says Wright.
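
Humphreys’ half-full rule of thumb is easy to see with a little arithmetic. The Python sketch below amortizes the chassis cost across the blades installed in it; all of the prices are hypothetical assumptions for illustration, not vendor quotes (only the 14-slot chassis capacity comes from this article).

    # Illustrative per-server cost of blades vs. 1U rack servers.
    # All prices are hypothetical assumptions, not vendor quotes.
    CHASSIS_PRICE = 7000   # assumed one-time cost of the blade chassis
    BLADE_PRICE = 2400     # assumed price per blade
    RACK_1U_PRICE = 3500   # assumed price of a comparable 1U server
    CHASSIS_SLOTS = 14     # BladeCenter capacity cited in the article

    def cost_per_blade(blades_installed: int) -> float:
        """Effective per-server cost once the chassis is amortized."""
        return BLADE_PRICE + CHASSIS_PRICE / blades_installed

    for n in (4, 7, 14):  # lightly, half and fully filled
        print(f"{n:>2} of {CHASSIS_SLOTS} blades: "
              f"${cost_per_blade(n):,.0f} per server "
              f"vs. ${RACK_1U_PRICE:,} for a 1U server")

With these assumed prices, four blades work out to $4,150 per server, more than the 1U alternative, while a half-filled chassis drops the figure to $3,400 and a full one to $2,900.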

Finding Space Elsewhere

Some IT organizations have passed on blades because they've freed up plenty of floor space using server virtualization technologies.

"We have had considerable success with VMware and server consolidation in rack-mounted servers, and we haven't had any strong drive to go to blade servers," says Phil Zwieg, vice president of technology services at The Northwestern Mutual Life Insurance Co. in Milwaukee. The company, which recently built a new data center, has consolidated 330 server-based applications into virtual machines that run on just 45 1U servers. Zwieg plans to take another look at blades in 2006 but says he still has questions about heat and standardization issues.
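
For scale, Northwestern Mutual’s figures imply a consolidation ratio of roughly seven applications per physical host, as the short sketch below works out. The assumption that each application previously occupied its own 1U server is ours for illustration; the article doesn’t say.

    # Consolidation ratio implied by the article's figures:
    # 330 server-based applications running on 45 1U hosts.
    apps, hosts = 330, 45
    print(f"~{apps / hosts:.1f} applications per physical server")
    # If each application had previously occupied its own 1U server
    # (an illustrative assumption), consolidation would have cut the
    # rack space needed from 330U to 45U.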

Companies like Northwestern that are using virtualization may find that it's more efficient to go with big, multiprocessor servers and carve those up into virtual machines, says Wright. Although blades could be used for this purpose, multiprocessor blades start to get bulky. "A lot of the value of blades is the modular design," which may be one reason why four-processor blades haven't taken off, she says.

While vendor marketing hype would have customers believe that blades will take over the server room, replacing stand-alone and rack-mounted servers as the preferred format, the technology may end up as simply another option—one for situations where a highly modular design is the best fit. "It will be just another format, just another choice," Wright says.

But while some organizations aren't yet ready to buy, none of the users interviewed for this story are writing off blades entirely. In fact, most say they will watch the technology closely as it matures. "There are efficiencies," says Larke. "That's why we're considering them."

DIFFERENT OUTLOOKS

Two major research firms have very different views of both the current market share of blade servers and how they expect the technology to fare in the next five years.

[Chart: Different Outlooks. Source: IDC, Gartner Inc.]

Copyright © 2005 IDG Communications, Inc.