The positioning of PCI Express as the general-purpose I/O interconnect for servers, PCs and laptops may give IT professionals who recall the hoopla over InfiniBand a sense of deja vu. A little more than a year ago, Intel Corp. and other vendors proposed that high-speed serial I/O technology as the new standard for all types of out-of-the-box I/O interconnects. This switched, channel-based architecture, with bandwidth of up to 10Gbit/sec., was going to replace both Ethernet in the data center and Fibre Channel in storage-area networks and become the new interconnect for server clusters. Then reality set in.
"It was overbilled, overhyped to be the nirvana for everything server, everything I/O, the solution to every problem you can imagine in the data center," says Bert McComas, an analyst at InQuest Market Research in Higley, Ariz. InfiniBand turned out to be more complex and expensive to deploy than vendors first thought, and it required installing a new cabling system when IT already had substantial investments in both Ethernet and another switched, high-speed serial interconnect, Fibre Channel.
"The economic climate and positioning of InfiniBand have changed since its inception," admits Tom Bradicich, co-chairman of the InfiniBand Trade Association. Bradicich, who is also chief technology officer of IBM's xSeries server line, sees InfiniBand playing a role in high-performance data center computing and database clustering. He says IBM is working on an InfiniBand cluster interconnect using host channel adapters, switches and specialized software. "In March, we completed that testing. Right now customers are evaluating it," he says. IBM made a formal announcement on June 17, and Dell Computer Corp., Hewlett-Packard Co. and Sun Microsystems Inc. also announced InfiniBand products for their server lines.
"InfiniBand has three fundamental features that no other I/O technology has," says Bradicich. "Today it's the only technology that can run at 10Gbit/sec. over copper. [The current 10Gbit/sec. Ethernet standard runs only on single-mode fiber]. The specification includes a built-in protocol offload engine so that the server CPU doesn't have to do as much work. And it supports Remote Direct Memory Access [RDMA] ... a technology that bypasses the need to make multiple copies of the data in memory. InfiniBand gives mainframe I/O capability to the open standards space." Comparable capabilities for 10 Gigabit Ethernet, such as TCP offload engines (TOE), are still a year or more away, he says.
But even for clustering, InfiniBand is hard to deploy, says McComas. Nonetheless, he sees it as the best solution for that purpose, since it provides an open alternative to today's proprietary designs. However, that small market gives InfiniBand a much narrower focus than originally envisioned.
David Hiesey, manager of advanced technology at HP, sees a role for InfiniBand in high-end clustering, but he says Ethernet is a strong alternative for small and midrange servers, particularly in light of the RDMA Consortium's recent release of its RDMA over IP specification. "HP is still very supportive of InfiniBand. You'll see it in our high-end products. But for industry standard servers, we think we can do that with RDMA over IP. You can have similar functionality with existing network infrastructures and not have to recable," he says.
"Ethernet won't necessarily be faster and better, but it may be cheaper," says Bradicich. Ethernet has the advantage of incumbency, but InfiniBand will continue to offer higher performance and quality of service, he says. By the time 10 Gigabit Ethernet catches up with InifiniBand's current feature set, sometime in 2004, InfiniBand will have upped the ante to 30Gbit/sec. technology, he says.