Sidebar: Is InfiniBand Ready for a Comeback?

The positioning of PCI Express as the general-purpose I/O interconnect for servers, PCs and laptops may give IT professionals who recall the hoopla over InfiniBand a sense of deja vu. A little more than a year ago, Intel Corp. and other vendors proposed that high-speed serial I/O technology as the new standard for all types of out-of-the-box I/O interconnects. This switched, channel-based architecture, with bandwidth of up to 10Gbit/sec., was going to replace both Ethernet in the data center and Fibre Channel in storage-area networks and become the new interconnect for server clusters. Then reality set in.

"It was overbilled, overhyped to be the nirvana for everything server, everything I/O, the solution to every problem you can imagine in the data center," says Bert McComas, an analyst at InQuest Market Research in Higley, Ariz. InfiniBand turned out to be more complex and expensive to deploy than vendors first thought, and it required installing a new cabling system when IT already had substantial investments in both Ethernet and another switched, high-speed serial interconnect, Fibre Channel.

"The economic climate and positioning of InfiniBand have changed since its inception," admits Tom Bradicich, co-chairman of the InfiniBand Trade Association. Bradicich, who is also chief technology officer of IBM's xSeries server line, sees InfiniBand playing a role in high-performance data center computing and database clustering. He says IBM is working on an InfiniBand cluster interconnect using host channel adapters, switches and specialized software. "In March, we completed that testing. Right now customers are evaluating it," he says. IBM made a formal announcement on June 17, and Dell Computer Corp., Hewlett-Packard Co. and Sun Microsystems Inc. also announced InfiniBand products for their server lines.

"InfiniBand has three fundamental features that no other I/O technology has," says Bradicich. "Today it's the only technology that can run at 10Gbit/sec. over copper. [The current 10Gbit/sec. Ethernet standard runs only on single-mode fiber]. The specification includes a built-in protocol offload engine so that the server CPU doesn't have to do as much work. And it supports Remote Direct Memory Access [RDMA] ... a technology that bypasses the need to make multiple copies of the data in memory. InfiniBand gives mainframe I/O capability to the open standards space." Comparable capabilities for 10 Gigabit Ethernet, such as TCP offload engines (TOE), are still a year or more away, he says.

But even for clustering, InfiniBand is hard to deploy, says McComas. Nonetheless, he sees it as the best solution for that purpose, since it provides an open alternative to today's proprietary designs. However, that small market gives InfiniBand a much narrower focus than originally envisioned.

David Hiesey, manager of advanced technology at HP, sees a role for InfiniBand in high-end clustering, but he says Ethernet is a strong alternative for small and midrange servers, particularly in light of the Institute of Electrical and Electronics Engineers Inc.'s recent release of its RDMA over IP specification. "HP is still very supportive of InfiniBand. You'll see it in our high-end products. But for industry standard servers, we think we can do that with RDMA over IP. You can have similar functionality with existing network infrastructures and not have to recable," he says.

"Ethernet won't necessarily be faster and better, but it may be cheaper," says Bradicich. Ethernet has the advantage of incumbency, but InfiniBand will continue to offer higher performance and quality of service, he says. By the time 10 Gigabit Ethernet catches up with InifiniBand's current feature set, sometime in 2004, InfiniBand will have upped the ante to 30Gbit/sec. technology, he says.

I/O Timeline: InfiniBand vs. 10 Gigabit Ethernet
2002
  InfiniBand: 4x InfiniBand 10Gbit/sec. early products ship, including protocol offload engine and RDMA capabilities; runs on copper cabling.
  Ethernet: 10Gbit/sec. Ethernet early products ship; runs on single-mode fiber.

2003
  InfiniBand: 4x InfiniBand commercial shipments begin.
  Ethernet: 10 Gigabit Ethernet on copper specification expected to be released in the third quarter.

2004
  InfiniBand: InfiniBand 2.0 30Gbit/sec. specification slated to be released.
  Ethernet: Midyear, early 10 Gigabit Ethernet products with TOE or RDMA expected to ship; 10 Gigabit Ethernet on copper early products expected to ship.

2005
  InfiniBand: InfiniBand 2.0 early products expected to ship.
  Ethernet: Early 2005, commercial shipments of 10 Gigabit Ethernet on copper with TOE and RDMA expected.

Source: Tom Bradicich, CTO of IBM's eServer xSeries line and co-chairman of the InfiniBand Trade Association.

InfiniBand vs. PCI Express

Just what is the difference between InfiniBand and PCI Express? The two technologies have much in common, but InfiniBand is a channel architecture, while PCI Express is a load/store architecture. Here's the difference, according to Tom Bradicich, co-chairman of the InfiniBand Trade Association and chief technology officer of IBM's xSeries server line.

Channel Architecture

InfiniBand is based on a channel architecture, a communication model built on message passing: the processor places information into memory, leaves it there and moves on to other tasks, while the I/O device retrieves the information at its own pace. The processor does not have to wait for the I/O device.
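As a rough illustration of that decoupling (using plain C threads and a hypothetical work queue, not any real InfiniBand interface), the sketch below has a "processor" that posts messages to a queue and immediately moves on, while a separate "device" thread drains the queue at its own pace.

/* Channel-model sketch: the processor posts work and keeps going; a slow
 * device consumes it later. Queue overflow checks are omitted for brevity. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define QUEUE_DEPTH 8

static struct {
    int items[QUEUE_DEPTH];
    int head, tail;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
} q = { .lock = PTHREAD_MUTEX_INITIALIZER, .not_empty = PTHREAD_COND_INITIALIZER };

/* Processor side: place the message in the queue and return without waiting. */
static void post_message(int msg)
{
    pthread_mutex_lock(&q.lock);
    q.items[q.tail++ % QUEUE_DEPTH] = msg;
    pthread_cond_signal(&q.not_empty);
    pthread_mutex_unlock(&q.lock);
}

/* "I/O device": retrieves messages whenever it gets around to them. */
static void *device_thread(void *arg)
{
    (void)arg;
    for (int done = 0; done < 4; done++) {
        pthread_mutex_lock(&q.lock);
        while (q.head == q.tail)
            pthread_cond_wait(&q.not_empty, &q.lock);
        int msg = q.items[q.head++ % QUEUE_DEPTH];
        pthread_mutex_unlock(&q.lock);

        usleep(200000);                      /* the device is slow */
        printf("device handled message %d\n", msg);
    }
    return NULL;
}

int main(void)
{
    pthread_t dev;
    pthread_create(&dev, NULL, device_thread, NULL);

    for (int i = 0; i < 4; i++) {
        post_message(i);
        printf("processor posted %d and moved on\n", i);  /* no stall here */
    }

    pthread_join(dev, NULL);
    return 0;
}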

Load/Store Architecture

PCI Express uses a load/store architecture, in which the main processor sends data directly to, or receives data from, an I/O device such as a LAN card or storage host adapter. Unlike in a channel architecture, where message passing through memory decouples the two sides, the processor in a load/store architecture must wait for the slow I/O device to respond. Conventional PCI is another example of a load/store architecture.
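A contrasting sketch, again purely illustrative: here the device's registers are simulated with volatile variables standing in for memory-mapped I/O. The processor stores a command and then has nothing to do but poll the status register until the slow device responds.

/* Load/store-model sketch: the processor issues a store to a (simulated)
 * command register, then must wait, polling the status register, until the
 * slow device answers. Real MMIO would use mapped hardware addresses. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static volatile int cmd_reg    = 0;   /* processor stores the command here */
static volatile int status_reg = 0;   /* device sets this when finished    */

static void *slow_device(void *arg)
{
    (void)arg;
    while (cmd_reg == 0)                 /* wait for a command */
        ;
    usleep(500000);                      /* the device takes its time */
    status_reg = 1;                      /* signal completion */
    return NULL;
}

int main(void)
{
    pthread_t dev;
    pthread_create(&dev, NULL, slow_device, NULL);

    cmd_reg = 1;                         /* store: issue the command */
    printf("processor issued command, now waiting...\n");

    while (status_reg == 0)              /* load: poll until the device is done */
        ;                                /* the CPU is tied up doing this */

    printf("device finally responded\n");
    pthread_join(dev, NULL);
    return 0;
}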

Copyright © 2003 IDG Communications, Inc.
