Where InfiniBand Fits

Tom Bradicich is co-chairman of the InfiniBand Trade Association. He is also an IBM Distinguished Engineer and chief technology officer for the eServer xSeries product line at IBM in Research Triangle Park, N.C. He spoke with Computerworld's Robert L. Mitchell about how InfiniBand fits with other data center I/O options.

Q: Why InfiniBand?

A:
Today's server I/O subsystem is PCI [Peripheral Component Interconnect], a bus connection architecture. It does not keep up well as we go to higher and higher speeds. It's also a single failure domain: if there's a failure on the bus, it takes down the whole system, and you can't figure out which device caused it. A third weakness is scalability. The more you add to the bus, the slower it goes. You're sacrificing performance and reliability.
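
To put rough numbers on that shared-bus point (these figures come from the PCI and InfiniBand specifications, not from the interview, and real-world throughput is lower):

```
64-bit/66-MHz PCI bus:  8 bytes x 66 MHz ~ 533 MB/s, shared by every adapter on the bus
                        -> four busy adapters contend for roughly 133 MB/s apiece, at best
InfiniBand 4X link:     10 Gbit/s signaling, 8 Gbit/s of data after 8b/10b encoding (~1 GB/s)
                        -> and each adapter gets its own dedicated link into the fabric
```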

InfiniBand is a point-to-point channel architecture. It allows I/O devices to communicate from one point to another on a dedicated channel, which gives it tremendous reliability and performance.

The channel architecture was invented on the IBM 360 mainframe. What's new about InfiniBand is that we can bring this mainframe-inspired capability to an industry-standard platform. As an I/O interconnect, it delivers the same class of reliability, performance and scalability that midrange and high-end systems have today.



Q: Fibre Channel is also a high-speed serial I/O interconnect. Why not use that?

A:
Fibre Channel still has to go through the PCI bus, and hence through that bottleneck. What we envision is a pure and complete channel that goes from the heart of the server straight out to the I/O device.

It's possible to build a SAN [storage-area network] out of InfiniBand, and existing Fibre Channel SANs can easily be routed into an InfiniBand pipe into the server. InfiniBand can be a single pipe into the server, aggregating Ethernet, clustering and Fibre Channel storage traffic and replacing those cables.



Q: Then why do we have 3GIO, a third-generation I/O technology that's compatible with the current PCI software environment?

A:
3GIO is an interchip connection. It's a high-speed serial connection, and because it's higher speed, it can accept a high-speed InfiniBand connection. 3GIO won't be here until late 2004.



Q: Do you see InfiniBand as an inside-the-box interconnect or outside the box only? It's not designed for memory and processor connections, is it?

A:
It's for connecting at a distance and creating fabrics that link data, storage and other servers. We won't see it connecting inside a box except for blade [servers].



This year, the InfiniBand [host channel adapters] will go right onto the PCI bus. In the future, one can put them on the planar [motherboard] and bypass the PCI bus completely.



Q: What good are first-generation InfiniBand host channel adapters, since they just plug into PCI slots on servers?

A:
That does compromise some performance, [but] you still get the scalability, reliability and low latency. The second phase is native, where the InfiniBand chips are on the motherboard and bypass the PCI bus.



Q: Where does InfiniBand fit in terms of enterprise IT applications?

A:
There are three areas of application for InfiniBand. One is direct-attached storage, the second is clustering, and the third is an emerging area called server blades, where we want to take I/O and outboard it from the server.

If we do that, the server can be thinner and cooler, and we can pack more of them together and get higher density. The blade architecture blends nicely with the InfiniBand architecture because it's a universal interconnect on which you can run storage and networking.

With InfiniBand, I/O can be remote, whereas it has to be close by with a bus architecture. Today, Ethernet is the preferred approach [for clustering].



Q: What's wrong with Ethernet for clustering?

A:
Today, when you cluster industry-standard components together in a database environment, you must use either Ethernet, which has high latency, or a [proprietary] connection. InfiniBand represents an open standard that's low latency and very high performance.



Q: InfiniBand also supports something called message-passing. What's that about?

A:
That's a major distinction between PCI and 3GIO on one hand and InfiniBand on the other. PCI and 3GIO [use a] load-and-store architecture: the data is loaded [on the bus] and the microprocessor waits for the I/O device to come get the information before it goes on to do other work. That would be like the mailman putting mail in your mailbox and waiting for you to pick it up before he moves on.

[InfiniBand uses] message passing. The mailman leaves mail in your mailbox. You fetch the mail when you're ready. Meanwhile, the mailman is off doing other work. This results in much higher performance.
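
To make the mailbox analogy concrete, here is a minimal sketch in C of the posted-work, polled-completion pattern that message-passing I/O follows. It is an illustration only: the structures and functions (work_queue, post_send, poll_completion) are invented for the example and are not the real InfiniBand verbs interface, and the completion step is collapsed into the same queue rather than handled by a channel adapter.

```c
/* Conceptual sketch of message-passing I/O, as contrasted with the
 * load-and-store model described above.  Names and structures are
 * invented for the illustration; this is not the InfiniBand verbs API. */
#include <stdio.h>
#include <string.h>

#define QUEUE_DEPTH 8

struct work_request {
    char payload[64];            /* descriptor for one outbound message */
};

struct work_queue {
    struct work_request wr[QUEUE_DEPTH];
    int head, tail;              /* simple ring buffer                  */
};

/* Post a message and return immediately.  The processor does not wait
 * for the I/O device to pick it up -- the mailman drops the mail and
 * moves on.                                                            */
static int post_send(struct work_queue *q, const char *msg)
{
    if ((q->tail + 1) % QUEUE_DEPTH == q->head)
        return -1;                              /* queue is full        */
    strncpy(q->wr[q->tail].payload, msg, sizeof(q->wr[q->tail].payload) - 1);
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    return 0;
}

/* Poll for finished work whenever the processor is ready.  In real
 * hardware a channel adapter drains the send queue and signals a
 * separate completion queue; the two are collapsed here for brevity.   */
static int poll_completion(struct work_queue *q, char *out, size_t len)
{
    if (q->head == q->tail)
        return 0;                               /* nothing completed    */
    strncpy(out, q->wr[q->head].payload, len - 1);
    out[len - 1] = '\0';
    q->head = (q->head + 1) % QUEUE_DEPTH;
    return 1;
}

int main(void)
{
    struct work_queue q = { .head = 0, .tail = 0 };
    char done[64];

    post_send(&q, "block of storage data");     /* returns at once      */
    post_send(&q, "cluster heartbeat");

    /* ... the processor is free to do other work here ...              */

    while (poll_completion(&q, done, sizeof done))
        printf("completed: %s\n", done);
    return 0;
}
```

The essential point is the one Bradicich makes above: the sending processor never blocks on the I/O device; it queues the work and harvests completions later, at its own convenience.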



Q: Where are IBM's products?

A:
We will have an add-in InfiniBand card when the market is ready. That will happen at the end of this year or in the first quarter of 2003, on Intel-based servers.



Q: What about for mainframes?

A:
At an unspecified time, we will have the capability in non-Intel servers.

