By Dave Simpson
All of the hoopla surrounding the next-generation I/O interface, now called InfiniBand, has created controversy and confusion over how the interconnect will eventually be used. The spec, which is due in the next quarter, is promoted primarily by Intel and is backed by vendors such as Compaq, Dell, Hewlett-Packard, IBM, Microsoft, Sun, and more recently Cisco.
Although InfiniBand was originally designed as a replacement for the PCI and PCI-X shared-bus architectures, it's clear that Intel and the InfiniBand backers have much more in mind than just an internal bus replacement. The spec calls for a channel-oriented, switched-fabric, serial point-to-point link architecture. Like Fibre Channel, InfiniBand can accommodate a number of protocols, such as SCSI, IP, and the Virtual Interface (VI) architecture.
Intel promotes InfiniBand as being ideal for server-to-server (e.g., clustering) connections, as well as storage area network (SAN) connections. (Intel often uses the SAN acronym to refer to server area networks.) The goal, according to Intel officials, is a single wire that can consolidate cluster, IP, and storage networks. This brings into question the relative roles of InfiniBand and Fibre Channel in the future.
While providing a unifying interconnection link, InfiniBand could coexist with protocols such as SCSI, Fibre Channel, and the Virtual Interface (VI) architecture for inter-processor communications (IPC).
However, most vendors think it is more a matter of timing than of technology, and that coexistence will be the norm for the near term.
"I think they'll coexist for at least five to seven years, although some vendors will try to take InfiniBand into storage area networks," predicts Skip Jones, director of planning and technology at QLogic, a SCSI and Fibre Channel chip and board manufacturer. (QLogic has also announced plans to develop InfiniBand products.)
Jones says that for the next three years or so, InfiniBand activity will focus on internal interconnects and rack-mounted server clusters, which he refers to as processor area networks.
"Using InfiniBand for a storage area network, or Fibre Channel for a processor area network, won't be any closer to happening in five years than it is now," Jones predicts. "Interfaces like Ethernet and Fibre Channel coexist, and I don't see anything different happening when InfiniBand comes around. It's really hard to displace huge infrastructures."
"I think there will be space for both standards for quite some time," says Christopher Croteau, marketing director for I/O products at Intel, "but ultimately if the InfiniBand promise proves out you wouldn't have an InfiniBand and Fibre Channel combo. You'd just put all your storage on InfiniBand."
InfiniBand will support bandwidth ranging from 500MBps to 6GBps over multiple wires. That compares to 100MBps for Fibre Channel now, and 200MBps later this year. Plans for 1GBps Fibre Channel are on the drawing boards.
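Those bandwidth figures line up with the sidebar's 2.5Gbps per-wire signaling rate, given two assumptions the article doesn't spell out: 8b/10b encoding (10 signal bits carry 8 data bits) and bandwidth quoted as the bidirectional aggregate of a full-duplex link. A minimal sketch of that arithmetic:

```python
# Sketch of the InfiniBand bandwidth arithmetic, under two assumptions
# not stated in the article: 8b/10b encoding and bidirectional totals.

SIGNALING_GBPS = 2.5  # per-wire signaling rate from the draft spec


def link_bandwidth_mbps(wires: int) -> float:
    """Approximate usable bandwidth in MBps for a link of `wires` lanes."""
    data_gbps = SIGNALING_GBPS * 8 / 10        # strip 8b/10b encoding overhead
    per_direction_mbps = data_gbps * 1000 / 8  # Gbps -> MBps
    return per_direction_mbps * 2 * wires      # both directions, all wires


print(link_bandwidth_mbps(1))   # 500.0 MBps -- the quoted one-wire figure
print(link_bandwidth_mbps(12))  # 6000.0 MBps, i.e., 6GBps over 12 wires
```

Under those assumptions, one wire yields 500MBps and 12 wires yield 6GBps, matching the range quoted above.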
But raw speed isn't the only factor. Depending on the application (e.g., clusters or SANs), latency may be a bigger issue. "One of the big differences between InfiniBand and Fibre Channel is latency," says Jonathan Eunice, president of Illuminata, a research and consulting firm in Nashua, NH. "Fibre Channel is relatively high latency and is primarily designed for big payloads, whereas InfiniBand is relatively low latency, which is very good for clusters and applications like Oracle Parallel Server."
There are other differences to note. "Unlike Fibre Channel, InfiniBand is not optimized for storage; it's optimized for VI," says Jeff Russell, architect of server I/O at Crossroads Systems, a manufacturer of Fibre Channel-to-SCSI routers, and a member of the InfiniBand Trade Association (www.infinibandta.org).
Windows 2000 servers with internal InfiniBand interconnects are expected to appear near the end of 2001, at the earliest. Some time after that, vendors are expected to turn their attention to InfiniBand as a server-to-server cluster interconnect and, after that, as a one-wire-fits-all interconnect.
"Let's replace the PCI bus, and once that works we'll start thinking about plugging InfiniBand directly into switched fabrics," says Crossroads' Russell, who doubts that InfiniBand will eventually handle all traffic (e.g., server, storage, and LAN). "Multiplexing all that over one wire would be very tough, and I don't see the compelling benefit. We don't think you'll see SCSI or Fibre Channel on InfiniBand in first or second generations," he says, adding that initial implementations of InfiniBand will not have the distance capabilities of Fibre Channel.
Illuminata's Eunice predicts that InfiniBand may be used for connecting servers to storage in the 2002/2003 time frame, but only for short-distance connections, such as within machine rooms. "In 2002 or 2003, InfiniBand will begin to encroach on Fibre Channel, although it won't be a direct competitor," Eunice predicts.
"Long term, InfiniBand will definitely encroach upon Fibre Channel, and if Fibre Channel doesn't march forward quickly, InfiniBand will marginalize it," says Eunice. "However, in some ways having InfiniBand around is wonderful for Fibre Channel because it's a kick in the seat to get going."
InfiniBand at a glance
Bandwidth: 500MBps (one wire) to 6GBps (12 wires)
Wire signaling rate: 2.5Gbps
Transmission media: Copper or fiber-optic
Architecture: Channel-oriented, switched fabric, serial point-to-point link, with I/O engine directly coupled to host memory.
Major supporters: Compaq, Cisco, Dell, HP, Intel, Microsoft, Sun
Availability: Draft specification is expected in the next quarter, with a final spec due Q3 2000. Initial products are expected Q4 2001, with widespread availability in 2002.
For more information: www.infinibandta.org, www.intel.com.
Related articles in InfoStor
"Goodbye Fibre Channel, hello InfiniBand?" (Opinion), February 2000, p. 46.
"Prepare for the shift to switched fabrics" (Feature), January 2000, p. 36.
"2Gbps Fibre Channel coming this year" (News), January 2000, p. 1.
"NGIO, Future I/O camps bury the hatchet" (News), September 1999, p. 1.