The future roles of Fibre Channel, SCSI, and InfiniBand are being hotly debated. In an attempt to douse the fires, we asked contributing writers Elizabeth Ferrarini and Lee Steele to interview vendor representatives and one industry analyst. Here are excerpts from those interviews.
Director of strategic and technical marketing, Quantum Corp.
Fibre Channel has been lumped into many different spaces, although SCSI still dominates at the disk-drive interface level. A lot of Fibre Channel is sold for system-to-subsystem use, but there are not a lot of Fibre Channel disk drives. Typically, a RAID controller converts Fibre Channel to SCSI. Fibre Channel will be a backbone in the computer room. It's questionable how far InfiniBand will go. If InfiniBand is going to be the computer room backbone, would you convert to Fibre Channel or SCSI?
How will InfiniBand affect disk-drive interfaces? Will Fibre Channel level off or decline?
We can start thinking of storage devices sitting on InfiniBand. Intel shows a picture of InfiniBand in a cabinet of disk drives with a RAID controller. Who cares what the interface is behind the RAID controller? Any shared bus, such as Fibre Channel (an arbitrated loop) or SCSI (a parallel bus), is somewhat constrained. For example, if you're doing sequential operations, a few drives can saturate a Fibre Channel or Ultra160 SCSI bus, even though you can put more than 100 drives on Fibre Channel.
InfiniBand, on the other hand, makes every connection a point-to-point link: there's just one device per link, and every device becomes an end node of the fabric. As a result, storage will scale better because you won't be limited by shared-bus bandwidth. Instead, you'll be limited by the bandwidth of the switches and routers.
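The bandwidth argument above is back-of-the-envelope arithmetic, and can be sketched as follows. The per-drive and per-bus figures are illustrative assumptions, not numbers from the interview (only the ~160MBps Ultra160 rate and the "100+ drives on a loop" point come from the text):

```python
import math

# Illustrative figures: a drive streaming sequentially at ~40MBps,
# an Ultra160 SCSI bus shared at 160MBps among all attached devices.
def drives_to_saturate(bus_mbps: float, drive_mbps: float) -> int:
    """Smallest number of drives whose combined sequential throughput
    meets or exceeds the shared bus bandwidth."""
    return math.ceil(bus_mbps / drive_mbps)

# Shared bus: a handful of fast drives fill the pipe, even though the
# loop can physically address 100+ devices.
print(drives_to_saturate(160, 40))  # -> 4

# Point-to-point fabric: each drive gets its own link, so aggregate
# throughput is capped by the switch, not by a shared bus.
def fabric_aggregate_mbps(drives: int, link_mbps: float, switch_mbps: float) -> float:
    return min(drives * link_mbps, switch_mbps)

print(fabric_aggregate_mbps(100, 500, 20_000))  # -> 20000.0, switch-limited
```

The point the interviewee makes falls out of the last line: on a fabric, adding drives raises aggregate throughput until the switch, not the wire, becomes the ceiling.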
What will this do to storage area networks?
Fibre Channel SANs can run IP, and IP will also run over InfiniBand. The software, which is the crucial part of a SAN, and the interconnect will still work. Most SANs today are Fibre Channel and SCSI. In the future, it could be Fibre Channel, SCSI, and InfiniBand.
With Fibre Channel, vendors had to develop their own ASICs. The companies behind InfiniBand say they want to enable the industry by providing the logic for free. This is a different way of enabling the market than what happened with SCSI, Fibre Channel, and SSA.
Quantum can't ignore an effort with this many players. We're a member of the InfiniBand Trade Association, along with other disk-drive manufacturers.
Senior manager of technology, Exabyte
In the near term, you're going to see the storage industry focus on Fibre Channel and the server industry focus on InfiniBand. The InfiniBand folks primarily want to improve server performance.
PCs have always had separate slots for the microprocessor, memory, and storage devices. InfiniBand will merge two of them into one. You'll be able to have redundant arrays of inexpensive servers.
Right now, development efforts are under way to hook up as many as 1,000 Intel PCs in parallel and run them like a big mainframe.
What will happen in the future?
Intel put money into Crossroads in June 1999, at the same time that Crossroads announced support of Future I/O and NGIO [which merged into InfiniBand]. Crossroads will focus on making a router to go between InfiniBand and Fibre Channel.
Fibre Channel provides storage-to-storage communications, as well as storage-to-server communications. InfiniBand provides server-to-server communications.
There are three different types of communications that need to happen, and they all have to be optimized for different applications. The router between InfiniBand and Fibre Channel is going to be the server-to-storage I/O device.
In the networking world, you have all these different layers of protocols and wrappers. You're going to see the same thing with InfiniBand and Fibre Channel. They will support higher-level protocols.
What about SCSI?
It's both a protocol and an interconnect. As an interconnect, it's going away. As a protocol, it's going to stay. It's still going to be how you communicate with storage devices over a SAN. Fibre Channel is just the hardware layer and the physical connectivity. Fibre Channel takes the SCSI commands and wraps them in a Fibre Channel wrapper.
SCSI will exist for some time because it's optimized for storage. So, even if it's going over InfiniBand, it's still going to be a SCSI protocol. All the software out there already supports SCSI and all the storage peripherals already support it.
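The protocol-versus-interconnect distinction drawn above can be sketched in a few lines: the same SCSI command block rides unchanged whether the transport underneath is Fibre Channel or InfiniBand. The field names here are illustrative, not taken from the FCP specification; real transport frames carry many more header fields:

```python
from dataclasses import dataclass

@dataclass
class ScsiCommand:
    """The protocol layer: a simplified SCSI command descriptor block (CDB)."""
    opcode: int   # 0x28 is the standard SCSI READ(10) opcode
    lba: int      # logical block address
    blocks: int   # transfer length in blocks

@dataclass
class TransportFrame:
    """The interconnect layer: a hypothetical generic wrapper standing in
    for a Fibre Channel or InfiniBand frame."""
    transport: str        # "FC" or "IB"
    payload: ScsiCommand  # the SCSI command, carried opaquely

# The same command, wrapped for either interconnect:
cmd = ScsiCommand(opcode=0x28, lba=2048, blocks=8)
over_fc = TransportFrame("FC", cmd)
over_ib = TransportFrame("IB", cmd)
assert over_fc.payload == over_ib.payload  # protocol survives the transport swap
```

Swapping the `transport` field is all that changes, which is why existing storage software and peripherals keep working: they speak SCSI, and the wrapper is someone else's problem.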
Analyst, D.H. Brown
InfiniBand is designed to be the successor to the PCI bus. It's a specification for a substantially faster bus protocol than PCI. InfiniBand offers a speed range from 500MBps to 6GBps.
The other main difference is that InfiniBand isn't a pure bus-based technology. It's a point-to-point switch-based technology. High-speed point-to-point buses help ensure that each device gets the bandwidth it needs.
Will InfiniBand replace Fibre Channel as an interconnect?
Will InfiniBand make SCSI go away?
Fibre Channel is really an interface that competes with SCSI, not PCI. InfiniBand will benefit both Fibre Channel and SCSI.
You have a connector in a box to connect the processor to your I/O adapter. To keep market momentum, Fibre Channel has to be faster, more reliable, and offer more features than SCSI. SCSI is going to stay alive, and since it has the volume, it's going to drive costs down.
Meanwhile, Fibre Channel has distance going for it. Also, we're seeing organizations putting their storage on Fibre Channel or disconnecting storage from servers. This indicates that storage is so important that organizations want ways to optimize the storage for speed, rather than for cost and widespread availability.
InfiniBand provides a lot of flexibility in speed. Different implementations will have a different number of wires: 1 wire for 500MBps and 12 wires for 6GBps. The bus protocol of InfiniBand is designed with TCP/IP in mind. Why? Because you can make packets more efficient with a simple mapping between network protocols and the system bus protocol. You could develop system clusters that you would address using IP-like mechanisms within the bus protocol. Since the packet format has been designed with IP in mind, InfiniBand has the potential benefits of efficiency and of simplifying existing cluster architectures or enabling new ones.
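The lane arithmetic is simple multiplication, and worth making explicit because it shows where the aggregate figure comes from. A minimal sketch, assuming linear scaling at the analyst's stated 500MBps per wire:

```python
# Assumption: each InfiniBand wire (lane) carries 500MBps, and link
# bandwidth scales linearly with the number of wires.
LANE_MBPS = 500

def link_bandwidth_mbps(lanes: int) -> int:
    return lanes * LANE_MBPS

print(link_bandwidth_mbps(1))   # -> 500   (1-wire link, 500MBps)
print(link_bandwidth_mbps(12))  # -> 6000  (12-wire link, 6GBps)
```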
Senior systems architect, LSI Logic
(Note: This opinion was sent in response to February's Editorial, which asked: "Will InfiniBand compete with Fibre Channel, or will InfiniBand be relegated strictly to in-the-box connections?")
Fibre Channel has two primary storage applications: storage area networks (a fabric connecting servers to shared storage resources) and peripheral interfaces (disk drives, tape drives, etc.). Although InfiniBand will eventually replace the PCI bus as an in-the-box connection, LSI believes InfiniBand will first emerge as an out-of-box, server-to-server and/or storage area network interconnect.
InfiniBand's ability to displace incumbent technologies in these applications will depend on its ability to deliver significant new value, at a competitive price, while achieving a functional equivalence to the incumbent interface(s). We believe InfiniBand's ability to overcome these acceptance hurdles will vary among server market segments.
Ultimately, if InfiniBand is to replace PCI, it must first demonstrate that, like PCI, it is a viable transport for all I/O types: storage, server-to-server, and LAN/WAN. In summary, InfiniBand can be expected to compete with all open SAN technologies (Fibre Channel and SCSI) and all proprietary SAN technologies, while likely coexisting with Fibre Channel and SCSI as a peripheral attachment interface.