—Part 1 of 2—

InfiniBand, the interconnect that enjoyed 15 minutes of fame more than five years ago, finds itself back in the spotlight, and this time around storage professionals might want to take a closer look at the technology. While many in the storage industry thought it was dead, InfiniBand found a niche in the high-performance computing (HPC) world connecting huge clusters of servers. Recently, HPC administrators started asking why they couldn't use InfiniBand for their storage networks, too.
What happened to IB?
Between 1999 and 2001, as bandwidth was being eaten up by faster processors, streaming video, and high-resolution graphics, InfiniBand was hailed as the answer to all bandwidth problems. The PCI bus was hitting the wall, and InfiniBand was going to replace the bus-based architecture with a high-bandwidth, low-latency, serial I/O interconnect. Some analysts predicted that InfiniBand would become the standard method of connecting servers to other servers, storage devices, and networks. And some observers even predicted that InfiniBand would replace Fibre Channel SANs.
Analysts speculated that InfiniBand would be accepted more quickly than Fibre Channel had been, because it started out with support from the InfiniBand Trade Association (IBTA). Formed in 1999, the IBTA had already gathered 200 members by mid-2001.
Unfortunately, InfiniBand had a timing problem, hitting the stage just before the technology and economic downturn. As a result, some InfiniBand start-ups went out of business in the 2002–2003 time frame.
Jonathan Eunice, president of the Illuminata research firm, thinks that if the technology bubble hadn't burst Fibre Channel would have been detrimentally affected by InfiniBand. "There's been a tremendous setback for InfiniBand in terms of timeline and scope," says Eunice. "Also, some of the original supporters, like Intel, decided to take the PCI Express route."
Eunice adds that, at the time, storage professionals were asking themselves if they really needed another network. Most of them were aligned with Fibre Channel or were looking at TCP/IP networks for NAS and iSCSI.
InfiniBand is a low-latency, high-performance, serial I/O interconnect. Its serial bus is bidirectional, with 2.5Gbps single data rate (SDR) throughput in each direction per connection. It also supports double data rate (DDR) and quad data rate (QDR), for 5Gbps and 10Gbps, respectively. Aggregating links dramatically increases throughput: a quad-rate 12X link, for example, can push data at 120Gbps. InfiniBand can run across copper or fiber-optic cabling.
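The arithmetic behind these figures is simple: the per-lane rate (SDR, DDR, or QDR) multiplied by the lane count (1X, 4X, 12X). A minimal sketch, with the caveat (not stated in the article) that the quoted numbers are signaling rates; early InfiniBand used 8b/10b encoding, so usable data throughput is 80% of the wire rate:

```python
# Sketch: InfiniBand link throughput, using the per-lane rates cited above.
# Assumption: quoted SDR/DDR/QDR figures are signaling rates; with 8b/10b
# encoding (8 data bits carried per 10 bits on the wire), usable rate is 80%.

SIGNALING_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}
ENCODING_EFFICIENCY = 0.8  # 8b/10b line coding overhead

def link_rate_gbps(data_rate: str, lanes: int, usable: bool = False) -> float:
    """Aggregate one-direction throughput for a 1X, 4X, or 12X link."""
    rate = SIGNALING_GBPS[data_rate] * lanes
    return rate * ENCODING_EFFICIENCY if usable else rate

print(link_rate_gbps("QDR", 12))               # 120.0 - the 12X quad-rate figure
print(link_rate_gbps("QDR", 12, usable=True))  # 96.0  - after encoding overhead
```

The 120Gbps quoted for a 12X quad-rate link is the aggregate signaling rate; the payload a storage application actually sees is lower once encoding and protocol overhead are subtracted.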
(Chart omitted; source: Enterprise Strategy Group)
InfiniBand is deployed primarily in server clusters ranging from two to thousands of nodes. In addition to connecting servers, InfiniBand can connect communications and storage fabrics in data centers. The technology can also support block- or file-based transfers. The IBTA claims more than 70 companies have announced InfiniBand products.
"InfiniBand start-ups that survived found early adopters in the HPC space," says Brian Garrett, an analyst with the Enterprise Strategy Group. "Users of clustered scientific applications deployed on commodity servers found that InfiniBand could be used to affordably 'glue' servers together, creating a single computer."
"Large commercial database clusters are starting to use InfiniBand, and it's also moving into the geophysics, seismic, oil and gas production, and energy markets," says Greg Schulz, a senior analyst at the Evaluator Group consulting firm.
Some well-established InfiniBand vendors include Mellanox, PathScale, SilverStorm Technologies, and Voltaire. And pre-configured InfiniBand clusters are available from well-known vendors such as Dell, Hewlett-Packard, IBM, and Sun.
IB breaks into storage
In mid-2004, a group of end users and vendors founded the Open IB Alliance to deliver a single, open-source Linux- and Windows-based software stack for deploying InfiniBand. Some of its members include Cisco, DataDirect Networks, Dell, Engenio, Mellanox, Network Appliance, Oracle, PathScale, Rackable Systems, Silicon Graphics, SilverStorm, Symantec, and Sun.
The Open IB Alliance had a positive influence on InfiniBand adoption, but there were other factors as well, such as rumblings from the HPC community.
"The HPC world, and certain parts of the commercial environment where companies have deployed large grids of servers for specific applications, focus on efficiency and low cost," says Rick Villars, vice president of storage systems research at International Data Corp. (IDC). "They don't want three different networks; they want one. A number of them have chosen InfiniBand for their networks and they want storage on InfiniBand, too."
"HPC users that already have InfiniBand for their server network want InfiniBand for their storage as well," says Illuminata's Eunice. "In these cases, InfiniBand plays against Fibre Channel." In addition to HPC environments, Eunice thinks that certain vertical markets with clustered databases will be interested in storage devices with native InfiniBand.
Part II of "Is InfiniBand poised for a comeback?" will appear on InfoStor's Website tomorrow.