InfiniBand, the interconnect that enjoyed 15 minutes of fame more than five years ago, finds itself back in the spotlight, and this time around storage professionals might want to take a closer look at the technology. While many in the storage industry thought it was dead, InfiniBand found a niche in the high-performance computing (HPC) world connecting huge clusters of servers. Recently, HPC administrators started asking why they couldn’t use InfiniBand for their storage networks, too.

Between 1999 and 2001, as bandwidth was being eaten up by faster processors, streaming video, and high-resolution graphics, InfiniBand was hailed as the answer to all bandwidth problems. The PCI bus was hitting the wall, and InfiniBand was going to replace the bus-based architecture with a high-bandwidth, low-latency, serial I/O interconnect. Some analysts predicted that InfiniBand would become the standard method of connecting servers to other servers, storage devices, and networks. And some observers even predicted that InfiniBand would replace Fibre Channel SANs.

Analysts speculated that InfiniBand would be adopted more quickly than Fibre Channel had been, because it started out with the backing of the InfiniBand Trade Association (IBTA). Formed in 1999, the IBTA had already gathered 200 members by mid-2001.

Unfortunately, InfiniBand had a timing problem, hitting the stage just before the technology and economic downturn. As a result, some InfiniBand start-ups went out of business in the 2002-2003 time frame.

Jonathan Eunice, president of the Illuminata research firm, believes that if the technology bubble hadn’t burst, InfiniBand would have done significant damage to Fibre Channel.

“There’s been a tremendous setback for InfiniBand in terms of timeline and scope,” says Eunice. “Also, some of the original supporters, like Intel, decided to take the PCI Express route.”

Eunice adds that, at the time, storage professionals were asking themselves if they really needed another network. Most of them were aligned with Fibre Channel or were looking at TCP/IP networks for NAS and iSCSI.

InfiniBand 101

InfiniBand is a low-latency, high-performance, serial I/O interconnect. Its serial bus is bidirectional, with 2.5Gbps single data rate (SDR) throughput in each direction per connection. It also supports double data rate (DDR) and quad data rate (QDR) signaling for 5Gbps and 10Gbps, respectively, and aggregating links dramatically increases throughput: a 12X QDR link, for example, can push data at 120Gbps. InfiniBand can run over either copper or fiber-optic cabling.
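The arithmetic behind those numbers is straightforward: per-lane signaling rate, times the rate multiplier, times the link width. The short C sketch below tabulates the figures; it assumes the 8b/10b encoding used by SDR, DDR, and QDR links, so the usable data rate is 80% of the signaling rate, and the 120Gbps quoted above for a 12X QDR link is the raw signaling rate.

/* Rough throughput arithmetic for InfiniBand links (per direction).
 * Assumes 8b/10b encoding, so usable data rate = 80% of signaling rate. */
#include <stdio.h>

int main(void) {
    const double sdr_lane_gbps = 2.5;      /* SDR signaling rate per lane */
    const char  *rates[]  = {"SDR", "DDR", "QDR"};
    const int    mult[]   = {1, 2, 4};     /* rate multipliers */
    const int    widths[] = {1, 4, 12};    /* common link widths */

    for (int r = 0; r < 3; r++) {
        for (int w = 0; w < 3; w++) {
            double raw  = sdr_lane_gbps * mult[r] * widths[w];
            double data = raw * 0.8;       /* 8b/10b encoding overhead */
            printf("%s %2dX: %6.1f Gbps signaling, %5.1f Gbps data\n",
                   rates[r], widths[w], raw, data);
        }
    }
    return 0;
}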

InfiniBand is deployed primarily in server clusters ranging from two to thousands of nodes. In addition to connecting servers, InfiniBand can connect communications and storage fabrics in data centers. The technology can also support block- or file-based transfers. The IBTA claims more than 70 companies have announced InfiniBand products.

“InfiniBand start-ups that survived found early adopters in the HPC space,” says Brian Garrett, an analyst with the Enterprise Strategy Group. “Users of clustered scientific applications deployed on commodity servers found that InfiniBand could be used to affordably ‘glue’ servers together, creating a single computer.”

“Large commercial database clusters are starting to use InfiniBand, and it’s also moving into the geophysics, seismic, oil and gas production, and energy markets,” says Greg Schulz, a senior analyst at the Evaluator Group consulting firm.

Some well-established InfiniBand vendors include Mellanox, PathScale, SilverStorm Technologies, and Voltaire. And pre-configured InfiniBand clusters are available from well-known vendors such as Dell, Hewlett-Packard, IBM, and Sun.

IB breaks into storage

In mid-2004, a group of end users and vendors founded the Open IB Alliance to deliver a single, open-source Linux- and Windows-based software stack for deploying InfiniBand. Some of its members include Cisco, DataDirect Networks, Dell, Engenio, Mellanox, Network Appliance, Oracle, PathScale, Rackable Systems, Silicon Graphics, SilverStorm, Symantec, and Sun.
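For a sense of what that open-source stack provides, the sketch below uses the verbs API (libibverbs), the low-level interface included in the OpenIB software, to enumerate the InfiniBand adapters visible on a Linux host. It is a generic illustration rather than anything tied to the products mentioned here, and assumes libibverbs is installed (compile with cc -o list_ib list_ib.c -libverbs).

/* Minimal sketch: list InfiniBand devices via the verbs API (libibverbs). */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void) {
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }
    printf("Found %d InfiniBand device(s)\n", num_devices);
    for (int i = 0; i < num_devices; i++)
        printf("  %s\n", ibv_get_device_name(devices[i]));
    ibv_free_device_list(devices);
    return 0;
}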

The Open IB Alliance had a positive influence on InfiniBand adoption, but there were other factors as well, such as rumblings from the HPC community.

“The HPC world, and certain parts of the commercial environment where companies have deployed large grids of servers for specific applications, focus on efficiency and low cost,” says Rick Villars, vice president of storage systems research at International Data Corp. (IDC). “They don’t want three different networks; they want one. A number of them have chosen InfiniBand for their networks and they want storage on InfiniBand, too.”

“HPC users that already have InfiniBand for their server network want InfiniBand for their storage as well,” says Illuminata’s Eunice. “In these cases, InfiniBand plays against Fibre Channel.” In addition to HPC environments, Eunice thinks that certain vertical markets with clustered databases will be interested in storage devices with native InfiniBand.

Cisco’s acquisition of Topspin for $250 million in April 2005 also fueled interest in InfiniBand. In a brief on InfiniBand’s impact on Cisco’s switch offerings, ESG’s Garrett noted: “Storage switching connects servers to storage systems and devices on a shared network. …Cisco Server Fabric Switching (SFS), based on Topspin’s InfiniBand technology, connects servers within a cluster, enables shared connection to storage, and provides a platform within the network for the virtualization of storage and server resources.”

In an interview, Garrett says, “Topspin provided the HCA [host channel adapter] that connects to Cisco’s switch. This turns the InfiniBand ‘fat pipe’ into multiple pipes: one for connecting servers, one for connecting to storage, and one for bridging over to Ethernet for external storage. Right now the switches are running at 10Gbps, and 30Gbps is becoming available. Companies can now use InfiniBand as a direct connection to storage. This is good for HPC applications that move massive amounts of data to storage as quickly as possible, such as data collection from satellites.”

As for InfiniBand storage arrays, Illuminata’s Eunice considers Engenio and Texas Memory Systems to be early players. “Texas Memory’s solid-state disk [SSD] and Engenio’s [high-speed] disk subsystems are oriented toward some of the technical computing arenas where InfiniBand is gaining attention,” he says.

Texas Memory offers an InfiniBand interface that allows its SSD devices to connect to HPC networks. With its four-port 4X InfiniBand interface, a single RamSan SSD provides up to 3GBps of sustained throughput. The InfiniBand interface allows RamSan devices to connect natively to the high-bandwidth, low-latency server clusters used in grid computing environments.
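As a rough sanity check on that figure (assuming the four ports are SDR 4X links with 8b/10b encoding), each port signals at 10Gbps and carries about 1GBps of data, so four ports give roughly 4GBps of aggregate headroom:

/* Back-of-the-envelope check (assumption: four SDR 4X ports, 8b/10b encoding). */
#include <stdio.h>

int main(void) {
    const double lane_gbps = 2.5;                        /* SDR signaling per lane */
    const int lanes = 4, ports = 4;
    double data_gbps_per_port = lane_gbps * lanes * 0.8; /* 8 Gbps usable per port */
    double aggregate_gbytes_per_s = data_gbps_per_port * ports / 8.0;
    printf("Aggregate data-rate ceiling: ~%.0f GBps across %d ports\n",
           aggregate_gbytes_per_s, ports);
    return 0;
}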

LSI Logic’s Engenio division introduced storage systems with native InfiniBand at the Supercomputing conference in November 2005. Engenio’s 6498 controller and 6498 storage system are sold through OEMs and can be configured with either Fibre Channel or Serial ATA (SATA) disk drives.

“InfiniBand is becoming the interconnect of choice for inter-processor communications in Linux clusters,” says Steve Gardner, director of product marketing at Engenio. “Those customers started asking why they have to build a separate network for storage with Fibre Channel infrastructure, switches, and HBAs or buy converters that perform Fibre Channel-to-InfiniBand conversion. Both choices increase complexity and impact network performance.”

SGI and Verari are two of Engenio’s InfiniBand system OEMs. SGI resells Engenio’s InfiniBand disk arrays as the InfiniteStorage TP9700. Verari, a blade server developer, uses Engenio’s technology in its VS7000i InfiniBand-attached storage system.

Isilon is taking the concept of server clusters to storage environments, offering InfiniBand connections as options on its storage arrays.

“Clustered storage is built from the ground up to store large amounts of unstructured data, digital content, and reference information,” says Brett Goodwin, vice president of marketing and business development at Isilon. “We teamed with Cisco to create a storage system that uses InfiniBand as a high-performance, low-latency cluster interconnect.”

Goodwin reports that since Isilon first introduced its IQ series of storage systems with InfiniBand connections, in April 2005, about 90% of its customers have opted for the InfiniBand option (as opposed to Gigabit Ethernet) for node-to-node connectivity. Isilon offers the InfiniBand version of its IQ clustered storage systems at the same price as the Gigabit Ethernet version.

DataDirect Networks delivers more than 3GBps of read/write performance in its InfiniBand-based RAID storage networking appliance, the S2A9500. Designed for HPC and rich media applications, the S2A9500 supports compute environments of up to 360 teraFLOPS and scales to 1 petabyte of SATA storage.

Germany-based Xiranet Communications recently introduced its XAS 500-ib InfiniBand storage system, which is based on the SCSI RDMA Protocol (SRP) and offers capacity up to 7.5TB.

In October 2005, Voltaire and FalconStor teamed up to combine iSCSI management capabilities with the high performance of InfiniBand to enable accelerated backup and data replication.

In November 2005, Microsoft jumped into the market when it announced InfiniBand support for HPC environments in its Windows Compute Cluster Server 2003 software.

Hurdles remain

One of the hurdles InfiniBand faces is that it can’t be used over long distances, although, like Fibre Channel, it can be bridged. The distance limitation might not be an issue in server environments, but as the technology moves into storage, the ability to transport large amounts of data across long distances becomes more critical.

Obsidian Research Corp., a Canadian InfiniBand technology company, is developing long-range storage technologies with the US government. Obsidian has developed a 2-port, 1U box called Longbow XR that encapsulates InfiniBand traffic over a variety of WAN links.

In one demonstration, Obsidian linked remote clusters over OC-192c SONET networks at distances of up to 6,500km.

The Open IB Alliance also addressed the distance issue when it showcased the world’s largest cross-continental InfiniBand data center in conjunction with the Supercomputing 2005 conference. The project included InfiniBand clusters in California, Washington, and Virginia connected over a WAN with Obsidian’s Longbow XR devices.

Another major challenge that InfiniBand faces is the foothold that Fibre Channel and Ethernet already have in storage networking.

“Fibre Channel and Ethernet are entrenched,” says Illuminata’s Eunice, “but they can’t carry the traffic that InfiniBand can and do not have the low latency.”
