By Ann Silverthorn
InfiniBand, the screaming-fast interconnect with low latency and loads of bandwidth, has been slow to catch on in the storage arena. The sluggish adoption can be attributed largely to timing: InfiniBand debuted just as Fibre Channel was gaining traction and just before the general technology downturn several years ago. Although InfiniBand didn’t take market share away from Fibre Channel, it did take hold in high-performance computing (HPC) environments and may eventually find a competitive place in storage.
Willard says that some early adopters of InfiniBand are considering extending the technology to storage systems so the server cluster and storage would be on the same fabric, although he doesn’t have any hard data on that potential trend.
“Many computer architectures originate in the technical computing market, such as the initial work on grids that led to virtualization,” says Willard.
Analysts say it makes sense for InfiniBand to expand into data storage and commercial enterprises because it’s relatively inexpensive, fast, and reliable.
“So far, the problem InfiniBand has solved is putting servers together in a cluster,” says Brian Garrett, a technical director at the Enterprise Strategy Group (ESG) research and consulting firm. “If you’re not clustering, you might not need it. However, more and more companies are adopting clusters and moving applications off big Unix machines. If that trend continues, more people will find InfiniBand attractive.”
“In the near term, InfiniBand’s affordability and high bandwidth can’t be ignored for clusters,” Garrett continues, “and in the long term, it has the potential to be an interconnect in the data center for broader applications, including storage.” In addition to the HPC market, Garrett sees applications such as data mining, decision support, and backup requiring the speed and bandwidth that InfiniBand provides.
Garrett regards InfiniBand and 10Gbps Ethernet as emerging alternatives to Fibre Channel. ESG reports per-port pricing for InfiniBand host channel adapters (HCAs) as low as $125, compared to $1,000 or more for 10Gbps Ethernet adapters, and InfiniBand switches can be half the cost of 10Gbps Ethernet switches. On the bandwidth road maps, InfiniBand climbs to 120Gbps through 2007. On latency, InfiniBand wins at three microseconds, compared to 50 microseconds for 10Gbps Ethernet.
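A back-of-envelope calculation shows why the latency gap matters more than raw bandwidth for small transfers. The sketch below treats the figures cited above (3 vs. 50 microseconds, 10Gbps links) as assumptions, not measured benchmarks:

```python
# Back-of-envelope using the figures cited above (assumptions, not benchmarks).
IB_LATENCY_US = 3    # InfiniBand latency, microseconds
ETH_LATENCY_US = 50  # 10Gbps Ethernet latency, microseconds

def transfer_time_us(msg_bytes: int, gbps: float, latency_us: float) -> float:
    """One-way message time: wire latency plus serialization time."""
    return latency_us + (msg_bytes * 8) / (gbps * 1e3)  # Gbps -> bits per us

# For a small 4KB message, latency dominates the total:
ib = transfer_time_us(4096, 10, IB_LATENCY_US)
eth = transfer_time_us(4096, 10, ETH_LATENCY_US)
print(f"4KB over InfiniBand: {ib:.1f} us, over 10GbE: {eth:.1f} us")
# Both links serialize the 4KB in about 3.3 us; the rest is latency.
```

At this message size the 10GbE transfer takes roughly eight times as long, even though both pipes have the same nominal bandwidth, which is why latency-sensitive cluster workloads gravitate toward InfiniBand.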
Bret Weber, fellow, strategic planning and architecture in LSI Logic’s Engenio Storage Group, says that Ethernet’s speed may lag behind InfiniBand but that it’s an uphill battle to bring a new infrastructure into IT environments. (Weber also serves on the board of directors of the OpenFabrics Alliance, formerly OpenIB, which develops InfiniBand and transport-agnostic software stacks.)
Brett Goodwin, vice president of marketing and business development at Isilon Systems, which offers both InfiniBand and Ethernet connections for its clustered storage systems, concurs. “Ethernet has been around for a long time and people are familiar with it, so InfiniBand has to build awareness,” he says. “The most powerful way to get the word out is with the customers using InfiniBand.
“We’ve had oil and gas customers say that if they didn’t have a clustered storage system powered by InfiniBand, they couldn’t have processed the data fast enough to get contracts with the major oil companies,” Goodwin continues. “Media companies are able to create TV programming or feature films in half the time it used to take. Pratt & Whitney has consolidated its jet engine and commercial testing on [InfiniBand-based] clustered storage and reduced the testing time from six months to six weeks.”
One reason InfiniBand might not be taking off in the storage market is that the technology hasn’t received strong enough validation from large storage vendors.
“There are some vendors (Engenio, SGI, and Verari, for example) that are enthusiastic about InfiniBand, but mainly in the context of HPC,” says Jonathan Eunice, founder and principal IT advisor at the Illuminata research and consulting firm. “Most of these vendors have InfiniBand as an option in their product line, as well as Ethernet and Fibre Channel.”
Eunice also notes that storage switch vendors such as Brocade have concentrated on Fibre Channel and faster forms of Ethernet, although Cisco has made a number of moves into the InfiniBand space.
Isilon claims 95% of its customers prefer the InfiniBand option in the company’s IQ line of clustered storage systems.
Engenio has an InfiniBand product, the 6498 storage system, which it OEMs to vendors such as SGI. The 6498 has two controllers, each with two 10Gbps InfiniBand host-side interfaces. There are also four 4Gbps Fibre Channel connections for each controller that connect to JBOD disk arrays with up to 224 Fibre Channel or Serial ATA (SATA) disk drives.
DataDirect Networks also has an InfiniBand-based storage system, the S2A9500, that supports both Fibre Channel and InfiniBand and up to 840TB of capacity. DataDirect officials note that some of the company’s customers mix Fibre Channel and InfiniBand ports on the same system. They say some customers choose to use InfiniBand on the front end, reserving the option to use InfiniBand on the storage side at a later date by swapping Fibre Channel ports for InfiniBand ports.
QLogic has entered the InfiniBand market via acquisitions, buying PathScale in February and SilverStorm in September (see sidebar). QLogic currently sells the InfiniPath family of HCAs and ASICs, which were originally developed by PathScale. QLogic also offers Fibre Channel HBAs and iSCSI HBAs for Gigabit Ethernet-based SANs. At the Intel Developer Forum in September, QLogic demonstrated interoperability between its InfiniPath InfiniBand adapters and Ethernet networks running the OpenFabrics Enterprise Distribution (OFED) of Linux.
It’s not only vendors that are keeping their options open. The OpenFabrics Alliance has recently expanded to include Ethernet technologies. This alliance was founded as the OpenIB Alliance in 2004 to develop “transport-agnostic” open source software for remote direct memory access (RDMA) InfiniBand fabric technologies. RDMA (sometimes referred to as memory-to-memory transfer) allows data to bypass operating systems and travel directly from the memory of one computer to that of another.
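Real RDMA requires RDMA-capable adapters and a verbs library such as the OpenFabrics stack, so it can’t be demonstrated in portable code. As a loose same-host analogy only, the sketch below uses Python’s shared memory to show the core idea: one side registers a memory region, and the other side writes into it directly, with no send/receive copy in between.

```python
# Loose analogy only: real RDMA uses RDMA-capable NICs and a verbs library
# (e.g. the OpenFabrics stack); here two handles to one shared region stand
# in for the initiator and target sides of a memory-to-memory transfer.
from multiprocessing import shared_memory

# The "target" registers a memory region (roughly analogous to registering
# a buffer with the HCA so remote peers can address it).
region = shared_memory.SharedMemory(create=True, size=64)
try:
    # The "initiator" attaches to that region by name and writes into it
    # directly; there is no send()/recv() pair, the bytes appear in place.
    peer = shared_memory.SharedMemory(name=region.name)
    peer.buf[:5] = b"hello"
    peer.close()

    data = bytes(region.buf[:5])  # the target reads its own memory
    print(data)
finally:
    region.close()
    region.unlink()
```

The point of the analogy is what is absent: the receiving side never executes a receive call or a kernel copy, which is where RDMA’s latency savings come from.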
Engenio’s Weber says the OpenFabrics Alliance discovered that there is commonality with RDMA over Ethernet from the iWARP project, so it expanded its charter to include Ethernet standards efforts, acknowledging that the two technologies can be complementary. The organization hosted two demonstrations at September’s Intel Developer Forum conference that showcased InfiniBand and iWARP running concurrently in clusters.
“The common upper-level protocols can now talk down either through InfiniBand RDMA hardware or iWARP for Ethernet,” says Weber. “This way, common upper-level storage protocols, like SRP or iSER, can run on Ethernet for low-end or midrange systems, and on InfiniBand for the high-end of the spectrum.”
SRP (SCSI RDMA Protocol) is a protocol defined by the ANSI T10 committee that tunnels SCSI request packets over InfiniBand hardware for block-level storage. It allows one host driver to use storage target devices from a variety of hardware vendors.
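SRP does not define new storage commands; it carries ordinary SCSI command descriptor blocks (CDBs) over the InfiniBand fabric. As a sketch of the kind of request that gets tunneled, here is a standard SCSI READ(10) CDB built in Python; the SRP framing around the CDB and the InfiniBand transport itself are omitted:

```python
import struct

def read10_cdb(lba: int, blocks: int) -> bytes:
    """Build a SCSI READ(10) command descriptor block: opcode 0x28,
    a 4-byte big-endian logical block address, and a 2-byte transfer
    length in blocks. SRP tunnels CDBs like this over InfiniBand
    instead of a parallel SCSI bus or Fibre Channel link."""
    return struct.pack(">BBIBHB",
                       0x28,    # READ(10) opcode
                       0,       # flags
                       lba,     # logical block address (big-endian)
                       0,       # group number
                       blocks,  # transfer length in blocks (big-endian)
                       0)       # control byte

cdb = read10_cdb(lba=2048, blocks=8)
print(cdb.hex())  # 10-byte CDB, starting with the 0x28 opcode
```

Because the CDB format is unchanged, the host’s SCSI stack and the target’s firmware need no knowledge of the interconnect, which is what lets one SRP host driver talk to targets from different hardware vendors.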
iSER (iSCSI Extensions for RDMA), ratified by the InfiniBand Trade Association (IBTA) in September, is a new IETF standard extension to iSCSI that includes support for RDMA-enabled networks such as InfiniBand. Vendors such as FalconStor Software, LSI, SGI, and Voltaire are currently working on iSER-based InfiniBand products.
Greg Schulz, founder and senior analyst of the StorageIO Group, believes that InfiniBand, 10Gbps Ethernet, and Fibre Channel are running on parallel tracks. He also thinks that there’s more to the issue than speed.
“In terms of the interface, InfiniBand, 10Gig Ethernet, and Fibre Channel are just pipes, and they’re all getting faster and faster,” says Schulz. “So then it comes down to economics, which favors Ethernet, but that’s not to say it’s an automatic shoo-in. There’s an opportunity for InfiniBand to make inroads, particularly in HPC and other cluster environments.”
Schulz says that as soon as drivers and other software pieces fall into place, InfiniBand, 10Gbps Ethernet, and even higher-speed Fibre Channel will have more value than just being very fast wires. He says there’s hype about RDMA and low latency, but the reality is that those benefits won’t be realized until the software pieces are there.
“iSER and iWARP functionality have to be embedded in the operating system,” says Schulz. “As soon as regular TCP-type utilities and applications-whether NFS, CIFS, or HTTP-become more transparent to the applications so the operating system and the underlying drivers can utilize and support them, we’ll have more than just fast wires.”
InfiniBand was once hailed as the ultimate high-speed interconnect to unite servers and storage networks. It hasn’t gained a substantial foothold in the storage market yet. However, if actions by storage vendors and standards bodies continue to throw the spotlight on InfiniBand, it could eventually become a driving force in storage networking.
(For more information on InfiniBand’s history and relation to data storage, see “Is InfiniBand poised for a comeback?” InfoStor, February 2006, p. 1.)
QLogic buys SilverStorm for InfiniBand
In September, QLogic announced its second major InfiniBand play of the year as it entered into an agreement to purchase SilverStorm Technologies, which manufactures InfiniBand host channel adapters (HCAs). QLogic agreed to pay $60 million in cash for SilverStorm.
A privately held company, SilverStorm was founded in 2000 and is based in King of Prussia, PA. In May 2005, it changed its name from InfiniCon to SilverStorm to reflect its interest in technologies other than InfiniBand.
SilverStorm specialized in high-performance cluster computing interconnect solutions, including multi-protocol fabric directors and InfiniBand fabric edge switches and HCAs.
SilverStorm’s line of HCAs was designed for standard servers, blade servers, storage devices, and communications platforms, creating high-performance channels between host devices and InfiniBand fabrics. With its single-data rate (SDR) HCAs at 10Gbps and double-data rate (DDR) HCAs at 20Gbps, the company claims up to 20 times the bandwidth and 10% of the internal latency of traditional Gigabit Ethernet server interfaces.
The SilverStorm acquisition marks the second InfiniBand-related play for QLogic this year. In February, the company announced it would acquire PathScale for approximately $109 million (see “QLogic acquires PathScale for InfiniBand”). QLogic currently sells PathScale’s InfiniPath HTX InfiniBand HCAs.
Representative InfiniBand vendors
Host channel adapters (HCAs): Mellanox, QLogic
Storage: DataDirect Networks, Hitachi Data Systems, Isilon Systems, LSI Logic’s Engenio Storage Division, Network Appliance, Texas Memory Systems, Verari, Xiranet