By Kevin Komiega
The industry has been extolling the benefits of InfiniBand as a next-generation data-center interconnect for years, but the technology has experienced fits and starts due to management complexity and lack of user interest. Cisco believes that is all about to change.
The networking giant has made good on its promise to meld the low-latency, high-performance capabilities of InfiniBand with the simple management of Ethernet with the launch of new management tools, support for open standards, and a new line of InfiniBand switches aimed at building a single, protocol-agnostic, unified compute fabric.
The new platform pairs CiscoWorks LMS, Resource Manager Essentials, and Dynamic Fault Manager with support for Cisco SFS 7000 Series InfiniBand switches and the new Cisco SFS 7000D Series InfiniBand DDR (Double Data Rate) switches, along with support for standards such as OpenFabrics and Open MPI.
Cisco officials say the company has successfully integrated Ethernet and InfiniBand under a common framework for network management, application protocols, and application programming interfaces (APIs). That integration, they claim, simplifies configuration, management, and troubleshooting for enterprises deploying compute clusters and finally brings InfiniBand into the mainstream.
“Our biggest initiative over the last year has been to integrate Topspin’s technology to make InfiniBand a mainstream interconnect,” says Krish Ramakrishnan, vice president and general manager of Cisco’s Server Networking and Virtualization Business Unit, which was created when Cisco acquired InfiniBand start-up Topspin for $250 million in April 2005.
The sticking point for many customers when it comes to adopting InfiniBand has always been complexity. Now, Ramakrishnan says, complexity is no longer an issue.
“Users can now manage Ethernet and InfiniBand together as a single holistic fabric,” says Ramakrishnan. “We are bringing all of the Ethernet management capabilities like fault isolation and configuration management to InfiniBand. This is a major boost for the technology and proves that it can be a compute backbone for the industry.”
InfiniBand is a high-performance, switched-fabric interconnect standard for servers, most often found in server clusters that require high bandwidth and low latency. Beyond clustering, InfiniBand can also unify the compute, communications, and storage fabrics in the data center.
According to the InfiniBand Trade Association (www.infinibandta.org), InfiniBand also serves as a high-performance interconnect between both block- and file-based storage systems and server clusters. The ultimate goal is to deliver higher performance with lower overall total cost of ownership by using a single network for both clustering and storage connectivity.
The InfiniBand Trade Association is led by a steering committee comprising Cisco, IBM, Intel, Mellanox, Sun, and Voltaire.
Some industry experts say that InfiniBand is still an immature technology, but that Cisco’s new offerings should stimulate more interest in the technology.
“I’m not sure that InfiniBand’s on the radar screen of storage professionals, but it’s important for storage people to understand what Cisco has done to ease the ability to manage InfiniBand,” says William Hurley, a senior analyst with the Data Mobility Group research and consulting firm.
“The reality is that iSCSI over Ethernet will now begin to put a lot of pressure on Fibre Channel in the minds of customers because it opens the door for people to have a second and third consideration about what they want to do with Ethernet in their fabrics,” says Hurley.
Hurley says the advent of Ethernet-to-InfiniBand conversion via Cisco’s new servers and software will allow users to take large groups of servers not commonly attached to Fibre Channel SANs, such as Web servers and application servers, remove their hard drives, and use a high-performance InfiniBand backbone to connect the server pools to low-cost Serial ATA (SATA) arrays using iSCSI targets.
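As a rough sketch of the diskless-server scenario Hurley describes, a Linux host on the fabric might attach to an iSCSI target fronting a low-cost SATA array using the standard open-iscsi tools; the portal address and target name below are hypothetical placeholders, not details from the article:

```shell
# Discover iSCSI targets exported by the gateway in front of the SATA array
# (192.168.10.20 is a placeholder for the gateway's portal address)
iscsiadm -m discovery -t sendtargets -p 192.168.10.20

# Log in to one of the discovered targets (the IQN here is hypothetical)
iscsiadm -m node -T iqn.2006-01.com.example:sata-pool0 -p 192.168.10.20 --login

# The new block device (e.g., /dev/sdb) then behaves like a local disk,
# so the diskless server can partition, format, and mount it over the fabric.
```

In Hurley's scenario, the Ethernet-to-InfiniBand conversion happens in the fabric itself, so the server pool sees ordinary iSCSI block storage while riding the higher-performance InfiniBand backbone.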
Cisco’s family of SFS 7000D Series InfiniBand DDR switches scales from 24 ports to 288 ports and adds Double Data Rate (DDR) signaling, which doubles link bandwidth from 10Gbps to 20Gbps and lowers latency to support resource-intensive applications.
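The arithmetic behind the DDR jump is simple: a standard 4X InfiniBand port aggregates four lanes, and DDR doubles each lane's signaling rate. The quick sketch below also applies the 8b/10b line-encoding overhead that is standard for SDR/DDR InfiniBand (the encoding figure is general InfiniBand background, not a detail from the article):

```python
# InfiniBand 4X port: four lanes aggregated per link
LANES = 4
SDR_LANE_GBPS = 2.5   # single data rate signaling per lane
DDR_LANE_GBPS = 5.0   # DDR doubles the per-lane signaling rate

sdr_raw = LANES * SDR_LANE_GBPS   # 10 Gbps raw, the SDR figure cited
ddr_raw = LANES * DDR_LANE_GBPS   # 20 Gbps raw, the DDR figure cited

# 8b/10b encoding: only 8 of every 10 bits on the wire carry data
sdr_data = sdr_raw * 8 / 10       # 8 Gbps usable payload bandwidth
ddr_data = ddr_raw * 8 / 10       # 16 Gbps usable payload bandwidth

print(f"SDR: {sdr_raw} Gbps raw, {sdr_data} Gbps data")
print(f"DDR: {ddr_raw} Gbps raw, {ddr_data} Gbps data")
```

So while the headline numbers are 10Gbps and 20Gbps, the usable data rates after encoding overhead are 8Gbps and 16Gbps respectively.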
For more information, see “Is InfiniBand poised for a comeback?” InfoStor, February 2006, p. 1.