PCI Express and InfiniBand are complementary

Posted on July 01, 2002

By Robert Simcoe
InfiniSwitch

PCI Express will be used to connect support chips, while InfiniBand will be used primarily for connecting systems and storage across data centers.

Considerable confusion exists around the interplay of PCI Express (formerly 3GIO) and InfiniBand. Many people believe they might be competing interconnects designed to address the I/O logjam in systems used in data centers. Others think IT professionals will need to choose one or the other to speed data flow. This is simply not the case.

InfiniBand and PCI Express will be complementary components of future computer systems. PCI Express offers a mechanism to provide high bandwidth between the support chips in systems, and InfiniBand offers the mechanisms to connect multiple systems across a data center to allow the large-scale storage, processing, and shared I/O required by multi-processor and clustered systems.

A technology transition from single-ended parallel interconnects to high-speed serial differential interconnects is under way throughout the computer industry. The networking industry long ago embraced serial transmission technology as a way to communicate over long distances. Now, even over short distances between chips, serial transmission of data is a more scalable and easier-to-design way to meet increasing bandwidth requirements.

Primary goals
The primary goal of PCI Express is to provide a higher bandwidth, easier-to-implement interconnect between the support chipsets used in PC and server motherboards. The primary goal of InfiniBand is to standardize a "loss-less," low-latency network for cluster and system-area interconnects.

This is the vision of PCI Express: If the CPU, video, and inter-chip links can all be high-speed serial, then there is a clear path to increase the performance of the chipsets, reduce the number of specialized interconnects in systems, and reduce the number of pins on the support chips. This also minimizes the number of layers on the motherboard, which lowers cost. Together, this promises to improve overall system performance while keeping the cost of support chips and motherboards in line with that of the processor chip.

Another primary goal of PCI Express is compatibility with existing PCI software, which means that it must boot operating systems without any change. Architecturally, PCI Express maintains the simple flat memory space of the load-and-store machine that has been the heart of PC I/O implementations.
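To make the load-and-store point concrete, here is a minimal sketch of how PCI-style I/O looks to software: the device's registers are mapped into the processor's flat address space, and the driver reads and writes them with ordinary loads and stores. The register offsets and bit definitions are hypothetical, and the register window is simulated so the example runs without hardware; the point is only that no message-passing layer sits between the CPU and the device.

/* Minimal sketch of load-and-store I/O: the device's registers sit in the
 * processor's flat address space and are driven with ordinary loads and
 * stores. Register offsets and bit definitions are hypothetical, and the
 * register window in main() is simulated so the sketch runs anywhere. */
#include <stdint.h>

#define REG_CONTROL 0x00     /* hypothetical control register offset */
#define REG_STATUS  0x04     /* hypothetical status register offset  */
#define CTRL_START  0x1u     /* hypothetical "start transfer" bit    */
#define STAT_DONE   0x1u     /* hypothetical "transfer done" bit     */

/* 'regs' points at the device's register window, which a driver would get
 * from the operating system's mapping of the device's PCI address range. */
static void start_and_wait(volatile uint32_t *regs)
{
    regs[REG_CONTROL / 4] = CTRL_START;             /* store: start the device */
    while ((regs[REG_STATUS / 4] & STAT_DONE) == 0)
        ;                                           /* load: poll until done   */
}

int main(void)
{
    /* Simulated two-register window so the example runs without hardware;
     * the "device" reports itself done from the start. */
    volatile uint32_t fake_regs[2] = { 0, STAT_DONE };
    start_and_wait(fake_regs);
    return 0;
}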

PCI Express is really about interconnecting the chips that make up a PC or server system. Increasing performance while decreasing costs dictates a change in the way the electrical signals are handled between chips and on the motherboard. It is possible to standardize on a single electrical interface that is capable of serving the needs of each of the subsystems of a PC or server. The PCI Express specification is expected to be finalized in the second half of this year.
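As a rough illustration of how one serial electrical interface can be scaled to subsystems with different bandwidth needs, the sketch below works out per-direction link bandwidth for common lane widths. It assumes the commonly cited first-generation figures of 2.5 Gbps of signaling per lane with 8b/10b encoding; those numbers are assumptions for the example, not figures taken from this article.

/* Back-of-the-envelope PCI Express bandwidth per link direction,
 * assuming 2.5 Gbps signaling per lane and 8b/10b encoding
 * (8 payload bits carried in every 10 bits on the wire). */
#include <stdio.h>

int main(void)
{
    const double signal_gbps = 2.5;              /* raw bit rate per lane     */
    const double encoding    = 8.0 / 10.0;       /* 8b/10b payload efficiency */
    const int    widths[]    = { 1, 4, 8, 16 };  /* common link widths (xN)   */

    for (int i = 0; i < 4; i++) {
        double gbps   = signal_gbps * encoding * widths[i];
        double mbytes = gbps * 1000.0 / 8.0;     /* Gbps -> MB/s              */
        printf("x%-2d link: %4.1f Gbps (~%4.0f MB/s per direction)\n",
               widths[i], gbps, mbytes);
    }
    return 0;
}

Wider links simply aggregate lanes over the same electrical interface, which is how one signaling scheme can serve both a modest adapter connection and a high-bandwidth graphics attachment.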

Enter InfiniBand
The InfiniBand architecture draws largely from cluster interconnect technology. Data-center managers dealing with computer-intensive problems found that uniprocessor systems did not scale adequately. In response, vendors developed clusters of servers that work in concert to tackle many of these problems. This spawned a number of proprietary interconnects that were used to connect large numbers of processors together in low-latency, loss-less networks.

The key features of these networks were that they were high-speed, did not lose packets, and used host interfaces that differed significantly from lower-speed, general-purpose network interfaces such as Ethernet.

Various solutions have been deployed for clusters of up to hundreds of servers. (Storage area networks based on Fibre Channel have similar properties.) The fundamental concepts of large clusters have been proven, but no industry-standard interconnect was established. This is what motivated system vendors' support for the InfiniBand specification. InfiniBand merged the Future I/O and NGIO efforts into a single specification that was released in October 2000.

InfiniBand specifies not only the wire speed and protocol, but also the architecture of adapters and the operating system interface. Another goal for InfiniBand is to replace the PCI interconnect where it has been used to connect multi-processor systems in large rack configurations.

The technology defines an I/O model that allows adapters to be shared by multiple processors, rather than designating one processor to do the I/O for a cluster. In this use of InfiniBand, the interconnect is within a defined physical configuration (e.g., a rack), and the I/O for each processor can use InfiniBand to reach a shared adapter or multiple shared adapters for better system I/O scaling. This allows systems vendors to address one of the major I/O bottlenecks in multiprocessor systems.
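The contrast with the load-and-store sketch earlier can also be outlined in code. The InfiniBand architecture describes I/O in terms of work requests posted to send and receive queues on a channel adapter, with completions reported through a completion queue; the outline below follows that shape, but every name in it is a hypothetical placeholder rather than a real interface, and the adapter is stubbed so the example compiles and runs.

/* Illustrative outline of queue-based (channel) I/O in the InfiniBand
 * style: the host posts a work request describing a buffer to a send
 * queue, the channel adapter moves the data, and the host learns of
 * completion through a completion queue. All names are hypothetical
 * placeholders, and the "adapter" is a trivial stub so the example
 * runs without hardware. */
#include <stdint.h>
#include <stdio.h>

struct work_request {        /* describes one transfer                     */
    uint64_t local_addr;     /* buffer address registered with the adapter */
    uint32_t length;         /* bytes to send                              */
    uint32_t dest_qp;        /* destination queue pair number              */
};

struct completion {          /* reported when the adapter finishes         */
    int status;              /* 0 on success                               */
};

/* Stub "adapter": pretends the transfer finished immediately. */
static int post_send(const struct work_request *wr)
{
    printf("posted %u bytes to queue pair %u\n",
           (unsigned)wr->length, (unsigned)wr->dest_qp);
    return 0;
}

static int poll_cq(struct completion *out)
{
    out->status = 0;         /* one completion is always ready in the stub */
    return 1;                /* number of completions returned             */
}

/* Post one buffer and wait for the adapter to report completion. */
static int send_buffer(void *buf, uint32_t len, uint32_t qp)
{
    struct work_request wr = {
        .local_addr = (uint64_t)(uintptr_t)buf,
        .length     = len,
        .dest_qp    = qp,
    };
    if (post_send(&wr) != 0)
        return -1;

    struct completion c;
    while (poll_cq(&c) == 0)
        ;                    /* no completion yet: keep polling */
    return c.status;
}

int main(void)
{
    char msg[] = "hello over the fabric";
    return send_buffer(msg, sizeof msg, 7) == 0 ? 0 : 1;
}

The practical consequence of this model is that the work of moving data is handed to the adapter as messages rather than performed by the processor one load or store at a time, which is what lets multiple processors share the same adapter.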

The combination of PCI Express's higher-bandwidth interconnects among the support chips on PC and server motherboards and InfiniBand's ability to efficiently connect server and storage devices across the data center will mean far greater processing power and storage capability for high-performance, high-capacity environments.

Robert Simcoe is co-founder and vice president of technology at InfiniSwitch (www.infiniswitch.com) in Westborough, MA.

