NGIO, Future I/O camps bury the hatchet

Posted on September 01, 1999

Zachary Shess

Dave Simpson

After months of contention, the two groups proposing specifications for a successor to the PCI bus standard have agreed to develop a common specification and protocol. Last month, the Next Generation I/O (NGIO) and Future I/O coalitions announced they will form an industry group--called System I/O--that will produce the new spec by the end of the year. Products based on the System I/O standard are expected in 2001.

The System I/O group was formed by seven vendors (Compaq, Dell, Hewlett-Packard, IBM, Intel, Microsoft, and Sun), with IBM and Intel co-chairing the steering committee. Participation in the group is open to all vendors, and the first industry forum is scheduled for next month.

The System I/O specification, like the NGIO and Future I/O proposals, will provide a low-latency, high-bandwidth interconnect based on a switched fabric architecture, as opposed to a bus architecture employed by the PCI standard. The new spec will incorporate elements from both the NGIO and Future I/O proposals, and individual vendors will be able to license their intellectual property.

Performance is expected to vary, depending on implementation. However, Martin Whittaker, research and development manager at Hewlett-Packard's Enterprise Systems and Software Group, says the interconnect will be available in 1-wire, 4-wire, and 12-wire versions, with aggregate bandwidth ranging from 500MBps (one wire) to 6GBps (12 wires) at a 2.5Gbps per-wire signaling rate.
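Those figures add up if one assumes the 8b/10b line encoding typical of serial links in this class (an assumption on our part; the article does not specify the encoding) and counts both directions of a full-duplex link as "aggregate." A minimal sketch of the arithmetic in Python:

```python
# Back-of-the-envelope check of the quoted System I/O bandwidth figures.
# Assumptions (not stated in the article): 8b/10b line encoding, so only
# 80% of the 2.5Gbps signaling rate carries data, and "aggregate" counts
# both directions of a full-duplex link.

SIGNAL_RATE_GBPS = 2.5      # per-wire signaling rate quoted by HP
ENCODING_EFFICIENCY = 0.8   # assumed 8b/10b encoding overhead
DIRECTIONS = 2              # full duplex: both directions counted

def aggregate_mbps(wires: int) -> float:
    """Aggregate data bandwidth in megabytes per second."""
    data_gbps = wires * SIGNAL_RATE_GBPS * ENCODING_EFFICIENCY * DIRECTIONS
    return data_gbps * 1000 / 8  # gigabits -> megabytes

for wires in (1, 4, 12):
    print(f"{wires:2d}-wire link: {aggregate_mbps(wires):6.0f} MBps")
# Prints 500 MBps (1 wire), 2000 MBps (4 wires), and 6000 MBps (12 wires),
# matching the 500MBps-to-6GBps range quoted above.
```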

Although both the System I/O and Fibre Channel specifications are based on a switched fabric architecture, they are not expected to compete. "I think you'll see SIO-to-Fibre Channel and SIO-to-SCSI bridges just as you have PCI-to-Fibre Channel and PCI-to-SCSI bridges today," says Whittaker.

Sometimes referred to as a "system area network," the switched fabric architecture eliminates the single processor-to-I/O-controller path, using instead a fabric that provides direct data paths--between the server processor and storage devices, for example. Instead of being at the mercy of possible I/O bus arbitration and throughput bottlenecks, the more scalable fabric allows additional server-to-device data paths to be added as server I/O card slots fill up. Result: Bandwidth increases in proportion to the number of connected devices.
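To make the scaling contrast concrete, here is an illustrative comparison (the per-link numbers are examples we chose, not specification values): a shared bus divides one fixed pool of bandwidth among all attached devices, while a fabric adds a dedicated link per device.

```python
# Illustrative shared-bus vs. switched-fabric scaling; example numbers only.

BUS_MBPS = 533          # e.g., a 64-bit/66MHz PCI bus shared by all devices
FABRIC_LINK_MBPS = 500  # one dedicated 1-wire link per attached device

for devices in (1, 2, 4, 8):
    per_device_bus = BUS_MBPS / devices        # the bus is divided up
    fabric_total = FABRIC_LINK_MBPS * devices  # fabric paths add up
    print(f"{devices} devices: bus {per_device_bus:6.1f} MBps each, "
          f"fabric {fabric_total:5d} MBps aggregate")
```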

Since processor speeds are expected to double every year for at least the next decade, proponents of the System I/O spec say the architecture will alleviate bottlenecks between the host and I/O controllers and provide system configuration flexibility. With a shared-bus architecture such as PCI, the bridge acts as an arbiter between the memory complex and I/O bus, with I/O controllers battling to get information onto the I/O bus. With the System I/O approach, host channel adapters connect the I/O controllers to the host bus, and act as I/O schedulers, DMA engines, and access gatekeepers.

The streamlined communications approach also eliminates internal server bottlenecks caused by processor interrupts by offloading tasks from the processor. These improvements will become critical as next-generation chips come to market.
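As a conceptual sketch only (the System I/O specification had not been published at press time, so every name and interface below is hypothetical), a host channel adapter in this model might own per-controller work queues, schedule among them, and move data by DMA rather than interrupting the host processor for each transfer:

```python
# Toy model of a host channel adapter (HCA); all names are hypothetical.
from collections import deque

class HostChannelAdapter:
    """Schedules I/O work, performs DMA, and gates access per controller."""

    def __init__(self, allowed_controllers):
        self.allowed = set(allowed_controllers)  # access gatekeeper
        self.queues = {c: deque() for c in allowed_controllers}

    def post_request(self, controller, buffer, length):
        """Controllers post work here instead of arbitrating for a bus."""
        if controller not in self.allowed:       # gatekeeping: reject
            raise PermissionError(f"{controller} is not authorized")
        self.queues[controller].append((buffer, length))

    def service(self):
        """One round-robin scheduling pass; the 'DMA' is just a print."""
        for controller, queue in self.queues.items():
            if queue:
                buffer, length = queue.popleft()
                # A real adapter would move the data by DMA without a
                # per-transfer processor interrupt.
                print(f"DMA {length} bytes for {controller} from {buffer}")

hca = HostChannelAdapter(["disk0", "nic0"])
hca.post_request("disk0", "buf@0x1000", 4096)
hca.service()  # -> DMA 4096 bytes for disk0 from buf@0x1000
```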
