SCSI and Fibre Channel: The Coexistence Strategy

Posted on December 01, 1997



Consider Fibre Channel as complementary to, not a replacement for, SCSI.

Roger Cummings

Distributed Processing Technology

Most people would agree that this is a time of significant change in the peripheral interface industry. After many years of promise and development, the combination of serial interfaces and cost-effective fiber-optic transceivers is finding its way into mainstream products, making significant new system and subsystem architectures possible.

Fibre Channel is a standard interface that incorporates this technology. Using a serial interface that supports both electrical and fiber-optic transceivers, Fibre Channel offers vastly improved scalability, connectivity, cable distance, and performance over preceding interfaces (see "Fibre Channel Defined" below).

The serial interface architecture and new transceiver technologies result in a markedly different interface. In fact, with Fibre Channel, the entire philosophy behind peripheral interfaces has changed. Fibre Channel is arguably the first peripheral interface specifically designed to coexist with its predecessor--SCSI--rather than replace it.

The coexistence approach is not new. It has been required for many years in a number of fields, most notably local-area networks (LANs). The largest installed base of LANs is Ethernet, and any succeeding standard that could not coexist with that installed base would clearly face substantial adoption difficulties. It is therefore no accident that Fast Ethernet, and now Gigabit Ethernet, promise coexistence with the installed base as well as an incremental upgrade path over time, driven by application and user requirements. Because the LAN architecture is layered, changes and upgrades to one "layer" have minimal effect on the others. This flexibility is a major advantage of a layered architecture with well-defined functional separation.

The power behind the coexistence approach in LANs is rooted in the need to preserve and extend the massive investment in deployed systems. This investment is not only in LAN interfaces and their protocol stacks, but also in infrastructure (i.e., wiring, routers, and switches), test equipment, and management and control facilities. Beyond the actual products, there is also significant intellectual investment by technicians, designers, and architects.

Does it make sense to define a coexistence strategy for peripheral interfaces? After all, there are significant differences between the two situations. Peripheral interfaces do not really have an infrastructure, i.e., no cabling in building walls, wiring closets, routers, or switches. Furthermore, peripherals are often mounted inside other equipment and therefore are not viewed separately. Peripheral interfaces are also much less visible to users than are LANs, and they have been made less visible by recent initiatives such as Plug and Play. As far as most users are concerned, peripherals are managed entirely by the operating system. The drivers are loaded automatically and the icons magically appear.

The reason a coexistence strategy makes sense for peripherals in general and for SCSI in particular is rooted less in the preservation of an infrastructure than it is in the preservation of intellectual investment. Millions of dollars have been spent educating people in a number of fields: the hardware designers who create SCSI interfaces, the driver writers who produce the software that uses SCSI command sets and protocols, the disk-array software architects who maximize the use of SCSI features, the maintenance technicians who analyze and debug bus structures, etc. The SCSI command sets are comprehensive and flexible, but there is a significant learning curve to their use. Repeating that learning curve in order to adopt a completely new interface is unacceptable in these times of lower margins for many in the computer industry.

Coexistence works in the LAN situation because the layered architecture supports the separate enhancement of pieces of the architecture on an incremental basis. It is often said that peripheral interfaces do not incorporate that sort of architecture, but that is not strictly true. SCSI, for instance, incorporates a somewhat limited form of layered architecture. The SCSI command sets exhibit sufficient functional separation to allow them to be used in other situations.

Since Fibre Channel transports the SCSI command sets unaltered, an architecture can be created in which the method used to transport a SCSI command to a peripheral is hidden from the operating system and the user. This preserves both the investment in driver software and the considerable knowledge base related to the SCSI command sets and their many vendor-unique extensions.
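The layering described above can be sketched in modern terms: the driver builds a standard SCSI command descriptor block (CDB) and hands it to an interchangeable transport, so the upper layers never see which interconnect is in use. The class and method names below are illustrative only, not taken from any real driver stack; the 6-byte INQUIRY CDB layout is standard SCSI.

```python
# Minimal sketch of transport-independent SCSI command delivery.
# Names here are hypothetical; only the INQUIRY CDB format is standard.

class ScsiTransport:
    """Abstract transport: carries a CDB to a target, returns SCSI status."""
    def send(self, target_id: int, cdb: bytes) -> int:
        raise NotImplementedError

class ParallelScsiBus(ScsiTransport):
    def send(self, target_id: int, cdb: bytes) -> int:
        # A real driver would arbitrate for the bus and clock the CDB out
        # over the parallel cable. Stubbed to return GOOD status (0x00).
        return 0x00

class FibreChannelFcp(ScsiTransport):
    def send(self, target_id: int, cdb: bytes) -> int:
        # A real driver would wrap the CDB in an FCP command frame and
        # ship it serially over the link. Stubbed to return GOOD status.
        return 0x00

def inquiry(transport: ScsiTransport, target_id: int) -> int:
    # Standard 6-byte INQUIRY CDB: opcode 0x12, 36-byte allocation length.
    cdb = bytes([0x12, 0x00, 0x00, 0x00, 0x24, 0x00])
    return transport.send(target_id, cdb)

# The same driver-level code runs unchanged over either interconnect:
for bus in (ParallelScsiBus(), FibreChannelFcp()):
    assert inquiry(bus, target_id=3) == 0x00
```

Because the transport is selected behind a common interface, swapping parallel SCSI for Fibre Channel touches only the bottom layer, which is exactly the investment-preserving property the article describes.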

The power of the SCSI and Fibre Channel coexistence strategy is the way it directly addresses the needs of the marketplace. While the limited connectivity and operating distance of parallel SCSI were appropriate for connecting a small number of peripherals to PCs, those parameters have increasingly become a limitation as more and more peripherals must be connected to servers with few PCI slots in which to house the interfaces.

Fibre Channel can connect hundreds or thousands of peripherals, over kilometer distances, with an interconnection scheme that scales in performance as well as in connectivity. For the first time, it is possible to build mainframe-sized peripheral configurations that are connected to open-architecture servers using industry-standard interfaces. With Fibre Channel, a high-performance connection to a large disk farm can require only a single PCI slot. In addition, the serial nature of Fibre Channel supports a small connector footprint, which allows Fibre Channel disks to be dual-ported without exceeding standard form factors. Thus, configurations with mainframe-class fault tolerance can be built from commodity peripherals.

The significant capabilities enabled by the coexistence strategy are illustrated by the system configuration shown in the diagram on p. 34. This configuration shows a pair of servers connected via a central connection facility to a set of disk cabinets. Note that there are two separate connections to each server and to each disk cabinet. Each connection can use a Fibre Channel electrical definition for distances of up to 30 meters, a low-cost optical definition for distances of up to 500 meters, or a higher-cost optical definition for distances of up to 10 kilometers.

One of the disk cabinets contains disks with native dual-Fibre Channel interfaces, which are interconnected directly in a Fibre Channel-Arbitrated Loop (FC-AL) configuration. Another disk cabinet contains single-ported parallel SCSI disks and a Fibre Channel-SCSI bridge (a simple device that receives SCSI commands over Fibre Channel and transmits them over parallel SCSI but does not interpret the SCSI commands).

The central connection facility can be a low-cost hub, in which case the entire configuration is interconnected as a pair of arbitrated loops that support a maximum of 126 devices and a maximum system bandwidth of 100MBps per loop. Or, the central connection facility can be a more complex switch, in which case the number of devices is limited only by the 24-bit address space and the maximum system bandwidth is many times 100MBps because of the parallelism supported by the switch. In either case, the configuration has a significant degree of fault tolerance.
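The hub-versus-switch trade-off above can be put in rough numbers. The 126-device loop limit, the 24-bit address space, and the 100MBps link rate come from the article; the traffic model itself (one full-rate conversation per switch port pair) is a simplification for illustration.

```python
# Back-of-the-envelope comparison of the two interconnect choices.
# Constants are from the article; the traffic model is an assumption.

LINK_MBPS = 100               # full-speed Fibre Channel link rate
LOOP_MAX_DEVICES = 126        # FC-AL arbitrated-loop device limit
SWITCH_MAX_DEVICES = 2 ** 24  # bounded only by the 24-bit address space

def loop_bandwidth(devices: int) -> int:
    """All devices on one arbitrated loop share a single 100MBps medium."""
    if devices > LOOP_MAX_DEVICES:
        raise ValueError("an arbitrated loop supports at most 126 devices")
    return LINK_MBPS

def switch_bandwidth(devices: int) -> int:
    """A non-blocking switch can carry one full-rate conversation per
    port pair, so aggregate bandwidth grows with the device count."""
    return (devices // 2) * LINK_MBPS

print(loop_bandwidth(30))    # 100  -> the loop medium is shared
print(switch_bandwidth(30))  # 1500 -> 15 concurrent 100MBps pairs
```

The sketch shows why the article calls switch bandwidth "many times 100MBps": it scales with the number of simultaneously active port pairs rather than being fixed per loop.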

From LANs to SANs

In this configuration, the coexistence strategy has come full circle: a peripheral system configuration that also exhibits many of the characteristics of a LAN. The connectivity, operating distance, and flexibility of Fibre Channel, along with its ability to use active interconnects, have given rise to a new term: storage-area network (SAN). (For more information about SANs, look for the Cover Report in the February issue of InfoStor.) This term reflects a significant new approach to system architecture, namely the separation of storage and processing elements. This new approach is key to creating cost-effective, fault-tolerant, and highly available server configurations for today's applications.

In the future, there is an even more intriguing possibility of coexistence: Transceiver definitions and coding schemes used for Gigabit Ethernet have evolved from the Fibre Channel specifications and it is already possible to buy transceivers and other low-level components that support both schemes. This foreshadows a single infrastructure that will serve the needs of a SAN or a LAN, as required by specific applications.


A fault-tolerant Fibre Channel configuration is based on two separate connections to each server and to each disk cabinet.

SCSI: The Non-Standard Standard

With the current preponderance of PC-based servers and industry-standard interfaces, it is easy to forget that a mere 10 or 15 years ago the computer industry was dominated by vendors who created entire systems based on proprietary peripheral interfaces. These companies were keenly aware of the importance of such interfaces--and especially the interface to storage subsystems--to overall system performance.

Despite the demise of many of those companies and the move to industry-standard interfaces, the philosophy has not entirely disappeared. Many major players today are also aware of the importance of the peripheral interface and have developed their own unique extensions to give them substantial added value in their systems.

Fortunately, the SCSI command sets allow the incorporation of vendor-unique functions, and even commands. It is not unusual, therefore, for suppliers of "commodity" SCSI peripherals to have separate firmware loads for each of their major OEM customers (and sometimes even separate loads for the OEMs' different product lines). The coexistence strategy for SCSI and Fibre Channel has the major advantage of carrying forward not just the standard SCSI command sets, but also all vendor-unique extensions.

Fibre Channel Defined

Fibre Channel is both an interface definition and an interconnect definition. The interface definition currently supports data rates from 12.5MBps to 400MBps, but the majority of current products operate at 100MBps. The definition includes multimode fiber, single-mode fiber, and multiple types of coaxial cable to provide a spectrum of cost/performance levels.

Fibre Channel has pioneered the use of shortwave laser technology, which, when used with multimode fiber, provides a low-cost optical solution for distances up to 1 kilometer. A de facto definition for a mezzanine card using a 10-bit or 20-bit-wide parallel interface allows for easy interchange of transceiver types in the field. Because all of the "physical variant" definitions use the same coding scheme and framing protocol, a single interface design can serve all of them.

The Fibre Channel interface definition provides a generic transport mechanism that is suitable for use by many protocols and types of protocols. To date, mappings have been created for protocols such as SCSI, Internet Protocol (IP), Block Mux, Audio-Video, and the Simple Network Management Protocol (SNMP).

The interconnect definition is described in the main Fibre Channel standard by a functional model only, giving considerable flexibility in tailoring interconnect designs to specific applications. Currently, three types of Fibre Channel interconnects exist: simple point-to-point connections between two devices, arbitrated-loop connections for up to 126 devices, and switched connections that can link a large number of devices with high aggregate bandwidth. It is also possible to mix switches and arbitrated loops in configurations for even greater flexibility (see diagram). Fibre Channel also includes multiple quality-of-service definitions, including circuit-switched, frame-switched, datagram, and virtual-circuit classes.
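A rough capacity model shows why mixing switches and arbitrated loops is attractive: each switch port can attach either a single device point-to-point or an entire loop. Only the 126-device loop limit below comes from the standard as described above; the port counts are illustrative assumptions.

```python
# Capacity sketch for a mixed switch-plus-loop configuration.
# The 126-device loop limit is from the article; port counts are examples.

LOOP_MAX = 126  # devices per FC-AL arbitrated loop

def fabric_capacity(direct_ports: int, loop_ports: int,
                    devices_per_loop: int = LOOP_MAX) -> int:
    """Total devices reachable when some switch ports attach a single
    device and others fan out to a full arbitrated loop each."""
    if devices_per_loop > LOOP_MAX:
        raise ValueError("each arbitrated loop tops out at 126 devices")
    return direct_ports + loop_ports * devices_per_loop

# A hypothetical 16-port switch with half its ports feeding full loops:
print(fabric_capacity(direct_ports=8, loop_ports=8))  # 8 + 8*126 = 1016
```

Even a small switch therefore reaches configurations far beyond what a single loop or parallel SCSI bus can address, which is the flexibility the diagram is meant to convey.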


For maximum flexibility, Fibre Channel farms can be configured with a mix of switches and hub-based arbitrated loops.

Roger Cummings is a senior interface architect at Distributed Processing Technology (DPT) in Maitland, FL. He is chair of Technical Committee T11 of the National Committee for Information Technology Standardization (which produces Fibre Channel and HIPPI standards) and participates in Technical Committee T10 (which produces SCSI standards) and the PCI and I2O Special Interest Groups.

