Sorting out SAN and disk drive interfaces

By Tom Heil

On the SAN front, it's Fibre Channel vs. iSCSI and InfiniBand. For disk subsystems, ATA, SCSI, and Fibre Channel face serial versions of ATA and SCSI.

Storage area networks (SANs) are networks that allow multiple servers to access a pool of shared storage. Today, the word "SAN" is practically synonymous with block-based SCSI I/O over Fibre Channel.

Several "SCSI-over-IP" protocols (e.g., FCIP and iSCSI) have emerged to support block-based I/O over TCP/IP and Ethernet (or other IP transports like WANs). How and where SCSI-over-IP will emerge is still uncertain. There is little doubt, however, that these technologies will be used to link remote SANs for purposes like remote mirroring and disaster tolerance, since Fibre Channel is not expected to become a long-haul interface. In this role, SCSI-over-IP technologies complement Fibre Channel by mitigating its distance limitation.
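The core idea behind these protocols is simple: wrap a SCSI command block in a small header so it can travel over a TCP byte stream. The sketch below illustrates that encapsulation with a deliberately simplified, hypothetical frame layout; it is not the actual iSCSI PDU format defined by the standard.

```python
import struct

# Hypothetical, simplified framing to show the idea of carrying a SCSI
# command over TCP. This is NOT the real iSCSI PDU layout -- iSCSI defines
# a 48-byte basic header segment plus optional segments and digests.

def encapsulate(cdb: bytes, lun: int, tag: int) -> bytes:
    """Prefix a SCSI CDB with a minimal header for transport over TCP."""
    # header: 2-byte LUN, 4-byte task tag, 2-byte CDB length (big-endian)
    header = struct.pack(">HIH", lun, tag, len(cdb))
    return header + cdb

def decapsulate(frame: bytes):
    """Split a received frame back into (lun, tag, cdb)."""
    lun, tag, cdb_len = struct.unpack(">HIH", frame[:8])
    return lun, tag, frame[8:8 + cdb_len]

# A SCSI READ(10) CDB: opcode 0x28, 4-byte LBA 0, 2-byte transfer length 8
read10 = struct.pack(">BBIBHB", 0x28, 0, 0, 0, 8, 0)
frame = encapsulate(read10, lun=0, tag=1)
lun, tag, cdb = decapsulate(frame)
assert cdb == read10 and cdb[0] == 0x28
```

Because the target sees an ordinary SCSI CDB after decapsulation, existing SCSI software stacks on both ends remain unchanged; only the transport differs.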

Whether iSCSI will emerge as a Fibre Channel alternative in core (same-building) SANs is less clear. Many believe that rather than take on Fibre Channel in the enterprise, iSCSI will emerge first in small and medium-sized businesses that so far have shunned Fibre Channel due to its perceived cost and complexity.

However, it remains to be seen whether iSCSI SANs will really be much cheaper and/or simpler than Fibre Channel SANs. For example, since iSCSI adapters depend on TCP/IP offload engines (TOEs), their cost may be closer to Fibre Channel host bus adapters (HBAs) than to Ethernet network interface cards (NICs). Also in this space, NAS may prove resilient. NAS has a low cost of acquisition and ownership and will get an automatic performance boost once TOEs are deployed.

At 1Gbps and faster speeds, the processor bandwidth consumed by TCP/IP becomes increasingly intolerable. To overcome this limitation, a number of vendors are developing TOE cards that move protocol processing into the interface adapter (on embedded processors or ASICs). It is not yet clear how TOE complexity and cost will impact Ethernet economics, which has traditionally relied on simple, inexpensive NICs. This, then, is a critical juncture in Ethernet's evolution, as its fate in high-performance applications seems tied to the success of TOEs.
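The scale of the problem can be put in rough numbers using the era's common rule of thumb that sustaining TCP/IP at n bits per second consumes roughly n hertz of CPU. The figures below are illustrative assumptions, not measurements:

```python
# Rough illustration of the "1 Hz of CPU per 1 bps of TCP/IP" rule of thumb.
# The rule is a coarse approximation; actual overhead varies widely with
# implementation, frame size, and interrupt handling.

def cpu_fraction(link_bps: float, cpu_hz: float) -> float:
    """Fraction of one CPU consumed by TCP/IP processing at full line rate."""
    return link_bps / cpu_hz

cpu = 2e9  # a 2 GHz processor (illustrative)
for link in (100e6, 1e9, 10e9):
    print(f"{link / 1e9:g} Gbps link: {cpu_fraction(link, cpu):.0%} of CPU")
```

By this estimate, a single 1Gbps link can consume half of a 2GHz processor, and a 10Gbps link exceeds what the host CPU can supply at all, which is precisely the gap TOEs aim to close by moving that work onto the adapter.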

Longer term, InfiniBand may emerge as a SAN technology, although it may have to prove itself first in inter-processor communications (IPC). By then, there will be a tremendous installed base of Fibre Channel—and perhaps iSCSI—SAN storage pools, so Target Channel Adapter (TCA) bridge/router products will be required.

InfiniBand's TCA strategy represents a significant new I/O model. The closest thing to it today is the I/O channel model in IBM mainframes and AS/400s. Like local I/O, TCAs conform to connector and form-factor standards that enable third parties to produce adapters. However, the TCA interface is message-based, and TCAs can be shared by multiple servers. In some environments—especially data centers—the remote TCA model may supplant, at least partially, the local I/O model.

Even if InfiniBand is widely deployed in IPC configurations, there are still significant technical challenges and operating-system dependencies that must be resolved to support an intelligent, shared I/O model. The emerging trend toward blades (high-density board-based servers) may benefit InfiniBand-based I/O, because InfiniBand shows significant promise as a blade backplane technology.

Many believe high-performance IPC will be InfiniBand's first market foothold, since InfiniBand natively implements application-to-application messaging. Even here, InfiniBand faces challengers. Ethernet may reach further into this space if and when TOE cards are adopted. Also, the Virtual Interface (VI) architecture is being mapped to both Ethernet and Fibre Channel. Beyond this, InfiniBand's long-term success depends on moving beyond IPC into higher volume arenas like SANs and general I/O, where it will face stiff competition from entrenched incumbents such as Fibre Channel. As mentioned, server blades may be the "wild card" that helps InfiniBand over the hump, but it is too early to tell.

In atypical fashion, the first major IPC standard—VI—focused on software rather than hardware. VI proponents recognized that the performance liability of traditional networks was not so much in low-level wire or link protocols but, rather, in upper protocols like TCP/IP, and even in the fundamental way I/O works. Typically, applications invoke the operating system to conduct I/O operations. In contrast, VI bypasses the operating system, enabling direct application-to-application messaging. VI has been mapped to several wire protocols, including Fibre Channel and Ethernet. Regardless of how IPC wire protocols emerge, for performance cluster applications like parallel databases, application-to-application messaging is here to stay.

As previously stated, SANs today are exclusively block-based. Moving forward, however, SANs will increasingly support both blocks and files—a trend that may favor iSCSI adoption. A longer-term "wild card" technology—Direct Access File System (DAFS)—may likewise favor InfiniBand (or any VI transport) adoption in SANs. DAFS exploits VI's software efficiency by giving applications direct access to shared files without operating-system intervention. Although applications have to be rewritten, which is always a major adoption hurdle, DAFS proponents believe the potentially major shared-file performance gains will justify the effort.

Subsystem/device interfaces
Three interfaces—ATA, SCSI, and Fibre Channel—dominate the disk-drive industry today. ATA is the de facto standard for PCs. ATA also seems poised to dominate consumer (e.g., Xbox) devices and is increasingly popular in entry-level servers and workstations. These systems define ATA's "sweet spot," where low cost is critical and scalability beyond a few drives (ATA's primary weakness) is not required.

SCSI's sweet spot is high-end workstations and mainstream servers that must be cost-effective at two or three drives, yet scale to many tens of drives. Finally, Fibre Channel—specifically FC-AL—is poised to dominate high-performance, highly available SAN-attached storage. This segment values—and is willing to pay for—Fibre Channel's scalability (hundreds of drives) and dual-port capability.

Two key trends are shaping the evolution of drive interfaces. The first is serialization, reflected in the Serial ATA and Serial-attached SCSI initiatives. Supporters of both interfaces hope to inherit the market space of their parallel predecessors. These initiatives will share as much physical-layer technology as possible but will differ significantly in protocol implementation. Serial ATA is extremely cost-effective and is optimized for ATA's one-to-two-drive sweet spot. Serial-attached SCSI is optimized for SCSI's sweet spot. Also, each of these new interfaces maintains backward compatibility with the dominant software architectures in its respective market. To software, Serial ATA drives look like ATA drives. Likewise, Serial-attached SCSI drives (and Fibre Channel drives) look like SCSI drives.

The second trend is a move from shared-bandwidth topologies, such as buses and loops, to point-to-point topologies. Both Serial ATA and Serial-attached SCSI are point-to-point connections. Fibre Channel drives can be connected point-to-point, but loops dominate. Point-to-point has significant technical merit, but scaling beyond a few drives requires a new class of "expander" chips to support large port counts. The cost structure of these devices will play a crucial role in determining whether point-to-point topologies can totally eliminate the need for the traditionally cheaper loop connections. Users of large configurations where loop performance is adequate (e.g., 50 or 60 drives on a Fibre Channel loop today) have little incentive to pay a premium for point-to-point.
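The bandwidth trade-off behind this debate is easy to quantify: on a loop, every drive shares one link's bandwidth, while point-to-point dedicates a link per drive. The figures below are rough, illustrative assumptions for hardware of this era, not measurements:

```python
# Illustrative comparison of shared-loop vs. point-to-point bandwidth.
# Rates are rough assumptions (2Gbps FC-AL ~ 200 MB/s after encoding;
# 1.5Gbps Serial ATA ~ 150 MB/s), not vendor specifications.

loop_rate_mbps = 200   # one FC-AL loop's bandwidth, shared by all drives
link_rate_mbps = 150   # one point-to-point link's bandwidth, per drive
drives = 50

# On a loop, all active drives divide the loop's bandwidth among themselves.
per_drive_loop = loop_rate_mbps / drives       # MB/s per drive at saturation

# Point-to-point gives each drive a dedicated link; the practical limit
# then shifts to the expander/controller uplink, not modeled here.
aggregate_ptp = link_rate_mbps * drives        # raw aggregate link capacity

print(f"Loop: {per_drive_loop:.0f} MB/s per drive with all {drives} active")
print(f"Point-to-point: {aggregate_ptp:.0f} MB/s of aggregate link capacity")
```

The arithmetic shows why point-to-point is compelling only when workloads actually demand concurrent bandwidth to many drives; where a loop's shared 200 MB/s suffices, the cheaper topology wins.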

Long-term challenge
So how will adoption unfold? The biggest long-term challenge Serial ATA faces is unseating ATA, which today ships in huge volumes to penny-pinching market segments (at low margins). This is a tough climate for a major overhaul. Drive vendors are being asked to invest significant R&D in a new drive and then price it at near margin-less ATA levels. PC chipset vendors will have to provide integrated Serial ATA channels at no extra charge, or system OEMs will have to eat the cost of a discrete adapter.

Although Serial ATA is expected to emerge this year, it will take several more years before its cost structure matures to the point where it can put a significant dent in ATA volume. Until then, it will probably be confined to high-end desktops and entry-level servers.

Another interesting adoption question is, "What will the eventual market mix be between Serial ATA and Serial-attached SCSI drives in servers and workstations?" (Serial-attached SCSI is not expected to play in the PC market.) Today, ATA has taken about 15% of this traditional SCSI stronghold. Projecting out three years, one could anticipate a similar dividing line between Serial ATA and Serial-attached SCSI. Both now and then, the battle is the same: the advanced capabilities of SCSI against the lower drive cost of ATA. Some believe ATA could penetrate further today if it weren't for its awkward physical layer. Since Serial ATA improves the physical layer, one might expect its market share to grow.

Future of Fibre Channel
So what's in store for Fibre Channel? At today's 1Gbps and 2Gbps rates, the same Fibre Channel definition meets both SAN and drive interface requirements. Moving forward, however, these requirements are driving Fibre Channel in profoundly different directions. SAN-class Fibre Channel is moving to 10Gbps and switched only (no arbitrated loop) topologies, consistent with SAN requirements. However, this path is overkill and too expensive at the drive level, which instead is looking at a simpler speed upgrade to 3Gbps or 4Gbps. Beyond that, it is likely that the drive-class Fibre Channel and Serial-attached SCSI road maps will converge.

Tom Heil is a senior systems architect in LSI Logic's storage standard products group (www.lsilogic.com) in Milpitas, CA.

This article was originally published on July 01, 2002