In the fast-paced world of e-commerce and the Web, two facts seem to remain constant. First, the amount of data is always growing. Second, access to that data can never be interrupted.

These facts are forcing a shift in the way systems are viewed and designed. Fading are the days when the server is the heart of the IT environment and everything else is a peripheral. In the future, the data (i.e., storage) will be viewed as the heart of the IT environment. Key to this shift is storage area network (SAN) technology.

SANs allow block-level direct access to storage by multiple servers. This differs from network-attached storage (NAS), which operates at the file level. Decoupling servers from storage has many advantages. First, a SAN allows storage and servers to be scaled independently: additional storage or servers can be brought online without disturbing the rest of the systems in the environment or interrupting access to data. Second, because storage can be accessed by multiple servers, a SAN provides greater fault tolerance. If a server fails, access to its storage is not lost; another server can assume the role of the failed server. This ability is key to server clustering.
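To make the block-versus-file distinction concrete, here is a minimal Python sketch. The device path and NAS mount point are hypothetical stand-ins: a SAN volume typically appears to the host as a local disk device, while a NAS share is mounted through a network filesystem such as NFS or CIFS.

```python
import os

BLOCK_SIZE = 512  # a common disk sector size

def read_block(device_path, block_number):
    """Block-level access (SAN style): address raw disk blocks directly.
    The host, not the storage device, decides what the bytes mean."""
    fd = os.open(device_path, os.O_RDONLY)
    try:
        os.lseek(fd, block_number * BLOCK_SIZE, os.SEEK_SET)
        return os.read(fd, BLOCK_SIZE)
    finally:
        os.close(fd)

def read_file(nas_path):
    """File-level access (NAS style): name a file and let the NAS
    device's own filesystem translate the request into blocks."""
    with open(nas_path, "rb") as f:
        return f.read()

# Hypothetical paths; reading a raw device also requires privileges.
# block = read_block("/dev/sdb", 0)
# data  = read_file("/mnt/nas/report.txt")
```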

Fibre Channel is currently the only option for building SAN fabrics, but in the future it may be possible to implement SANs using Ethernet and/or InfiniBand.

Fibre Channel

Fibre Channel is a set of standards developed by the American National Standards Institute (ANSI). Development began in 1988 as an extension of the work being done on the Intelligent Peripheral Interface (IPI) Enhanced Physical standard. Fibre Channel is a high-performance, full-duplex interface that supports multiple topologies, physical interconnects, and protocols. Devices running at 1Gbps are currently shipping in volume, and a variety of vendors have begun sampling 2Gbps devices. Future interface rates could include 4Gbps to support storage and 12Gbps (10Gbps after encoding) to support fabrics.

The Fibre Channel architecture is based on layers, called levels, that have the following definitions:

FC-0 defines the physical portions of Fibre Channel, including media types, connectors, and electrical and optical characteristics needed to connect ports.

FC-1 defines the transmission protocol, including the 8B/10B encoding, order of word transmission, and error detection (a worked example of the encoding overhead follows this list).

FC-2 defines the signaling and framing protocols, including frame layout, frame header content, and the rules of use. It also contains independent protocols such as login.

FC-3 defines common services that may be available across multiple ports in a node.

FC-4 defines the mapping between the lower levels of Fibre Channel and the upper-level command sets such as SCSI and IP. Each command set mapping is defined in a separate specification.
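To see what FC-1's 8B/10B encoding costs in practice, here is a short worked example in Python. It assumes the published signaling rates for 1Gbps and 2Gbps Fibre Channel (1.0625 and 2.125Gbaud); since 10 line bits carry 8 data bits, only 80% of the signaling rate is payload.

```python
def effective_rate_mbytes(line_rate_gbaud: float) -> float:
    """Payload rate in MB/s for a link that uses 8B/10B encoding."""
    data_bits_per_sec = line_rate_gbaud * 1e9 * 8 / 10  # 80% of line rate
    return data_bits_per_sec / 8 / 1e6                  # bits -> megabytes

for label, gbaud in [("1Gbps FC", 1.0625), ("2Gbps FC", 2.125)]:
    print(f"{label}: {effective_rate_mbytes(gbaud):.1f} MB/s")
# 1Gbps FC: 106.2 MB/s   (the familiar "100MB/s" figure)
# 2Gbps FC: 212.5 MB/s
```

The same 8-to-10 ratio explains why the 12Gbps fabric rate mentioned above corresponds to roughly 10Gbps of usable data.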

Fibre Channel supports three different connection topologies: point-to-point, arbitrated loop, and switched fabric. In a point-to-point topology, two devices are connected together with a single Fibre Channel link. The link can be a copper cable up to 50 meters in length, or an optical fiber link that allows cable distances up to several kilometers. Because a point-to-point topology involves only two devices, it is not a suitable topology for SANs.

In an arbitrated-loop topology, devices are connected in a loop: the receiver of each node is connected to the transmitter of the previous node, and the transmitter of each node is connected to the receiver of the next node. An arbitrated loop can address up to 127 ports on a single loop (126 devices plus one optional fabric attachment), and the bandwidth is shared among all of the devices.

The arbitrated-loop topology was originally developed as a low-cost way to attach storage devices to Fibre Channel, and this topology still makes up the bulk of Fibre Channel implementations today. One problem with an arbitrated loop, from a SAN point of view, is that adding or removing devices involves opening up the loop. This causes all traffic to stop on the loop, and the loop has to be reinitialized. Some of these problems can be circumvented through the use of Link Resiliency Circuits (LRCs) or Fibre Channel hubs that include LRCs. While there are many SANs already built using an arbitrated-loop topology, it is generally not an optimal topology for SANs.

The true power of a SAN is realized with a switched-fabric topology. In a switched fabric, multiple devices are connected through a switch or a series of switches. This topology allows any-to-any connections, with each connection getting the full link bandwidth. Switched fabrics can be very large; the 24-bit fabric address space can accommodate roughly 16 million devices. Fabrics also allow devices to be added and removed without interruption, and allow devices of different speeds to be mixed. Figure 1 presents an example of a simple SAN connected using a Fibre Channel switched-fabric topology.

Even though Fibre Channel is the dominant SAN interconnect, some problems remain. Interoperability at the device level is for the most part no longer an issue, but interoperability between switches is still a potential problem.

Another major issue is SAN management. Currently, a majority of SAN management solutions require a separate Ethernet connection to pass management commands; this is referred to as out-of-band management. Solutions that support IP over Fibre Channel, or in-band management, only recently began shipping. The Fibre Channel Industry Association (FCIA) and the Storage Networking Industry Association (SNIA) have formed several working groups to address these issues.

Ethernet

Ethernet is the dominant networking technology. It originated at Xerox PARC in the early 1970s; IEEE standardization work began in 1980, and in 1983 the Institute of Electrical and Electronics Engineers (IEEE) approved the IEEE 802.3 standard.

Ethernet follows a hierarchy that extends from the physical layer up through the application layer. The reference for this hierarchy is the seven-layer Open Systems Interconnection (OSI) model. The layers are defined as follows:

Layer 1 is the physical layer, which defines the transmission medium, including media types, connectors, and electrical and optical characteristics.

Layer 2 is the data link layer, which defines the access method, such as Ethernet or Token Ring.

Layer 3 is the network layer, which defines routable network protocols such as IP or IPX.

Layer 4 is the transport layer, which defines end-to-end transport protocols such as TCP or UDP.

Layer 5 is the session layer, which defines the end-to-end session control.

Layer 6 is the presentation layer, which defines application-specific data formatting.

Layer 7 is the application layer, which includes e-mail, file transfers, etc.
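To ground these layers in something concrete, the short sketch below opens an ordinary TCP connection in Python, with comments noting roughly where each piece sits in the OSI model. It assumes network access; the host is the IANA-reserved example domain.

```python
import socket

# Layers 1-2 (physical and data link) are handled by the Ethernet NIC
# and its driver; application code never touches them directly.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#                    ^ layer 3: IP      ^ layer 4: TCP

sock.connect(("www.example.com", 80))
# Layers 5-7: the application protocol, here a minimal HTTP request.
sock.sendall(b"HEAD / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
print(sock.recv(200).decode("latin-1"))
sock.close()
```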

In the past, Ethernet was not considered a SAN interconnect, primarily because it was too slow and lacked a block-level storage protocol. However, with 1Gbps switched-fabric Ethernet networks starting to ship in volume, and 10Gbps speeds on the roadmap, speed is no longer a major issue.

That leaves the block-level storage protocol, and several efforts are under way to develop one. For example, Cisco and IBM, as well as Adaptec, have submitted proposals based on running the SCSI protocol over Ethernet (see InfoStor, June, p. 1). A working group within the Internet Engineering Task Force (IETF) has been formed to develop a method of encapsulating the SCSI protocol, and a standard is expected within a year.
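To illustrate the basic idea behind these proposals, the sketch below carries a SCSI command descriptor block (CDB) inside a simple length-prefixed message over a TCP byte stream, using a local socket pair to stand in for an Ethernet link. The framing is invented for illustration; it is not the PDU layout of any of the submitted drafts.

```python
import socket
import struct

# Invented framing for illustration: 4-byte payload length, 1-byte opcode,
# then a 16-byte SCSI command descriptor block (CDB) plus optional data.
HEADER = struct.Struct("!IB")
CMD_SCSI = 0x01

def encapsulate(cdb: bytes, data: bytes = b"") -> bytes:
    """Wrap a SCSI CDB in a length-prefixed message for a TCP stream."""
    assert len(cdb) == 16, "fixed-size CDB field in this sketch"
    payload = cdb + data
    return HEADER.pack(len(payload), CMD_SCSI) + payload

# A local socket pair stands in for an Ethernet connection.
initiator, target = socket.socketpair()
read10 = bytes([0x28]) + bytes(15)   # SCSI READ(10) opcode, rest zeroed
initiator.sendall(encapsulate(read10))

length, opcode = HEADER.unpack(target.recv(HEADER.size))
body = target.recv(length)
print(f"opcode=0x{opcode:02x}, cdb={body[:16].hex()}")
```

In a real protocol the target would parse the CDB, perform the block I/O, and stream status and data back over the same connection.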

Ethernet shares many of Fibre Channel's advantages, including high speed, support for a switched-fabric topology, long and inexpensive cables, and a very large address space. Its ubiquity in the IT environment provides several more, such as widespread interoperability, a large set of management tools, and economies of scale. A SAN based on Ethernet would look essentially the same as the Fibre Channel SAN shown in Figure 1, except that the switch would be an Ethernet switch.

The main near-term problem with Ethernet is the lack of a standardized storage protocol. Another challenge is that the transport protocol, TCP, requires a large amount of processing by the host CPU. That overhead is unacceptable for storage traffic, and it has to be dealt with either by off-loading TCP processing to an intelligent controller or by using a lighter-weight transport protocol.
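One reason for that overhead is easy to demonstrate: for every segment, host software must compute (among other things) a 16-bit ones'-complement checksum over the payload, touching every byte. Below is a simplified version of that RFC 1071 checksum in Python; a TCP offload engine moves this work, along with the rest of the protocol processing, onto the adapter.

```python
def internet_checksum(data: bytes) -> int:
    """Simplified RFC 1071 ones'-complement checksum, as used by TCP."""
    if len(data) % 2:
        data += b"\x00"                              # pad to 16-bit boundary
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]        # sum 16-bit words
        total = (total & 0xFFFF) + (total >> 16)     # fold carries back in
    return ~total & 0xFFFF

print(hex(internet_checksum(b"example TCP segment payload")))
```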

InfiniBand

InfiniBand is the result of merging the Next Generation I/O (NGIO) architecture with the Future I/O (FIO) effort, and it represents an industry-wide effort to develop a replacement for the PCI bus. Version 1 of the specification was due this month.

InfiniBand represents a significant change in server architecture. Figure 2 illustrates the architectural model.

In InfiniBand, the memory controller is connected to a host channel adapter (HCA). The HCA is connected using InfiniBand links through a switch, or a series of switches, to target channel adapters (TCAs). TCAs are then used to interface to other forms of I/O, such as parallel SCSI, Ethernet, or Fibre Channel. The TCA could also be the front end for an external RAID subsystem.

There can be multiple HCAs and TCAs within an InfiniBand subnet, and subnets are joined together through routers. InfiniBand is based on 2.5Gbps connections, and links can be 1, 4, or 12 connections wide. Each link can support up to 15 Virtual Lanes, which carry the messages passed between queue pairs. A queue pair consists of a send queue and a receive queue.
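The sketch below works through the link-width arithmetic (assuming InfiniBand uses 8B/10B encoding, as Fibre Channel does) and gives a conceptual model of a queue pair. The class and function names are illustrative only; they are not the InfiniBand verbs interface.

```python
from collections import deque

# Link widths: each lane signals at 2.5Gbps; with 8B/10B encoding,
# 80% of the line rate carries data.
for width in (1, 4, 12):
    raw = 2.5 * width
    print(f"{width:2d}x link: {raw:4.1f}Gbps raw, {raw * 0.8:4.1f}Gbps data")
#  1x link:  2.5Gbps raw,  2.0Gbps data
#  4x link: 10.0Gbps raw,  8.0Gbps data
# 12x link: 30.0Gbps raw, 24.0Gbps data

class QueuePair:
    """Conceptual model only: a send queue paired with a receive queue."""
    def __init__(self):
        self.send_queue = deque()
        self.receive_queue = deque()

    def post_send(self, message):
        self.send_queue.append(message)

def deliver(src, dst):
    """Stand-in for the fabric carrying messages between two queue pairs."""
    while src.send_queue:
        dst.receive_queue.append(src.send_queue.popleft())

hca_qp, tca_qp = QueuePair(), QueuePair()   # e.g., host and target adapters
hca_qp.post_send(b"block I/O request")
deliver(hca_qp, tca_qp)
print(tca_qp.receive_queue.popleft())
```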

InfiniBand was designed to support storage, networking, and inter-processor communications, and draws heavily from Intel’s Virtual Interface (VI) architecture. As such, InfiniBand is well suited as a SAN interconnect. Actually, InfiniBand goes beyond SANs in that it is designed to be an interconnect for system area networks, which allow for large, high-performance clustered systems.

Even though InfiniBand has many potential advantages over current I/O technologies, it is a major industry undertaking that involves sweeping changes to server architectures. As with any new architecture, there will be many issues that arise during the course of developing systems, and many speed bumps along the way. Estimates vary, but it may be two to three years until InfiniBand products begin shipping in volume.

Conclusion

Today, Fibre Channel is the only viable interconnect for SANs. It is well suited to large IT environments where cost and complexity are not primary concerns.

As SAN technology migrates down to smaller environments, storage over Ethernet may become an attractive alternative. Ethernet is well understood by IT personnel, and it provides the advantages of a single network/storage infrastructure. And, due to the sheer size of the Ethernet market, storage over Ethernet will likely have a significant price advantage over Fibre Channel.

In time, server architectures will migrate to InfiniBand. By that time, SAN technology will be fairly pervasive, with a large installed base of Fibre Channel and Ethernet SANs. Figure 3 illustrates the relative positions of the different I/O technologies in relation to system complexity.

Each of these I/O technologies fills a place in the SAN market, and they are largely complementary: each addresses a different set of needs, and no single technology can satisfy the entire market.
