Fibre Channel

Posted on May 01, 1999

An in-depth look at how hubs and fabric switches work, and how to determine which provides the best solution.

By Tom Clark

Storage area networks (SANs) have quietly infiltrated corporate networks as part of systems solutions that resolve specific application issues. Database servers, for example, may use Fibre Channel switches to enhance throughput for data queries. File and application servers may be shipped in a default configuration with Fibre Channel hubs to provide high-speed access to large JBOD (Just a bunch of disks) or RAID arrays. Since these configurations are packaged by solutions providers, customers may or may not be aware they are actually deploying SANs. And since solutions providers have performed all the necessary qualifications of products to ensure smooth implementations, customers are not involved in the basic selection of SAN components or storage network design.

However, the increasing use of Fibre Channel technology to solve a wide variety of storage problems is pushing SAN design issues to the forefront. Just as users are more actively involved in designing and implementing local and wide area networks, they are looking at the building blocks of SANs to create their own storage network solutions. Fibre Channel disk arrays, host bus adapters, hubs, fabric switches, and Fibre Channel-to-SCSI bridges are the Lego pieces administrators can use to build solutions for tape backup, clustering, bandwidth, distance, and other application-driven problems.

To select the appropriate pieces, it helps to understand each component's functions. For example, when is a fabric switch better than a hub? When should hubs and switches be used in combination? There are no universal answers, but familiarity with the architecture and capabilities of hubs and switches makes it easier to make rational choices in SAN design.

The benefits of Fibre Channel hubs

Analogous to Ethernet or Token Ring hubs, a Fibre Channel-Arbitrated Loop (FC-AL) hub is a wiring concentrator. Hubs were engineered to address problems that arose when Arbitrated Loops were built by simply connecting transmit to receive among multiple devices. A daisy chain of transmit/receive links creates a circular data path, or loop, but also poses significant problems for troubleshooting, not to mention adding or removing devices.

To add a new device, for example, the entire loop must be brought down. If a fiber-optic cable breaks or a transceiver fails, every cable and connector between the devices must be examined to identify the offending link.

Hubs resolve these problems by collapsing the loop topology into a star configuration. Since all devices are connected centrally to the hub, the hub becomes the focal point of adds, moves, and changes to the network. Arbitrated loop hubs provide port bypass circuitry that automatically reconfigures the loop if a device is removed or added. Before a new device is inserted into the loop, at minimum the hub verifies valid signal quality. A device with poor signal quality or inappropriate clock speed is left in bypass mode, which allows other good citizens on the loop to continue without disruption.

Each port on a hub typically has LEDs, which give at-a-glance status information on insertion, bypass, and bad-link states. These features enable a much more dynamic environment because devices can be hot-inserted or removed without disrupting the physical layer, and problems can be more readily identified.

The internal architecture of an arbitrated loop hub embodies the physical loop (see Fig. 1). When a device is inserted into a hub port, the loop is extended through the port's transmit lead, along the fiber-optic or copper cable plant, to the receiver of the attached node. From there, it extends from the transmit lead of the attached node, along the cable plant, to the port's receiver. Thus, the circumference of the loop is dynamically altered as devices are inserted or removed. Unused ports (e.g., Port 4) or those that aren't receiving valid signals remain in bypass mode and shunt data onto the next port. In this example, the receive lead of the last port, Port 8, is internally connected to the transmit lead of the first port, which completes the loop within the hub.

This freestanding arbitrated loop provides a continuous circuit (transmit to receive) on each port so that a circular data path is maintained. Arbitrated loop hubs may have 7 to 32 ports; more devices can be attached simply by cascading the hubs. That's accomplished by connecting a port on one hub to a port on another, preferably with fiber-optic cabling, until the desired port count is reached.
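
A minimal Python sketch (hypothetical class names, not actual hub firmware) may help make the bypass behavior concrete: ports presenting a valid signal are chained into the loop in order, while unused or failing ports are shunted to the next port.

# Minimal model of an arbitrated loop hub's bypass logic (illustrative only).
# A port is "inserted" when its attached node presents a valid signal;
# otherwise it stays in bypass mode and data is shunted to the next port.
class HubPort:
    def __init__(self, number, valid_signal=False):
        self.number = number
        self.valid_signal = valid_signal  # set by the hub's signal detection

    @property
    def inserted(self):
        return self.valid_signal

def loop_path(ports):
    """Return the traversal order of inserted ports; the hub internally
    connects the last active port's receive back to the first port's
    transmit, so the path is circular."""
    return [p.number for p in ports if p.inserted]

# Eight-port hub with Port 4 unused (cf. Fig. 1) and Port 6 failing its check.
ports = [HubPort(n, valid_signal=n not in (4, 6)) for n in range(1, 9)]
print(loop_path(ports))  # -> [1, 2, 3, 5, 7, 8]; bypassed ports are skipped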

To pass data from one port to another, a hub needs Clock and Data Recovery (CDR) circuitry to recover a valid Fibre Channel signal (1.0625 gigabaud). Some hub designs use a synchronous repeater circuit that regenerates the transmitted signal on the basis of the received (recovered) clock. Other designs employ a retimer circuit that uses an independent clock to regenerate the outbound signal.

Each design has advantages and disadvantages. A repeater circuit provides better performance, with latency typically in the 30-nanosecond range, while a retimer circuit typically imposes a 240-nanosecond processing penalty. However, retimer circuits allow multiple long links (up to 10km) to be connected, each of which benefits from a freshly retimed signal.
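
A back-of-the-envelope comparison, using only the per-port latency figures quoted above, shows how the two designs scale across a cascade:

# Rough cumulative-delay comparison using the figures cited in the text:
# ~30 ns per port for a repeater circuit vs. ~240 ns for a retimer circuit.
REPEATER_NS = 30
RETIMER_NS = 240

def cascade_delay_ns(ports_traversed, per_port_ns):
    """Total regeneration delay across a cascade (ignores cable propagation)."""
    return ports_traversed * per_port_ns

for n in (8, 32, 60):
    print(f"{n} ports: repeater {cascade_delay_ns(n, REPEATER_NS)} ns, "
          f"retimer {cascade_delay_ns(n, RETIMER_NS)} ns")
# Even at 60 ports, the retimer penalty (~14,400 ns) is modest next to the
# propagation delay of the long fiber runs that retimed links make possible.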

Neither design ensures outbound signals free of jitter or timing deviations. Signal quality is a design function, which depends entirely on the quality of engineering applied to the hub's electrical and physical design. A properly designed repeater circuit can have very good signal quality characteristics, while a poorly implemented retimer can inject unwanted noise onto a link.

Other than verifying proper Fibre Channel signaling, hubs are normally passive participants in the storage network. Unlike HBAs, disk arrays, fabric switches, and other interconnect products, FC-AL hubs do not have Fibre Channel addresses, nor do they engage in protocol-level activity. The exception to this passive hub role is posed by special management functions designed into some hub products. The ability to interpret and respond proactively to protocol events (e.g., to automatically bypass a port because an attached node is issuing potentially disruptive commands) helps maintain loop health and reduce downtime. Such functionality should be non-intrusive, however, since the prime directive of a hub is to simply facilitate communication between Fibre Channel nodes.

In theory, arbitrated loop standards allow 126 nodes plus one fabric port to be attached to a single loop. This number was not generated by performance testing over various application suites, but simply represents the maximum number of encoded bytes (out of 256) that have the requisite balance of 1s and 0s for loop protocol. Therefore, the 127 Arbitrated Loop Physical Addresses (AL_PAs) have no relationship to realistic loop population. Depending on the number of active participants and the traffic requirements of each, extended cascades of hubs typically do not grow beyond 50 to 60 usable ports. Most loops for storage applications are in the 5- to 30-node range.

Of course, hub vendors say they support "maximum cascades with up to 127 devices," although such large loops are in reality only built by engineers for their own amusement.

Fibre Channel hubs, whether in stand-alone configurations or extended cascades, represent a shared 100MBps network segment. Adding more devices to a single segment further divides the bandwidth available to each node, assuming that all nodes are equally active. In storage network configurations, active participants are typically servers, while the storage arrays or tape subsystems simply respond to server requests. Although each node contributes some processing overhead (approximately 240 nanoseconds) and the cabling from hub to nodes may impose some propagation delay, the most important consideration for loop bandwidth is not the number of devices on a loop segment, but the traffic requirement of the active initiating nodes.

Full-motion video streams, for example, may require about 30MBps of bandwidth per stream. Multiple initiators (i.e., video servers or workstations contending for bandwidth) encounter problems if more than three are active concurrently. Sustained full-motion video is an extreme case; most applications have a less aggressive appetite for bandwidth, which allows loop hubs to be used for a variety of implementations with very good performance results.
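
The arithmetic behind that limit is straightforward. A small sketch, using the 100MBps shared segment and the per-stream figure above:

# How many concurrent initiators fit on a shared 100MBps loop segment?
LOOP_BANDWIDTH_MBPS = 100

def max_concurrent_streams(per_stream_mbps, loop_mbps=LOOP_BANDWIDTH_MBPS):
    """Number of streams that can run before the shared segment saturates."""
    return loop_mbps // per_stream_mbps

print(max_concurrent_streams(30))  # full-motion video: 3 streams fill the loop
print(max_concurrent_streams(10))  # lighter traffic: ~10 active initiators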

Figure 2 illustrates a common application for arbitrated loop hubs. To ensure high availability, the servers are dual-provisioned with HBAs, while the disk arrays have both "A" loop and "B" loop connections to two separate hubs. Normal data traffic passes through the primary loop. If the primary loop fails (e.g., the loop is hung by a malevolent protocol event or the hub suffers hardware failure), software on the servers automatically routes traffic to the standby loop. This redundant data path configuration ensures that a route between initiators and targets is always available.

Such a high-availability scheme may be complemented with clustering software on each server, which provides server failover should a server, component, or software application fail. In addition, disk mirroring and other RAID techniques may be employed to ensure redundancy of the data. All of this is accomplished via a fairly straightforward, relatively economical deployment of loop hubs.
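
That failover policy can be sketched in a few lines of Python; the classes and health flag are hypothetical, since real failover logic lives in HBA drivers or host volume-management software:

# Minimal sketch of dual-loop path failover (illustrative only).
class Loop:
    def __init__(self, name):
        self.name = name
        self.healthy = True  # a real driver would detect hangs or link faults

    def transmit(self, frame):
        return f"{self.name}: sent {frame}"

class DualLoopPath:
    """Route I/O over the primary loop; fall back to the standby if it fails."""
    def __init__(self, primary, standby):
        self.primary, self.standby = primary, standby

    def send(self, frame):
        loop = self.primary if self.primary.healthy else self.standby
        return loop.transmit(frame)

a_loop, b_loop = Loop("A-loop"), Loop("B-loop")
path = DualLoopPath(a_loop, b_loop)
print(path.send("frame-1"))  # A-loop carries normal traffic
a_loop.healthy = False       # e.g., a hung loop or hub hardware failure
print(path.send("frame-2"))  # traffic reroutes to the B-loop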

Why fabric switches?

Fabric switches are considerably more complex than loop hubs in terms of design and functionality. While a hub is simply a wiring concentrator for a shared 100MBps segment, a switch provides a high-speed routing engine and 100MBps per port. Apart from custom management functions, hubs do not participate in Fibre Channel activity at the protocol layer. In contrast, a fabric switch is an active participant in Fibre Channel communications, both for services it provides (fabric log-in, Simple Name Server, etc.) and for overseeing the flow of frames between initiators and targets (buffer-to-buffer credit, fabric loop support, etc.) at each port.

Supporting 100MBps per port, plus the advanced logic required for routing and fabric services, initially kept the per-port cost of first-generation switches quite high, at about $2,500 to $3,000 per port. But second-generation ASIC-based switches are about half the price, at about $1,000 per port, making Fibre Channel fabrics affordable for medium-to-large enterprise networks.

Fibre Channel switch designs typically use a cut-through switching method to route frames from source to destination. Cut-through provides very high performance, since only the destination address (D_ID) in the frame header needs to be read to make a routing decision. By contrast, a store-and-forward technique requires the entire frame to be buffered before it is routed.
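
A toy model illustrates the difference; the three-byte destination field and routing table here are simplifications, not the exact FC-2 frame format:

# Toy contrast of cut-through vs. store-and-forward switching.
def cut_through(frame, route):
    d_id = frame[:3]             # routing decision from the header alone;
    return route[d_id]           # the rest of the frame streams through

def store_and_forward(frame, route):
    buffered = frame[:]          # the entire frame is buffered first
    return route[buffered[:3]]

route_table = {b"\x01\x02\x03": "port 5"}
frame = b"\x01\x02\x03" + b"\x00" * 2048  # padding stands in for the payload
print(cut_through(frame, route_table))        # decision after 3 bytes arrive
print(store_and_forward(frame, route_table))  # decision after the full frame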

As shown in Figure 3, one or more switch ports are serviced by an ASIC, which interfaces to a switch matrix. Microcode that supports specific fabric services is accessed through well-known addresses, as established by Fibre Channel switch standards. Port-to-port switching latency is typically in the two-microsecond range due to retiming overhead and switch routing.

A fabric switch is a powerful component for storage network design. A switch port may support a single node (N_Port) or multiple arbitrated loop nodes (NL_Ports). A single node on a switch port has a dedicated 100MBps pipe through which data can be sent or received. Arbitrated loop devices on a switch port must use the appropriate protocols to gain access to shared media, but they can now communicate with other fabric devices via fabric log-in. Some vendors' switches also support non-fabric arbitrated loop devices to make use of first-generation disk arrays and other devices.

As with loop hubs, fabric switches can be cascaded to increase the total number of ports. Theoretically, an extended fabric network can have more than 15 million addressable devices, since Fibre Channel's 24-bit address space allows roughly 16.7 million IDs. But with most fabric switches in the 8- to 16-port range, it will be some time before all that space is used.

Cascading fabric switches adds some complexity to SAN design, since the cascade links themselves may become potential bottlenecks for switch-to-switch traffic. Redundant links may provide one solution, although they consume additional switch ports. A cascade may also be meshed, creating multiple links between multiple fabric switches. A meshed topology allows for alternate data paths, as shown in Figure 4.

Fabrics can also simplify device discovery and relationships. When a device logs onto a fabric, it typically registers with a Simple Name Server (SNS). The SNS is a small database that records each participant's name, address information, and the upper-layer protocols it supports. When an initiator (server) logs onto the fabric, it queries the SNS for devices that support the SCSI protocol and then establishes sessions directly with disk targets. This prevents the server from having to poll through more than 15 million addresses to find disks.

The fabric may also supply a Registered State Change Notification (RSCN) service, which notifies an interested node (e.g., a server) if another node (e.g., a target) is removed from the fabric or otherwise changes state.
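
The discovery and notification flow can be sketched with a dictionary standing in for the name server; the addresses and protocol tags are illustrative, not actual FC-GS data structures:

# Illustrative sketch of SNS registration, query, and RSCN notification.
name_server = {}   # port address -> registration record
subscribers = []   # callbacks for nodes that registered for RSCN

def fabric_login(address, name, protocols):
    name_server[address] = {"name": name, "protocols": protocols}
    notify_rscn(address, "online")

def query_by_protocol(protocol):
    """What an initiator does instead of polling 15 million addresses."""
    return [addr for addr, rec in name_server.items()
            if protocol in rec["protocols"]]

def register_for_rscn(callback):
    subscribers.append(callback)

def notify_rscn(address, state):
    for cb in subscribers:
        cb(address, state)

# A server registers interest, then discovers SCSI (FCP) targets directly.
register_for_rscn(lambda addr, state: print(f"RSCN: {addr:#08x} is {state}"))
fabric_login(0x0211EF, "raid-array-1", {"FCP"})
fabric_login(0x0312A6, "tape-lib-1", {"FCP"})
print(query_by_protocol("FCP"))  # -> the two registered target addresses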

What's the application?

Whether a fabric switch or a loop hub is more appropriate usually cannot be decided on features alone. Fabric login, SNS, and RSCN, for example, are all functions required by a fabric, which is not to say that these services are necessarily required by end-user applications. To fashion the building blocks of a SAN into a useful solution, the original problem must be defined. What's the application?

For example, for connecting servers to storage, loop hubs are adequate for most transaction processing, email, and other applications with moderate traffic loads. Most installed Fibre Channel SANs are based on this straightforward configuration, with one to four servers attached via loop hubs to two to four disk arrays. For high availability, hubs are deployed in redundant loops (see Fig. 2).

As a SCSI replacement, a loop hub configuration simplifies the wiring scheme, increases bandwidth, and accommodates adds, moves, and changes through hot insertion or removal of devices on the fly. Since an arbitrated loop is a shared medium, only one server can actively communicate at any given moment. But if each transaction is fairly small (e.g., the transfer of a 10MB file), there is little contention among servers for bandwidth.

A switch could be substituted for the hub in the above example, but aside from higher per-port bandwidth it would offer no particular advantage. If one to five servers in normal conversation with storage do not saturate a 100MBps shared segment, installing a switch will not enhance performance. Bandwidth would simply go unused.

However, if the application involves patterns of traffic bursts that exceed 100MBps, then a switch has distinct advantages over a hub. Prepress operations, for example, involve intermittent reads and writes of very large graphic files. If multiple workstations and storage are attached to a switch, concurrent 100MBps transfers are possible so that two or more workstations can read or write files without contention or delay.

Some applications are inherently more suited to switched, rather than shared, environments. For example, sustained full-motion video streams and streaming tape backup cannot tolerate interruptions, no matter how brief. Loop initialization primitives (LIPs) can disrupt tape backups, causing them to abort. Likewise, a LIP during a full-motion video transfer can cause an image to momentarily degrade in a checkerboard effect. A switch allows these streams to be segregated onto individual ports without disruption.

At about 30MBps and 5MBps to 15MBps, respectively, these video and tape applications don't take full advantage of a switch's bandwidth. In this example, however, the issue is not bandwidth but the integrity of the data stream. What is critical is the switch's ability to provide dedicated, uninterrupted bandwidth, not fabric log-in, SNS, RSCN, or 100MBps-per-port throughput.

Campus storage networks and disaster recovery implementations may use loop hubs for local traffic and switch ports for long (up to 10km) links between sites. By segregating the long links to switches, local data traffic is not affected by the propagation delay that typically results from a larger loop circumference. Likewise, in a distributed environment in a single building, hubs may be used to connect servers and storage on each floor, and then attached to switches for up-link to a centralized data center.
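
The propagation penalty is easy to estimate: light in fiber covers roughly one kilometer in five microseconds, and a link inside a loop contributes delay in both directions of the circuit:

# Why long links are better segregated onto switch ports.
FIBER_DELAY_US_PER_KM = 5.0  # approximate speed of light in glass fiber

def loop_added_delay_us(link_km):
    """Delay a site link adds to the loop circuit (both directions)."""
    return 2 * link_km * FIBER_DELAY_US_PER_KM

print(loop_added_delay_us(10))  # a 10km link adds ~100 microseconds
# On a dedicated switch port, that delay burdens only traffic crossing the
# link; inside a shared loop, it stretches every arbitration cycle.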

Users on each storage/server cluster enjoy 100MBps performance on their local loops, while all loops are connected via switch ports to a common resource (e.g., a tape library). In these examples, SAN designs can leverage both hub and switch capabilities to provide the most cost-efficient, high-performance solution.

The population of a storage network also affects hub and switch components. Loop hubs adequately serve most departmental storage networks. For large SANs, however, multiple hubs with 5 to 30 nodes per loop can be interconnected with cascaded switches to provide hundreds of interactive nodes.

Switches also play a role in consolidating data into large RAID arrays. The application problem here is how to reduce disk administration and support costs by concentrating multiple software applications onto a central storage device. At the same time, the application servers must have access to this common resource, and the array must be able to support a higher number of requests.

A single RAID enclosure may be subdivided into multiple volumes, each accessed by a separate application server. These enclosures may contain terabytes of data and service dozens of application servers. To provide dedicated bandwidth to the array, a switch may be used to support the RAID array, while groups of application servers are attached to the switch via loop hubs.

While there is no single reference or performance matrix that defines when a hub or switch should be used, analysis of the basic application issues can guide users through the decision process. Following the precedent set by shared and switched segments in LAN environments, loop hubs and fabric switches in combination can resolve a wide range of storage problems for enterprise networks.

Fig. 1. The internal architecture of an arbitrated loop hub embodies the physical loop. Ports that have no devices attached (e.g., Port 4 above), or are not receiving valid signals, remain in bypass mode and shunt data on to the next port.

Fig. 2. In a typical arbitrated loop configuration, servers have dual host bus adapters, and disk arrays have A-loop and B-loop connections to two separate hubs.

Fig. 3. In a fabric switch configuration, one or more switch ports are serviced by an ASIC, which interfaces to a switch matrix.

Fig. 4. A cascade may involve a meshed topology, with multiple links between multiple fabric switches providing alternate data paths.

Tom Clark is director of technical marketing at Vixel Corp., in Bothell, WA. He is the author of "Designing Storage Area Networks," due in the third quarter from Addison Wesley Longman. He can be contacted at tclark@vixel.com.

