An overview of storage area networks and Fibre Channel components.
BY GREG SCHULZ
Internet, e-commerce, large databases, data warehousing/mining, and video applications depend on an ever-increasing amount of data. As a result, these storage-intensive applications have special requirements. Disaster tolerance, extended distances, and worldwide 24x7 availability have placed an emphasis on storage being scalable, modular, open, highly available, fast, and cost-effective.
Parallel SCSI storage and RAID arrays have gone a long way toward addressing these requirements. However, the "virtual data-center" vision has been limited by existing storage architectures.
As a result of the emergence of storage area networks (SANs), storage is entering a period of change similar to what computer networks went through in the late 1980s and early 1990s. Networks have migrated from proprietary interfaces such as SNA and DECnet to open TCP/IP on Ethernet. Simple hub and spoke configurations gave way to robust switched networks with multiple sub-nets, zones, segments, and the Internet. Networks have evolved from being a mechanism for access to computer systems from terminals or PCs, to being able to transfer and share files and support distributed applications including e-mail and Web-hosting.
New storage interfaces
Storage interfaces such as parallel SCSI, which sit between host systems and storage devices, are in some cases becoming a hindrance to growth. Traditional storage environments with dedicated storage, as shown in Figure 1, are no longer sufficient for today's application needs.
That is not to say interfaces such as SCSI are dying. SCSI will continue to co-exist in many environments and can be part of an overall SAN strategy. However, for applications requiring large amounts of storage, high performance, shared storage over long distances, and high availability, a new storage model is required.
Storage area networks
Key potential benefits of Fibre Channel and SANs are reduced storage management effort and costs. Management effort can be reduced in the following ways:
- Consolidated storage (disk and tape) and storage management;
- Shared storage pools for dynamic allocation;
- Removal of redundant costs and complexity;
- Elimination of vendor-specific "islands" of storage;
- Simplified storage planning and procurement;
- LAN-free and serverless backup; and
- Disaster recovery and replication.
In Figure 2, a SAN is represented in a logical manner similar to the way a LAN would be shown. Like a LAN or WAN, underneath the SAN there is an underlying infrastructure.
Today, Fibre Channel is the primary enabling technology for building SANs. The Fibre Channel standard has been refined over recent years, as has the interoperability of various components (host bus adapters, switches, and devices). Currently, Fibre Channel supports speeds of 100MBps or 200MBps, with various topologies, including arbitrated loop, point-to-point, and switched fabric.
Figure 1: Traditional storage architectures include storage devices that are directly attached to servers.
Fibre Channel is an ANSI-standard protocol supporting flexible wiring topologies. Fibre Channel supports several upper-level protocols (ULPs), including SCSI, TCP/IP, FICON, and VI for different application requirements.
A SAN is a network for storage that can include hubs, switches, directors, host bus adapters (HBAs), and routers used for accessing storage. A benefit of a SAN is that you can isolate all storage I/O on a separate network, so that traditional network traffic is not impacted by storage I/O traffic.
A SAN is often depicted as an open-ended "cloud," or network, with virtually unlimited bandwidth and host connectivity. Various servers can plug into and gain access to common pools of storage and services in a transparent manner.
Fibre Channel overview
Fibre Channel is a high-speed serial interface for connecting computers and storage systems (e.g., RAID/JBOD arrays, tape drives/libraries). Fibre Channel provides attachment of servers and storage systems across distances of 10km and beyond, enabling floor-to-floor, building-to-building, and campus-wide distances. It supports multiple standard protocols (e.g., FICON, TCP/IP, and SCSI) concurrently over the same physical cable or media, which can simplify cabling and infrastructure costs.
This interface also allows standard SCSI packets to be transported over fiber-optic or copper interconnects using SCSI_FCP (SCSI Fibre Channel protocol). End users can incorporate existing SCSI devices in a SAN via Fibre Channel-to-SCSI converters such as bridges and routers.
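To make the SCSI_FCP idea concrete, the sketch below packs a simplified FCP_CMND information unit: an 8-byte LUN field, a few control bytes, a 16-byte CDB field, and the expected transfer length. This is a rough illustration of the framing concept, not a wire-exact implementation; field details are simplified from the FCP specification.

```python
import struct

def build_fcp_cmnd(lun: int, cdb: bytes, data_len: int, read: bool = True) -> bytes:
    """Simplified FCP_CMND information unit (a sketch, not wire-exact)."""
    # 8-byte FCP_LUN: simple single-level addressing, assumes LUN < 256
    fcp_lun = bytes([0x00, lun]) + b"\x00" * 6
    # Control bytes; last byte carries the read/write direction flag
    flags = 0x02 if read else 0x01          # RDDATA / WRDATA
    control = bytes([0x00, 0x00, 0x00, flags])
    cdb_field = cdb.ljust(16, b"\x00")      # CDB field is 16 bytes, zero-padded
    fcp_dl = struct.pack(">I", data_len)    # expected data length (FCP_DL)
    return fcp_lun + control + cdb_field + fcp_dl

# A standard SCSI READ(10) CDB: opcode 0x28, 4-byte LBA, 2-byte block count
read10 = struct.pack(">BBIBHB", 0x28, 0, 0, 0, 8, 0)  # read 8 blocks at LBA 0
iu = build_fcp_cmnd(lun=3, cdb=read10, data_len=8 * 512)
assert len(iu) == 32  # LUN(8) + control(4) + CDB(16) + FCP_DL(4)
```

The point is that the SCSI command itself is unchanged; Fibre Channel simply carries it inside a new transport frame, which is why existing SCSI devices can be bridged into a SAN.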
Figure 2: In a SAN configuration, storage is attached directly to the storage network.
Not all storage subsystems are designed to take advantage of Fibre Channel, and the performance of some applications may not be improved because some products have internal constraints that prevent them from running at faster rates. For these systems, Fibre Channel provides distance and connectivity benefits.
Fibre Channel SAN environments consist of several components, depending on the topology and applications.
HBAs and device drivers-HBAs attach to host I/O buses or interfaces such as PCI or SBus. In addition to providing a physical interface between the host bus and the Fibre Channel interface, HBAs can support various protocols, including SCSI, FICON, TCP/IP, and VI. Today, some of the major differences between HBAs are the level of interoperability with other adapters, the protocols supported, operating system support, and physical media interface support.
For redundancy, Fibre Channel environments should be built around dual switches and/or directors to eliminate performance bottlenecks and single points of failure.
The goal is to build a storage network using similar techniques and principles used for traditional networking combined with storage I/O channels. Given the network-like flexibility provided by Fibre Channel topologies, redundancy can be configured into a storage configuration in many ways.
Using redundant HBAs attached to separate switches and/or directors, storage systems can be configured to protect against failures at the HBA, cable, switch, or I/O controller level. Fibre Channel's distance and performance capabilities enable many applications to benefit from increased redundancy and disaster recovery, including in-house disaster recovery.
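The failover behavior described above can be sketched in a few lines. The class and path names below are illustrative, not any vendor's multipathing API: I/O uses the first healthy path, and when a path fails the host falls back to a surviving path through the other HBA and switch.

```python
class MultipathDevice:
    """Minimal sketch of host-side path failover (illustrative names only)."""

    def __init__(self, paths):
        self.paths = list(paths)     # each path: (hba, switch) tuple
        self.failed = set()

    def active_path(self):
        # Use the first path that has not been marked failed
        for p in self.paths:
            if p not in self.failed:
                return p
        raise IOError("all paths to LUN failed")

    def report_failure(self, path):
        self.failed.add(path)        # e.g. cable pull, HBA or switch outage

dev = MultipathDevice([("hba0", "switch_a"), ("hba1", "switch_b")])
assert dev.active_path() == ("hba0", "switch_a")
dev.report_failure(("hba0", "switch_a"))   # simulate an HBA or cable failure
assert dev.active_path() == ("hba1", "switch_b")
```

Because the two paths traverse separate HBAs and separate switches, no single component failure leaves the host without access to its storage.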
Cabling and GBICs-Fibre Channel cabling includes copper for distances up to 30m and fiber-optic cable for distances to 10km and beyond. Mixed-media topologies are fully supported in a Fibre Channel environment, with conversion being handled by GBICs (small interface modules that house a transceiver for a particular medium). The GBIC provides an adapter type of function and enables hubs or switches to support multiple media types such as copper and fiber optics.
Fibre Channel hubs-A Fibre Channel hub provides much the same functionality as an Ethernet hub or concentrator. A hub provides self-healing capabilities using port bypass circuitry to prevent a device failure or physical change from disrupting the loop. A hub is essentially a loop in a box that simplifies cabling and increases loop resiliency.
Hubs can also be used to create entry-level SANs that can later be migrated to switch-based fabric environments, thus reducing the cost per port. On one hand, hubs provide simple, easy-to-implement "starter" SANs for small environments at a low cost. On the other hand, Fibre Channel hubs provide shared bandwidth and access, which can result in performance degradation as more host systems are added, the loop grows in size or device count, or traffic increases.
Switches and directors-A Fibre Channel fabric consists of one or more switches or directors that provide increased bandwidth, as opposed to the shared bandwidth of hubs. A Fibre Channel switch provides the same function as a standard network switch, in that it provides scalable bandwidth between various sub-nets, segments, or loops. Unlike a hub or loop, which has shared bandwidth, a switch provides scalable bandwidth as users or devices are attached. Switches are used to create fabrics by interconnecting various loops or segments with Inter Switch Links (ISL). Switches can also be used to isolate local traffic to particular segments, much like traditional network switches isolate LAN traffic.
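The hub-versus-switch bandwidth difference reduces to simple arithmetic, sketched below as a back-of-the-envelope calculation: an arbitrated loop (hub) divides one link's bandwidth among all attached devices, while each port on a switched fabric gets its own full-speed connection.

```python
def per_device_bandwidth(link_mbps: float, devices: int, shared: bool) -> float:
    """Rough model: shared loop bandwidth vs. dedicated switched bandwidth."""
    return link_mbps / devices if shared else link_mbps

# 100MBps Fibre Channel link, 10 attached devices
assert per_device_bandwidth(100, 10, shared=True) == 10.0    # hub/loop
assert per_device_bandwidth(100, 10, shared=False) == 100.0  # switch port
```

Real throughput depends on traffic patterns and arbitration overhead, but the scaling behavior is why hubs degrade as devices are added while switches do not.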
Figure 3: Interconnecting switches can increase bandwidth between ports and improve overall SAN performance.
A Fibre Channel director is a large-port-count, non-blocking, scalable, enterprise-class switch with full redundancy. Fibre Channel directors support multiple protocols, including FICON, SCSI, and IP concurrently. A Fibre Channel director can be used to implement large SANs ranging from hundreds to thousands of ports with less complexity, given the number of native ports and fewer ISLs required. Director-class products enable multiple SAN "islands" or smaller switches to be brought together to simplify management, similar to how a large IP router/switch like a Cisco Catalyst 6500 ties a LAN together. When directors and switches are configured together, the director may be referred to as a core device and the switches as edge devices.
Bridges and routers-A Fibre Channel bridge, or router, provides the ability to migrate existing SCSI devices to a Fibre Channel SAN environment. On one side of the bridge are one or more Fibre Channel interfaces, and on the other side are one or more SCSI ports. The bridge enables SCSI packets to be moved between Fibre Channel and SCSI devices. Other new bridges or routers include Fibre Channel to iSCSI for accessing storage over Ethernet and Fibre Channel to ATM gateways for SAN/WAN.
Fibre Channel subsystems-Current Fibre Channel storage devices include JBOD and RAID disk arrays, solid state disks, and tape drives and libraries. Most Fibre Channel RAID arrays today still have SCSI disk drives and Fibre Channel host interfaces.
SAN software-SAN software today includes backup packages to access Fibre Channel tape devices, file- or data-sharing software, and volume managers to provide host-based mirroring, disk striping, and other volume and file system capabilities. SAN software also includes data replication, virtualization, remote mirroring, extended file systems, shared file systems, network management, and serverless backup.
The key to configuring storage for performance and database applications is to avoid contention or bottlenecks. So, when creating a SAN for database environments, avoid making the mistake of trying to use a single Fibre Channel interface or loop to support all of your storage. Instead, use multiple Fibre Channel HBAs to spread I/O devices such as RAID arrays on different interfaces to avoid contention.
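The balancing idea above can be sketched as a simple round-robin assignment of arrays to HBAs (the array and HBA names are hypothetical), so that no single interface or loop carries all of the I/O:

```python
from itertools import cycle

def spread_across_hbas(arrays, hbas):
    """Assign each storage array to an HBA round-robin to avoid contention."""
    assignment = {}
    for array, hba in zip(arrays, cycle(hbas)):
        assignment.setdefault(hba, []).append(array)
    return assignment

plan = spread_across_hbas(["raid1", "raid2", "raid3", "raid4"], ["hba0", "hba1"])
assert plan == {"hba0": ["raid1", "raid3"], "hba1": ["raid2", "raid4"]}
```

In practice you would also weight the assignment by each array's expected I/O load, but even naive spreading avoids funneling every database request through one interface.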
The simplest and easiest way to implement a SAN is to buy a "SAN in a box," an enclosure that essentially includes all necessary SAN components.
As a next step, you might implement small production SANs, based on hubs or switches that enable groups of systems to share storage and resources. A subsequent step would be to interconnect various sub-SANs, with zoning or volume mapping to isolate storage to specific host systems for data integrity. Volume mapping, or masking, enables a shared storage device such as a LUN on a RAID array to be mapped to a specific host system. Volume mapping ensures that only the authorized or mapped host can access the LUN in a shared storage environment.
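Volume mapping, or LUN masking, amounts to an access-control check against a table of which hosts may see which LUNs. The sketch below uses a hypothetical masking table keyed by each host HBA's worldwide port name (the WWPNs shown are made up):

```python
# Hypothetical masking table: HBA WWPN -> set of LUNs that host may access
LUN_MASKS = {
    "10:00:00:00:c9:2a:11:01": {0, 1},   # database server
    "10:00:00:00:c9:2a:11:02": {2},      # backup server
}

def may_access(initiator_wwpn: str, lun: int) -> bool:
    """Only hosts explicitly mapped to a LUN can address it."""
    return lun in LUN_MASKS.get(initiator_wwpn, set())

assert may_access("10:00:00:00:c9:2a:11:01", 1)        # mapped: allowed
assert not may_access("10:00:00:00:c9:2a:11:02", 1)    # unmapped: denied
```

Whether the check is enforced in the RAID array, the switch, or host software, the effect is the same: shared physical storage behaves as if it were dedicated to the mapped host, preserving data integrity.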
The main advantages of using hubs for simple SANs in the past were low cost and availability. End users are now shifting toward switches as a starting point and toward directors to connect multiple sub-SANs or create larger SANs. The shift toward switches and directors is being driven by reduced cost per port, increased functionality, management tools, and interoperability.
To increase the bandwidth between a host and a SAN, additional HBAs can be added and attached to separate switch or director ports. As shown in Figure 3, using switch ports to interconnect switches or directors can increase overall port count; load-balancing is important to prevent saturating or causing blockage on these ISLs.
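A minimal sketch of the ISL load-balancing idea: given the current load on each inter-switch link (the link names and figures are illustrative), send the next flow over the least-loaded one so no single ISL becomes a blocking point between switches.

```python
def pick_isl(isl_load_mbps: dict) -> str:
    """Return the least-loaded inter-switch link for the next traffic flow."""
    return min(isl_load_mbps, key=isl_load_mbps.get)

# Current load on three ISLs between a pair of switches (MBps)
links = {"isl0": 80.0, "isl1": 35.0, "isl2": 60.0}
assert pick_isl(links) == "isl1"
```

Real fabrics balance at the routing level rather than per call, but the principle is the same: distribute inter-switch traffic rather than saturating one link while others sit idle.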
Tips and comments
Whether you are ready to implement a SAN or you are investigating the technology for future implementation, the following are some points to consider:
- SANs can be implemented in phases and can include existing storage devices;
- Costs for SAN components are dropping, while features, functions, and interoperability are increasing;
- Similar to a standard network environment, which may include sub-nets or switched segments, you can configure a SAN with multiple sub-SANs or switched segments where certain systems and storage can be isolated and mapped to specific hosts;
- Fibre Channel directors can be used as large high-performance switch or core devices as well as being combined with smaller switches configured as edge devices;
- SAN software for functions such as data sharing, file replication, mirroring, and other applications will continue to evolve; and
- Fibre Channel is not the only possible infrastructure for SANs. Products based on an early version of the iSCSI standard are starting to appear, which will allow end users to build a SAN with standard Ethernet/IP networks.
Greg Schulz is an FC/9000 market development manager at Inrange Technologies (www.inrange.com) in Mt. Laurel, NJ.
SAN terminology
- Fabric-A collection of one or more switches that combines to create a virtual fabric where the various endpoints (ports or buses) have virtual connections or cross points to each other in a non-blocking manner. Non-blocking access means that ports do not have to share common bandwidth as in a hub or concentrator, thereby improving I/O performance.
- Logical unit numbers-LUNs describe a logical or physical device and are also referred to as logical or physical volumes or partitions.
- Upper-level protocols-ULPs operate at the FC-4 level in the Fibre Channel specification. ULPs include SCSI (SCSI_FCP), TCP/IP, VI, FICON, and ATM.
- Virtual Interface-VI is designed for high-speed, low-latency memory-to-memory or system-to-system messaging.
- Volume mapping-A method for mapping specific storage devices or volumes to particular host systems.
- Zoning-A method for creating virtual storage pools using host-based software, HBAs, or switches.