A look at storage area network benefits, the Fibre Channel interface, and network components.
By Greg P. Schulz
Internet, e-commerce, database, data warehouse, data mining, and video applications are generating increasing amounts of data, information that is seen as a strategic resource and, in some cases, a key source of corporate revenue. These storage-intensive applications, with special requirements such as disaster tolerance, extended distances, and continuous availability, require storage that is scalable, modular, highly available, fast, and cost-effective. In the storage arena, the SCSI interface has in some cases become a hindrance to growth, and traditional environments with dedicated storage no longer suffice.
That is not to say that interfaces such as parallel SCSI are dead. In fact, SCSI will continue to co-exist with Fibre Channel in many environments and can be part of an overall SAN strategy. However, for applications needing large amounts of storage, high performance, shared storage over long distances, and high availability, a new storage model is needed: storage area networks (SANs).
Two benefits of Fibre Channel SANs are reduced storage management effort and reduced cost. Management effort can be reduced in the following ways:
- Consolidated storage (disk and tape)
- Shared storage pools for dynamic allocation
- Centralized storage management
- Elimination of vendor-specific islands of storage
- Simplified storage planning and procurement
- LAN-free and server-less backup
- Disaster recovery and replication
Fibre Channel is an ANSI standard protocol that supports flexible wiring topologies and multiple upper-level protocols, including TCP/IP, SCSI, ESCON, HIPPI, ATM, and VI. (VI, or virtual interface, is a Fibre Channel upper-level protocol, or ULP, optimized for high-speed messaging, such as distributed lock information for databases and file systems.) Fibre Channel enables SANs in much the same way that Ethernet advanced early networks.
The Fibre Channel standard has been refined over recent years, and interoperability (e.g., host bus adapters, hubs, and switches) has improved. Today's 100MBps Fibre Channel interface supports various topologies, including arbitrated loop (FC-AL), point-to-point, and switched fabric. Although Fibre Channel supports many protocols and applications, it is primarily used as an enhanced storage interface to replace or supplement parallel SCSI.
But the key benefit of a SAN is its ability to isolate all storage I/O on a separate network, so that LAN traffic is not affected by storage I/O traffic. Fibre Channel is an infrastructure similar to a networking infrastructure, with bridges, routers, switches, hubs, and adapters. These devices are combined with device drivers to implement a Fibre Channel SAN environment.
Fibre Channel overview
Fibre Channel is an industry-standard, high-speed serial interface that connects computers and storage systems up to 10 kilometers apart, enabling floor-to-floor, building-to-building, and campus-wide distances. Fibre Channel provides 100MBps of throughput, compared to 20MBps for most parallel-SCSI implementations (Fast Wide differential SCSI), although the newer Ultra 160 standard reaches 160MBps.
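To put those interface speeds in perspective, the back-of-the-envelope sketch below compares idealized transfer times for 1GB of data. The rates are the interface maxima cited above; real-world throughput would be lower due to protocol overhead, seek time, and device limits.

```python
# Idealized time to move 1GB at each interface's maximum rated throughput.
# Ignores protocol overhead, contention, and drive mechanics.
GIGABYTE_MB = 1024  # 1GB expressed in MB

interfaces_mbps = {
    "Fast Wide differential SCSI": 20,
    "Fibre Channel": 100,
    "Ultra 160 SCSI": 160,
}

for name, rate in interfaces_mbps.items():
    seconds = GIGABYTE_MB / rate
    print(f"{name}: {seconds:.1f}s to move 1GB at {rate}MBps")
```

At these rated speeds, the same 1GB transfer that takes roughly 51 seconds over Fast Wide differential SCSI takes about 10 seconds over Fibre Channel.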
Fibre Channel supports multiple standard protocols (e.g., SCSI, IP, and VI) concurrently over the same physical cable or medium, which can simplify cabling and infrastructure costs. Not all storage subsystems are designed to take advantage of Fibre Channel, however. And the performance of some applications may not be improved due to internal storage constraints. For these systems, Fibre Channel provides distance and connectivity advantages. Performance is improving with Fibre Channel devices, particularly in "full" Fibre Channel RAID devices with Fibre Channel disk drives and enhanced cache management.
Topologies and applications determine the list of components:
- HBAs and device drivers. Host bus adapters attach to host I/O buses or interfaces such as PCI, Sun's Sbus, HP's HSC, and IBM's MCA. Essentially, a Fibre Channel HBA provides the same function as an Ethernet or SCSI adapter.
In addition to providing a physical interface between the host bus and the Fibre Channel interface, HBAs can support various upper-level protocols such as SCSI, IP, and VI. Most adapters come with drivers that interface with standard host drivers for SCSI, IP, and in a few cases VI. Some of the major differences between host bus adapters include interoperability with other adapters, protocol support, operating system support, and physical media interfaces.
For redundancy, Fibre Channel environments can be built around hubs and switches to eliminate single points of failure and performance bottlenecks. Given the network flexibility of Fibre Channel topologies, redundancy can be configured in a variety of ways. Using optical or fiber cables, redundancy can extend up to and beyond 10 kilometers. Using redundant HBAs, each attaching to separate hubs or switches, storage systems can be configured to isolate against failures in adapters, cables, hubs, switches, and I/O controllers. Fibre Channel flexibility, distance, and performance enable many applications to benefit from increased redundancy and disaster-recovery capabilities.
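As a rough illustration of the redundancy described above, the following sketch models host-side failover across two independent paths. The path names and the health probe are hypothetical stand-ins; real multipathing software implements this logic inside the driver stack, below the file system.

```python
# Minimal sketch of path failover across redundant HBAs.
# Each path runs through a separate HBA, hub/switch, and controller,
# so no single component failure takes down both paths.

paths = ["hba0->hub_a->controller_a", "hba1->hub_b->controller_b"]

def path_is_alive(path, failed):
    """Stand-in for a driver-level health probe of one I/O path."""
    return path not in failed

def select_path(paths, failed):
    """Return the first healthy path, or None if every path has failed."""
    for path in paths:
        if path_is_alive(path, failed):
            return path
    return None

# Normal operation uses the primary path; if hba0's loop fails,
# I/O is retried down the surviving path.
assert select_path(paths, failed=set()) == paths[0]
assert select_path(paths, failed={paths[0]}) == paths[1]
```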
- Cabling and GBICs. Fibre Channel cabling includes copper for distances up to 30 meters and various types of optical or fiber-optic cable for distances up to 10 kilometers. In all instances, performance is rated at a maximum 100MBps, with the difference being distance. Mixed-media topologies are fully supported in a Fibre Channel environment, with conversion handled by GBICs (small interface modules on hubs, switches, and adapters that house a transceiver for a particular medium).
Rather than using a separate adapter card or hardware for DB-9 or HSSDC copper (two types of copper interfaces) and SMF and MMF optical (two types of optical interfaces), GBICs provide adapter-type functionality. GBICs enable a hub or switch to support multiple media types, including copper and optical.
- Hubs. A Fibre Channel hub provides much the same functionality as an Ethernet, FDDI, or other network hub or concentrator. A hub has self-healing capabilities, using port bypass circuitry to prevent a device failure or physical change from disrupting the loop. A hub is essentially a loop in a box, which simplifies cabling and increases loop resiliency.
Hubs can be used to create entry-level SANs that can be migrated to switched fabric environments. On one hand, hubs provide low-cost, easy-to-implement SANs for small environments. On the other hand, Fibre Channel hubs provide shared bandwidth and access, which can result in performance degradation as devices and hosts are added.
- Switched fabrics. A fabric consists of one or more Fibre Channel switches, providing increased bandwidth compared to hub-based, shared-bandwidth configurations. A Fibre Channel switch provides the same functions as a network switch: non-blocking access among various sub-nets, segments, or loops.
Unlike a hub or loop, which has shared bandwidth, a switch's bandwidth is scalable: adding ports adds bandwidth. Switches are used to create fabrics by interconnecting various loops or segments, and can also isolate local traffic to particular segments.
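The shared-versus-switched distinction can be illustrated with simple arithmetic, assuming the 100MBps per-link rate and an idealized non-blocking switch (actual throughput depends on traffic patterns and device capabilities).

```python
# Shared-loop vs. switched-fabric bandwidth, idealized.
LINK_MBPS = 100  # per-link Fibre Channel rate

def loop_per_device(n_devices):
    """On an arbitrated loop (hub), all devices share one 100MBps link."""
    return LINK_MBPS / n_devices

def fabric_aggregate(n_ports):
    """A non-blocking switch gives each port the full link rate."""
    return LINK_MBPS * n_ports

print(loop_per_device(8))   # average per-device bandwidth on an 8-device loop
print(fabric_aggregate(8))  # aggregate bandwidth across an 8-port switch
```

Eight devices on a hub average 12.5MBps each, while an eight-port switch offers 800MBps in aggregate, which is why hubs suit small entry-level SANs and fabrics suit growing ones.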
- Bridges. A Fibre Channel bridge, sometimes referred to as a router or multiplexer, migrates SCSI devices to Fibre Channel environments. On one side of the bridge are Fibre Channel interfaces; on the other side, parallel SCSI ports. A bridge enables SCSI packets to be moved between Fibre Channel and parallel SCSI devices.
- Disk arrays. Current Fibre Channel storage subsystems include JBOD (Just a Bunch Of Disks) and RAID arrays. Tape drives, libraries, and solid-state disks are also expected to go Fibre Channel soon. Most Fibre Channel RAID arrays today have Fibre Channel network interfaces with SCSI or Ultra-SCSI disk drives, although a few vendors provide "full" Fibre Channel arrays (e.g., with Fibre Channel disk drives). These arrays are being enhanced to support point-to-point, loop, and switch topologies.
- SAN software. SAN software currently includes backup packages, file or data sharing software, and volume managers to provide host-based mirroring, disk striping, and other volume and file-system capabilities. In the future, SAN software will include replication, remote mirroring, extended file systems, shared file systems, network management, and server-less backup.
Software components include device drivers for host bus adapters, management software, and optional special function host software. Special function software may include host mirroring for remote or disaster-tolerant mirroring of data, backup software, data sharing or file replication software, clustering software, or distributed locking for messaging and database applications.
Volume mapping, or masking, enables a shared storage device like a LUN on a RAID array to be mapped to a specific host system. Volume mapping, which can be implemented in software or hardware, ensures that only the authorized (or mapped) host can access the LUN in a shared storage environment.
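The access-control idea behind volume mapping can be sketched as a simple lookup table, with each LUN mapped to the set of host adapters allowed to see it. The WWN values below are made up for illustration; real implementations enforce the check in the array firmware, the HBA driver, or the fabric.

```python
# Toy model of volume mapping (LUN masking): only hosts explicitly
# mapped to a LUN may access it. WWNs here are fabricated examples.

lun_map = {
    "lun0": {"10:00:00:00:c9:aa:bb:01"},  # database server's HBA WWN
    "lun1": {"10:00:00:00:c9:aa:bb:02"},  # backup server's HBA WWN
}

def can_access(host_wwn, lun):
    """Return True only if the host is mapped to the LUN."""
    return host_wwn in lun_map.get(lun, set())

assert can_access("10:00:00:00:c9:aa:bb:01", "lun0")          # mapped
assert not can_access("10:00:00:00:c9:aa:bb:01", "lun1")      # masked
```

Without such masking, every host on the shared SAN could see (and potentially corrupt) every LUN.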
Whether you are ready to implement a SAN or are investigating the technology for future implementation, here are some additional points to consider.
- Depending on your performance requirements, you may need more bandwidth than a single loop or hub provides. To increase bandwidth between a host and a SAN, additional host bus adapters can be added and attached to separate hubs, loops, or switch ports. Interconnecting switches increases bandwidth and overall SAN performance; interconnecting two or more hubs without a switch will not, because you still have a single shared loop.
- SANs can be implemented in phases and may include some of your existing storage devices. Costs for SAN components (hubs, switches, drives, and RAID controllers) are dropping, while features and functions are increasing. And interoperability is rapidly improving.
- Similar to standard LAN environments, which may include sub-nets or switched segments, you can configure your SAN with multiple sub-SANs or switched segments. In these environments, systems and storage can be isolated and mapped to specific hosts.
- SAN technology will get a performance boost, from 100MBps to 400MBps, over the next few years. SAN hardware components will continue to evolve, and interoperability will continue to improve. SAN software for data sharing, file replication, mirroring, and other applications will mature as well.
- Although Gigabit Ethernet provides similar theoretical performance, deploying SANs over Ethernet or its derivatives would be very difficult. Network protocols such as TCP/IP, UDP, NFS, CIFS, and HTTP are well suited to networks like Ethernet, whereas block or direct-access protocols such as SCSI, and high-speed messaging protocols such as VI, are supported on Fibre Channel.
Fibre Channel provides better performance, as well as the ability to run multiple upper-level protocols (TCP/IP, NFS, VI, SCSI, ESCON, and ATM) concurrently on the same medium over long distances.
Greg P. Schulz is a senior technologist at MTI Technology Corp., in Anaheim, CA. www.mti.com.