STORAGE SYSTEMS: Drive/controller advances for SANs

Posted on January 01, 2000


Technologies include XOR, multi-port, full duplex, multiple exchanges and sequences, independent LIP controllers, outstanding credits, LRCs, and LUN mapping.

by Bill Clemmey and Steve Hammond

Implementing and managing a Fibre Channel storage area network (SAN) may not be as difficult as previously thought. Several emerging technologies in hard disk drives, RAID controllers, and enclosures are making SANs easier to install and manage, while promoting interoperability, efficiency, reliability, and performance.

One of the more important technologies is Exclusive-Or (XOR) firmware code. Working in conjunction with LUN mapping software, XOR can be a key hardware enabler of Fibre Channel networks. XOR is a storage-array function that detects errors and helps ensure availability in a RAID system. Data is striped across multiple drives, including a parity-check drive. In arrays, the storage devices are organized into redundancy groups, and some portion of each group is set aside for check data. The check data is generated by performing a cumulative XOR of the data from the other areas within the group. If an error is detected or a member fails, the missing data can be regenerated by XORing the check data with the surviving data in the group.
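The arithmetic behind check data is simple enough to sketch directly. The following Python fragment, an illustration of ours rather than any vendor's firmware, builds check data for a four-drive redundancy group and then regenerates a failed member:

from functools import reduce

def xor_blocks(blocks):
    # Byte-wise cumulative XOR of equal-length blocks.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Stripes on four data drives; the check data goes to the parity drive.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_blocks(data)

# If drive 2 fails, XORing the survivors with the check data
# regenerates its contents exactly.
survivors = data[:2] + data[3:]
assert xor_blocks(survivors + [parity]) == data[2]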

Further, if one device within the redundancy group becomes an initiator and sends an XOR command to another device within the group, this eliminates the need for the storage controller to rebuild, or regenerate, the data. In this "third-party" XOR scenario, a storage device effectively acts as an initiator and performs subsequent operations within the redundancy group.

Some disk-drive manufacturers are embedding third-party XOR in devices to improve performance in host-based RAID applications in SAN environments. The XOR operation may be performed by either the storage-array controller or the storage device; when it takes place within the device, the controller and hosts are freed from XOR work and can perform other tasks, improving overall performance.
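To see where the savings come from, consider a RAID-5 small write. The sketch below models the XDWRITE/XPWRITE-style flow defined for third-party XOR in the SCSI block commands; the toy byte values continue the stripe from the sketch above, and real firmware operates on full sectors:

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

old_data = b"CCCC"                    # current contents of the data drive
old_parity = b"\x04\x04\x04\x04"      # check data for the AAAA/BBBB/CCCC/DDDD stripe
new_data = b"EEEE"

# Step 1: the controller sends only the new data to the data drive (XDWRITE).
# Step 2: the data drive XORs old and new data and forwards the delta to the
#         parity drive; the controller never touches the parity.
delta = xor(old_data, new_data)

# Step 3: the parity drive folds the delta into its stored parity (XPWRITE).
new_parity = xor(old_parity, delta)

# The controller performs zero XOR operations and two fewer transfers than a
# conventional read-modify-write parity update.
assert new_parity == xor(xor(b"AAAA", b"BBBB"), xor(b"EEEE", b"DDDD"))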

[Diagram: a disk drive running in a dual-port, full-duplex, multi-exchange SAN fabric.]

Another important drive feature for SAN configurations is multi-port support, for redundancy and fault tolerance. And because SANs are often implemented in fabric topologies using public loops, they require Fibre Channel drives that support public loop attachment by default.

Manufacturers are responding to these requirements with technologies such as full duplex, multiple exchanges and sequences, dual loop initialization process (LIP) controllers, and outstanding credits.

  • Full duplex. Full duplex refers to a drive's ability to transmit and receive data simultaneously on a single port. SANs, with multiple loops and fabric topologies, demand the additional performance that full duplex offers: on 1Gbps Fibre Channel, it lets a drive sustain an aggregate of up to 200MBps per port (100MBps in each direction).
  • Multiple exchanges and sequences. To optimize performance, a drive can multiplex multiple I/O data streams within a single loop tenancy to single or multiple initiators or to other targets. This is especially advantageous in Fibre Channel SANs, in which systems and devices vie for bandwidth and control. By incorporating the technology for multiple exchanges and sequences, Fibre Channel drives can optimize throughput whenever they are on the loop. Moreover, the ability to handle multiple exchanges and sequences improves SAN performance by limiting overhead.
  • Independent LIP controllers. Emerging Fibre Channel drives will incorporate two independent LIP controllers, one for each port. This eliminates overhead associated with single LIP controllers, and the LIP handshaking between the two ports. Because SANs use two loops, the independent LIP controllers allow the drive to initialize on either loop with no performance hit on the other port. The advantage is better concurrent performance on both loops. In other words, the faster the LIP and the more efficient the loops, the faster and more reliable the initialization of the SAN application.
  • Outstanding credits. A key issue in current SAN technology is the delay disk drives introduce on the loop when they cannot handle a large number of outstanding "credits." A credit is permission to send a frame over the link. With additional credits, loop efficiency improves because the sender spends less time waiting for receiver readys (R_RDYs). Additional credits also support longer loops without taking a performance hit; the simple model sketched after this list shows the effect.
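A back-of-the-envelope model makes the credit effect concrete. In the Python sketch below, the numbers are assumptions of ours (a 2KB frame, roughly 100MBps per direction at 1Gbps, and a 60-microsecond round trip standing in for a long, heavily populated loop):

FRAME_BYTES = 2048        # approximate full-size data frame
LINK_MBPS = 100.0         # ~100MBps per direction at 1Gbps

def throughput_mbps(credits, round_trip_us):
    # With too few credits the sender stalls waiting for R_RDYs: it can
    # push at most `credits` frames per loop round trip.
    credit_limited = credits * FRAME_BYTES / round_trip_us  # bytes/us = MBps
    return min(LINK_MBPS, credit_limited)

for credits in (1, 2, 4, 8):
    print(credits, "credits ->", round(throughput_mbps(credits, 60.0), 1), "MBps")
# 1 credit yields about 34MBps on this loop; 4 or more saturate the link.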

Enclosure and controller enablers

Some storage subsystems combine these Fibre Channel disk-drive improvements with other enabling technologies in enclosures and controllers. Hot-swappable bypass circuits, known as loop resiliency circuits (LRCs), boost SAN performance and fault tolerance at the enclosure level. LRCs reduce the possibility of a single point of failure and keep the loop alive. Some enclosures also ensure loop resiliency with onboard hub functionality. This is done through port bypassing; data is sent to the next storage device without stopping or bottlenecking at the failed enclosure.
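The bypass logic itself is conceptually simple. This toy model of ours shows how an LRC keeps frames moving past a failed drive bay; a real circuit does this in hardware, below the protocol level:

class BayPort:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

def route_frame(ports):
    # Each healthy device repeats the frame to the next port; the LRC
    # closes the loop around any bay that is failed or empty.
    return [p.name for p in ports if p.healthy]

loop = [BayPort("bay0"), BayPort("bay1", healthy=False), BayPort("bay2")]
print(route_frame(loop))   # ['bay0', 'bay2']: the loop stays alive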

In addition, most enclosures today use SCSI enclosure services (SES) on a chip. The SES standard provides several benefits in a SAN environment. For example, it enables the capture of information about the health and status of components such as fans, power supplies, and enclosure environmental conditions. This information is accessed from the SES chip through the controller and HBA and is forwarded through the server to the client. Onboard "agents" can feed the information to a WAN management console. SES uses drive hardware and does not require a unique ID within the device, freeing up an ID that can be used for another drive.
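What an SES agent forwards upstream might look like the following simplified sketch. The element layout and helper are ours; a real enclosure returns an Enclosure Status diagnostic page whose exact format is defined by the SES standard:

ELEMENT_OK, ELEMENT_CRITICAL = 0x01, 0x02   # SES element status codes

def report(elements):
    # elements: (element_type, status_code) pairs read from the SES chip
    # through the controller and HBA.
    for etype, status in elements:
        state = "OK" if status == ELEMENT_OK else "ATTENTION"
        print(f"{etype:12s} {state}")

report([("fan", ELEMENT_OK),
        ("power supply", ELEMENT_CRITICAL),
        ("temperature", ELEMENT_OK)])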

A number of controller enhancements are also particularly beneficial in SAN environments. For instance, many onboard controllers offer "bridge" capabilities. Bridge controllers attach to any operating system and help ensure device interoperability. In addition, some controllers have an operator control panel that allows all RAID activity to be configured and controlled at the device level.

[Diagram: using more credits enables longer loops and minimizes loop delays in a SAN environment.]

Another issue for controllers in a Fibre Channel SAN environment is addressing target device IDs (a disk drive's ID, for example) when a LIP occurs. An ID can be assigned by a fabric, hard-coded on the back panel, or soft-assigned during the LIP process.

The limitations of some operating systems require IDs to be hard-coded (i.e., the IT manager must manually assign a hard address to any drive before it is installed, as well as monitor IDs). This prerequisite adds management and support burdens, which become more apparent in Fibre Channel SANs. When juggling up to 125 IDs on a single loop, the complexity and inefficiency of hard coding become painfully clear in two ways.

First, hard coding does not easily adjust to new configurations; in fact, new configurations must be set up to support the hard-coded IDs. Second, it does not efficiently use the expanded set of IDs available in a Fibre Channel SAN. Hard-coded addresses are typically allocated in blocks of eight, so an enclosure with more than eight drive bays consumes a second full block. A 10-bay system uses up two eight-address blocks, leaving six (16 - 10) IDs unused.
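Reading the example above as blocks of eight addresses per enclosure, the waste is a one-line calculation (a sketch of ours, with the block size as an assumption):

def wasted_ids(drive_bays, block=8):
    blocks = -(-drive_bays // block)      # ceiling division: blocks consumed
    return blocks * block - drive_bays

print(wasted_ids(10))   # two 8-ID blocks for 10 bays: 16 - 10 = 6 IDs wasted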

While early host adapters followed initial Fibre Channel specifications that required all disk drives to be hard-addressed, most RAID controllers today follow the revised spec, which allows a host adapter or RAID controller to determine the physical location of a disk drive on the loop. Therefore, controllers can now handle any type of addressing method. And, if the disk drives are installed behind a RAID controller, it eliminates the need to hard address the disk drives.

LUN mapping options

Though most controllers now feature fabric-aware capabilities, some controllers also take advantage of fabric and SAN storage-sharing requirements through on-board LUN mapping software. This allows administrators to partition LUNs on a storage device and to share the device through heterogeneous server operating systems and clients.
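Conceptually, controller-resident LUN mapping is a lookup table keyed by the initiator's port worldwide name. The sketch below is illustrative only; the WWNs and volume names are made up:

LUN_MAP = {
    "50:06:0b:00:00:c2:62:00": {0: "vol_nt", 1: "vol_nt_log"},   # NT server
    "50:06:0b:00:00:c2:62:01": {0: "vol_unix"},                  # Unix server
}

def present_luns(initiator_wwn):
    # The controller exposes only the partitions mapped to this host;
    # unknown initiators see nothing.
    return LUN_MAP.get(initiator_wwn, {})

print(present_luns("50:06:0b:00:00:c2:62:00"))   # {0: 'vol_nt', 1: 'vol_nt_log'}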

LUN-mapping capabilities can be implemented in four ways: via host-based software, through a switch or other intermediary device, on a host bus adapter, or on the RAID controller. (See InfoStor, November 1999, p. 22.)

Host-based software implementations can provide the greatest capabilities to do multiple reads and writes across multiple host servers. On the downside, this method requires the most host processing cycles. In addition, it can slow down network traffic as commands are processed.

At the intermediary level, between the host and the storage, switches provide an easy-to-implement approach, maximizing access to all storage and attached hosts. However, this method can be costly.

LUN mapping on host bus adapters is generally less expensive than switches, but compatibility may be an obstacle. Some host bus adapters are not compatible across multiple platforms or with host bus adapters attached to existing storage systems.

Using a controller for LUN mapping provides IT managers with a storage-centric view of the device. Putting LUN-mapping technology onboard does not take up host cycles or network bandwidth. This approach is cost-effective and has the greatest expansion capabilities. This method, however, does not control the metadata for writes. For instance, in a dual-write setting, hosts A and B can both see the same file as open; when A writes and closes, B can overwrite A's update with stale data, corrupting files. File-sharing and locking software may therefore be a necessary add-on.
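The lost-update hazard is easy to reproduce in miniature. In this toy model of ours, both hosts read the same state, and the slower writer silently discards the other's change:

file_state = {"data": "v0"}

a_view = file_state["data"]     # host A opens the file...
b_view = file_state["data"]     # ...and host B sees the same open state

file_state["data"] = a_view + " + A's edit"          # A writes and closes
file_state["data"] = b_view + " + B's stale edit"    # B overwrites A

print(file_state["data"])   # "v0 + B's stale edit": A's write is gone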

Controllers boost SAN reliability

Most controllers also have active/active capabilities, which reduce single points of failure, enable hot swapping, and maximize bandwidth. I/O activity can occur across both controllers simultaneously, effectively doubling throughput. If one controller goes down, the system degrades only to single-controller performance. In an active/passive configuration, you do not get the extra performance while both controllers are healthy.
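A minimal dispatch model shows both properties at once: load spreading while both controllers are healthy, and graceful degradation when one fails. The class and round-robin policy are assumptions of this sketch, not any vendor's design:

class Controller:
    def __init__(self, name):
        self.name, self.alive = name, True

def dispatch(controllers, io_id):
    live = [c for c in controllers if c.alive]
    if not live:
        raise RuntimeError("no surviving controller")
    return live[io_id % len(live)].name   # round-robin across live controllers

pair = [Controller("ctlr-A"), Controller("ctlr-B")]
print([dispatch(pair, i) for i in range(4)])   # A and B share the load
pair[0].alive = False                          # controller A fails
print([dispatch(pair, i) for i in range(4)])   # everything lands on B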

Automatic failover, which restarts applications on another server if the executing server fails, is another SAN-enabling feature of many controllers. Putting automatic failover on the controller removes a single point of failure in a dual-controller setting. This feature can also reside on the server or be embedded in the host bus adapter.

With recent advances in core technologies, the industry is close to realizing the full potential of storage networks. But SANs are not mainstream, yet. While manufacturers continue to improve the underlying technologies, IT managers should work with experienced integrators and vendors familiar with all the complex issues.

Bill Clemmey is Fibre program manager at Quantum Corp. (www.quantum.com), in Milpitas, CA. Steve Hammond is vice president of marketing at nStor Technologies Inc. (www.nstor.com), in Lake Mary, FL.

