Storage-Area Networks: Present and Future

Posted on June 01, 1998

A rundown of where we're at, where we're going, and what's still missing.

By Clodoaldo Barrera

These are times of great innovation in storage systems. But without major improvements to the way data is stored and managed, there are serious practical limits to the scalability of these systems. The storage industry is working to create an infrastructure that meets future requirements. Though much work remains, a technology now exists that changes the way storage and data are associated with servers: dedicated storage networks, or storage-area networks (SANs).

Over the past 30 years, the goals of storage solutions have not changed much. Indeed, the qualitative merits of good storage solutions are the same as they were in the early days of mainframe glass houses. The five basic attributes of a solid storage strategy continue to be:

- Growth. A storage environment must accommodate data growth--additional compute power, applications, and users who consume and generate data.

- Access. Data must be available to users when and where they need it, at the right performance level and in the right format.

- Movement. Not only must data be delivered to users, but it must be physically located in the right place. Backup copies must be sent to other locations; data must be migrated to new storage hardware. The real trick is moving data without interrupting access or slowing system performance.

- Security. Data must be safe from disaster, caused by man or otherwise, and from unauthorized users.

- Management. Administrators must be able to easily perform regular backups and ensure data redundancy and optimum system performance in a timely manner.

Of course, the above must be achieved within the constraints of cost and technology. Over the years, a fairly complete solution has been developed for mainframes and associated storage peripherals, channel connections, and management software. As long as the data stays within the mainframe environment, management tools allow automated backups, data migration, replication, remote copy, and other cost-saving or business-benefit services.

The rise of distributed systems and network computing has made storage management profoundly more challenging, however. Even if networks were infinitely fast and latencies were zero, there is still the huge problem of managing the vast amount of storage products that exist.

The Promise of SANs

The promise of SANs is simple: a high-bandwidth, reliable network that is dedicated to and optimized for storage-related data traffic. The network connects servers with storage products and provides a high performance path for server I/O. Due to the distances supported by the network, storage can be physically separated from multiple servers. Storage is now a common resource for the servers and can be partitioned among them. With a little imagination, data could be shared among servers attached to shared storage.

SANs also connect individual storage subsystems, permitting true peer-to-peer operations among storage units without a server middleman. Finally, this type of network is built on a standard that is supported by all servers and storage subsystems, eliminating the multi-vendor tower of Babel.

Technologies for SANs

Storage interconnect technologies (i.e., SCSI, SSA, Fibre Channel, and Fibre Channel/ESCON) play an important role in SANs' future.

SCSI

SCSI is the most commonly used interconnect between servers and storage: it connects devices to external controllers and external controllers to servers. SCSI is based on an 8-bit or 16-bit data bus. A Wide Ultra2 version now touts an 80MBps transfer rate--a far cry from SCSI's 5MBps transfer rate in the early 1990s. SCSI supports up to 15 devices in a shared-bus topology at distances of a few meters in single-ended designs and up to 25 meters in low-voltage differential (LVD) implementations.
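
For a rough sense of what the jump in bus speed means in practice, the sketch below (in Python, using only the nominal rates cited above) compares transfer times at 5MBps and 80MBps; real throughput is lower once protocol overhead is counted.

```python
# Rough back-of-the-envelope comparison of SCSI bus generations.
# The transfer rates below are the nominal figures cited in the text;
# sustained throughput is lower once protocol overhead is included.

SCSI_RATES_MBPS = {
    "SCSI (early 1990s)": 5,
    "Wide Ultra2 SCSI": 80,
}

def transfer_time_seconds(megabytes: float, rate_mbps: float) -> float:
    """Time to move a given amount of data at a nominal bus rate."""
    return megabytes / rate_mbps

if __name__ == "__main__":
    payload_mb = 1024  # 1GB of data, for illustration
    for name, rate in SCSI_RATES_MBPS.items():
        t = transfer_time_seconds(payload_mb, rate)
        print(f"{name:20s}: {payload_mb}MB in {t:6.1f} seconds at {rate}MBps")
```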

SCSI buses can be designed into the backplanes of storage enclosures, which greatly simplifies the job of connecting disk drives to the shared bus. Although the process of setting up and terminating the SCSI bus is rather complex, the technology is mature and is available at low cost from a number of vendors. SCSI is a reliable transport for data storage, and virtually all server platforms support some version of SCSI storage attachment. The simplicity of the interface and its ubiquity make it a good choice for local attachment of storage to servers.

The limitations of SCSI arise directly from its simplicity. SCSI is inherently a local interface; it does not support long distances or reconfiguration within the interface; and it has a fixed amount of available bandwidth. Performance is constrained by the shared bus, which carries only one transmission at a time. Multiple buses can be used to achieve large capacities or for high availability, but there are physical and connectivity limits. Further, attaching multiple servers to a shared storage resource (such as a disk array) is complex.
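
The scaling problem can be seen with simple arithmetic: because only one transfer is on the wire at a time, the fixed bus bandwidth is divided among however many devices are busy. The short sketch below illustrates that division under idealized assumptions.

```python
# Minimal sketch of why a shared bus limits scalability: only one
# transfer is on the wire at a time, so adding devices divides a
# fixed pool of bandwidth rather than adding to it.

BUS_BANDWIDTH_MBPS = 80  # Wide Ultra2, nominal

def per_device_share(active_devices: int, bus_mbps: float = BUS_BANDWIDTH_MBPS) -> float:
    """Average bandwidth each device sees if all are busy at once."""
    return bus_mbps / max(active_devices, 1)

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 15):
        print(f"{n:2d} active devices -> ~{per_device_share(n):5.1f}MBps each")
```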

SSA

More than three petabytes of Serial Storage Architecture (SSA) storage have been attached to Unix and PC servers since the interface was introduced in 1995. Unlike the shared SCSI bus, SSA is configured as a series of point-to-point links, each carrying a packet of data. SSA is typically configured as a loop, which gives each server or device two paths to any destination in the loop. The logical protocol carried over the SSA loop is SCSI, so only modest software modifications, confined to the device driver, are required to support the new interface.

The serial loop topology and packetized protocol make SSA a more flexible interface than SCSI. SSA loops support up to 127 nodes (adapters or devices), with links of up to 25 meters between nodes. The links are bi-directional and operate in full-duplex mode. A loop with a single host adapter can conduct four full-performance operations at a time--a read and a write in each direction from the adapter--with an aggregate bandwidth of 80MBps. (This speed is expected to double later this year, with full backward compatibility.)
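
The 80MBps figure is simply the arithmetic of the loop: two ports per adapter, each carrying a read and a write concurrently at the nominal 20MBps link speed. The sketch below spells out that calculation.

```python
# Sketch of the aggregate-bandwidth arithmetic for a single SSA adapter.
# Each link runs at a nominal 20MBps; the adapter has two ports in the
# loop, and each port carries full-duplex traffic (a read and a write),
# giving four concurrent 20MBps streams.

LINK_RATE_MBPS = 20      # nominal SSA link speed for this generation
PORTS_PER_ADAPTER = 2    # an adapter sits in the loop with two neighbors
STREAMS_PER_PORT = 2     # full duplex: one read and one write per port

aggregate = LINK_RATE_MBPS * PORTS_PER_ADAPTER * STREAMS_PER_PORT
print(f"Single-adapter aggregate: {aggregate}MBps")   # 80MBps
```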

Breaks in the SSA loop do not prevent access to data because retry conditions are handled at the link level and because adapters can use alternate paths around the loop. This is a key asset in re-configuration: The loop can be broken, new drives or servers can be attached, and the loop reconnected without interrupting applications.

Another key advantage is SSA`s scalability in multi-server configurations. An SSA loop is a series of point-to-point connections between neighboring nodes; traffic on one connection does not interfere with traffic on another, an attribute known as spatial reuse. For a loop with 8 servers, as many as 32 full-speed transfers can occur at any one time (two reads and two writes per server). Therefore, properly configured loops can increase in bandwidth as the configuration grows--an important feature for SANs. SSA host bus adapters with multi-initiator support and RAID-5 provide host connection to SSA storage subsystems. SSA adapters are available for RS/6000 and Sun servers, and PCI adapters have Windows NT, OS/2, and Netware drivers for PC servers. A bridge connection allows SCSI hosts to connect to loops of SSA disk drives. An optical extender is available for distances of up to 1.2 miles.
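
The sketch below works out the spatial-reuse arithmetic implied above: with four concurrent streams per server and a nominal 20MBps per link, an 8-server loop can, in the best case, sustain 32 full-speed transfers at once.

```python
# Sketch of spatial reuse on an SSA loop: because each hop is a private
# point-to-point link, non-overlapping transfers proceed in parallel.
# The aggregate figures are best-case numbers, assuming the transfers do
# not contend for the same link segments.

LINK_RATE_MBPS = 20
STREAMS_PER_ADAPTER = 4   # two reads and two writes, as described above

def loop_peak_transfers(servers: int) -> int:
    return servers * STREAMS_PER_ADAPTER

def loop_peak_bandwidth_mbps(servers: int) -> int:
    return loop_peak_transfers(servers) * LINK_RATE_MBPS

if __name__ == "__main__":
    for n in (1, 4, 8):
        print(f"{n} servers: up to {loop_peak_transfers(n)} transfers, "
              f"~{loop_peak_bandwidth_mbps(n)}MBps aggregate (best case)")
```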

Fibre Channel

An emerging technology, Fibre Channel promises to become the true workhorse of SANs. Fibre Channel is a set of standards that define the physical, signaling, switching, and upper-level protocol mapping of a 100MBps interface. Low-cost copper links support 25-meter connections; multimode fiber-optic links, 500 meters; and single-mode fiber, up to 20 km. Longer distances can be achieved with repeaters. SCSI, HIPPI, ATM, IP, and IPI protocols have been mapped to the Fibre Channel transport.
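
For quick reference, the snippet below records, as a simple data structure, the cabling options, nominal distances, and mapped protocols cited above.

```python
# Quick-reference summary of the Fibre Channel figures cited in the text.

FC_MEDIA_DISTANCES = {
    "copper":            "25 m",
    "multimode fiber":   "500 m",
    "single-mode fiber": "20 km (farther with repeaters)",
}

FC_MAPPED_PROTOCOLS = ["SCSI", "HIPPI", "ATM", "IP", "IPI"]

if __name__ == "__main__":
    for medium, distance in FC_MEDIA_DISTANCES.items():
        print(f"{medium:18s} up to {distance}")
    print("Upper-level protocols mapped so far:", ", ".join(FC_MAPPED_PROTOCOLS))
```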

The Fibre Channel standard describes three interconnect topologies. The simplest is point-to-point: Two units (servers or storage) are connected to each other with a single, dedicated fibre cable. The second topology--Fibre Channel-Arbitrated Loop, or FC-AL--is more useful for storage applications. FC-AL is a loop of up to 127 nodes that is managed as a shared bus. Traffic flows in one direction, carrying data frames and primitives around the loop with a total bandwidth of 100MBps. Using the arbitration protocol, a single connection is established between a sender and a receiver, and a data frame is transferred around the loop. Loops can be configured with hubs to make connection management easier.
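
Conceptually, FC-AL behaves like a shared bus with an arbitration step in front of each transfer: one winner opens a connection, moves a frame, and releases the loop. The toy model below illustrates that serialization; the lowest-address-wins rule is a simplification of the real arbitration scheme.

```python
# Toy model of FC-AL as described above: transfers are serialized.
# An arbitration step picks a single winner, a connection is opened to
# the receiver, a frame moves around the loop, and the loop is released.
# "Lowest loop address wins" is an illustrative simplification.

def arbitrate(requesting_ports: list) -> int:
    """Pick the winning port among those requesting the loop."""
    return min(requesting_ports)  # simplification: lowest address wins

def run_loop(requests: list) -> None:
    """Serialize (sender, receiver) transfers, one at a time."""
    pending = list(requests)
    while pending:
        winner = arbitrate([sender for sender, _ in pending])
        sender, receiver = next(p for p in pending if p[0] == winner)
        print(f"port {sender} opens loop to port {receiver}, transfers a frame, closes")
        pending.remove((sender, receiver))

if __name__ == "__main__":
    run_loop([(3, 10), (1, 7), (5, 2)])
```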

FC-AL is an effective way of connecting multiple servers and storage devices, but a single break in the unidirectional loop stops all traffic. Loops can be made more robust with resiliency circuits that heal the loop when a device fails or is removed, but high-availability applications require more fault tolerance. In these cases, dual-loop configurations are used with dual-ported devices, requiring a set of resiliency circuits and hubs for each loop.

Like SSA, FC-AL carries a mapping of the SCSI upper-level protocol, so software changes are minimal when new hardware is added. Advanced functions will require new software. For example, device drivers will support multiple paths to storage devices, so failures in one loop can be recovered by directing traffic to the other loop.
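
A minimal sketch of that multi-path idea, assuming a hypothetical driver that knows two loops to the same dual-ported device and fails traffic over to the surviving loop:

```python
# Minimal sketch of multi-path failover: the driver tracks two loops
# (paths) to one dual-ported device and redirects I/O when a loop fails.
# Names and structure are illustrative, not any vendor's driver API.

class DualPathDevice:
    def __init__(self, name: str):
        self.name = name
        self.paths = {"loop_a": True, "loop_b": True}  # True = healthy

    def fail_path(self, path: str) -> None:
        self.paths[path] = False

    def submit_io(self, request: str) -> str:
        for path, healthy in self.paths.items():
            if healthy:
                return f"{request} for {self.name} sent via {path}"
        raise IOError(f"no path available to {self.name}")

if __name__ == "__main__":
    disk = DualPathDevice("array-lun-0")
    print(disk.submit_io("READ"))
    disk.fail_path("loop_a")           # simulate a break in one loop
    print(disk.submit_io("READ"))      # traffic moves to the other loop
```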

Finally, Fibre Channel has a switched topology that allows thousands of ports to be connected to a fabric. Traffic between ports is routed through switches that can be interconnected and cascaded. Configurations can be created with large capacity, high availability, high throughput, or a combination thereof. Protocols for name serving have been worked out that allow systems to be attached to the fabric, to obtain an address, and to become known to other systems. A special switch port, called an FL port, allows an FC-AL loop to be attached directly to the fabric. Gigabit Fibre Channel products began to appear last year, and more are expected this year, including disk drives from multiple vendors, host bus adapters, hubs, and switches. Loop and point-to-point connections can be used to attach cached-RAID disk subsystems to multiple servers. After a late start, the tape community is showing interest in Fibre Channel attachments; products are expected later this year.
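
The name-serving idea can be pictured as a registry maintained by the fabric: a node attaches, receives an address, registers what it is, and can then look up other nodes. The sketch below mimics only the concept; it is not the actual Fibre Channel name-server protocol.

```python
# Conceptual sketch of fabric name serving: attach, get an address,
# register a description, and discover other registered nodes.

import itertools

class FabricNameServer:
    def __init__(self):
        self._next_addr = itertools.count(1)
        self._registry = {}   # address -> node description

    def attach(self, description: str) -> int:
        """Assign a fabric address and record the node."""
        addr = next(self._next_addr)
        self._registry[addr] = description
        return addr

    def discover(self, kind: str) -> dict:
        """Return all registered nodes whose description mentions `kind`."""
        return {a: d for a, d in self._registry.items() if kind in d}

if __name__ == "__main__":
    fabric = FabricNameServer()
    fabric.attach("server: database host")
    fabric.attach("storage: cached-RAID disk subsystem")
    fabric.attach("storage: tape library")
    print(fabric.discover("storage"))   # servers can now find storage ports
```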

Fibre Channel and ESCON

ESCON SANs support a switched fabric for system-to-system, system-to-peripheral, and peripheral-to-peripheral traffic. The access methods that S/390 mainframes provide support multi-pathing, fail-over, and cross-system sharing of storage resources and data. And ESCON Manager software includes security features such as fencing. However, since its introduction in 1990, ESCON has not increased the basic signaling speed of 20MBps. Furthermore, while widely used in high-availability enterprise computing environments such as S/390 environments and on RS/6000 servers with channel-to-channel connections, ESCON is not supported in the general server marketplace and is not priced to play in lower-level server applications.

Last month, IBM announced an initiative for Fibre Channel on the S/390 platform.

What Will SANs Look Like?

First, SANs already exist. Fibre Channel and SSA loops are good examples of early SANs, while an S/390 CECPlex with its associated storage and ESCON connectivity is a good example of a high-end SAN.

Second, bridges to new infrastructures are necessary. IBM already offers bridge products for SCSI and SSA and for SCSI and ESCON. As Fibre Channel becomes more popular, bridges to the other interconnects will become available. Storage servers will support a number of different interfaces; by 2000, disk arrays will support Fibre Channel, SCSI, ESCON, SSA, and possibly network interfaces such as Gigabit Ethernet.

Third, there is value in interoperability. Storage servers that can be attached to any server in an enterprise are more valuable than those that cannot be attached. Users will continue to place higher value on products that conform to standards and are interoperable. SAN connectivity demands that other combinations work too: For example, a remote backup to another disk product or to a tape library should be possible without additional equipment.

Fourth, SANs require significant management software. Intelligent device drivers and management tools for adapters, hubs, switches, and bridges are necessary. Without a strategy for these management components, SANs will become unmanageable. Ultimately, SAN management packages will include tools that are plug-ins into enterprise management frameworks.
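
One way to picture the plug-in model: each class of SAN component (adapter, hub, switch, bridge) supplies a small management module that a framework can enumerate and poll. The sketch below is purely illustrative; the interfaces and names are assumptions, not any real management API.

```python
# Illustrative sketch of management plug-ins for SAN components.

class ManagedComponent:
    def __init__(self, kind: str, name: str):
        self.kind, self.name = kind, name

    def health(self) -> str:
        return "ok"   # a real plug-in would query the actual device

class ManagementFramework:
    def __init__(self):
        self.plugins = []

    def register(self, component: ManagedComponent) -> None:
        self.plugins.append(component)

    def report(self) -> None:
        for c in self.plugins:
            print(f"{c.kind:8s} {c.name:12s} status={c.health()}")

if __name__ == "__main__":
    fw = ManagementFramework()
    for kind, name in [("adapter", "hba0"), ("hub", "hub1"),
                       ("switch", "sw1"), ("bridge", "scsi-fc0")]:
        fw.register(ManagedComponent(kind, name))
    fw.report()
```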

Fifth, do not confuse the physical connectivity of SANs with middleware and application-level exploitation that will evolve over time. To share data between applications in two dissimilar servers, it is not enough to provide a common connection to shared storage. True data sharing requires many layers of software. Similarly, SANs allow peer-to-peer operation between a disk array and a tape drive, but considerable work is required before a fully outboard backup can be performed.

At the risk of making predictions in print (which can later be inspected for accuracy), here is a view of how SANs will evolve.

1) SANs as better channels. As a first toehold, technologies like Fibre Channel and SSA offer functional improvements over parallel SCSI without requiring significant changes to systems and software. Fibre Channel offers longer distance support and performance and scalability improvements over SCSI. Long-distance attachment for disk arrays and tape libraries will likely be the principal motivating factor for Fibre Channel releases in 1998 and 1999. In this step in the evolution of SANs, the logical protocol is still SCSI, and the software changes are modest. There is real value in taking this first step because the benefits are immediate, the costs are minimal, and the infrastructure is expandable.

2) SAN-optimized storage. New storage functions will improve performance, availability, and management. Replication services will be important in this phase. New software will be needed to manage these functions, but the advantages will be significant. For example, with replication, backups can be performed without interrupting applications, which means the "backup window" will become a thing of the past. Cross-platform data copy sharing is an important benefit of this phase. Bridging products will allow the transport of data from one system platform to another (e.g., extracting data from a mainframe database and sending it to a Unix platform for mining).
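
The backup-window point can be illustrated with a toy model of backing up from a point-in-time replica: the application keeps updating the primary copy while the backup job reads a frozen view. This is a sketch of the concept, not any particular replication product.

```python
# Toy model of backup-from-a-replica: the backup reads a point-in-time
# copy while the application continues to update the primary.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)

    def snapshot(self) -> "Volume":
        """Freeze a point-in-time copy (here, just a plain copy)."""
        return Volume(self.blocks)

if __name__ == "__main__":
    primary = Volume({0: "jan-data", 1: "feb-data"})
    replica = primary.snapshot()         # taken at the sync point
    primary.blocks[1] = "feb-data-v2"    # application keeps updating
    backup_image = dict(replica.blocks)  # backup reads the frozen view
    print("backed up:", backup_image)
    print("live data:", primary.blocks)
```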

Outboard data movement will also be possible in this phase, allowing users to back up disk arrays to tape in a peer-to-peer operation, which doesn't tie up host-processing cycles or bandwidth. The software costs for this step will be more significant than they are for step one, but so are the benefits (e.g., performance balancing and faster backups and reconfiguration procedures). The host file system must still control the backup, providing one or more disk arrays with a list of extents, determining synchronization points, and choosing the destination of the backup. And, lastly, storage subsystems, disk arrays, and tape libraries will become storage servers that present emulated device interfaces or higher-level APIs.
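
A hypothetical request format makes the division of labor concrete: the host file system still decides what to copy (the extent list), when (the sync point), and where (the destination), while the SAN moves the blocks disk-to-tape. All names below are invented for illustration.

```python
# Sketch of the control flow described above: the host builds the copy
# request, but the bulk data moves outboard, disk to tape, without
# passing through the host. The request format is hypothetical.

from dataclasses import dataclass

@dataclass
class Extent:
    start_block: int
    block_count: int

@dataclass
class OutboardCopyRequest:
    source_array: str
    destination: str          # e.g. a tape library known to the SAN
    extents: list             # what the host file system wants copied
    sync_point: str           # label chosen by the host at quiesce time

def host_builds_request() -> OutboardCopyRequest:
    return OutboardCopyRequest(
        source_array="disk-array-01",
        destination="tape-library-02",
        extents=[Extent(0, 4096), Extent(8192, 2048)],
        sync_point="nightly-2200",
    )

if __name__ == "__main__":
    req = host_builds_request()
    blocks = sum(e.block_count for e in req.extents)
    print(f"copy {blocks} blocks from {req.source_array} "
          f"to {req.destination} at sync point {req.sync_point}")
```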

3) SAN-optimized systems. At some point, SANs will become the de facto I/O environment. Replication and conversion of data within the network and direct delivery of data from storage servers to clients will likely become key services. Clients and users will view the network as the ultimate secure repository of data. Intelligence within the network will determine cache location, who has access to the data, when and where to do backups, and how bandwidth is used. Setting management policies will still be the job of the application that creates the data. Strategies will be needed to manage data with a collection of attributes, based on policies that instruct agents within the SAN.
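
As a sketch of policy-driven management, the snippet below attaches a class attribute to each dataset and lets an agent map it to the actions a policy dictates; the attribute and policy names are invented for illustration.

```python
# Illustrative sketch of policy-driven management: data carries
# attributes, and agents in the SAN match them against policies to
# decide replication, backup, and caching.

POLICIES = {
    "mission-critical": {"replicas": 2, "backup": "hourly", "cache": "array"},
    "archive":          {"replicas": 1, "backup": "weekly", "cache": "none"},
}

def plan_for(dataset: dict) -> dict:
    """An agent maps a dataset's attributes to the actions a policy dictates."""
    policy = POLICIES[dataset["class"]]
    return {"dataset": dataset["name"], **policy}

if __name__ == "__main__":
    print(plan_for({"name": "orders-db", "class": "mission-critical"}))
    print(plan_for({"name": "1997-logs", "class": "archive"}))
```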

SAN technology exists, but there are still many unresolved issues surrounding its future implementation. How can security be assured? What protocol will be used to move data? (Surely not a SCSI block protocol.) What will the network look like? Universities and industry consortia are building advanced systems that rely on peer-to-peer transfers of data, third-party operations, and management classes based on data attributes. Network Attached Secure Disk (NASD), sponsored by the National Storage Industry Consortium, and Network Attached Peripherals, described in the Department of Energy ASCI project, are examples of two projects that are underway. And the channel technologies required to lower costs and improve interoperability are coming to market and offer considerable value in their early stages. Longer-term improvements and system exploitation will be software-intensive and therefore will be more complex. But there is no doubt that the size of the opportunity will drive future investments in this area.

Clodoaldo Barrera is director of strategic storage systems at IBM's storage systems division in San Jose.

