SANs: what's real, what's not?
Storage area networks (SANs) are the new buzz in corporate IT circles. Most industry observers agree that SANs will become the standard architecture for enterprise-wide network storage within the next three years. But how much of the buzz is hype and how much is real? And how long will you have to wait for the promise of SANs to become a reality?

By Ron Levine

First, what is a SAN? A SAN is a networked, high-performance I/O channel connected to the back end of servers. Its nodes are storage devices, all communicating over a common storage pipeline. This pipeline consists of an interface (usually Fibre Channel) and interconnects (switches, gateways, hubs, etc.), which together create a fabric.

By moving the storage device off the server-bus connection and onto a dedicated network subsystem, a number of things happen:

- Storage is externalized, freeing it from the operational limitations of the server and the traffic constraints of the network, thus improving storage subsystem performance and LAN/WAN performance.

- Storage devices and data are made available to multiple hosts without affecting the performance of the communications network.

A SAN is a shared storage repository attached to multiple servers via an independent network, potentially removing all storage functions from the LAN or WAN. Network performance improves because it is free from the cumbersome overhead associated with file access, retrieval, storage, and data backup functions--tasks that hog network bandwidth, cause traffic bottlenecks, and drain overall network resources.

Data access and availability are enhanced because file read/writes, backup/restore, archiving/retrieval, data migration, and data and device sharing are more effectively and efficiently handled by a network optimized for the demands of storage tasks (e.g., high throughput, large packet data transfers).

In a SAN, data is accessible by way of alternate data paths (providing fault-tolerant operation) and is more easily scaled, serviced, and managed due to centralization. SAN methodology also makes it easier to implement disaster protection configurations.
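The fault-tolerant behavior of alternate data paths can be illustrated with a toy model. This is a minimal sketch, not a real driver stack: the `Path` class and `multipath_read` function are hypothetical stand-ins for the multipathing logic a SAN host would actually run.

```python
# Toy sketch of SAN-style multipath access: if the primary data path
# fails, the read is retried transparently over an alternate path.
# Path and read_block are illustrative names, not a real API.

class Path:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def read_block(self, block_id):
        if not self.healthy:
            raise IOError(f"path {self.name} is down")
        return f"block-{block_id}"  # stand-in for real data

def multipath_read(paths, block_id):
    """Try each configured path in turn; fail only if all are down."""
    for path in paths:
        try:
            return path.read_block(block_id)
        except IOError:
            continue  # fall through to the next alternate path
    raise IOError("all paths failed")

# Primary switch is down; the read still succeeds via the alternate.
paths = [Path("fc-switch-A", healthy=False), Path("fc-switch-B")]
print(multipath_read(paths, 7))
```

The point of the sketch is that path failure is absorbed below the application: the caller asks for a block, not for a route to it.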

SANs are not new. Mainframes and supercomputers have long used high-bandwidth, high-availability, dedicated data storage networks where uninterrupted data access is a necessity. What makes SANs exciting are the features Fibre Channel brings to the implementation of network storage.

With distributed networks handling more business critical applications, access to--and the availability of--the associated data is strategic to a company's operation. SANs are a cost-effective solution for dealing with IT's requirement for instant, uninterrupted availability of data and users' insatiable appetite for additional data storage capacity.

Over the last decade, applications and CPU performance have increased by orders of magnitude, but storage devices and channels have not kept pace. Consequently, storage subsystems must be redesigned to deliver the proper level of bandwidth, redundancy, and protection to meet the increasingly strategic nature of today's open-systems environment.

The ideal storage network

In the traditional approach, open-systems storage is LAN-centric. Storage devices are attached to individual servers with point-to-point bus connections such as SCSI. Each server may have its own proprietary data management architecture, splintering storage administration and control. Any communication between storage devices must occur over the LAN.

But LAN data paths often do not have the necessary bandwidth to address the storage requirements of data-intensive applications such as on-line transaction processing (OLTP) and data warehousing. This results in network bottlenecks and overall system performance degradation.

These problems are proving to be a challenge to IT executives because connectivity and cable length restrictions in traditional SCSI connections are limiting their performance enhancement options. As a result, many sites must manage each server's storage separately and grow additional storage capacity off-line on individual servers.

But what if we could start again? "We'd rebuild storage connections and networks with plenty of bandwidth to handle high-throughput applications like imaging, video, and CAD and high I/O applications like OLTP and databases," says Scott Robinson, vice president of engineering at Datalink Corp., an independent information management solutions provider in Minneapolis. "We'd also implement fault-tolerant storage operations (stored data would remain available despite the loss of any single component) on this new network, and we'd make it easily scalable, allowing for storage capacity to be added as future needs dictate, independent of any specific server or operating system."

Robinson continues: "The ideal storage management solution would also include the ability to centrally manage all storage devices and allow universal data sharing over the enterprise network. Storage management functions like backup/restore, archive/retrieval, disaster avoidance/recovery, volume management, and file systems would be tightly coupled." The storage implementation described by Robinson is the promise of SANs.

What's available now?

SANs are changing the way we think about storage and its distribution over a network. Instead of a LAN-centric view, SANs take a data-centric view, that is, data is centrally stored and accessed by multiple servers over a dedicated connection. The messaging network (LAN/WAN) no longer provides storage-related functions and is scaled and managed without regard to storage and file access requirements. The storage-dedicated connection (SAN) handles all storage tasks.

Unfortunately, the reality is that all the components to implement a true SAN environment are not yet in place or available. SANs are being rolled out in three phases. Phase-one installations already exist, and the storage industry is starting to deliver on some of the promises of phase two. But organizations can still receive immediate and measurable operational benefits by employing available SAN components now and adding the remaining pieces as they become available.

Phase one is disk-oriented and consists mostly of enhancing storage-network data transfers by deploying RAID devices and replacing SCSI I/O paths with Fibre Channel. This technology shift provides higher bandwidth and increased speed over longer distances than is possible with SCSI. The combination of RAID and Fibre Channel improves reliability through fault-tolerant disk operations and channel connections. Application-specific data sharing, SAN-based backup/restore, and centralized data management operations are additional benefits of Fibre Channel implementations.
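The bandwidth gain from replacing SCSI with Fibre Channel can be made concrete with a back-of-the-envelope calculation. The figures below are nominal interface rates typical of the period (Ultra SCSI at roughly 40MBps, Fibre Channel at 100MBps), chosen for illustration; real sustained throughput would be lower in both cases.

```python
# Rough transfer-time comparison: SCSI vs. Fibre Channel at nominal
# interface rates. Numbers are illustrative, not benchmarks.

def transfer_hours(data_gb, mb_per_sec):
    """Hours to move data_gb gigabytes at a sustained mb_per_sec rate."""
    return data_gb * 1024 / mb_per_sec / 3600

backup_gb = 500  # hypothetical nightly backup volume

scsi_hours = transfer_hours(backup_gb, 40)    # Ultra SCSI, ~40 MBps
fc_hours = transfer_hours(backup_gb, 100)     # Fibre Channel, ~100 MBps

print(f"Ultra SCSI:    {scsi_hours:.1f} h")   # ~3.6 h
print(f"Fibre Channel: {fc_hours:.1f} h")     # ~1.4 h
```

Even at nominal rates, the same backup window shrinks by more than half, which is the practical motivation behind the phase-one interface swap.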

Fibre Channel switches and hubs provide for increased (and simplified) storage device scalability, true hot plugging of storage subsystems, and security and isolation between functions. A number of corporations are now installing Fibre Channel connections to alleviate storage-related network bottlenecks with an eye toward full-scale SAN implementations in the future.

What's coming soon?

Phase two of SAN implementation, which involves SAN-specific capabilities that SCSI simply can't provide, is just beginning. For example, zoning, logical unit number (LUN) masking, and high-availability clusters can all be implemented through Fibre Channel device linking.

Fibre Channel switches allow connected devices to be assigned to dedicated storage zones. For example, a disk resource can be assigned to one server, while a tape resource is committed to another server. This configuration can be implemented today, providing the immediate benefits of zone fault isolation, the capability to dynamically add devices to a server`s storage pool, and 100MBps bandwidth.

Software is on the horizon that will enable users to dynamically re-zone configurations. This means, for instance, that a tape library could be temporarily included in the disk zone during a specific backup window for direct-attached backup and then be reallocated to a different zone for remote vaulting or archiving. Before this can happen, however, backup software must be able to recognize the dynamic appearance of a shared tape resource across multiple servers. Software to support this concept is expected to be available by mid-year.
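The re-zoning scenario described above can be sketched as a simple data-structure manipulation. This is a conceptual model only, assuming made-up zone and device names; a real switch would enforce zoning in its fabric firmware, not in host software.

```python
# Toy model of switch zoning with dynamic re-zoning: a tape library is
# moved into the disk zone for a backup window, then reallocated.
# Zone and device names are hypothetical.

zones = {
    "disk-zone": {"server-1", "raid-array"},
    "tape-zone": {"server-2", "tape-library"},
}

def rezone(zones, device, src, dst):
    """Move a device between zones, as dynamic re-zoning software would."""
    zones[src].remove(device)
    zones[dst].add(device)

# During the backup window, give the disk zone direct tape access.
rezone(zones, "tape-library", "tape-zone", "disk-zone")
assert "tape-library" in zones["disk-zone"]

# After the window, return the library for remote vaulting/archiving.
rezone(zones, "tape-library", "disk-zone", "tape-zone")
```

The catch the article notes still applies: the backup software on each server must notice the tape resource appearing and disappearing from its zone.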

LUN masking enables storage resources to be subdivided so they can be shared across multiple network servers. This allocation of individual storage segments within a device allows for selective space and file sharing among servers and work groups, dedication of storage areas for specified applications or file types, and information-specific access management (e.g., picking individual disks within a CD tower that can be accessed by specific servers).
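LUN masking amounts to a per-server visibility filter over a shared device. The sketch below uses made-up server names in place of real HBA worldwide names, and a plain lookup table in place of array firmware; it shows the access-control idea, not an actual implementation.

```python
# Toy sketch of LUN masking: each server sees only the LUNs it has
# been granted, even though all LUNs live on the same storage device.
# Server names and LUN assignments are hypothetical.

lun_masks = {
    "oltp-server":   {0, 1},     # database volumes
    "web-server":    {2},        # shared content volume
    "backup-server": {0, 1, 2},  # needs everything for backup
}

def visible_luns(server, all_luns):
    """Return only the LUNs the mask exposes to this server."""
    return sorted(all_luns & lun_masks.get(server, set()))

all_luns = {0, 1, 2, 3}  # LUN 3 is masked from every server
print(visible_luns("web-server", all_luns))   # [2]
print(visible_luns("oltp-server", all_luns))  # [0, 1]
```

A server with no entry in the mask table sees nothing, which is the safe default for access control of this kind.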

Over the next few months, high availability and high scalability SAN clusters will appear. As SCSI is replaced with Fibre Channel and SAN support software becomes available, cluster configurations will become more powerful. Cluster performance will benefit from the dynamic re-allocation of storage resources, fault isolation on the fabric, true on-the-fly attachment of devices, and cabling options that allow the physical separation of redundant components along the SAN.

What's ahead?

In two or three years, phase three will bring a major shift in the way corporate technology officers think about information systems. In an era where data is considered a business's most strategic asset, these executives will take an information-centric view, not a LAN-centric view, of their IS operations. Information will increasingly be shared across heterogeneous platforms and applications. File systems, volume managers, HSM, archive, and backup functions will be highly integrated and tightly coupled, improving efficiency and ease of storage management.

"As SANs are rolled out, multi-node heterogeneous clusters will enable the rapid growth of shared-storage application functions in a high availability environment," says Robinson. "Servers will be scaled up as application needs--not storage and storage management needs--dictate. The LAN will be free to perform the communications role it was designed for without the burden of storage-related tasks like backup and archiving."

As SAN software application support develops, information storage and management will become centralized across the enterprise, and server and LAN functions will become increasingly independent of storage.

SANs: today and tomorrow

Today, SAN benefits include:

- Increased bandwidth, speed, and overall performance of storage subsystems.

- Greater storage scalability.

- Improved high availability and fault-tolerant storage subsystem operations.

- Elimination of bus distance restrictions of SCSI connections.

- Removal of LAN traffic bottlenecks caused by storage tasks.

- Sharing of storage resources and data among multiple servers and applications.

Other potential SAN benefits are on the way, including:

- Centralized storage administration.

- Dynamic re-distributing of storage resources.

- Streamlined data management processes.

Many SAN features are already being installed, providing distinct IS operational advantages over traditional storage architectures. Additionally, investments today in SAN technology (for example, Fibre Channel) lay the foundation for future infrastructure capabilities. But many additional features of SANs, some of which are being touted today, will not be available for a few years.

Fig. 1: In the traditional approach, open-systems storage is LAN-centric. Storage devices are attached to individual servers with point-to-point bus connections such as SCSI.

Fig. 2: Instead of a LAN-centric view, storage area networks take a data-centric view in which data is centrally stored and accessed by multiple servers over a dedicated connection.

Fig. 3: Fibre Channel switches allow connected devices to be assigned to dedicated storage zones. For example, a disk resource can be assigned to one server, while a tape resource is committed to another server.

Ron Levine is a freelance writer in Carpinteria, CA.

This article was originally published on January 01, 1999