Multi-host SANs pose challenges

Posted on August 01, 1999



Data sharing software solves many of the problems inherent in heterogeneous storage area networks.

Christopher Stakutis

To many people, storage area network (SAN) technology is just another networking technology with the same features and data protection capabilities as traditional LANs. This is not true.

Though the evolution of LAN networking has at times been painful and has certainly not been error-free, when you plug a computer into a LAN port today, you can be assured of one thing: inbound access is granted only to those with proper authorization. Similarly, merely being connected to the physical LAN does not enable you to interrogate and alter servers or other machines without proper authorization.

There is an expectation that, as with LAN networking, multiple hosts can be attached to a SAN fabric and operate without repercussions. In reality, the most common SAN configuration today consists of a single host with multiple attached devices--usually disk subsystems (see Figure 1).

Unfortunately, this is not possible with existing SAN technologies. Virtually any device that is plugged into a SAN can access and potentially damage another device. Far worse, multiple types of hosts essentially guarantee that some amount of data corruption will take place.

Why? Because all host operating systems assume that disk devices are their own private devices and therefore are entitled to write information to them, cache data in memory, and undo transactions that seem incomplete.

Following is a partial list of potential issues in a multi-host SAN:

•Marking/mounting. During boot-up, computers "discover" the storage devices they can physically access. For each one found, they write information and potentially auto-format the device with a native file system. Of course, each time a "write" occurs to a device that is not coordinated with other computers on the SAN, a significant amount of data is lost.

•Rollback. File systems ensure that file structures are kept intact in the event of power loss or other failures. Written records, kept on the storage device, tell a computer how to undo, roll back, and otherwise "fix" any partially completed transaction. In a multi-host SAN environment, when one machine is creating data, any other machine passing through its power-up integrity phase can undo those transactions.

•Cross allocation. When a computer creates new data, it needs to find available space on storage devices to hold that data. Typically, the "free list" is kept in RAM for performance reasons. In a multi-host SAN, each machine ends up using the same set of blocks for new data. This results in cross-linked data on the storage devices, rendering the data useless.

•Coherency. A machine creating a new file on a storage device, or new data in a file, has no mechanism to inform other machines that their internal RAM caches may contain stale data.

•Non-native file systems. Even if the above events are avoided, one computer platform does not understand another platform's native file system. In the best case, systems would ignore "foreign" file systems; more typically, a machine tries to interpret the foreign file system in its own context and, in doing so, drastically alters it.
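The cross-allocation hazard in the list above can be illustrated with a short sketch (a simplified, hypothetical model; the class and file names are invented for illustration). Each host snapshots the on-disk free list into RAM at mount time, so with no SAN-wide coordination both hosts believe the same block is free:

```python
# Hypothetical model of the cross-allocation hazard: two hosts cache the
# on-disk free list in RAM at mount time, then allocate independently.

class SharedDisk:
    def __init__(self, nblocks):
        self.blocks = [None] * nblocks          # block -> owning (host, file)
        self.free_list = list(range(nblocks))   # on-disk free list

class Host:
    def __init__(self, name, disk):
        self.name = name
        self.disk = disk
        # Snapshot the free list into RAM "for performance reasons" --
        # with no coordination, every host now sees the same blocks as free.
        self.cached_free = list(disk.free_list)

    def allocate(self, filename):
        block = self.cached_free.pop(0)         # pick a block from the stale cache
        self.disk.blocks[block] = (self.name, filename)
        return block

disk = SharedDisk(nblocks=8)
a, b = Host("hostA", disk), Host("hostB", disk)

blk_a = a.allocate("report.doc")
blk_b = b.allocate("log.txt")   # hostB's cache still lists block 0 as free

print(blk_a, blk_b)             # both hosts chose block 0
print(disk.blocks[0])           # hostB's write clobbered hostA's data
```

The result is exactly the cross-linked data the article describes: two files claim the same physical block, and whichever host wrote last silently destroyed the other's data.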

Protection alternatives

The obvious question is, When will operating system providers solve the problems associated with multi-host SAN environments? Technically speaking, the problem can be solved, but operating system vendors are unlikely to do so soon. The issue is complex, and it is not in the best interest of these vendors to make their solutions work in heterogeneous environments.

For one, though the SAN market is expected to be large, it will remain comparatively small to vendors of general-purpose operating systems. Furthermore, solving the problem could weaken their products' performance or require changes to critical code.

Fortunately, there are two categories of tools that can help users survive in a multi-host SAN environment:

•Device-level LUN control or zoning and

•Sharing software with partitioning.

Device-level LUN control is a hardware-centric approach that provides the highest level of protection, but with considerable configuration hassles. With device-level LUN control, intelligent devices that control SAN accesses (e.g., hubs, switches, or RAID arrays) are set up so that servers see some--but not all--devices. When the servers are booted, each host is aware only of the devices it is permitted to see. If the entire network is configured carefully, no unwarranted accesses take place. However, few devices support this functionality today, and there is no standard for device-level LUN control.
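Conceptually, LUN masking amounts to a per-host visibility filter applied at the device before discovery. The sketch below is purely illustrative (the table, host names, and LUN numbers are invented; as noted, real products use proprietary, non-standard schemes):

```python
# Illustrative model of device-level LUN masking: an intelligent device
# (switch or RAID controller) filters which LUNs each host may discover.
# Host names and LUN assignments here are hypothetical.

ALL_LUNS = {0, 1, 2, 3}

LUN_MASK = {
    "nt_server":   {0, 1},   # the NT server may see LUNs 0 and 1 only
    "unix_server": {2, 3},   # the Unix server may see LUNs 2 and 3 only
}

def visible_luns(host):
    """Return the LUNs a host discovers at boot, after masking."""
    return ALL_LUNS & LUN_MASK.get(host, set())

print(visible_luns("nt_server"))     # {0, 1}
print(visible_luns("unix_server"))   # {2, 3}
print(visible_luns("rogue_host"))    # set() -- an unlisted host sees nothing
```

Because each host's discovery is filtered before its operating system ever runs, none of the marking, rollback, or cross-allocation hazards can occur across the mask boundary--but every change of assignment requires reconfiguring the device and rebooting the affected hosts.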

Multi-host data-sharing software, on the other hand, prevents data corruption, while allowing each computer to potentially see any and all storage in a coordinated, safe manner. Software-based data sharing is easy to administer, provides high performance, and has minimal risks.

One software approach is a hybrid LAN-SAN solution. This method maintains existing security and administration mechanisms while capitalizing on the traditional LAN networking paradigm, which handles heterogeneity, security, administration, and other issues.

By intelligently separating data (the file's contents) from meta-data (file name, security, etc.), a hybrid solution transparently redirects the file data during "reads" and "writes" to the SAN wire while the meta-data continues as usual over the LAN. Thus, storage elements (partitions) can be shared at the data level between multiple hosts.
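The read path of such a hybrid can be sketched as follows (a minimal model, assuming a hypothetical meta-data server and block map; the class and file names are invented): the small meta-data request travels over the LAN, and the bulk data moves directly over the SAN.

```python
# Minimal sketch of a hybrid LAN-SAN read: meta-data goes to the server
# over the LAN; bulk file contents are read straight off SAN storage.

class MetaDataServer:            # reached over the LAN
    def __init__(self):
        self.files = {"report.doc": {"owner": "chris", "blocks": [17, 42, 43]}}

    def open_file(self, name):
        # Returns the meta-data plus the physical block map the client
        # needs in order to read the data directly from SAN devices.
        return self.files[name]

class SanDisk:                   # reached over the SAN fabric
    def __init__(self):
        self.blocks = {17: b"chunk1 ", 42: b"chunk2 ", 43: b"chunk3"}

    def read_block(self, n):
        return self.blocks[n]

def read_file(name, server, disk):
    meta = server.open_file(name)                                # small LAN request
    data = b"".join(disk.read_block(b) for b in meta["blocks"])  # bulk SAN reads
    return meta, data

meta, data = read_file("report.doc", MetaDataServer(), SanDisk())
print(data)   # b'chunk1 chunk2 chunk3'
```

Because the server remains the single authority for meta-data (names, security, allocation), the LAN's existing administration and coherency mechanisms stay intact while the heavy data traffic bypasses the LAN entirely.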

Virtualized storage

Sharing software enables users to realize the full promise of SANs: aggregating storage devices regardless of distance and sharing that data among many systems at high speeds. Additionally, the enterprise community is looking for SAN technology to solve several other issues, specifically:

•The virtualization of all storage, so that pieces can be carved out interactively and assigned to machines with transient or growing needs.

•Direct (raw) access to virtual storage partitions (non-sharing) to satisfy the needs of non-network-aware applications, such as Oracle, or the need for hard-mounted private areas (swap space, local-like disks, etc.).

Virtualizing the storage means providing an abstract layer on top of the physical storage elements and below the operating systems. Many RAID controllers provide this type of functionality; however, they do it only for their part of the storage pool. In the future, this abstraction must span all of the storage elements in the SAN, and the computers themselves must control it.

Some forthcoming operating systems allow for a greater degree of dynamic control over storage devices. For example, with Windows NT 5.0 (Windows 2000), a logical volume can dynamically grow or shrink at any time. A new disk can be added to the volume (volume set) and later removed or partitioned.

The challenge of virtualized storage is providing a cross-platform abstraction that is easy to implement on many operating systems, but does not compromise the SAN value proposition (i.e., speed, direct access, etc.) [see Figure 2].

Some LAN-SAN hybrid solutions come close to providing virtualized storage. In this architecture, the client machines never hard-mount the file systems; instead, they rely on a network-mount to abstract the physical storage.

When a file is opened, ancillary information about the physical blocks of the file is provided to the client, so the client can go directly to the SAN devices and retrieve the blocks. This is done without any knowledge of file-system layouts, partitions, stripes, spans, etc.

However, this approach works on a per-file basis, which means there is some overhead for each file. In addition, the volume--by nature a network volume--is sometimes not suitable for raw access or for private local file systems, because it is not hard-mounted.

An alternative approach treats certain files--possibly very large ones--on a master/controlling machine as volumes on other machines. For example, a 20GB Windows NT file could be treated as a UFS partition (or several partitions) by a Unix client. This method introduces a pseudo driver on the Unix systems that looks for special files via the LAN-SAN connection at startup, retrieves the physical-access specifics for those files, and then presents each one as a contiguous set of blocks, like a raw device.

Essentially, this is what LAN-SAN data-sharing solutions do--they take a network-mounted file, learn the individual non-contiguous physical layout, and present the abstract of a contiguous logical file to users. On the master machine, it is just a file, and can be treated like a file (moved, de-fragmented, spanned over volumes, protected, striped, shrunk, stretched, and backed up).
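The core of that pseudo driver is a logical-to-physical block translation over the file's extent list. The sketch below is a simplified, hypothetical model (the extent numbers are invented): the client addresses the "device" as blocks 0..8, and each access is remapped to the file's actual, non-contiguous physical blocks.

```python
# Hypothetical sketch of the pseudo-driver translation: a file scattered
# across the master's volume is presented to a client as one contiguous
# run of blocks.

# Extent list learned over the LAN at startup: (physical_start, length).
# Values are illustrative only.
EXTENTS = [(1000, 3), (250, 2), (9000, 4)]   # 9 logical blocks in total

def logical_to_physical(lblock):
    """Map a logical block of the pseudo raw device to its physical block."""
    offset = lblock
    for start, length in EXTENTS:
        if offset < length:
            return start + offset
        offset -= length
    raise ValueError("block beyond end of device")

# The client addresses blocks 0..8 as if the device were contiguous:
print([logical_to_physical(b) for b in range(9)])
# [1000, 1001, 1002, 250, 251, 9000, 9001, 9002, 9003]
```

Since the translation is a pure lookup, the client-side driver adds almost no per-I/O cost, and all the real management (growing, moving, de-fragmenting the backing file) stays on the master machine.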

This approach has a number of benefits. Clients have the opportunity to deal with a seemingly raw device, and thus use it for private access for applications like Oracle or local-mounted file systems. When accessing this partition, there is no overhead on the LAN and no meta-data--exactly like accessing file contents in the LAN-SAN approach.

The entire set of disks is abstracted by another machine that ultimately manages the set. This machine controls which participating machines see which pieces, as well as seamlessly manages the aggregation and partitioning of the storage for the whole network. Client machines access the shared repository in two different ways: coherent data sharing or partitioned (pool sharing and amortization) data sharing among many machines.

SANs offer significant benefits to IT managers. However, users expect certain compatibility and data-protection features, and the expectation of how SANs work is still far from reality. Simply plugging a server into a SAN where several servers use the same storage device can result in serious data loss.

Furthermore, having more than one computer share data on storage devices between servers is not possible with most operating systems.

The good news is that partitioning software exists that protects against data loss and software exists for sharing data between SAN systems in a hybrid LAN-SAN approach. But there is also a need for some systems to have private access in order to run Oracle, other "raw" applications, or private local-file systems. A global method of aggregating and partitioning the storage will meet these requirements.


Fig. 1: The most common SAN configuration today consists of a single host with multiple attached disk subsystems.


Fig. 2: Virtualization provides a cross-platform abstraction that allows data sharing among heterogeneous host platforms.

Christopher Stakutis is director of engineering for the Shared Storage Business Unit at Mercury Computer Systems, in Chelmsford, MA.

