Five steps to total SAN management

Posted on September 01, 2000


Plenty of point products exist, but SAN management is still missing a few elements.

By Barry Robertson

As the storage area network (SAN) concept has matured and interoperability problems have lessened, management difficulties have emerged as the last major barrier to widespread SAN deployment. This is not surprising, given the potential complexity of SANs. By virtue of its underlying protocol (Fibre Channel), the SAN storage model is both open and without boundaries; in theory, it permits thousands of devices to be directly shared by a multitude of hosts across mile-wide networks. The reality, of course, is that nobody is using SANs on anything even remotely approaching that scale, and for good reason: There are still no comprehensive management tools.

The lack of these tools is in part the fault of software developers who have focused on one aspect of the SAN management puzzle instead of the big picture. For IT professionals looking to implement a SAN from scratch, or convert a legacy SCSI-based environment to Fibre Channel, this can be frustrating.

To overcome this general deficiency, SAN management must be redefined in terms that are meaningful to users. An ideal SAN management model should give IT professionals control of the storage network and its assets. More specifically, it should enable them to:

  • Visualize the structure of the SAN and all its resources, including RAID logical volumes.
  • Filter the global view of the SAN according to user-defined criteria.
  • Zone and sub-zone SAN resources as needed, for the purpose of organization or to control user access (security).
  • Direct the flow of I/O traffic throughout the entire SAN, even to the point of designating which adapters on which hosts may be used by a given resource at a given time.
  • Monitor and automatically notify IT managers about the performance of every physical and logical device on the network.
  • Add and remove resources as needed without interrupting normal operations.

Until management applications offer capabilities similar to these (in other words, until IT professionals are given total control of the SAN and its resources), the software side of the SAN equation will continue to lag far behind the hardware side.
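
As a rough illustration only, the six capabilities above could be collected into a single management interface. The sketch below uses Python purely as a convenient notation; the class and method names are invented for this article and do not correspond to any existing product or API.

    # Hypothetical sketch of the management capabilities listed above.
    # Nothing here corresponds to a real product or API.
    from abc import ABC, abstractmethod

    class SanManager(ABC):
        @abstractmethod
        def visualize(self):
            """Return the full SAN topology, including RAID logical volumes."""

        @abstractmethod
        def filter(self, **criteria):
            """Return the subset of resources matching user-defined criteria."""

        @abstractmethod
        def create_zone(self, name, members, parent=None):
            """Create a zone or sub-zone to organize resources or restrict access."""

        @abstractmethod
        def assign_adapters(self, resource, host, adapters):
            """Control which host adapters a resource may use, and when."""

        @abstractmethod
        def monitor(self, callback):
            """Watch every physical and logical device; notify on problems."""

        @abstractmethod
        def add_resource(self, resource):
            """Bring a new resource online without interrupting operations."""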

Step 1: Visualization

SAN management requires a comprehensive view of network resources and how they are connected (both physically and logically). Although SANs offer virtually unlimited choices when it comes to connecting and configuring devices, this does not mean that all of them are equally desirable or efficient. Having constant access to a kind of "global roadmap" would alleviate this potential problem by allowing administrators to intelligently plan every stage of the SAN's evolution, from initial deployment to future expansion.


Zoning SAN resources: Hybrid zoning combines standard switch-based zoning with software-based zone control.

Being able to view and explore the SAN's connectivity structure would also be immensely valuable in managing the relationship between physical devices (e.g., disks) and logical devices (e.g., RAID arrays). For example, if a particular RAID array was equipped to handle dynamic enlargement, an administrator could simply browse the SAN's global view, locate some free disks, and then "add" them to RAID.

This same scenario would also be applicable to other aspects of RAID management, such as the allocation of spare (failover) disk pools that could then be shared by multiple RAID arrays across the network. Although it's only part of the solution, being able to intuitively view and explore the SAN structure is a crucial first step toward total SAN control.
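
To make the add-free-disks-to-RAID scenario concrete, here is a minimal sketch that assumes a simple in-memory model of the SAN's global view. The Disk and RaidArray classes and their fields are invented for illustration.

    # Illustrative only: a toy topology model for the scenario above.
    from dataclasses import dataclass, field

    @dataclass
    class Disk:
        name: str
        capacity_gb: int
        in_use: bool = False

    @dataclass
    class RaidArray:
        name: str
        members: list = field(default_factory=list)

        def add_disks(self, disks):
            """Dynamically enlarge the array with the given free disks."""
            for d in disks:
                d.in_use = True
                self.members.append(d)

    # "Browse" the global view for free disks and add them to the array.
    san_disks = [Disk("d0", 36), Disk("d1", 36, in_use=True), Disk("d2", 73)]
    raid_a = RaidArray("RAID-A")
    free = [d for d in san_disks if not d.in_use]
    raid_a.add_disks(free)
    print([d.name for d in raid_a.members])   # ['d0', 'd2']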

Step 2: Filtration

While a global view of SAN resources is certainly valuable, it is not always useful, particularly in large switched-fabric environments involving hundreds or even thousands of devices. Administrators should therefore have the ability to filter their overall view of SAN resources according to whatever criteria they deem appropriate.

Consider mixed operating system environments, where separate storage resources may exist for separate platforms, but all of the resources still need to be potentially visible (and manageable) from any host on the network. In these situations, administrators could simply filter the SAN's disks according to whatever operating system owns them: "Show me all of the SGI disks," or "Show me all of the HP disks."

In theory, filters could also be layered in any number of combinations: "Show me all of the SGI and HP disks over 30GB that are empty and unmounted." Although the holy grail of a completely shared file system may one day render operating system filtration unnecessary, such filtration is, and will likely remain for many years, the best and most logical approach to organizing storage resources within heterogeneous operating system environments.
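
A layered filter of this kind could be expressed as a chain of independent tests, each narrowing the global view a little further. The sketch below assumes each disk record carries owner, size, usage, and mount-state attributes; the field names are invented for this example.

    # Illustrative filter layering: "all SGI and HP disks over 30GB,
    # empty and unmounted."  Field names are invented for this sketch.
    disks = [
        {"id": "d0", "owner": "SGI", "size_gb": 36, "used_gb": 0,  "mounted": False},
        {"id": "d1", "owner": "HP",  "size_gb": 18, "used_gb": 0,  "mounted": False},
        {"id": "d2", "owner": "NT",  "size_gb": 73, "used_gb": 40, "mounted": True},
        {"id": "d3", "owner": "HP",  "size_gb": 73, "used_gb": 0,  "mounted": False},
    ]

    filters = [
        lambda d: d["owner"] in ("SGI", "HP"),
        lambda d: d["size_gb"] > 30,
        lambda d: d["used_gb"] == 0,
        lambda d: not d["mounted"],
    ]

    view = [d for d in disks if all(f(d) for f in filters)]
    print([d["id"] for d in view])   # ['d0', 'd3']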

Step 3: Zoning

One of the initial attractions of Fibre Channel was the ability to carve a SAN into exclusive "zones," or subnets, using a multi-port switch. When planned and constructed intelligently, switch-based zones can be an effective means of controlling data pathways and user access. However, they lack flexibility for a variety of reasons:

  • Zone control logic typically resides entirely within the switches, rather than on the hosts.
  • By virtue of the switches' physical construction, devices on the same node or port can't be placed in separate zones.
  • The notion of a sub-zone (a zone contained within another zone) isn't feasible with most switches. What's more, modifying or reconfiguring a switch-based zone usually causes the switch to reset, which essentially renders the switch inoperable until the reset cycle is completed.

Given these shortcomings, a better solution to the zoning dilemma may be to combine switches with software-based zone control residing on the hosts (see diagram). This hybrid approach would open the door to a number of possibilities, including the ability to:

  • Create an unlimited number of zones and sub-zones.
  • Zone devices and logical volumes on an individual basis, regardless of how (or even if) they're connected to a switch.
  • Freely move devices or logical volumes between zones without affecting SAN performance or modifying the physical connection scheme.
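
One way to picture host-resident zone control is as a simple tree of zones and sub-zones whose membership can change without touching the fabric. The sketch below is purely illustrative; it models zone membership only, not switch behavior, and every name in it is invented.

    # Sketch of host-resident zone control with nested sub-zones.
    class Zone:
        def __init__(self, name, parent=None):
            self.name = name
            self.members = set()          # device or volume identifiers
            self.subzones = []
            if parent is not None:
                parent.subzones.append(self)

        def add(self, device):
            self.members.add(device)

        def move_to(self, device, other):
            """Move a device between zones without touching the fabric."""
            self.members.discard(device)
            other.add(device)

    engineering = Zone("engineering")
    builds = Zone("builds", parent=engineering)    # a sub-zone
    engineering.add("raid-A")
    builds.add("tape-0")
    engineering.move_to("raid-A", builds)
    print(sorted(builds.members))   # ['raid-A', 'tape-0']

Because membership lives in host software, moving raid-A from one zone to its sub-zone changes no cabling and forces no switch reset.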

Step 4: Traffic Control

When considering the potentially unlimited connectivity of a SAN, one truth becomes clear: Just because a particular data pathway exists, that doesn't necessarily mean that all devices should be able to see and use it.

This issue can be partially addressed by the hybrid zoning approach, but remember that zoning only regulates access on a per-device or per-host basis; it does not address the multiple adapter pathways going to and from an individual host. For example, suppose a particular host is equipped with eight adapters, all of which can see a variety of RAID arrays and/or peripheral devices through a switch. Should all of those devices have access to all of the host's bandwidth resources? Perhaps under certain circumstances they should, but what if some of the devices carry a higher priority than other devices? Further, what if those priority levels change over time?

A truly comprehensive SAN management model should address these issues by providing the means to control which devices and volumes have access to a given host bus adapter at a given time. Technically speaking, this implies being able to direct the flow of I/O control blocks at the adapter level, which would allow administrators to:

  • Track and monitor all of a host's I/O activity and how it's distributed.
  • Share host bandwidth among all available devices, thereby enabling I/O loads to balance evenly across all adapters and all bus architectures.
  • Allocate host bandwidth according to any number of exclusive or overlapping configurations (for example, allowing RAID "A" to use adapters 0, 1, and 2; allowing RAID "B" to use adapters 0, 2, 5, and 6; and granting device "C" exclusive access to adapters 4 and 7).
  • Change adapter assignments dynamically, without interruption.
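
Returning to the adapter-allocation example above (RAID "A" on adapters 0, 1, and 2; RAID "B" on 0, 2, 5, and 6; device "C" exclusive on 4 and 7), a host-resident dispatcher might keep a per-resource adapter table and rotate I/O control blocks across the permitted adapters. The sketch below is an invented illustration, not a description of any shipping driver.

    # Sketch of per-resource adapter assignments on a single host,
    # matching the example above.  The dispatch logic is invented.
    from itertools import cycle

    assignments = {
        "RAID-A":   [0, 1, 2],
        "RAID-B":   [0, 2, 5, 6],
        "device-C": [4, 7],        # exclusive: no other resource lists 4 or 7
    }

    # Round-robin each resource's I/O control blocks over its adapters.
    rotors = {res: cycle(adapters) for res, adapters in assignments.items()}

    def dispatch(resource, io_block):
        adapter = next(rotors[resource])
        return adapter, io_block   # hand the block to the chosen adapter

    print(dispatch("RAID-A", "iocb-1"))   # (0, 'iocb-1')
    print(dispatch("RAID-A", "iocb-2"))   # (1, 'iocb-2')

    # Reassign dynamically: RAID-B gives up adapter 0 without interruption.
    assignments["RAID-B"] = [2, 5, 6]
    rotors["RAID-B"] = cycle(assignments["RAID-B"])

Changing the table at run time, as the last two lines show, is all a dynamic reassignment would amount to from the administrator's point of view.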

Step 5: Performance Monitoring

Although it is probably what most people initially think of when they see the term "SAN management," the ability to monitor the performance and health of the entire storage network is still not a feature of today's SAN management applications. There are software utilities that allow limited monitoring of certain devices, but in virtually every case these utilities are vendor-specific and do not function with other brands of hardware. In the absence of a single integrated management tool, the only way around this dilemma is to either commit to a single vendor for all current and future SAN hardware needs or maintain a patchwork of separate utilities for different devices from different vendors. Needless to say, both options are far from ideal.

Current SAN monitoring utilities also fall short in terms of the quality and usefulness of the information they provide. It may be important to know that the performance of a particular device has declined, but this information isn't worth much without a clear-cut way to diagnose and fix the underlying problem. In large switched-fabric environments, this lack of "intelligent" diagnostic tools can be especially problematic because of the sheer size and complexity of the network landscape. The solution? A new SAN management paradigm that employs "smart" diagnostic technology and thereby minimizes troubleshooting tasks.
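
In its simplest form, such monitoring amounts to polling every device against a performance threshold and notifying an administrator when a device falls below it. The sketch below illustrates the idea only; the poll and notify functions, and the 40MB/s threshold, are placeholders invented for this example rather than parts of any real utility.

    # Sketch of threshold-based monitoring with automatic notification.
    THRESHOLD_MBPS = 40          # invented performance floor

    def poll(device):
        """Placeholder: return current throughput for a device, in MB/s."""
        return {"raid-A": 62, "disk-d7": 31}.get(device, 0)

    def notify(message):
        """Placeholder: page or e-mail the administrator."""
        print("ALERT:", message)

    def check(devices):
        for dev in devices:
            rate = poll(dev)
            if rate < THRESHOLD_MBPS:
                notify(f"{dev} throughput {rate} MB/s below {THRESHOLD_MBPS} MB/s")

    check(["raid-A", "disk-d7"])   # flags disk-d7 only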

Although the raft of challenges currently facing the SAN industry may seem daunting, there is no technical reason why these challenges can't be overcome. In fact, the various technologies needed to do so are already available; they just need to be brought together and implemented in a single set of tools. When that happens, total SAN management will finally be poised to make the leap from experimental possibility to functional reality.

Barry Robertson is chief technology officer at Radiant Software (www.radiantsw.com) in Santa Monica, CA.

