Outboard storage management techniques can eliminate some of the drawbacks of hierarchical storage management.
By Patricia Anderson
If you are running a typical IT organization, it comes as no surprise that storage requirements are growing by more than 100% per year at many sites. The explosive growth of information fueled by e-business access requirements has put enormous pressure on IT organizations to provide improved storage solutions while ensuring accuracy, reliability, and business continuance.
Information's rapid growth and immediate access requirements make cost-effective and secure storage management a tricky juggling act. The time allotted to business continuity processing is being drastically restricted by e-commerce's thirst for immediate access to current information, leaving little time and few resources for data management processing. These challenges are driving corporations to seek progressive ways to manage their storage and information assets so they can maintain their competitive edge.
For years, mainframe environments have been using a storage management technique to provide cost-effective and high-performance data access that is consistent with IT's growing requirements. This concept is based on moving data from an expensive high-performance storage medium to a slower, but more cost-effective, one as access requirements become less frequent. This seemingly "have your cake and eat it too" strategy of transitioning data between storage media, based on cost/performance requirements, is referred to as hierarchical storage management (HSM).
In HSM, data is stored on the most appropriate device based on its activity level and performance requirements, and the data's location is managed transparently to end users. If immediate and frequent access is required, data will reside on high-performance storage devices. When usage falls below some preset threshold, the data is moved to more cost-effective media/device types.
Diagram: In an HSM-optimized configuration, the ML1 pool is removed, and virtual tape is assigned as Migration Level 2. Inactive data is migrated directly from Level 0 to Level 2.
In the traditional storage hierarchy, data with a decreasing activity rate is moved from primary disk, referred to as Migration Level 0 (ML0), to compressed data on disk (ML1) and then to lower-cost tape (ML2) based on usage activity. This activity-driven movement of data to a cost-effective position in the storage hierarchy appears to provide a perfect solution that optimizes both cost and performance requirements.
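The activity-driven placement described above can be sketched in a few lines. This is a minimal illustration, not an actual HSM implementation: the day-based inactivity thresholds and data set names are hypothetical, since real values are site policy.

```python
from dataclasses import dataclass

# Hypothetical thresholds; real values are set by site storage policy.
ML1_THRESHOLD_DAYS = 15   # inactive this long -> compress to ML1 disk
ML2_THRESHOLD_DAYS = 60   # inactive this long -> migrate to ML2 tape

@dataclass
class DataSet:
    name: str
    days_inactive: int
    level: str = "ML0"    # primary high-performance disk

def migrate(ds: DataSet) -> DataSet:
    """Place a data set at the cheapest tier its activity level allows."""
    if ds.days_inactive >= ML2_THRESHOLD_DAYS:
        ds.level = "ML2"  # low-cost tape
    elif ds.days_inactive >= ML1_THRESHOLD_DAYS:
        ds.level = "ML1"  # compressed data on disk
    else:
        ds.level = "ML0"  # stays on primary disk
    return ds

for ds in [DataSet("PAYROLL", 3), DataSet("Q1.REPORT", 30), DataSet("ARCHIVE.1998", 200)]:
    migrate(ds)
    print(ds.name, "->", ds.level)
```

Because the movement is transparent to end users, an application simply opens the data set by name; HSM recalls it from whichever level it currently occupies.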
However, this solution is not accomplished without significant hidden costs. HSM consumes host processor MIPS, system I/O bandwidth, and system disk resources to perform the MIPS-intensive software compression processes and data transfers. These hidden costs can offset anticipated savings.
In an attempt to reduce the mainframe MIPS consumption of HSM compression routines, many users have simply bypassed the step of putting compressed data on ML1 by keeping data on ML0 longer and moving data directly to ML2 sooner than originally planned.
Unfortunately, this technique consumes additional disk storage and results in more frequent accesses to tape to retrieve data.
The expanding requirements for storing information so that it can be accessed from multiple heterogeneous computing platforms are leading the charge for a solution that treats storage management as a separate function, not an ancillary operation of the host computing platform. Outboard storage management functionality promises to provide superior and cost-effective storage resource management across the entire enterprise. Virtual tape, for example, is one step toward providing outboard storage management services by off-loading HSM compression processing, recouping system disk resources, and reducing system I/O bandwidth requirements.
The role of virtual tape
Extended virtual tape functionality virtualizes the entire tape workload by leveraging tape cache policy management and larger tape cache sizes to optimize HSM. The role of virtual tape in HSM optimization is to provide users with the performance benefits of ML1, without the host MIPS and host I/O bandwidth consumption of traditional HSM.
Deploying virtual tape in a traditional mainframe scenario can significantly reduce the hidden overhead of HSM operations while decreasing the number of physical tape drives required to perform the migration. By removing the ML1 pool and assigning virtual tape as Migration Level 2, inactive data is then migrated directly from Level 0 to Level 2 (see diagram). This allows virtual tape's outboard processors to decide whether to keep the data in virtual tape cache or on physical tape, and enables outboard control over the movement of data between them, without consuming host resources.
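The outboard placement decision might look something like the following sketch. A simple recency rule stands in for the controller's cache policy; the volume names and the residency threshold are illustrative assumptions, not the vendor's actual algorithm.

```python
# Hypothetical cache-residency window; a real controller applies
# vendor-specific policy, and this code runs on the virtual tape
# subsystem's own processors, consuming no host MIPS.
CACHE_RESIDENCY_DAYS = 7

def place(volumes, today):
    """Split virtual volumes into (cache, physical) lists.

    volumes: list of (volume_name, last_access_day) pairs.
    Recently used volumes stay in tape cache for instant recall;
    colder volumes are destaged to physical tape in the library.
    """
    cache, physical = [], []
    for name, last_access in volumes:
        if today - last_access <= CACHE_RESIDENCY_DAYS:
            cache.append(name)
        else:
            physical.append(name)
    return cache, physical

cache, physical = place([("V00001", 99), ("V00002", 80)], today=100)
print("in cache:", cache, "| on physical tape:", physical)
```

The point of the sketch is where the decision runs: because placement is evaluated inside the virtual tape subsystem, the host neither computes the policy nor moves the data.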
While virtual tape offers multiple on-board compression processors that compress the data rapidly, without the consumption of CPU MIPS, the real icing on the cake is that virtual tape's extended features can reduce or eliminate HSM recycle activity that typically devours system resources.
In traditional tape processing where one volume typically equals one cartridge, users set the HSM volume size high, storing a large amount of data in one volume so that physical tape cartridges are optimized.
The downside is that the larger the volume size, the higher the probability there will be "holes" of wasted space on the cartridge when individual data sets expire. To remove the holes, a recycle process must then be run to consolidate data.
By using virtual tape, you can direct HSM to make virtual volumes intentionally small without wasting physical cartridge space. This allows data set expirations and recalls to result in virtual scratch volumes that do not need to be recycled.
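The arithmetic behind this can be shown with a toy model. The data set sizes below are made up; the comparison is between one large HSM volume mixing live and expired data sets, and the same data sets spread across small virtual volumes.

```python
def wasted_space(volumes):
    """MB of expired data stranded on volumes that still hold valid data.

    volumes: list of volumes, each a list of (size_mb, expired) pairs.
    A volume whose data sets have all expired simply returns to scratch
    and needs no recycle; a volume mixing valid and expired data sets
    retains "holes" until a recycle consolidates it.
    """
    waste = 0
    for data_sets in volumes:
        has_valid = any(not expired for _, expired in data_sets)
        if has_valid:
            waste += sum(mb for mb, expired in data_sets if expired)
    return waste

# One large volume holding three data sets, two of them expired.
large = [[(400, True), (300, False), (300, True)]]
# The same data sets, one per small virtual volume.
small = [[(400, True)], [(300, False)], [(300, True)]]

print(wasted_space(large))  # 700 MB stranded until a recycle runs
print(wasted_space(small))  # 0 MB: all-expired volumes go straight to scratch
```

With small virtual volumes, expirations tend to empty whole volumes, so the recycle workload shrinks toward zero rather than growing with cartridge size.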
Furthermore, virtual tape with extended cache management functionality can intelligently hold HSM data in tape cache until its activity decreases, allowing any needed recycle activity to occur virtually. Since virtual volumes are accessed instantaneously, the recycle process is dramatically accelerated, providing vast performance improvement over conventional recycle functions that must retrieve valid data from physical tape.
Virtual tape can also provide cache management policies that will intelligently migrate the data directly to the attached tape library, without burdening CPU cycles or system I/O bandwidth. And the use of virtual tape drives and virtual volumes reduces the need for physical tape drives. Even those who currently send data directly from ML0 to traditional tape will receive dramatic improvements in recall times for all cache-resident data by switching to virtual tape.
Virtualizing HSM's ML2 pool is an approach that requires planning. However, the potential benefits provide an opportunity for cost savings and a reduction in total cost of ownership.
These benefits include reducing the CPU processing overhead of HSM compression routines, reducing or eliminating recycle activities, buying back system disk that would otherwise be consumed by ML1, improving I/O channel bandwidth, and reducing physical tape drive requirements.
Virtual tape can provide additional economic benefits that improve return on investment and reduce total cost of ownership. These benefits are derived from applications such as disaster recovery, data center migration, backup, enterprise output management, and traditional batch processing.
What's more, as storage area networks begin to unite open-systems storage into a consolidated, centralized, and manageable pool of storage "appliances," outboard mainframe-class storage management facilities will become increasingly advantageous. This new breed of SAN-enabled, virtual outboard storage services promises to be even more beneficial in managing and optimizing storage resources across the entire enterprise.
Patricia Anderson is director of product marketing at Sutmyn Storage Corp. (www.sutmyn.com).