By Jack Fegreus
Symantec's Veritas Storage Foundation for Windows arms administrators with a centralized management tool to take on advanced storage management tasks.
Version 5.0, the latest release of Veritas Storage Foundation for Windows, provides system administrators with a common set of tools that works across all storage devices, delivering services such as automated storage provisioning, high availability, volume virtualization, and disaster recovery.
To test the ability of SFW 5.0 to boost efficiency for storage resource management, openBench Labs set up a domain running on 32- and 64-bit versions of Windows Server 2003 R2 and supporting Microsoft Exchange. The bulk of the testing was done on three servers: a quad-processor HP ProLiant DL580 (powered by 64-bit Intel Xeon EM64T processors) and two dual-processor ProLiant ML350 servers (powered by 32-bit Xeon processors).
As part of our test scenario, one of the roles played by the ProLiant ML350 servers was that of a remote hot spare for the ProLiant DL580 Exchange server. To support this test scenario, primary storage was maintained on a 4Gbps Fibre Channel SAN and extended via a 1Gbps iSCSI SAN. Using this integrated SAN fabric, we created a four-tier storage hierarchy based on various vendors' disk arrays on the Fibre Channel (FC) SAN.
For the first two storage tiers (platinum and gold), we used a Xyratex FC-to-FC disk array with 15,000rpm and 10,000rpm Fibre Channel drives from Seagate. The third and fourth tiers (silver and bronze) were built on an IBM DS4200 FC-to-SATA array. For the silver storage tier, we accessed arrays on the DS4200 directly over the Fibre Channel fabric. In the bronze tier, we used a StoneFly 4000 Storage Concentrator, on which we created iSCSI target volumes from disk arrays on the DS4200, and exported those target volumes to servers and desktop systems in our test domain over TCP.
Across all operating systems, a primary goal of Veritas Storage Foundation is to give system administrators a central point of storage management down to the spindle level. Veritas SFW includes a drag-and-drop central Veritas Enterprise Administrator (VEA) console to enhance storage resource visibility, management, and reporting across host servers and diverse storage arrays.
SFW also leverages the logical abstraction of Dynamic Disk Groups to provide resource virtualization. It is important to note that dynamic disks as implemented in SFW are very different from the limited Logical Disk Manager (LDM) scheme for dynamic disks that Microsoft introduced in Windows 2000 (under a license from Veritas). In Windows, LDM simply provides a means to partition hard disks in a way that supports software-based RAID 0, 1, 5 and spanned volume configurations.
What separates the construct of Dynamic Disk Groups from Master Boot Record (MBR) or GUID Partition Table (GPT) schemes is the use of a database located in a special partition at the end of a disk. The database has a four-level description schema that contains information about the volume, component, partition, and disk. The database is then replicated over each member of a dynamic disk group.
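The replicated group database described above can be sketched in a few lines of Python. This is a hypothetical model for illustration only; the four record levels (volume, component, partition, disk) come from the article, but the class names, fields, and replication mechanics are assumptions, not SFW's on-disk format.

```python
# Hypothetical model of a dynamic disk group's replicated database.
from dataclasses import dataclass, field

@dataclass
class DiskRecord:
    name: str
    partitions: list = field(default_factory=list)  # partition records,
    # which roll up into component and volume records in the full schema

class DynamicDiskGroup:
    def __init__(self, name):
        self.name = name
        self.disks = {}     # member disks, keyed by disk name
        self.database = []  # four-level description records

    def add_disk(self, disk):
        self.disks[disk.name] = disk
        self.database.append(("disk", disk.name))
        self._replicate()

    def _replicate(self):
        # The group database is copied to a private region on every
        # member disk, so any surviving member can describe the group.
        for disk in self.disks.values():
            disk.db_copy = list(self.database)

group = DynamicDiskGroup("TierOne")
group.add_disk(DiskRecord("xyratex_fc_0"))
group.add_disk(DiskRecord("xyratex_fc_1"))
# every member now carries the complete group database
assert all(d.db_copy == group.database for d in group.disks.values())
```

Because every member holds a full copy, importing a group on another host needs nothing beyond the disks themselves, which is what makes the import/deport operations discussed later possible.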
Implemented in SFW, the database makes it possible to virtualize storage resources through block aggregation across multi-vendor storage pools. The ability to virtualize storage across multi-vendor hardware arrays provides a number of benefits. By presenting system administrators with a logical view of all storage resources, administrators can transparently perform advanced storage-management functions.
The SFW dynamic disk construct provides administrators with a non-disruptive mechanism to isolate and manage logical devices without physical constraints. To remain compatible with the Windows management schema, however, Veritas Storage Foundation for Windows must place the boot and system disks in a special primary dynamic disk group, dubbed a "Windows-managed group," which is limited to LDM capabilities.
Using the dynamic disk construct on SFW-managed groups, system administrators can virtualize disk ownership, import and deport dynamic disk groups, and set up mirrors and volume snapshots. This enables administrators to protect data on two dimensions: First, local point-in-time copies create snapshots for fast rollback and quick recovery and, second, data replication on additional domain servers provides disaster recovery of all critical data.
In addition, dynamic disk virtualization constructs can play an equally important part in performance optimization. The four-level dynamic database schema supports the construct of a subdisk that can be treated as yet another virtual disk. The simplest example of a subdisk is the creation of a logical volume on a disk. In fact, when a logical volume is created with Veritas SFW, a corresponding subdisk is automatically created.
Since a subdisk is just another virtual device, it can also be migrated from one physical disk to another. Also, system administrators are free to subdivide any subdisk into a number of smaller pieces. This capability may initially appear to be a bit exotic; however, consider the issue of a database with a hotspot in a particular index table.
Typically, solving such an issue would require both a database administrator to re-architect the database and a storage administrator to address the impact of the changes on the SAN fabric. With Veritas SFW, however, a system administrator can subdivide a dynamic disk while the database remains online and then monitor the subdisks that make up the logical volume on which the database resides. In this way, the administrator can discover which region of the logical volume contains the hotspot for the database tables.
Once the hotspot is isolated on a subdisk, the system administrator can move that subdisk, just like any other virtual volume, to another physical disk without involving either a database administrator or a storage administrator. The logical drive still appears contiguous to the database administrator, but it is now actually distributed over multiple drives for optimum throughput. The ability to manipulate dynamic disks and simplify storage management became a focal point of the openBench Labs assessment.
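The hotspot workflow just described can be sketched as a short sequence: sample per-subdisk I/O counters, find the hottest region, and remap its backing disk while the volume's logical address space stays put. The subdisk names, counter format, and helper functions below are illustrative assumptions; SFW performs the actual move at the block level.

```python
# Conceptual sketch of subdisk hotspot relocation (names hypothetical).

def hottest_subdisk(io_counts):
    """io_counts: {subdisk_name: requests_per_second}."""
    return max(io_counts, key=io_counts.get)

def move_subdisk(placement, subdisk, target_disk):
    """placement: {subdisk_name: physical_disk}. Only the backing
    disk changes; the logical volume layout is untouched."""
    placement = dict(placement)
    placement[subdisk] = target_disk
    return placement

stats  = {"vol1-sd01": 220, "vol1-sd02": 9400, "vol1-sd03": 180}
layout = {"vol1-sd01": "disk_a", "vol1-sd02": "disk_a", "vol1-sd03": "disk_a"}

hot = hottest_subdisk(stats)            # the region holding the hot index table
layout = move_subdisk(layout, hot, "disk_b")
assert layout["vol1-sd02"] == "disk_b"  # hotspot now on its own spindle
```

The point of the sketch is the division of labor: the monitoring step identifies the subdisk, and the move step changes only physical placement, which is why no database-side change is ever needed.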
For our assessment tests, each server was configured with direct-attached SCSI disks. All additional storage was maintained on a collection of Fibre Channel and iSCSI SAN storage arrays from multiple vendors. A characteristic of the rapid storage growth experienced at data centers over the past several years is a mixture of products from different vendors.
Disk array vendors have been trying to build brand equity by offering more-advanced management software that is explicitly tied to their storage arrays. Each vendor's array in the openBench Labs' test SAN provides its own proprietary configuration GUI along with optional—and often expensive—utilities for replication services such as mirroring and point-in-time copies. This effectively creates vendor-specific technology "silos" in SANs, which runs counter to the rationale for deploying a SAN.
Worse yet, there are few products designed to help storage or system administrators bridge the gaps between silos of SAN array technology. One strength of Veritas Storage Foundation for Windows is its ability to configure services, manage those services, and even virtualize volumes independently of the underlying physical disk arrays and the software for those arrays. By providing centralized management for different vendors' arrays, SFW leverages existing investments in SAN fabric technology while minimizing future investments.
We installed the client component of Veritas SFW on a workstation running Windows XP Pro. From that workstation, we could connect to any server and fully configure all of its storage resources. Nearly every SFW utility that targets a dynamic disk also involves that disk's dynamic disk group, because the database describing every member disk is replicated across the group. As a result, how dynamic disk groups are defined directly affects the effectiveness of a majority of the functions in SFW.
With that in mind, openBench Labs began by grouping dynamic disks based on storage tiers. The logic for this scheme was to improve storage resource utilization within our defined performance tiers. By combining the storage capacity of disk arrays from multiple vendors into a single storage resource, we created a storage pool that we could manage from a central point.
As an immediate result of our performance-based scheme for grouping dynamic disks, we were better able to manage the total pool of dedicated spare storage on our SAN. Without SFW, we needed to maintain spare storage for expansion on every array. That meant continuously balancing the global storage needs for our SAN with the local needs of every array.
Using the block aggregation capability of SFW, we could transparently expand any dynamic disk volume using any spare space within the disk group. Furthermore, a wizard provides the means to automate volume growth. Whether volume expansion is manual or automatic, it is the dynamic disk group that provides the repository of free blocks for volume growth. As a result, we were able to maintain spare storage on a limited number of arrays, provide each disk group with spare storage, and avoid any negative impact on SAN resiliency. Moreover, the ability to use blocks from any disk in a dynamic disk group to construct a logical volume plays a key role in performance tuning with SFW.
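The group-wide growth mechanism can be illustrated with a minimal sketch: a volume is extended from free extents on any member disk, so spare capacity no longer has to live on every array. The disk names, sizes, and greedy allocation policy below are assumptions for illustration, not SFW's actual allocator.

```python
# Minimal sketch of growing a volume from group-wide free extents.

def grow_volume(volume, needed_gb, free_extents):
    """free_extents: {disk_name: free_gb} for every disk in the group.
    Takes space greedily from whichever members have the most free."""
    for disk, free in sorted(free_extents.items(), key=lambda kv: -kv[1]):
        if needed_gb <= 0:
            break
        take = min(free, needed_gb)
        if take > 0:
            volume["extents"].append((disk, take))
            free_extents[disk] -= take
            needed_gb -= take
    if needed_gb > 0:
        raise RuntimeError("disk group has insufficient free space")
    return volume

vol  = {"name": "exchange_logs", "extents": [("ibm_ds4200_0", 100)]}
free = {"ibm_ds4200_0": 10, "xyratex_0": 60, "xyratex_1": 40}
grow_volume(vol, 50, free)  # satisfied from another member of the group
assert sum(gb for _, gb in vol["extents"]) == 150
```

Note that the volume's original array had only 10GB free; the request succeeds anyway because the group, not the array, is the unit of free space.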
In addition to providing tools for capacity-oriented issues, Veritas SFW provides tools to measure and tune the performance of storage resources. The monitoring of I/O statistics for dynamic disk objects—which includes physical disks, virtual volumes, and subdisk regions—is intended for use in identifying disk hot spots. While performance data collection utilizes dynamic disk constructs, the process is independent of the notion of disk groups.
In particular, Veritas SFW collects performance data on reads and writes in terms of both the number of operations—dubbed requests—per second (IOPS) and the volume of data throughput in disk blocks per second. In addition, SFW can monitor the average time in microseconds taken to read or write a disk block, as well as the queue depth (the number of read-and-write requests queued for a disk).
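The metrics just listed are straightforward to derive from two successive counter samples on a dynamic disk object. The field names and sampling scheme below are illustrative assumptions, not SFW's internal counters.

```python
# Back-of-the-envelope sketch of the statistics SFW reports.

def io_stats(prev, cur, interval_s):
    """prev/cur: cumulative counters sampled interval_s apart."""
    ops    = (cur["reads"] + cur["writes"]) - (prev["reads"] + prev["writes"])
    blocks = cur["blocks_rw"] - prev["blocks_rw"]
    return {
        "iops": ops / interval_s,                    # requests per second
        "throughput_blocks_s": blocks / interval_s,  # disk blocks per second
        "queue_depth": cur["queued_requests"],       # outstanding requests
    }

t0 = {"reads": 1000, "writes": 500,  "blocks_rw": 40000,  "queued_requests": 2}
t1 = {"reads": 7000, "writes": 2000, "blocks_rw": 140000, "queued_requests": 8}
s = io_stats(t0, t1, interval_s=1)
assert s["iops"] == 7500 and s["throughput_blocks_s"] == 100000
```

A rising queue depth alongside flat IOPS is the classic signature of a saturated spindle, which is exactly the condition the subdisk-move tools are meant to relieve.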
While I/O monitoring of dynamic disks has no group dependencies, tuning actions such as moving volumes or volume subdisks are tightly linked to the dynamic disk group construct. Typically, tuning involves moving a logical volume to another less-utilized physical disk within a group. SFW, however, has even finer granularity in the construct of a subdisk, which can be moved as easily as a full volume. For volumes containing a database, the ability to move the blocks supporting a high-access table without changing the logical presentation of a virtual volume provides system administrators with a powerful tuning tool that can be used with no need to consult a database administrator.
Beyond the management of virtual disk volume RAID characteristics or capacity, the database of dynamic disk characteristics provides the means to virtualize volume ownership without requiring special software on the disk array. This is especially important in a Windows environment, where each system will attempt to take ownership of any shared disk volume. (Windows assumes it has exclusive ownership of every disk that it discovers.)
That assumption of exclusive ownership can lead to disastrous results on a SAN. Should two systems discover and mount the same volume for reading and writing, each will operate independently with no knowledge of what the other is doing. Once both systems have written data to the disk, neither has a consistent view of the drive's actual contents, and each is free to overwrite blocks used by the other. This can corrupt the disk so badly that it can no longer be mounted.
That scenario makes it essential for storage administrators either to virtualize each logical volume at the array exporting it or to create a complex zoning scheme based on Fibre Channel port connections at each SAN switch. The on-disk SFW database gives system administrators a hardware-independent alternative for working with shared volumes: they can assign volumes to a host and, more importantly, easily move those volumes from host to host without changing the topology of the SAN fabric or requiring intervention by a storage administrator.
Once a shared SAN volume is converted into a dynamic disk and made a member of a dynamic disk group, all of the information about that disk and all of the other members of its group becomes readily available to each server running SFW. When a server running SFW mounts a shared volume with read-and-write privileges, every other server becomes aware of the volume's mount status. More importantly, as long as the volume remains mounted on the first server, SFW prevents any other server from also mounting the volume with the ability to write. Rebooting servers, however, could allow another server to mount the disk first.
To ensure that a particular server always owns a disk group unless ownership is explicitly changed, an administrator can make the disk group private to that server. SFW then simply tags each volume in the group with a SCSI reservation, applicable on either a Fibre Channel or an iSCSI SAN, for the desired host. In this way, the administrator has completely virtualized the volume.
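The two protections described above, group-wide awareness that blocks a second read-write mount and a private-group reservation that pins ownership to one host across reboots, can be modeled in a short sketch. This models the behavior only; the class and host names are hypothetical, and SFW implements the second protection with SCSI reservations on the actual LUNs.

```python
# Conceptual model of shared-volume mount arbitration.

class SharedVolume:
    def __init__(self, name):
        self.name = name
        self.rw_owner = None      # host holding the read-write mount
        self.reserved_for = None  # private-group (reservation) owner

    def mount_rw(self, host):
        if self.reserved_for and self.reserved_for != host:
            raise PermissionError(f"{self.name} is reserved for {self.reserved_for}")
        if self.rw_owner and self.rw_owner != host:
            raise PermissionError(f"{self.name} already mounted by {self.rw_owner}")
        self.rw_owner = host

vol = SharedVolume("exchange_db")
vol.mount_rw("DL580")         # first server gets read-write access
try:
    vol.mount_rw("ML350")     # second server is refused while it's mounted
    blocked = False
except PermissionError:
    blocked = True
assert blocked and vol.rw_owner == "DL580"
```

Without the reservation, ownership after a reboot falls to whichever server mounts first; setting `reserved_for` is the sketch's analogue of making the group private.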
Through block aggregation across multi-vendor storage pools, storage appliances can provide storage administrators with high-level virtualization that simplifies the complexities of a heterogeneous environment. Veritas SFW provides that same ability with a significant added value: SFW is a pure software solution intended for use by system administrators without any involvement from storage or database administrators.
Through the easily understood construct of dynamic disks, Veritas Storage Foundation for Windows enables system administrators to improve storage resource utilization by combining the storage capacity of many disk arrays into a single storage resource—a dynamic disk group. That gives system administrators a vendor-agnostic storage pool that they can easily manage from a central point.
Part 2 of this review will focus on the functions that use dynamic disks to support backup-and-recovery, snapshots, and remote replication.
Jack Fegreus is CTO of openBench Labs. He can be reached at email@example.com.
openBench Labs Scenario
Storage virtualization and management software
WHAT WE TESTED
Veritas Storage Foundation for Windows 5.0
HOW WE TESTED
From a Windows XP Pro workstation, we were able to monitor any domain server running Veritas SFW. We simultaneously opened views of our MS Exchange server (HP DL580) and our Exchange disaster-recovery system (Wombat). On discovering that Exchange was active, SFW automatically integrated itself with Windows Server 2003 VSS. Similarly, on discovering the MS iSCSI initiator, the SFW software integrated with iSNS, which was running on our domain controller to provide both Fibre Channel and iSCSI SAN information.
Using Veritas SFW, we were able to perform tasks such as expanding or shrinking the capacity of a virtual disk without taking the system offline or rebooting. When we expanded a volume belonging to a Tier-1 disk on our Exchange server, SFW used free space on the physical drive, along with any free space on any drive that was a member of the same dynamic disk group.
To assess the monitoring of statistics in Veritas SFW, we launched our oblLoad benchmark, which simulates a database transaction processing I/O pattern, on a Tier-1 and Tier-4 volume. Critical IOPS and throughput levels reported by SFW were consistent with benchmark logs: IOPS rates centered around 13,500 on the FC-FC Tier-1 volume and 5,500 on the iSCSI/SATA Tier-4 volume.
If a storage volume is neither virtualized at the storage array where it was created nor restricted by zoning at the Fibre Channel switch, it will be visible to all Windows-based systems, and each system will attempt to mount it. If that volume is made a member of a secondary dynamic disk group, however, SFW will allow only one system at a time to import it with read-and-write privileges.