Building a SAN for SMBs, part 3

Posted on July 01, 2004

Smart software, switches, HBAs, and even smarter storage arrays render the notion of direct-attached storage absolutely dumb.

By Jack Fegreus

For IT, the ultimate goal is to make storage a resource that can scale without disruption, be self-managing, and provide users with intuitive access to features. To get to this storage nirvana, vendors such as iQstor have begun building serious amounts of intelligence into their storage products. The idea behind this is simple: Differentiate storage arrays by reducing their total cost of ownership (TCO) and driving down the cost of storage management.

To this end, iQstor, like QLogic with its SANbox 5200 (see "Building a SAN for SMBs, part 2," InfoStor, June 2004, p. 36), has followed several basic strategies. Most prominent is to reduce the knowledge and experience needed to manage a SAN infrastructure. The tactic employed by iQstor is to move intelligence normally associated with a server into the storage platform.

The purpose of a SAN is to share physical storage by distributing logical devices to multiple systems. The key to taking advantage of disk sharing within a SAN is the ability to create large-capacity disk arrays and present these arrays to systems as virtual local storage. As a result, storage virtualization is a critical software technology that all storage vendors need to address in any array designed for use in a SAN.


Figure 1: For full functionality of the iQstor SAN Manager software, we used Brocade switches because support for our QLogic 5200 switches was under development at the time of our review. Under the current release of SAN Manager, all of the devices connected through the QLogic 5200 appear independent of their connections. Clicking on a managed switch brings up the switch's properties and a menu of switch management functions.

iQstor takes storage virtualization one step further than most competitors. The software provided by iQstor, for both Fibre Channel and Serial ATA-based disk arrays, uses virtualization as the foundation for a growing suite of enterprise-level data management services. These services for storage volumes include creating snapshots, local mirroring, remote replication, and policy-based provisioning.

Moving all of these management features out to the storage array gives iQstor several significant advantages in realizing its goal of reducing storage TCO. First, it reduces administrator overhead by centralizing a number of repetitive tasks at a single storage array that is accessed by multiple servers.

More importantly, reducing the repetitiveness of the tasks reduces the probability of human errors and thereby increases storage reliability. In addition, the creation of a mechanism for policy-based storage provisioning at the array has a positive impact on storage availability and business continuity.

Currently, iQstor's software is released on Solaris and Windows. A Linux version for Red Hat v9 is in advanced beta testing.


Figure 2: Using QLogic SANbox 5200 switches in the fabric, we measured throughput from a server running our oblDisk benchmark on SuSE Linux v9.0. Using a virtual disk with a 64KB chunk size served by the iQstor 1000 array, throughput averaged about 133MBps. On a virtual disk with a 256KB chunk size served by the nStor 4520 array, performance rose approximately 20%, registering about 165MBps.

The iQstor 1000 Storage System is essentially a "building block" product line. At the heart of the hardware is the iQ1000 storage array, which has a default configuration featuring fully redundant hot-swappable components, including storage processors (a.k.a. controllers), disk drives, power supplies, and cooling modules. Each of the dual active SP100 storage processors is powered by a 400MHz RISC CPU, sports a hardware parity accelerator board, and has two individually addressable 2Gbps Fibre Channel ports. The two SP100 processors communicate through an internal high-speed bus.

Also, Linux servers running the 2.6 kernel can now take full advantage of the transparent fail-over support provided by active-active dual-controller configurations to increase storage availability. The driver for the QLogic 2340 host bus adapter (HBA) that is included with SuSE 9.1 no longer reports each virtual drive as two drives, one from each controller. Instead, each controller is interpreted as providing an alternate path to the same drive. As a result, servers running the 2.6 Linux kernel and equipped with QLogic 234x HBAs gain transparent fail-over across the SAN infrastructure.
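
The difference is easy to spot from the host side. The following sketch is illustrative only (the grouping key is our assumption, not the driver's logic): it parses /proc/scsi/scsi and counts how many paths expose each vendor/model/LUN tuple. On a 2.4-kernel host, a virtual disk typically shows up once per controller path; on 2.6 with the QLogic driver, the duplicates collapse into one device with an alternate path.

    # Illustrative only: count how many SCSI paths expose each (vendor, model, LUN)
    # tuple reported in /proc/scsi/scsi. More than one usually means the same
    # virtual disk is being seen once per controller path.
    import re
    from collections import defaultdict

    def paths_per_device(scsi_listing):
        """Count SCSI paths per (vendor, model, LUN) tuple."""
        counts = defaultdict(int)
        entry = re.compile(
            r"Host: (\S+) Channel: (\d+) Id: (\d+) Lun: (\d+)\s*\n"
            r"\s*Vendor: (\S+)\s+Model: (.+?)\s+Rev:")
        for _host, _chan, _id, lun, vendor, model in entry.findall(scsi_listing):
            counts[(vendor, model.strip(), lun)] += 1
        return counts

    if __name__ == "__main__":
        with open("/proc/scsi/scsi") as scsi:
            for device, n in paths_per_device(scsi.read()).items():
                note = "possible duplicate paths" if n > 1 else "single path"
                print(device, "->", n, "path(s):", note)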

Each iQ1000 supports up to 15 Fibre Channel disk drives. As storage requirements grow, another "building block" component—the J1000 JBOD enclosure—can be attached to the iQ1000 to dynamically scale storage capacity without disrupting any processing. Up to seven J1000 JBOD enclosures can be attached to the iQ1000 enclosure to provide up to 17.6TB of capacity.
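
The 17.6TB figure follows directly from the drive count. As a back-of-the-envelope check, assuming 146.8GB drives (the largest Fibre Channel drive size we would expect; our review unit shipped with 73GB drives):

    # Back-of-the-envelope check of the 17.6TB figure. The per-drive capacity
    # is an assumption; the review unit was populated with 73GB drives.
    enclosures = 1 + 7                 # one iQ1000 plus up to seven J1000 JBODs
    drives_per_enclosure = 15
    drive_capacity_gb = 146.8          # assumed maximum supported drive size
    total_tb = enclosures * drives_per_enclosure * drive_capacity_gb / 1000.0
    print(round(total_tb, 1))          # -> 17.6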

Classical storage management functions are not the only features to be found in the iQstor SAN Manager software, however. When storage devices come from multiple vendors, management complexity is significantly compounded. To alleviate this SAN management issue, iQstor's strategy for reducing storage TCO extends beyond the storage box itself.

In particular, iQstor incorporates the API specifications of popular Fibre Channel switches to provide administrators with a complete SAN management console. This console provides a rich view of the entire SAN fabric.

On first discovery, icons for the nodes are displayed in three columns according to their device type: host (initiator), switch, or device enclosure (target). These icons can then be arranged in a manner that makes sense for the site.


Figure 3: The iQstor SAN Manager software provides wizards and forms to easily create storage hierarchies. Particularly useful is the ability to display the relationships that define these hierarchies graphically. LUN numbers are automatically assigned sequentially to the virtual disks as they are created.

This integration allows an administrator to probe switches from the SAN topology window and perform switch management functions through the iQstor console. When configured in a fabric with Brocade SilkWorm switches, we were able to create and configure port-based zones.

While switch management is nice to have, the real value of being able to probe a switch comes to the forefront for administrators attempting to virtualize storage. Like nStor's StorView (see "Building a SAN for SMBs, part 1," InfoStor, May 2004, p. 34), iQstor's SAN Manager software reports every HBA that it finds on the fabric by its unique World Wide Name (WWN). For most administrators, however, a list of WWNs is of little use by itself. What an administrator really needs to know is which server an HBA is installed in.

The easiest way to trace this information is to determine the switch port to which a server HBA is connected. Once this is done, the hexadecimal IDs can be replaced with friendly names to simplify configuration tasks.
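
Conceptually, this amounts to keeping a small alias table. The sketch below is purely illustrative; the WWNs and server names are made up:

    # Illustrative alias table only; the WWNs below are hypothetical. Once a
    # switch port reveals which server an HBA lives in, the raw WWN can be
    # replaced with a friendly name for all later mapping and masking steps.
    hba_aliases = {
        "21:00:00:e0:8b:00:00:01": "proliant-ml350-hba0",   # hypothetical WWN
        "21:00:00:e0:8b:00:00:02": "appro-1224xi-hba0",     # hypothetical WWN
    }

    def friendly_name(wwn):
        """Return the friendly name for a WWN, or the raw WWN if no alias exists."""
        return hba_aliases.get(wwn, wwn)

    print(friendly_name("21:00:00:e0:8b:00:00:01"))   # -> proliant-ml350-hba0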

Storage virtualization using the iQstor 1000 system follows a four-layer hierarchy model (a simple sketch of the relationships appears after the list):

  • Disks
  • RAID arrays
  • Storage pools
  • Virtual disks (Vdisks)
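
The following sketch is our own illustration of the model, not iQstor's API; it simply captures how the four layers relate and that pools must be homogeneous:

    # Minimal sketch (not iQstor's actual API) of the four-layer model: physical
    # disks form RAID arrays, arrays with identical RAID level and chunk size
    # are grouped into pools, and virtual disks (Vdisks) are carved from pools.
    class RaidArray:
        def __init__(self, disks, raid_level, chunk_kb):
            self.disks, self.raid_level, self.chunk_kb = disks, raid_level, chunk_kb

    class StoragePool:
        def __init__(self, arrays):
            # A pool must be homogeneous: one RAID level, one chunk size.
            assert len({(a.raid_level, a.chunk_kb) for a in arrays}) == 1
            self.arrays = arrays

    class Vdisk:
        def __init__(self, pool, size_gb, lun):
            # LUN numbers are assigned sequentially as Vdisks are created.
            self.pool, self.size_gb, self.lun = pool, size_gb, lun

    # Example: one RAID-5 array of five disks with a 64KB chunk size backs a pool.
    pool = StoragePool([RaidArray(disks=["disk%d" % i for i in range(5)],
                                  raid_level=5, chunk_kb=64)])
    vdisk = Vdisk(pool, size_gb=200, lun=0)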

The process of creating virtual storage volumes begins with the creation of RAID arrays. These arrays can be formatted as RAID level 0, 1, 3, 4, 5, or 10 (0+1). Interestingly, the chunk or stripe size for data in these arrays is limited to 64KB, which is the maximum I/O size for Windows NTFS systems.

This Windows-inspired limitation is unusual, given iQstor's roots in the Solaris arena. The ability to create arrays with data stripes of 128KB or even 256KB frequently characterizes systems that come out of the high-end Unix market. In particular, Linux attempts to bundle I/O requests into 128KB blocks. The QLogic HBA driver on Linux distributions based on the 2.4 kernel goes even further by attempting to bundle them into 512KB requests. As a result, large I/O requests from a Linux server streaming data are likely to split inefficiently over too many disks.
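
The arithmetic behind that concern is simple. Ignoring parity, alignment, and caching, the following sketch counts how many chunks a single bundled request touches at each chunk size:

    # Rough arithmetic only (ignores parity, alignment, and caching): how many
    # chunks does one bundled sequential request touch at a given chunk size?
    def chunks_touched(request_kb, chunk_kb):
        return -(-request_kb // chunk_kb)        # ceiling division

    for chunk_kb in (64, 128, 256):
        # 512KB is the bundle size the 2.4-kernel QLogic driver aims for;
        # 128KB is the size the Linux block layer itself tries to build.
        print(chunk_kb, "KB chunks:", chunks_touched(512, chunk_kb), "per 512KB request")
    # A 64KB chunk splits the request over 8 chunks; a 256KB chunk needs only 2.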

With these smaller chunk sizes, we expected to measure slightly lower throughput when compared to virtual disks served by our nStor 4520 disk array. The nStor system provides for chunk sizes of up to 256KB. The performance difference turned out to be about 20%. Nonetheless, the data services provided by the iQstor 1000 easily counterbalance this performance issue.

Once arrays are created, the next step in virtualization with the iQstor system is to assign arrays to storage pools in which all of the arrays have identical RAID levels and underlying chunk sizes. In essence, each of these pools represents an abstraction of an array.

Next, virtual disks—Vdisks in the argot of the iQstor SAN Manager software—are created from these pools. Another important function of these homogeneous pools is to serve as a resource for the automatic expansion of virtual disks as they fill with user files. The creation of virtual disks, however, does not mark the end of the virtualization process. An administrator must now make each of these virtual disks available to the correct server.


Figure 4: Using wizards in iQstor's SAN Manager software, we were easily able to set up a snapshot service for a virtual disk that was made accessible to a Linux server. As part of the service, a policy was set to grow the Vdisk when user files consumed 80% of the Vdisk's capacity. We also used the iQstor wizards to set up synchronous remote replication of an iQstor Vdisk on our nStor system.

The most critical and least intuitive steps in the virtualization of storage using the iQstor system involve the mapping and masking of LUNs. As each virtual disk is created, it is assigned a sequential LUN number. By default, all of these virtual disks are masked from the view of any host system on the fabric. This same default is used by nStor. Nonetheless, nStor's GUI—StorView—handles the mapping of LUNs in a much more intuitive fashion.

Under StorView, the administrator is prompted to map the virtual disks by explicitly assigning a unique LUN that will be visible to a specific HBA. Given the greater degree of abstraction created by its four-layer hierarchy, iQstor's SAN Manager takes a more syntactically rigorous approach.

Under iQstor's SAN Manager, mapping LUNs and masking LUNs are treated as orthogonal activities. Since every LUN is created masked to all HBAs by default, mapping a virtual disk to a specific HBA simply creates a new LUN mapping that remains inaccessible to that HBA. Any LUN mapping created in this manner is marked as inaccessible in the graphical "Host to Vdisk" display.

To make a virtual disk visible to an HBA, the administrator unmasks the default mask for that specific HBA. Once this is done, the "Host to Vdisk" display shows the virtual disk as accessible by that HBA.
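
The sketch below is a conceptual model only, not SAN Manager's interface; it shows why mapping alone is not enough and why the unmasking step is required:

    # Conceptual sketch (not SAN Manager's API) of why mapping and masking are
    # orthogonal: a Vdisk becomes visible to an HBA only when it is both mapped
    # to a LUN and unmasked for that HBA's WWN.
    class LunPresentation:
        def __init__(self, vdisk, lun):
            self.vdisk, self.lun = vdisk, lun
            self.unmasked_for = set()     # WWNs allowed to see this LUN; empty by default

        def unmask(self, hba_wwn):
            self.unmasked_for.add(hba_wwn)

        def accessible_by(self, hba_wwn):
            return hba_wwn in self.unmasked_for

    mapping = LunPresentation("vdisk01", lun=1)    # mapped, but still masked to everyone
    print(mapping.accessible_by("hba-wwn-A"))      # False: mapping alone is not enough
    mapping.unmask("hba-wwn-A")
    print(mapping.accessible_by("hba-wwn-A"))      # True: mapped and unmasked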

For Linux systems, this unmasking process is all that is necessary to make a virtual disk accessible to the operating system using the LUN number that was automatically generated by SAN Manager. This is not the case for systems running Windows Server.

Using the QLogic HBA management GUI on SuSE Linux 9.0 (kernel 2.4), we saw four disks presented to the operating system. On the 2.4 kernel, those four drives are in reality two drives (one from the iQstor system and one from the nStor system), each reported once per controller path. On the 2.6 kernel, the QLogic HBA driver correctly registers the environment as two physical drives, each with an alternate path.

More importantly, the iQstor virtual disk has a single LUN, which is the original auto-generated number. The virtual disk presented by the nStor system, however, has two LUNs: 0 and 1. LUN 0 represents the nStor Wahoo RAID controller and was automatically generated when we assigned LUN 1 to the virtual disk mapped to this server via StorView.

Unlike with a server running Linux, any series of virtual disks being made accessible to a server running Windows must have one LUN explicitly set to zero. That's the reason why nStor automatically reserves LUN 0 for the RAID controller in every series of drives that is mapped to an HBA. This scheme to ensure Windows compatibility does carry one additional piece of overhead: A pseudo driver for the controller must be installed on Windows. Without such a driver, Windows will attempt to install the controller device every time the server boots on the SAN.

When using the iQstor system, an administrator must map one Vdisk to LUN 0 in any series of drives that are to be made accessible to an HBA installed in a system running Windows Server. Once this final step was finished, all of our servers gained access to the iQstor storage system.
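
The rule reduces to a simple check, sketched below for illustration only (function names are ours):

    # Illustrative rule of thumb only: a set of LUNs presented to a Windows host
    # must include LUN 0, either a Vdisk (the iQstor approach) or the RAID
    # controller itself (the nStor approach). Linux hosts do not need LUN 0.
    def windows_will_see_disks(luns):
        """Return True if the presented LUN set includes LUN 0."""
        return 0 in luns

    print(windows_will_see_disks({1, 2, 3}))   # False: no LUN 0, Windows sees nothing
    print(windows_will_see_disks({0, 1, 2}))   # True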

Now the real value-added features of the iQstor SAN Manager package can be brought into play. These features come under the umbrella of "data services." The initial focus of these services is on maintaining business continuity. The inclusion of software to facilitate activities such as snapshots and mirroring is not unusual for file-serving NAS appliances; however, it is rare among block-serving SAN devices. To this end, iQstor is adding more intelligence to recognize and help manage the files stored on its virtual disks. Currently, the SAN Manager software can handle NTFS-formatted Vdisks but is limited to Ext2 when dealing with Linux servers.

The managed snapshot services provide a quick and easy way to create point-in-time copies of data. When snapshots are done at the device level, all users, regardless of the systems they are using, are able to utilize this facility. It also creates a way to avoid most user restore requests, which are triggered by mistaken file deletions.

As many as 128 writable snapshots for each virtual disk can be managed. Snapshots can be created on demand or scheduled on a regular basis. What's more, policies can be set up to automatically increase the capacity of the virtual disks.
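
The sketch below shows the general shape of such a policy; the class, names, and growth increment are our own assumptions, not SAN Manager's API:

    # Illustrative policy object (hypothetical names, not SAN Manager's API):
    # snapshot a Vdisk on demand and grow it when usage crosses a threshold.
    MAX_SNAPSHOTS = 128     # per-Vdisk snapshot limit reported for the iQstor system

    class VdiskPolicy:
        def __init__(self, grow_threshold=0.80, grow_step_gb=20):
            self.grow_threshold = grow_threshold    # e.g., expand the Vdisk at 80% full
            self.grow_step_gb = grow_step_gb        # assumed growth increment
            self.snapshots = []

        def take_snapshot(self, label):
            if len(self.snapshots) >= MAX_SNAPSHOTS:
                raise RuntimeError("per-Vdisk snapshot limit reached")
            self.snapshots.append(label)

        def maybe_grow(self, used_gb, size_gb):
            """Return the new Vdisk size if the usage threshold is crossed."""
            if used_gb / float(size_gb) >= self.grow_threshold:
                return size_gb + self.grow_step_gb
            return size_gb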

Along with the creation of snapshots, iQstor SAN Manager software has facilities for mirroring virtual disks. These are easily set up and managed with wizards, which make it especially easy to break a mirror, access it, and then resume synchronization. That, in turn, makes it easy to access production data in a non-disruptive manner for applications such as database analysis, data mining, and fast backup without the overhead of dealing with open files.

There is one major restriction on disk mirroring: It must be done within an iQstor storage system. Nonetheless, in another demonstration of "outside-of-the-box" thinking, an iQstor system can play the role of an initiator as well as a target on a SAN fabric. In other words, an iQstor system can initiate the sending of data to another storage system on a SAN. Dubbed "Remote Replication," this service can be set up as either a synchronous or an asynchronous service. We easily set up remote replication of a test virtual disk on the iQstor array to another virtual disk located on our nStor array.

The combination of remote replication and 2km fiber-optic cable runs provides a unique solution to an old business continuity issue: disaster recovery. Off-site file storage (data vaulting) can be instituted in a unique and cost-effective manner. By using real-time replication of critical data between multiple storage arrays, IT can restart mission-critical applications after a primary site disaster and bring critical activities back online almost immediately.
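
The operational difference between the two replication modes is when the host's write is acknowledged. The following sketch uses hypothetical stand-in functions, not iQstor's implementation, to show the distinction:

    # Minimal sketch of the synchronous/asynchronous distinction (hypothetical
    # functions, not iQstor's implementation). Synchronous replication only
    # acknowledges a write after the remote copy confirms it; asynchronous
    # acknowledges locally and ships the block to the remote array later.
    import queue

    replication_queue = queue.Queue()   # blocks waiting to be shipped remotely

    def write_synchronous(block, write_local, write_remote):
        write_local(block)
        write_remote(block)              # acknowledged only after both copies land
        return "acknowledged"

    def write_asynchronous(block, write_local):
        write_local(block)
        replication_queue.put(block)     # shipped to the remote array in the background
        return "acknowledged"            # acknowledged before the remote copy lands

    # Example with trivial stand-in write functions.
    log = []
    print(write_synchronous("block-0", log.append, log.append))
    print(write_asynchronous("block-1", log.append))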

Jack Fegreus is technology director at Strategic Communications (www.stratcomm.com). He can be reached at jfegreus@stratcomm.info.


Under examination

Intelligent SAN storage arrays

What we tested

iQstor 1000 Storage System

  • Two SP100 RAID controllers
    • 1GB cache
    • Hardware RAID parity accelerator
    • 400MHz RISC CPU
    • Dual 2Gbps Fibre Channel ports (SFP)
  • Active-active configuration
  • Controllers communicate via high-speed bus
  • Cache shadowing
  • 15 Seagate Cheetah drives (73GB, 10,000rpm)
    • 2Gbps FC-AL interface

iQstor SAN Manager software

  • Snapshot services
  • Local mirror service
  • Remote replication service
  • Intelligent capacity management

How we tested

Two QLogic SANbox 5200 stackable switches

  • 16 2Gbps Fibre Channel ports (SFP)
  • Four 10Gbps ISL ports (copper)
  • Port-based incremental licensing
  • Non-disruptive code load and activation

Three QLogic QLA2340 HBAs

  • 133/100/66MHz PCI-X and PCI compatibility
  • Full-duplex 2Gbps Fibre Channel
  • Linux 2.4 kernel driver bundles I/O into 512KB blocks
  • Linux 2.6 kernel driver provides transparent fail-over

QLogic SANsurfer Management Suite

  • Host-based for Linux, Solaris, Windows
  • Modules for switches and HBAs
  • Wizards for configuration, zoning, and security
  • Performance monitoring
  • Fabric health alerts

Two Brocade SilkWorm 3200 switches

  • Eight 2Gbps Fibre Channel ports (SFP)
  • ISL port trunking
  • WebTools for fabric monitoring

HP ProLiant ML350 G3 server

  • Dual 2.4GHz Intel Xeon CPUs
  • 1GB PC2100 DDR memory
  • Four 100MHz PCI-X expansion slots

Appro 1224Xi 1U server

  • Dual 2.4GHz Intel Xeon CPUs
  • 1GB PC2100 DDR memory
  • 133MHz PCI-X expansion slot

Dell PowerEdge 2400 server

  • 800MHz Intel PIII CPU
  • 512MB ECC registered SDRAM memory
  • Four 66MHz PCI expansion slots

SuSE Linux 9.1 Professional

  • Linux kernel 2.6

SuSE Linux 9.0 Professional

  • Linux kernel 2.4.21

Windows Server 2003

  • .NET Framework 1.1

Benchmarks

  • oblLoad v2.0
  • oblDisk v2.0

Key findings

  • iQstor SAN Manager supports end-to-end fabric management, including switches.
  • iQstor Storage Server provides wizard-based data services for
    • Snapshots of virtual disks
    • Internal virtual disk mirroring
    • External virtual disk replication
    • Automated virtual disk expansion
  • RAID chunk size limited to 64KB
  • QLogic HBAs provide transparent SAN fail-over on Linux 2.6 kernel.

