Lab Review: One-button iSCSI management

Posted on October 03, 2008


Using a fifth-generation protocol stack, load-balancing, and sophisticated management software, StorMagic makes short work of SAN configuration and provisioning for SMBs.

By Jack Fegreus

With attributes such as lower infrastructure acquisition costs and the ability to run on existing IP networks, iSCSI storage solutions are particularly attractive to small and medium-sized businesses (SMBs). However, when faced with the constantly expanding need for storage capacity, IT decision-makers have shown surprising coolness toward adoption of iSCSI. General satisfaction with the performance of existing direct-attached storage (DAS) and NAS, along with concerns over the level of expertise required to manage a SAN, has combined to slow iSCSI adoption at SMBs.
 
While both DAS and NAS have served IT well in the past, the new focus on achieving higher levels of resource utilization requires new technologies based on shared network storage rather than DAS. For any SMB IT department, building a SAN for the first time will raise concerns over configuration and management complexity. Intent on demolishing old notions about iSCSI complexity, StorMagic created its SM Series of iSCSI arrays and software to simplify SAN configuration and management tasks.

 

Administrators have two GUI choices for SM Series arrays. As with other iSCSI arrays, there is an embedded Web-based GUI that is independent of the client OS. This interface provides the means to configure and manage all of the array components, including physical storage pools, logical disk targets, and iSCSI sessions. For iSCSI administrators with Windows clients, the SM Disk Manager simplifies iSCSI configuration and management.

As an iSCSI appliance, an SM Series array is client OS-agnostic. However, most SMBs run Windows, and StorMagic has created a special software package -- SM Disk Manager -- for Windows clients. For administrators with little or no SAN experience, SM Disk Manager on a Windows server turns an SM Series array into an automated, self-managed storage system.

Strong growth in server virtualization is also spurring iSCSI adoption. VMware Virtual Infrastructure (VI) simplifies the requirements for storage virtualization by eliminating the mandate to ensure exclusive ownership of disk volumes in a SAN. The VMware file system (VMFS) eases this constraint by handling logical disks analogously to CD-ROM image files. As a result, advanced VI environments are designed specifically to leverage shared storage, which opens the door to using iSCSI as the transport mechanism for cost-effective, easy-to-manage SANs.

In assessing the StorMagic SM Series array, openBench Labs set up a testing scenario to focus on the issues of functionality, manageability, and performance in the context of an SMB environment. For an iSCSI client system, we chose a Dell PowerEdge 1900 server with a quad-core CPU and a Gigabit Ethernet TCP offload engine (TOE).
 

We set up all iSCSI sessions with targets via the Microsoft iSCSI initiator. For each target drive, we created two active sessions. Each session associated with a target exposed a unique device defined by a distinct NIC port on the SM array and a distinct port on the QLogic iSCSI HBA. As a result, we had two active, independent, parallel paths connecting each target disk with our host client. This gave us a fabric topology for round-robin I/O request load-balancing and fail-over.
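The listing below is a minimal Python sketch of that topology, using hypothetical port and target names rather than our actual configuration; it records each target's two sessions as (array port, HBA port) pairs and confirms that the two paths share no ports.

# Hypothetical dual-path topology: each iSCSI target gets two sessions,
# each bound to a distinct NIC port on the SM array and a distinct port
# on the iSCSI HBA. Names are placeholders, not our actual configuration.
sessions = {
    "target-0": [("array-eth0", "hba-port0"), ("array-eth1", "hba-port1")],
    "target-1": [("array-eth0", "hba-port0"), ("array-eth1", "hba-port1")],
}

for target, paths in sessions.items():
    array_ports = {array_port for array_port, _ in paths}
    hba_ports = {hba_port for _, hba_port in paths}
    # Two independent paths: no shared array port, no shared HBA port.
    assert len(paths) == 2 and len(array_ports) == 2 and len(hba_ports) == 2
    print(f"{target}: two independent paths -> {paths}")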

For SAN connectivity, we installed QLogic's Fibre Channel and iSCSI HBAs. We chose to use iSCSI HBAs rather than server NICs to test the versatility and ease-of-use in a Windows environment that StorMagic touts as central to the value proposition of its SM arrays.

We ran Windows Server 2003 and installed Microsoft's iSCSI initiator, QLogic's SANSurfer HBA management software, and the SM Disk Manager. The SM array was provisioned with eight 500GB SATA drives from Seagate and two Gigabit Ethernet ports for iSCSI connections. Optionally, the array could have been configured with four Gigabit Ethernet ports and up to 12 SAS drives.

For performance testing, we used our oblDisk benchmark to assess a RAID-0 pool on the SM disk array to test support for applications that use large files, such as digital content creation, editing, and distribution. We then used the Iometer benchmark and a hardware-based RAID-5 pool to test the array's suitability for use in a database environment with a transaction processing application. While I/O throughput performance was impressive, it was the iSCSI functionality and ease-of-management of the SM Series array that stole the show. 

When configuring or managing the SM Series array, an administrator can either work with the array's embedded Web GUI or install the Windows-based SM Disk Manager utility. While our test plan called for beginning the evaluation process with the more sophisticated Web GUI, we installed the SM Disk Manager before starting the tests. That choice provided a serendipitous and important benefit: As part of the installation process, SM Disk Manager made the unique iSCSI qualified name (IQN) of our test server's initiator known to the SM array. As a result, we did not have to manually make the initiator IQN known to the software running on the array. Mapping a client's initiator IQN to an iSCSI array is an onerous task that is needed to make disk targets on an array available to the client system.
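For readers unfamiliar with the naming convention, the short Python sketch below shows roughly what an IQN looks like; the sample name is hypothetical, and the pattern is only a loose check, not a full RFC 3720 validator.

import re

# Loose check of the iSCSI Qualified Name format (RFC 3720):
#   iqn.<yyyy-mm>.<reversed domain>[:<optional identifier>]
# The sample IQN below is hypothetical, not our test server's actual name.
IQN_PATTERN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:.+)?$")

def looks_like_iqn(name: str) -> bool:
    """Return True if the string roughly matches the IQN naming convention."""
    return bool(IQN_PATTERN.match(name.lower()))

print(looks_like_iqn("iqn.1991-05.com.microsoft:testserver"))  # True
print(looks_like_iqn("testserver.example.local"))              # False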

For seasoned storage administrators, StorMagic's automatic registration of a system's initiator IQN provides relief from an annoying task, which many administrators have likely automated with their own cut-and-paste routines. On the other hand, thanks to automatic IQN registration, system managers installing their first SAN will avoid a very arcane task that would have reinforced any misconceptions that networked storage adds more overhead to system administration than it eliminates.

A key element of our test scenario strategy was to use the Microsoft iSCSI software initiator in conjunction with QLogic's hardware-based iSCSI HBA. QLogic's QLA4050C HBAs unburden a host processor from all of the overhead associated with the processing for both TCP and SCSI command packets. From the perspective of system overhead, the presence of a QLogic iSCSI HBA is little different from that of a QLogic Fibre Channel HBA. What's more, this dovetails with Microsoft's software architecture for extending advanced I/O functionality.

 QLogic's QLA4050C iSCSI HBA incorporates a TOE, which handles all of the TCP packet processing, and a SCSI command engine that handles both command processing and CRC error checking. This allows the HBA to appear to the host operating system not as a network device, but as a disk controller via QLogic's storport driver. In turn, the QLA4050C provides the ability to boot a host system directly over iSCSI without additional software.

Complementing QLogic's HBA, Microsoft's iSCSI initiator includes a kernel mode mini-port driver and an iSCSI initiator service. When installed with the QLogic iSCSI HBA, Microsoft's initiator service uses the HBA while handling login and logout for all iSCSI sessions with disk arrays. The Microsoft iSCSI service also provides support for advanced SAN I/O functionality, including multi-pathing of I/O requests and load-balancing, when there are multiple active session connections for a target device.

The SM Disk Manager leverages StorMagic's Device Specific Module (DSM) to create a number of one-button provisioning utilities, including a migration utility. Using the SM Disk Manager, users simply select an existing disk and invoke the migration utility, which provisions a new logical disk from a storage pool on the array. Once the new disk is created, the utility creates an active-passive pair of iSCSI sessions to mount the new disk with a fail-over policy. Next, the utility copies the contents of the old disk to the new disk, dismounts both disks, and remounts the new disk using the Windows ID of the original. As a result, we didn't have to provide any iSCSI information in order to migrate an existing disk onto a new disk.

Version 2 of the Microsoft iSCSI initiator provides administrators with two options for configuring automatic aggregation and load-balancing of I/O over multiple active TCP connections: an iSCSI protocol option dubbed Multiple Connections per Session (MCS or MC/S) and Microsoft's Multipath I/O (MPIO). While both options spread I/O request packets for a single application accessing a single logical volume across multiple TCP connections, the means to that end for each of the options is entirely different.

For the iSCSI protocol stack, MC/S is a new option that creates multiple paths in the iSCSI session layer. Both the host initiator and the storage system that provides the logical target must support MC/S within their respective iSCSI stacks in order to configure this option. Moreover, since MC/S is implemented within the iSCSI stack, it is OS-agnostic. Nonetheless, this is very disruptive for storage vendors to support and is particularly problematic for HBAs, which put the iSCSI stack in firmware.

We used SM Disk Manager to automatically provision a logical drive on our server. Running the oblDisk benchmark, I/O throughput reached 110MBps for both reads and writes. When we added a second active connection using the second port on our iSCSI HBA, throughput was aggregated over the two connections and exceeded the 1Gbps limit of a single connection.

While OS-specific, Microsoft's MPIO is transport agnostic: it works equally well with iSCSI, Fibre Channel, InfiniBand, Fibre Channel over Ethernet (FCoE), or any other transport. To accomplish that task, MPIO requires storage vendors to develop a Device Specific Module (DSM), which provides an interface between the MPIO driver and the storage vendor's hardware.

StorMagic leverages this multi-tier software architecture by tightly integrating all of the proprietary hardware and software for the SM Series arrays with a Windows OS running on an iSCSI client system. The proprietary software includes a fifth-generation iSCSI stack for the DSM and SM Disk Manager, which is installed on the client and acts as StorMagic's vehicle for integration.

In particular, StorMagic leverages the DSM to set up a default iSCSI configuration that features high availability. The StorMagic DSM sets up an active-passive pair of MPIO sessions between a logical disk target and a host system. The sessions use distinct NICs on the storage array and share the host's default iSCSI port.

The DSM is only aware of the host's default iSCSI address, which makes an active-passive configuration the logical choice. Administrators can override the default high-availability MPIO scenario by configuring multiple active connections: DSM automatically takes advantage of those multiple active connections by instantiating a round-robin load-balancing policy for I/O packets on all active sessions.
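The behavior is easy to picture with a minimal Python sketch of a round-robin policy; the session labels are hypothetical, and the point is simply that pending I/O requests are dealt out across the active paths in turn.

from itertools import cycle

# Two hypothetical active sessions to the same target, one per Gigabit link.
active_paths = ["session-A (array eth0 -> HBA port 0)",
                "session-B (array eth1 -> HBA port 1)"]
next_path = cycle(active_paths)

for request_id in range(6):            # six pending I/O requests
    print(f"I/O {request_id} -> {next(next_path)}")
# Requests 0, 2, 4 travel over session-A and 1, 3, 5 over session-B, so each
# Gigabit link carries roughly half of the traffic.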

To test the throughput scalability provided by MPIO load-balancing, we created a software-based RAID-5 volume, which incorporated five logical drives, each with two active connections using round-robin load-balancing. Running the oblDisk benchmark on the dynamic disk volume, read performance jumped dramatically for all I/O block sizes compared to our initial RAID-0 volume. This was particularly true for small-block I/O, which rivaled the small-block I/O throughput from 15,000rpm FC drives in our nStor array.

More importantly, StorMagic extends the notion of providing advanced Windows MPIO configurations for iSCSI clients by giving administrators storage-provisioning utilities that also leverage the advanced capabilities of DSM. With the simplicity of one-button control, administrators can provision an iSCSI target using the SM Disk Manager GUI more easily than a DAS volume.

The SM Disk Manager automates the entire iSCSI disk provisioning process. From the allocation of space from a storage pool, through the configuration of active and passive iSCSI sessions for fail-over, to the formatting of the new disk on the client, the SM Disk Manager eliminates the need for specialized SAN or iSCSI knowledge or experience.

StorMagic enhanced the one-button disk provisioning with a one-button disk migration feature. By selecting an existing disk (typically a DAS partition), an administrator can launch a migration process that provisions a new iSCSI target from an existing storage pool; configures two iSCSI sessions for high availability; formats the new drive; copies all of the data from the original drive; and mounts the new drive with the drive letter of the original volume. As a result, novice administrators can use the SM Disk Manager to rapidly carry out a task that is typically reserved for seasoned storage administrators.

The SM Series arrays are populated with Seagate Barracuda ES.2 drives, which come in capacities ranging from 500GB to 1TB and with either SAS or SATA interfaces -- a choice that is transparent to the array's DSM and SM Disk Manager.

The SM Series array tested by openBench Labs was provisioned with eight 500GB SATA drives. To best leverage the performance of the SATA drives in this array, we tested two configurations. First, we grouped all eight drives into a single RAID-0 storage pool, using a Windows-friendly 64KB stripe size. This provided us with a storage foundation featuring peak capacity and optimal throughput performance for clients. Second, for sites that require greater hardware resiliency, openBench Labs configured seven drives in a RAID-5 set with an eighth drive as a hot spare.
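As a rough illustration of how such a pool lays data out, the Python sketch below maps a logical byte offset onto a drive, a stripe on that drive, and an offset within the stripe, assuming the eight-drive, 64KB-stripe geometry of our RAID-0 pool; the arithmetic is generic RAID-0 math, not StorMagic's internal implementation.

# Generic RAID-0 address math for an eight-drive pool with a 64KB stripe unit.
STRIPE_SIZE = 64 * 1024
NUM_DRIVES = 8

def locate(offset: int):
    """Return (drive index, stripe number on that drive, offset in stripe)."""
    stripe_index = offset // STRIPE_SIZE           # which stripe unit overall
    drive = stripe_index % NUM_DRIVES              # round-robin across drives
    stripe_on_drive = stripe_index // NUM_DRIVES   # depth on that drive
    return drive, stripe_on_drive, offset % STRIPE_SIZE

print(locate(0))                  # (0, 0, 0)
print(locate(64 * 1024))          # (1, 0, 0) -- next stripe unit, next drive
print(locate(8 * 64 * 1024))      # (0, 1, 0) -- wraps back to the first drive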

We began performance testing with a RAID-0 storage pool. To examine streaming read-and-write I/O performance, we configured a single logical drive. We created this drive using the SM Disk Manager to handle the provisioning process automatically. The result was an iSCSI target backed by our eight-drive RAID-0 storage pool. Running our oblDisk benchmark, read-and-write I/O throughput converged on a level of about 110MBps, which represents iSCSI wire speed for a 1Gbps connection.
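The back-of-the-envelope calculation below, written in Python, shows why that figure lands near 110MBps; the protocol-overhead percentage is an assumption used only to make the arithmetic concrete.

# Rough wire-speed arithmetic for one Gigabit Ethernet iSCSI connection.
LINK_BPS = 1_000_000_000            # 1Gbps Ethernet
RAW_MBPS = LINK_BPS / 8 / 1e6       # 125MBps before any protocol overhead
OVERHEAD = 0.12                     # assumed Ethernet/IP/TCP/iSCSI framing cost

usable_mbps = RAW_MBPS * (1 - OVERHEAD)
print(f"raw line rate: {RAW_MBPS:.0f}MBps, usable payload: ~{usable_mbps:.0f}MBps")
# One active session tops out near this figure; a second active session
# raises the ceiling toward twice that value.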

Maximum I/O throughput using the target drive was completely consistent with the default MPIO configuration that features a Fail-Over-Only policy set up by the StorMagic DSM. In particular, we first set up an active MPIO session from port 0 on the SM Series array to port 0 on the QLogic iSCSI HBA. Next, we set up a passive session from port 1 on the SM Series array to port 0 on the QLogic HBA.

Using the Microsoft iSCSI initiator, openBench Labs next reconfigured the passive session with the target. We first switched the connection from port 0 to port 1 on the QLogic HBA and then made the connection active. With two active iSCSI sessions, StorMagic's DSM automatically reset the MPIO policy to load-balancing with round-robin I/O packet transmissions and throughput immediately exceeded 1Gbps. In particular, I/O throughput jumped by approximately 20% as read-and-write throughput converged on 126MBps for 64KB blocks.

The 20% boost in throughput that openBench Labs measured with active-active iSCSI connections and round-robin load-balancing has significant implications for throughput and the scalability of iSCSI sessions. More sessions using more logical volumes spread I/O more finely across multiple connections, which helps to scale multiple applications and larger user populations.

We leveraged the scalability of iSCSI sessions to extend sequential I/O throughput for the oblDisk benchmark. In particular, we used the MPIO load-balancing of the StorMagic DSM in conjunction with dynamic disk support in Windows Server 2003. We began by importing five distinct RAID-0 target volumes, each with two active sessions with round-robin MPIO load-balancing. We then used the dynamic disk capability of Windows Server 2003 to bind the five target volumes as a single software-based RAID-5 logical volume.
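Conceptually, a five-member RAID-5 volume stores four data blocks and one XOR parity block per stripe, so any single member can be rebuilt from the other four. The Python sketch below demonstrates that parity relationship with arbitrary block contents; it illustrates RAID-5 parity in general, not the Windows dynamic-disk implementation.

# XOR parity across four data blocks, with a fifth block holding the parity.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, value in enumerate(block):
            out[i] ^= value
    return bytes(out)

data = [bytes([marker]) * 8 for marker in (1, 2, 3, 4)]   # four 8-byte blocks
parity = xor_blocks(data)                                 # fifth member

# Simulate losing the third member and rebuilding it from the survivors.
rebuilt = xor_blocks([data[0], data[1], data[3], parity])
assert rebuilt == data[2]
print("member 2 rebuilt from parity:", rebuilt.hex())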

I/O performance testing with our RAID-5 volume built with multiple logical disks from the RAID-0 pool proved enlightening. Running the oblDisk benchmark, peak I/O streaming for reads was about 10% greater than what we had measured using a single logical volume. Moreover, that peak-streaming I/O performance was about 20% greater than that of a 2Gbps Fibre Channel array with 15,000rpm Fibre Channel drives. Equally important, throughput for small-block reads was virtually identical to the Fibre Channel array.

That throughput edge was in part attributable to the I/O load-balancing performed by the StorMagic DSM on the SM Series array. All read-and-write I/O operations on our dynamic RAID-5 volume, which was built with five disk targets that had active-active round-robin MPIO connections, were balanced perfectly at the two ports of the QLogic iSCSI HBA. As a result, we had both an iSCSI session with an effective 2Gbps connection and a logical disk that exhibited the rotational latency of a high-speed SCSI array.

Unlike digital-content applications, which stream large-block sequential I/O, transaction-based databases typically generate large numbers of I/O operations that transfer data using small -- 4KB to 8KB -- blocks from random locations across a logical disk. To assess potential performance of the SM Series array in transaction-processing scenarios, we ran Intel's Iometer benchmark.

Iometer stresses data access as well as data throughput. In our tests, we fixed the number of processes making read or write transactions and varied the number of outstanding requests each I/O process was allowed to keep open while it continued to issue new requests.

In many of the transaction-processing applications at SMB sites, the number of processes involved in executing transactions is often limited to a few proxies. Microsoft Exchange, for example, uses a JET b-tree database structure as the main mailbox repository. All transactions are passed to an Exchange store-and-retrieve process, dubbed the Extensible Storage Engine (ESE), which creates indexes and accesses records in the database.

For both read-and-write I/O requests, openBench Labs used an 8KB I/O block size on the Iometer tests and recorded the number of I/Os per second (IOPS). For a base comparison, we ran the same set of benchmark tests on a logical disk exported from an nStor 4540 Fibre Channel array with Fibre Channel drives configured as a RAID-5 storage pool. Finally, to better analyze Iometer results, we plotted the average number of IOPS as a function of the number of outstanding I/O requests, which represents the effective I/O queue length for the array's controllers and drives.
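The relationship we plotted follows directly from Little's Law: sustained IOPS is roughly the number of outstanding requests divided by the average service time per request. The Python sketch below illustrates the shape of that curve with assumed latency figures, not our measured results.

# Little's Law sketch: IOPS ~= outstanding requests / average service time.
# The latency model below is an illustrative assumption, not measured data.
def iops(queue_depth: int, avg_latency_ms: float) -> float:
    return queue_depth / (avg_latency_ms / 1000.0)

for depth in (1, 2, 4, 8, 16):
    latency_ms = 8.0 + 0.4 * depth     # assume latency creeps up with depth
    print(f"queue depth {depth:2d}: ~{iops(depth, latency_ms):5.0f} IOPS")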

For SMB-class applications, the transaction processing level reached using the StorMagic iSCSI array with its hardware-based RAID-5 pool of SATA drives was comparable in IOPS to a Fibre Channel array with a RAID-5 pool of Fibre Channel drives. That level of performance would satisfy the most demanding SMB application requirements. More importantly, in reaching that level of performance, the StorMagic DSM played a critical role as it balanced traffic across multiple iSCSI HBA ports.

The results of the openBench Labs tests make it clear that administrators can provision an SM Series array with low-cost, high-capacity SATA drives without compromising application performance. Not only is the SM Series array ideal for streaming data files sequentially, but it is equally good at optimizing targets for applications that generate large numbers of short transactions.

Jack Fegreus is CTO of www.openbench.com

************************************************************************

OpenBench Labs scenario

UNDER EXAMINATION
iSCSI storage server and software

WHAT WE TESTED
StorMagic SM Series disk array
--One-button iSCSI management via SM Disk Manager
--Automated disk provisioning and migration to iSCSI storage
--Web-based management GUI

HOW WE TESTED
--Windows 2003 Server SP2
--QLogic QLE4052 iSCSI HBA
--nStor 4540 disk array

BENCHMARKS
--oblDisk
--Iometer

KEY FINDINGS
--1,500 IOPS throughput (8KB requests using SATA drives)
--137MBps throughput from one iSCSI target with two active-active connections
--Round-robin iSCSI MPIO load-balancing based on StorMagic's DSM provides multi-gigabit throughput

