Roll-your-own RAID test results

Posted on June 01, 2002


Our lab builds a RAID subsystem based on a high-availability, Ultra160 SCSI controller.

By Jack Fegreus

As more and more companies move Linux into mainstream, mission-critical IT processing roles, no CIO can afford to dismiss this operating system as a toy for geeks or simply a cheap way to host a Website. As Linux takes on those roles, high availability moves to center stage, and the threshold for what constitutes an acceptable storage subsystem becomes correspondingly more stringent.

Whether we are talking about simple device redundancy or fail-over clusters, one of the critical components is always a high-availability storage system. These systems customarily include one or more RAID controllers, one or more disk enclosures, disk drives, rack-mount or desk-side cabinetry, and power supplies. Among those integrated components, the most important is the RAID controller, which dictates overall functionality and plays the primary role in performance.

Because of the importance of the RAID controller's role, we departed from our routine practice of testing only end-user products and took on the task of playing a storage systems integrator. For the controller part of the equation, we chose CMD Storage Systems' Titan 7040 RAID controller.


[Figure: For our test, we mapped the first four of RAID Set 0's 12 partitions to our Linux server and the next four to the Windows 2000 server using the Vision SMU utility. Using the Computer Management tool within the Windows Administration toolset, we verified that Windows 2000 only had knowledge of, and access to, the four partitions designated via Vision SMU.]

CMD has long been known for its dual-porting support of shared SCSI devices in clusters. Silicon Image recently acquired CMD, and a new Java-based Vision Storage Management (Vision SMU) utility has since been released for all Titan controllers, both SCSI and Fibre Channel, enhancing the configuration flexibility of the controller on Linux, Windows, HP-UX, and Solaris.

Vision SMU does what a Java application is supposed to do: run on any system that supports a Java Runtime Environment (JRE). After testing countless direct-, NAS-, and even SAN-attached storage systems that work with Linux but must be configured from Windows, we found it refreshing to have a choice. Vision SMU communicates with each controller via an RS-232 or Ethernet port, which makes each controller manageable from any point on the network.

All of the critical components of the Titan 7040 are hot-swappable. The engine for each internal controller—the 7040 can have two internal controllers—is an SA-110 StrongARM RISC CPU clocked at 233MHz. Each controller can be configured with up to 1GB of cache (512MB when mirrored). With two internal controllers, the 7040 provides transparent fail-over and fail-back for any host.

Whether configured with one or two controllers, each Titan 7040 has two configurable host bus interfaces with in and out ports. These Ultra2 SCSI Low Voltage Differential (LVD) ports can be configured to support anything from a single host with a single SCSI host bus adapter (HBA) to dual hosts, each with dual redundant SCSI HBAs. In turn, there are two Ultra160 SCSI LVD ports, which support two independent disk buses.


[Figure: Comparing the performance results of the CMD Titan and the Adaptec DuraStor is an intriguing exercise. Within a particular file architecture such as NTFS, JFS, or ReiserFS, performance patterns show a strong resemblance. As expected, the larger cache that we configured for the CMD Titan paid off with more processes active. Yet with a single process, read performance (as shown with a Linux ReiserFS partition) was lower.]

That gives each controller a potential pool of 28 Ultra160 SCSI drives from which to create RAID sets. These sets can be configured as single drives (JBODs), striped volumes (RAID 0), mirrored volumes (RAID 1), striped and mirrored volumes (RAID 0+1), and parity volumes (RAID 3/5). In addition, the disk bus ports support Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.), which reports on mechanical or electronic degradation of components to predict and warn about possible future failures.
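
To make the capacity trade-offs among those RAID levels concrete, here is a minimal Python sketch of our own; it is not part of the Titan firmware or Vision SMU, and the 36GB per-drive figure is assumed purely for illustration.

    # Illustrative only: usable capacity for the RAID levels the Titan supports,
    # given a drive count and a per-drive capacity. Not derived from CMD firmware.

    def usable_capacity(level, drives, drive_gb):
        """Return usable gigabytes for a RAID set built from identical disks."""
        if level in ("JBOD", "RAID0"):
            return drives * drive_gb          # no redundancy: all capacity is usable
        if level == "RAID1":
            return drive_gb                   # a simple two-drive mirror
        if level == "RAID0+1":
            return (drives // 2) * drive_gb   # stripe across half, mirror onto the rest
        if level in ("RAID3", "RAID5"):
            return (drives - 1) * drive_gb    # one drive's worth of capacity goes to parity
        raise ValueError(f"unknown RAID level: {level}")

    if __name__ == "__main__":
        drive_gb = 36                         # assumed per-drive capacity, for illustration only
        for level in ("JBOD", "RAID0", "RAID1", "RAID0+1", "RAID5"):
            drives = 2 if level == "RAID1" else 4
            usable = usable_capacity(level, drives, drive_gb)
            print(f"{level:8s} {drives} x {drive_gb}GB -> {usable}GB usable")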

When the underlying multi-channel architecture of the Titan controllers is combined with the Vision SMU configuration software, the result is a flexible subsystem for any IT configuration. To test the possibilities, we configured an I/O subsystem using a Cremax ICY Dock, which provides for four 1-inch hot-swappable drives with 80-pin SCA connectors in a form factor that occupies three standard 5.25-inch drive bays. We used this dock to support four Seagate Ultra160 SCSI disk drives and configured each controller with 192MB of cache.

The initial configuration of a RAID set with Vision SMU is almost the same as the steps required by numerous other subsystems. In our case, we configured the four Seagate Ultra160 drives as a RAID 5 stripe set (RAID Set 0) and designated controller A as its owner. Then we proceeded to set up a dual-port configuration with a Dell PowerEdge Server running Windows 2000 and an HP Netserver running SuSE Linux 7.3. We were able to "virtualize" the storage so that it was impossible for an administrator of one system to corrupt one of the volumes being used by the other system.
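
The resilience of that RAID 5 set comes down to simple XOR parity: the parity strip is the XOR of the data strips, so any single lost strip can be rebuilt from the survivors. The short Python sketch below illustrates the idea only; it is not CMD's implementation, and the four-byte strips are stand-ins for real stripe units.

    # XOR parity illustration for a four-drive RAID 5 set (conceptual, not CMD's code).
    def xor_blocks(blocks):
        """XOR equal-length byte strings together."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    data = [b"AAAA", b"BBBB", b"CCCC"]               # three data strips on three drives
    parity = xor_blocks(data)                        # the parity strip on the fourth drive

    # Simulate losing the second drive and rebuilding its strip from the survivors.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]
    print("rebuilt strip:", rebuilt)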

The real differences in the Titan controller first emerged when we began to logically partition the drive. The first example is the option to create a private partition on the RAID set to store the contents of the write-back cache in case of an unexpected shutdown.


[Figure: In our transaction processing benchmark, performance ran true to form with the larger cache in the Titan configuration paying a distinct dividend.]

What really makes a difference, however, is the extent to which an administrator can configure the partitions that will appear as logical disks at the hosts. Because of the multi-channel architecture of the controller, each partition can be assigned to either host connection channel, to both channels, or to neither.

For our test, we mapped the first four of RAID Set 0's 12 partitions to our Linux server and the next four to the Windows 2000 server.

We wanted to allow a Windows and a Linux server to share the same storage resources, which is a tricky exercise given the penchant of Windows for gobbling up every LUN in sight and writing a disk signature to the master boot record. With the Titan controller, the solution to this problem is trivial. We simply connected the SCSI HBA in our Windows 2000 Server to the first host SCSI port on the Titan, which is mysteriously dubbed Channel 2. (There are no Channels 0 or 1.) In turn, we connected the HP Netserver running SuSE 7.3 Linux to the second host SCSI port, dubbed Channel 3. We could then assign any partition to either channel using the Vision SMU GUI.
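
The mapping logic itself is easy to picture in code. The Python sketch below is purely our own model of the idea, not the Vision SMU API; the partition names are hypothetical, and the channel assignments simply mirror our test setup, with Channel 3 standing in for the Linux host port and Channel 2 for the Windows 2000 host port.

    # A toy model of partition-to-host-channel mapping (LUN masking).
    # The channel numbers follow the Titan's host-port naming; everything else
    # here is hypothetical and for illustration only.
    from dataclasses import dataclass, field

    @dataclass
    class Partition:
        name: str
        channels: set = field(default_factory=set)   # host channels this partition is presented on

    def visible_to(partitions, channel):
        """Return the partitions a host attached to `channel` would enumerate."""
        return [p.name for p in partitions if channel in p.channels]

    partitions = [Partition(f"part{i}") for i in range(12)]   # RAID Set 0's 12 partitions
    for p in partitions[0:4]:
        p.channels.add(3)       # first four mapped to the Linux host on Channel 3
    for p in partitions[4:8]:
        p.channels.add(2)       # next four mapped to the Windows 2000 host on Channel 2
    # Partitions 8 through 11 are assigned to no channel and stay invisible to both hosts.

    print("Linux sees:  ", visible_to(partitions, 3))
    print("Windows sees:", visible_to(partitions, 2))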

The result: Our Linux server saw four logical drives that we formatted as Ext2, Ext3, JFS, and Reiser, while our Windows 2000 system saw four different logical drives, which it formatted as NTFS. The only thing left to do was test the performance of the Titan controller with the Adaptec DuraStor RAID array that we previously tested (see "Testing Linux file systems on RAID," InfoStor, April 2002, p. 46).
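
On the Linux side, a quick way to confirm that the host enumerates only the LUNs assigned to its channel is to read /proc/scsi/scsi, which 2.4-era kernels such as the one in SuSE 7.3 expose. The sketch below assumes the usual "Host: ... Channel: ... Id: ... Lun: ..." layout of that file; the exact format can vary by kernel.

    # Minimal sketch: list the SCSI devices this Linux host can see by parsing
    # /proc/scsi/scsi. Assumes the conventional 2.4-kernel layout of that file.
    import re

    def attached_scsi_devices(path="/proc/scsi/scsi"):
        devices = []
        with open(path) as f:
            for line in f:
                m = re.match(r"Host: (\S+) Channel: (\d+) Id: (\d+) Lun: (\d+)", line)
                if m:
                    devices.append(m.groups())
        return devices

    if __name__ == "__main__":
        for host, channel, scsi_id, lun in attached_scsi_devices():
            print(f"{host} channel {channel} id {scsi_id} lun {lun}")

Running this on each host is a simple sanity check that the channel assignments made in Vision SMU actually took effect.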

The test results were a mix of the highly predictable and the curiously inscrutable. The performance differences between the journaled file systems followed the same patterns, with JFS and Reiser having the same edge using the Titan controller that they sported in the Adaptec DuraStor tests.

Since we had increased the controller cache size by 50% and had counted on all of the internal optimizations within the Linux kernel to leverage every possible advantage from that cache, we expected to see a consistent improvement across the board within a given file system. We didn't.

Within each file system, sequential read performance on the Titan controller followed the same pattern as it did on the DuraStor array. There was, however, one distinct difference: It was lower. Nonetheless, with 16 read threads, in every case we saw the kind of improvement that we would expect with the larger cache.
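
For readers who want to reproduce this kind of load, the Python sketch below is a rough stand-in for a multi-threaded sequential read test; it is not OBLdisk, and the mount point, file size, and block size are placeholders. Because it reads through the buffer cache, a real test would use files much larger than system RAM (or direct I/O) to measure the array rather than memory.

    # A rough stand-in for a multi-threaded sequential read test (not OBLdisk).
    # PATH, FILE_MB, BLOCK, and THREADS are placeholders to adjust per file system.
    import threading, time

    PATH = "/mnt/reiser/seqread.dat"   # hypothetical test file on the partition under test
    FILE_MB = 256                      # placeholder file size in megabytes
    BLOCK = 64 * 1024                  # 64KB sequential reads
    THREADS = 16                       # the heaviest read-thread load in our tests

    def make_test_file():
        # Write a zero-filled file of FILE_MB megabytes to read back sequentially.
        with open(PATH, "wb") as f:
            for _ in range(FILE_MB):
                f.write(b"\0" * 1024 * 1024)

    def reader(results, idx):
        # Each thread streams through the whole file in BLOCK-sized reads.
        bytes_read = 0
        with open(PATH, "rb") as f:
            while True:
                buf = f.read(BLOCK)
                if not buf:
                    break
                bytes_read += len(buf)
        results[idx] = bytes_read

    if __name__ == "__main__":
        make_test_file()
        results = [0] * THREADS
        threads = [threading.Thread(target=reader, args=(results, i)) for i in range(THREADS)]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        elapsed = time.time() - start
        rate = sum(results) / (1024 * 1024) / elapsed
        print(f"{rate:.1f} MB/s aggregate with {THREADS} readers")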

Even more intriguing was the performance on writes. Once again, the relative patterns between the DuraStor and the Titan were the same. And once again the performance on the Titan was considerably slower. In fact, the differences were such that we triple-checked our configuration parameters to make sure that we had not somehow negated the write-back configuration of the cache and transformed it into a write-through cache.
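
The distinction we were worried about is easy to show in miniature. The toy Python sketch below contrasts write-through caching, which completes a write only after the backing store accepts it, with write-back caching, which acknowledges from cache and flushes later; it is conceptual only and says nothing about how the Titan firmware actually implements its cache.

    # A toy contrast of write-through vs. write-back caching (conceptual only).
    import time

    class Backing:
        def write(self, key, value):
            time.sleep(0.005)                    # pretend each disk write costs 5ms

    class WriteThroughCache:
        def __init__(self, backing):
            self.cache, self.backing = {}, backing
        def write(self, key, value):
            self.cache[key] = value
            self.backing.write(key, value)       # completes only after the disk write

    class WriteBackCache:
        def __init__(self, backing):
            self.cache, self.dirty, self.backing = {}, set(), backing
        def write(self, key, value):
            self.cache[key] = value
            self.dirty.add(key)                  # acknowledged immediately; disk write deferred
        def flush(self):
            for key in sorted(self.dirty):
                self.backing.write(key, self.cache[key])
            self.dirty.clear()

    def timed(cache, writes=20):
        start = time.time()
        for i in range(writes):
            cache.write(i, b"x")
        return time.time() - start

    print(f"write-through: {timed(WriteThroughCache(Backing())):.3f}s for 20 writes")
    print(f"write-back:    {timed(WriteBackCache(Backing())):.3f}s for 20 writes (before flush)")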

Our last test was our OBLload benchmark, which stresses the transaction processing capabilities of the I/O subsystem. In this test, everything went as expected. With its larger cache, the Titan controller could fulfill double the number of I/Os per second that we could process with the DuraStor and could respond to twice as many processes before average access time exceeded 100ms.
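
As with the sequential tests, the sketch below is only a minimal imitation of a transaction-style load and not OBLload itself; the file path, file size, request size, and operation counts are assumptions, and the test file must already exist and be at least FILE_BYTES long.

    # A minimal imitation of a transaction-style random I/O load (not OBLload).
    # Each worker process issues small random reads; we report I/Os per second
    # and average access time. PATH must already exist and be FILE_BYTES long.
    import os, random, time
    from multiprocessing import Process, Queue

    PATH = "/mnt/jfs/randio.dat"      # hypothetical test file on the array
    FILE_BYTES = 512 * 1024 * 1024    # placeholder file size
    IO_SIZE = 4096                    # 4KB random reads
    OPS_PER_WORKER = 2000

    def worker(q):
        latencies = []
        fd = os.open(PATH, os.O_RDONLY)
        try:
            for _ in range(OPS_PER_WORKER):
                offset = random.randrange(0, FILE_BYTES - IO_SIZE)
                start = time.time()
                os.pread(fd, IO_SIZE, offset)
                latencies.append(time.time() - start)
        finally:
            os.close(fd)
        q.put(latencies)

    def run(workers):
        q = Queue()
        procs = [Process(target=worker, args=(q,)) for _ in range(workers)]
        start = time.time()
        for p in procs:
            p.start()
        latencies = [t for _ in procs for t in q.get()]
        for p in procs:
            p.join()
        elapsed = time.time() - start
        avg_ms = 1000 * sum(latencies) / len(latencies)
        print(f"{workers:3d} processes: {len(latencies) / elapsed:7.0f} IO/s, avg access {avg_ms:.1f} ms")

    if __name__ == "__main__":
        for n in (1, 2, 4, 8, 16):
            run(n)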

Jack Fegreus can be contacted at jfegreus@customcommunications.com.


Labs scenario

Under examination

  • Dual-port external RAID

What we tested

  • CMD Titan CRD-7040 RAID controller
  • Vision Storage Management utility

How we tested

  • HP Netserver LP 1000r with 512MB RAM
  • Dell PowerEdge 2400 Server with 512MB RAM
  • Two QLogic QLA12160 SCSI HBAs
  • SuSE Linux 7.3
  • Windows 2000 Server
  • Cremax ICY Dock MB018
  • Four Seagate Cheetah Ultra160 SCSI disk drives

Benchmark tests

  • OBLdisk
  • OBLload

