iSCSI vs. FC, Windows vs. Linux

Our first shot at creating an iSCSI-based IP SAN for SMBs yields impressive results, but not without caveats.

By Jack Fegreus

With internal demands to improve corporate performance and external regulations to document corporate performance both driving the growth of storage requirements, the mandate for IT is simple: Manage storage growth. While the mandate may be simple, execution of that mandate involves multiple tasks that can be quite complex.

The problems begin with sloppy terminology surrounding our notion of a SAN. A SAN is not about sharing storage; it’s about partitioning and allocating virtual storage devices out of a central pool for efficient resource utilization. With one notable historic exception, the VMS Cluster, computer systems do not share storage devices well. Whether we are talking about Windows, Linux, or Unix, today’s operating systems expect uncontested ownership of storage devices. Outright ownership of storage devices is essential for the implementation of such operating system constructs as journaled file systems.

As a result, the topology of a SAN fabric goes a long way to defining the resiliency of the SAN to faults and disruption. Naturally, the cost of equipment for adding multiple controllers to arrays and multiple HBAs to hosts will be pivotal in choosing that topology. That said, the rise of lower-cost storage networking alternatives to traditional Fibre Channel SANs should come as no surprise.

The most interesting of these alternatives is iSCSI. Under this scheme, SCSI commands and data are encapsulated in TCP/IP packets and transmitted over Gigabit Ethernet networks.

The simplicity of iSCSI plays well into marketing hoopla that touts the absolute minimum investment necessary to set up a working SAN fabric. Network cards, switches, and cables for Gigabit Ethernet are a fraction of the cost of Fibre Channel components.

To investigate iSCSI possibilities for an SMB site, InfoStor Labs evaluated the VTrak 15200, a 15-drive iSCSI storage array from Promise Technology. In many ways, the VTrak 15200 fits perfectly into the role of poster child for the SMB SAN revolution: small fabrics of two to six nodes.

Priced at $5,999, the 3U VTrak 15200 does not include disk drives in the standard package. Users are free to configure the system with up to 15 off-the-shelf, low-cost Serial ATA (SATA) or Parallel ATA (using an adapter) disk drives. Our test system was populated with 14 Hitachi Deskstar 7K400-400 SATA disk drives. These drives have a rotation speed of 7,200rpm, a capacity of 400GB, and a current street price of $369.

To fulfill its role as a cost-effective iSCSI target, the VTrak 15200 disk array features a dual-port TCP Offload Engine (TOE) from QLogic to provide efficient iSCSI throughput and fail-over protection. The QLogic TOE is an ASIC that implements the TCP and iSCSI protocol stacks in silicon to provide very fast TCP/IP packet processing. With the maximum theoretical bandwidth for an iSCSI Ethernet port at approximately 100MBps, a dual-ported TOE is essential.
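The roughly 100MBps ceiling follows from simple arithmetic; the ~20% protocol-overhead allowance used below is an assumption for illustration, not a measured figure:

```shell
# Gigabit Ethernet moves 1,000 megabits per second; divide by 8 bits
# per byte for the raw payload ceiling in megabytes per second.
raw=$((1000 / 8))           # 125MBps on the wire
# Ethernet, IP, TCP, and iSCSI headers consume a slice of that;
# allowing ~20% overhead (an assumption) leaves the usable figure.
usable=$((raw * 8 / 10))    # ~100MBps of SCSI payload
echo "raw: ${raw}MBps, usable: ~${usable}MBps"
```

With two ports, the array's aggregate ceiling is roughly double, which is why a dual-ported TOE matters.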

The VTrak 15200 enclosure features N+1 hot-swap, redundant, field-replaceable power and cooling units. The I/O controller, power supplies, and fans are cable-less for tool-free, plug-and-play access. In addition, the VTrak 15200 comes with a 72-hour battery backup as a standard feature.

Configuration of the VTrak 15200 is done through either a menu-driven command line interface (CLI), accessed via Telnet or a serial connection, or a Web GUI, the Promise Array Manager (WebPAM), which is a J2EE application. Installation of WebPAM, whether on Windows, Linux, or Unix, includes a number of open-source components, such as the Apache Web server and the Tomcat Java application server. Once WebPAM is installed on a central server that communicates with the VTrak's network-management Ethernet port, administrators can log into the application from anywhere on the network using a browser that supports dynamic HTML.


To test typical file I/O performance, we averaged the throughput measured while reading all of the files in a 3GB directory. This provides a good real-world assessment of Windows NTFS file-system performance. Using aggressive asynchronous I/O would deliver more than twice this throughput, but that is unrealistic for typical applications. Most significantly, we needed seven spindles under Windows Server 2003 to maximize throughput from the SATA-based logical array, and still fell short of the throughput measured with SLES 9 and the Reiser file system on a four-drive array using 64KB stripes.
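A minimal sketch of this style of measurement reads every file in a directory sequentially and totals the bytes moved; the throwaway temp data and 64KB block size here are our stand-ins, not part of the benchmark itself:

```shell
# Create a small throwaway directory of data so the sketch runs anywhere;
# in a real test, point "dir" at the 3GB directory on the array instead.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/sample" bs=1M count=8 2>/dev/null

total_kb=0
start=$(date +%s)
for f in "$dir"/*; do
  # Sequential read of each file, discarding the data.
  dd if="$f" of=/dev/null bs=64k 2>/dev/null
  total_kb=$((total_kb + $(du -k "$f" | cut -f1)))
done
elapsed=$(( $(date +%s) - start ))
[ "$elapsed" -eq 0 ] && elapsed=1    # avoid division by zero on tiny runs

echo "read ${total_kb}KB in ${elapsed}s"
rm -rf "$dir"
```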

Beyond creating and expanding disk arrays, there is precious little to manage or configure on a long-term basis with WebPAM. Even data-cache policies are automatically adjusted based on data patterns, thereby optimizing performance for different application profiles. Although the VTrak is OS-agnostic when it comes to clients, its maximum stripe size of 64KB for logical disks makes it better tuned for a Windows client than for a Linux client, where 128KB would be the default choice. Nonetheless, the results of our file-throughput test proved that configuration issue to be of little concern.

Promise also provides several background processes that head off potential maintenance emergencies. Media Patrol performs media scans and remaps bad sectors before a user process incurs a media fault during an I/O request. Predictive Data Migration (PDM) detects possible drive failure situations and proactively migrates data prior to failure to reduce the risk of data loss.

To assess the performance of the VTrak 15200, we created an iSCSI SAN network with four nodes: three servers and the VTrak array. At the heart of this SAN fabric was a five-port Netgear Gigabit Ethernet switch. The server generating I/O loads was an Appro 1142H, a 1U quad-processor Opteron system with dual Gigabit Ethernet ports and 133MHz PCI-X support. To provide a baseline comparison to a Fibre Channel SAN, we also installed an Emulex LightPulse LP1050 HBA in the Appro's open PCI-X slot.

For these tests we installed both Windows Server 2003 and SuSE Linux Enterprise Server v9 (SLES 9) on the quad-processor Appro. In both cases we used the Appro’s built-in Broadcom Gigabit Ethernet NIC along with a software iSCSI initiator. Under Windows Server 2003, we used Microsoft’s iSCSI software initiator, which makes connecting to an iSCSI target like the VTrak 15200 a simple point-and-click exercise. Under SLES 9, we used the open source iSCSI initiator from Cisco.

To set up this initiator, it is necessary to edit the /etc/iscsi.conf file. There are many settings, each exhaustively documented in this file. What is lacking, however, is a note pointing out the two variables that are essential to connect to an iSCSI target: DiscoveryAddress and TargetName.
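For reference, a minimal working configuration needs only these two lines; the address, port, and target name below are hypothetical placeholders for the values your array reports:

```
# Hypothetical example values; substitute the iSCSI data port address
# and the target name reported by the array's management interface.
DiscoveryAddress=192.168.1.50:3260
TargetName=iqn.1994-12.com.promise.vtrak.15200
```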


With both the iSCSI and Fibre Channel arrays, Windows Server 2003 had a distinct edge over SLES 9 running our I/O load benchmark. The oblLoad benchmark was configured to run 8KB reads in a database-oriented pattern. We used a four-drive logical volume on each server; however, the 15,000rpm Seagate drives in the nStor array gave it a distinct advantage. Nonetheless, the ability of the VTrak 15200, from either Windows Server 2003 or SLES 9, to sustain a load of approximately 3,000 I/Os per second while maintaining an average response time of less than 100ms is more than sufficient to support most transaction-processing applications.

Once these two variables are set and the iSCSI daemon started, YaST will ask whether the user wants to configure any newly discovered iSCSI drives. Using the YaST Partitioner, however, leads to a nasty problem.

YaST does not know that these are iSCSI drives and therefore puts all of the mounting information in /etc/fstab. The next time the system is booted, it checks /etc/fstab for drives to mount and runs a file system check (fsck). Unfortunately, the iSCSI daemon is initialized later in the boot sequence. As a result, the drives will not be available and the boot will fail. To rectify this problem, it is necessary to remove the iSCSI drive information from /etc/fstab and put it in /etc/fstab.iscsi, which is accessed when the iSCSI daemon is started.
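The relocated entry in /etc/fstab.iscsi takes the same form as an ordinary fstab line; the device name and mount point here are hypothetical:

```
# Moved from /etc/fstab so the mount is attempted only after the iSCSI
# daemon has logged into the target (device and mount point hypothetical)
/dev/sdb1   /iscsi0   reiserfs   defaults   0 0
```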

For our WebPAM server, we used an HP ML350 G3 server running SuSE Linux 9.2. We found the performance of the Windows version of the Tomcat J2EE server to be a bit quirky on Windows Server 2003. When we ran WebPAM on a Linux server, we had no problems accessing the application from either a Windows or Linux client.

Testing: one, two, three

We started by configuring three logical drives. Each logical drive was made up of four physical drives configured in a RAID-0 stripe set. This matched our Fibre Channel SAN test configuration. It should be noted, however, that the Seagate Cheetah Fibre Channel drives in our nStor 4520 Storage System, spinning at 15,000rpm, had half the rotational latency of the SATA drives.

We expected to see a dramatic performance difference in a head-to-head comparison with only four drives in each array. While SATA drives offer users lower-cost storage (typically, one-third to one-half the price of SCSI drives), RAID performance on Windows usually requires 50% more spindles using ATA drives to get equivalent throughput.

For this assessment, rather than test for maximum throughput using asynchronous reads, we chose to use normal file I/O encumbered with all of the operating system overhead to get a better user perspective. Typically this conservative approach results in throughput that is about 45% of what we would measure using aggressive asynchronous reads.
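As a quick sanity check on that 45% rule of thumb, any conservative figure implies the corresponding asynchronous ceiling directly:

```shell
# If conservative file I/O is ~45% of the asynchronous ceiling, dividing
# a measured figure by 0.45 estimates that ceiling. The 79.3MBps input
# is one of the measurements reported in this article.
conservative=79.3
implied=$(awk -v c="$conservative" 'BEGIN { printf "%.0f", c / 0.45 }')
echo "implied asynchronous throughput: ~${implied}MBps"
```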

We first tested throughput under Windows Server 2003. With four drives in a RAID-0 stripe set, we measured average file I/O throughput from our Fibre Channel nStor 4520 at 79.3MBps. Reading from three distinct logical arrays simultaneously, throughput scaled to 151MBps.

The four-drive SATA array over iSCSI delivered a meager 27.4MBps. Reading simultaneously from three logical arrays, throughput from the iSCSI server scaled to 71.4MBps. Adding physical drives to the VTrak's logical array significantly improved throughput: the addition of a single drive raised throughput by 35%. We found optimal performance with a seven-drive array, where average throughput from our logical volume rose to 48.7MBps.

Running the same throughput tests on SLES 9 yielded dramatically different results. From our Fibre Channel storage server, a single logical array averaged throughput that was 20% higher, at 86.2MBps. Using three arrays, throughput rose to 163.2MBps. Even more impressive, however, was the improved throughput using the logical disks exported from the VTrak 15200.

With file I/O throughput at 55.1MBps on SLES 9 using a single four-drive array, we surpassed what we had achieved with Windows using a seven-drive array. With three arrays, file I/O throughput rose to 87.2MBps. More importantly, running under SLES 9, the SATA-based arrays that were exported by the VTrak 15200 scaled just as well as Fibre Channel arrays exported by the nStor 4520. Our initial four-drive RAID-0 array provided exceptional baseline performance and adding drives provided only marginally higher throughput.

Given the rotational-latency advantage of our Fibre Channel drives, and the onboard hardware context cache on the Emulex LP1050 HBA for high transaction performance, we expected to find a dramatic difference in I/O load support for transaction-oriented database operations.

These expectations were confirmed by our measurements and were most dramatic under Windows Server 2003. Nonetheless, the ability to support more than 3,100 8KB I/O operations per second while maintaining an average response time of less than 100 milliseconds means that the VTrak array is capable of supporting most transaction-oriented applications.

Although configurability of the storage array and throughput performance would easily meet the most stringent requirements for SMB applications, manageability of our fabric was nonexistent. None of the Gigabit Ethernet hardware or software that we used was designed to handle the issues of a SAN. First and foremost of these issues is virtualization. As a result, all of our servers could see and access all of the drives.

On our Fibre Channel fabric, StorView (nStor's configuration software) identifies each of the HBAs to which a logical drive is mapped via that HBA's unique worldwide name (WWN), which is analogous to a NIC's MAC address. In the HBA mapping process, logical drives can be revealed or masked per HBA to ensure a one-to-one mapping of logical disks to systems.

While differences in file systems would prevent the accidental cross-mounting of logical drives between Windows and Linux systems in our test scenario, there was absolutely nothing else, other than the brain matter between the ears of the systems administrator, to prevent servers running the same operating system from scribbling all over each other's drives.

Clearly, iSCSI SANs work. Making these SANs work for you, however, requires more investment-certainly more investment than the acquisition of a few Ethernet cables.

Jack Fegreus is technology director at Strategic Communications (www.stratcomm.com). He can be reached at jfegreus@stratcomm.info.

InfoStor Labs scenario


iSCSI disk array and SAN


Promise VTrak 15200 Storage System

  • Supports Serial ATA (SATA) and Parallel ATA (with adapter) drives
  • RAID Levels 0, 1, 3, 5, 10, 50
  • Dynamically extend and change RAID level of existing arrays
  • Dual Gigabit Ethernet iSCSI ports and one management port
  • 256MB predictive data cache
  • 72-hour battery backup
  • Web-based Promise Array Manager (WebPAM) software

14 Hitachi Deskstar 7K400-400 SATA hard disk drives (400GB, 7,200rpm)


  • Appro 1142H 1U Server
    • Quad AMD Opteron CPUs
    • Dual Gigabit Ethernet ports
    • 133MHz PCI-X expansion slot
  • Windows Server 2003
  • SuSE Linux Enterprise Server v9
  • Emulex LP1050 Fibre Channel HBA
    • PCI-X support
    • Onboard hardware context cache for high transaction performance
  • nStor 4520 Storage System
    • Two WahooXP RAID controllers
    • 12 Seagate 15K Cheetah Fibre Channel drives


Benchmarks

  • oblFilePerf
  • oblLoad


Key findings

  • Maximum stripe size for RAID limited to 64KB
  • Dynamic support for expanding and restructuring arrays
  • No support for LUN virtualization
  • Linux file throughput double that of Windows Server 2003 using a single four-drive RAID-0 array

This article was originally published on April 01, 2005