NAS-SAN fusion: Windows vs. Linux; Building a SAN for SMBs, part 5

Posted on November 01, 2004

We compare a NAS-SAN gateway based on Microsoft's Windows Storage Server 2003 to a file server running SuSE Linux Enterprise Server 9.

By Jack Fegreus

Long anticipated among those in the Linux community as the dawn of the "next big thing," the Linux 2.6 kernel is now shipping in "enterprise-ready" distributions. Following the release this summer of 2.6 kernel distributions carrying caveats about use in production environments, the new SuSE Linux Enterprise Server 9 (SLES 9) is ready to muscle into the glass house.

For this review, we set our sights on a narrow and well-defined target. Our goal was to compare the performance of SLES 9 as a file server to that of Microsoft's Windows Storage Server 2003.

We set the stage for this assessment in the last issue of InfoStor with our review of the Hewlett-Packard StorageWorks NAS 9000, which serves as a LAN portal, or gateway, to SAN-resident data (see "Turning shared blocks into shared files," October 2004, p. 42). The concept of NAS-SAN fusion is bolstered by a compelling TCO/ROI argument that is particularly appealing to small and medium-sized businesses (SMBs).

Once a site buys into the argument that storage consolidation is a smart move, the next question is how to distribute the data residing on that consolidated storage back out to desktops. A cost-effective solution is a NAS server, or gateway. While the prices of Fibre Channel HBAs and switches have fallen precipitously, they are still an order of magnitude greater than Ethernet cards and switches. Furthermore, block-level devices on a SAN are not readily shared among multiple systems. As a result, a logical solution is to put one or more servers on the SAN to act as file-sharing portals that provide all LAN-attached desktops with access into the SAN.

HP provides a packaged solution for this scenario in the StorageWorks NAS 9000. This 4U appliance sports dual Xeon processors, 4GB of RAM, dual load-balanced Gigabit Ethernet NICs, a hardware-mirrored system drive array, hot-plug PCI-X backplane, and remote hardware control over IP, not to mention fully redundant hot-swap mechanical components that exhaust any space that might have been available for storage components.

On closer inspection, the StorageWorks NAS 9000 turns out to be a densely packed version of HP's ProLiant DL580 G2, a two-way implementation of a server designed for environments requiring maximum compute power. The operating system is Windows Storage Server 2003, the low-cost OEM appliance edition of Windows Server 2003. Key Windows Storage Server 2003 features include Microsoft Services for NFS and Volume Shadow Copy Service (VSS), which creates point-in-time copies of data.

Windows Storage Server 2003 does not include the event-logging policies of Windows Server 2003 that are designed to support the role of an application server. Also missing is the Windows Update service.

To assess SuSE Linux Enterprise Server 9 vis-à-vis Windows Storage Server 2003, we started with an HP ProLiant DL580 G2 that mirrored our earlier HP StorageWorks NAS 9000 configuration. The ProLiant DL580 G2 is packed with high-availability features and specialty hardware. In particular, our test server was configured with two 2.7GHz Xeon CPUs, 4GB of memory, two Gigabit Ethernet NICs, and HP's Integrated Lights-Out (iLO) management system. To support all of this hardware under SLES 9, HP adds a SmartStart pack of software drivers and documentation.

On the HP StorageWorks NAS 9000, NIC teaming is handled by a wizard integrated into the NIC control panel. On the HP DL580 G2 running SLES 9, NIC bonding is nowhere to be found within YaST. Fortunately, the SmartStart package provides directions for creating the configuration files that must be placed in /etc/sysconfig/network-scripts to make NIC bonding work.
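
For reference, the bonding setup boils down to a handful of small text files plus a driver option line. The sketch below shows what such files typically look like on a 2.6-era Linux system; the interface names, IP address, and bonding mode are illustrative assumptions, and the authoritative file names and contents are those in HP's SmartStart documentation.

  # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the virtual bonded interface
  DEVICE=bond0
  IPADDR=192.168.1.10
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none

  # /etc/sysconfig/network-scripts/ifcfg-eth0 -- first physical NIC, enslaved to bond0
  # (ifcfg-eth1 is identical except for DEVICE=eth1)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none

  # Load the bonding driver with a load-balancing mode and link monitoring,
  # for example via /etc/modprobe.conf:
  #   alias bond0 bonding
  #   options bond0 mode=balance-rr miimon=100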

Whether under the moniker of StorageWorks NAS 9000 or ProLiant DL580 G2, this server is designed for environments requiring maximum compute power, which is precisely what an enterprise-class system bridging a SAN and a LAN needs. The key task is moving large amounts of data quickly and efficiently to a large number of clients, and that requires an underlying architecture geared for high data throughput.

We began our testing by running version 3.0 of the oblCPU benchmark suite. This benchmark contains 34 calculation-intensive kernels that are rich in floating point arithmetic. The suite's baseline for performance is derived by running a version compiled with GNU C 3.2 and executed on a 1GHz P-III system running SuSE Linux Enterprise Server 8.

Recompiling the benchmark on SLES 9 with GNU C 3.3 pegged the geometric mean for the performance of the 34 kernels at 167, or 1.67 times that of a 1GHz P-III. That's not very impressive given that performance on the same system running Windows Storage Server 2003, with the benchmark compiled with Visual C++ .NET 2003, was 50% greater, with a geometric mean of 249 (see figure).

The next benchmark that we ran measured memory bandwidth. Memory latency has become a major bottleneck to high performance for a variety of applications, including those that are I/O intensive. In response, front-side bus (FSB) speeds have risen rapidly in an attempt to take full advantage of DDR memory technology.

Given the DL580 G2's theoretical memory bandwidth of 6.2GBps, which is a function of the FSB clock speed and the 4:1 memory interleave, we expected to see better-than-average throughput. On paper, however, the hardware did not appear more imposing than AMD's 64-bit Opteron architecture, which integrates the memory controller on the CPU die. Under SLES 8, an Opteron-based server with 333MHz DDR memory provided considerably greater throughput than a Xeon-based server using the Intel E7501 chipset, which featured a 533MHz FSB, a 2:1 memory interleave, and 266MHz DDR memory.

Using SLES 9 on the 4:1 interleaved DL580 G2, oblMemBench pegged average data throughput on 4-byte strides through memory at 35% greater than what we had measured on the 64-bit Opteron-based server. In both cases, oblMemBench was compiled with the latest version of Intel C++. These short-stride results provide a good indication of the throughput level that can be expected from cache hits when doing file I/O.

As stride sizes increased, performance of the DL580 converged on that of previous Xeon-based systems. Nonetheless, the key measurement remains performance on short strides. Running SLES 9, the 4:1 interleaved two-way DL580 G2, which appears to the operating system as a four-way system, is in a class by itself when it comes to memory bandwidth.

Clearly, sites gain an edge in exploiting the fundamental processor and memory performance characteristics of the DL580 G2 by running SLES 9. What's more, this edge extends to bedrock I/O performance as well as to new 2.6 kernel features, including asynchronous I/O, selectable I/O scheduling algorithms, and multi-path I/O.

[Figure: Using the monitoring facilities of the QLogic 5200 switch to visualize SAN data traffic, we observed three very different server-to-disk I/O throughput patterns when copying a 1.5GB zip file from a client. In the three tests, we used a Windows XP client to copy the file to the HP NAS 9000 over (1) a Windows CIFS share, and to the HP DL580 G2 running SLES 9 over (2) a Samba 3 share and (3) an NFS 3 share. The elapsed time for the CIFS transfer can be compared with the other shares by superimposing the CIFS plot onto the other two, both of which complete in less time.]

Unfortunately, there appears to be no easy way to find, let alone configure, the 2.6 kernel's new I/O schedulers through YaST. Administrators will have to scour source files and kernel configuration files to track down the relevant settings. For all tests in this review we accepted the CFQ default. We'll be looking at the other options with new hardware in the near future.
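
Until the distribution exposes this in a management tool, the quickest route is the kernel itself. The sketch below is offered as an illustration rather than a prescription: it shows how to see which schedulers were compiled into the running kernel and how to pick one at boot time with the elevator= parameter. The config file path, sample output, and GRUB entry are assumptions about a typical SLES 9 installation.

  # See which I/O schedulers were built into the running 2.6 kernel
  grep IOSCHED /boot/config-$(uname -r)
  #   CONFIG_IOSCHED_NOOP=y       (typical output)
  #   CONFIG_IOSCHED_AS=y
  #   CONFIG_IOSCHED_DEADLINE=y
  #   CONFIG_IOSCHED_CFQ=y

  # Select a scheduler at boot by appending elevator= to the kernel line
  # in the boot loader configuration (for example, /boot/grub/menu.lst):
  #   kernel /boot/vmlinuz root=/dev/sda3 elevator=deadline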

SLES 9 includes multi-path I/O, which enables the Linux kernel to access a storage device through multiple channels at once. This is precisely what is necessary to enable the transparent fail-over features that SANs tout and that neither Windows nor Linux servers have been providing out of the box. In particular, the QLogic driver for the QLA2340 Fibre Channel HBA, which is included in the SLES 9 distribution, automatically implements the multi-path capabilities of the Linux 2.6 kernel.

This time, what does not show up in YaST is a good thing. Running the YaST Partitioner on SLES 8 or the Windows Server 2003 disk management utility produces multiple logical instances of the same physical volume. This has two negative consequences. It makes it possible to inadvertently mount and corrupt a drive, and it makes it necessary to halt and reboot the system should the primary path to a disk fail.


[Figure: Using Gigabit Ethernet, we repeated our three file copy tests. Throughput scaling for Windows CIFS and Samba 3 was similarly lackluster, while NFS throughput scaled far better and greatly extended the performance gap. Once again, the elapsed time for the CIFS transfer can be compared with the other shares by superimposing the CIFS plot onto the other two.]

Running the YaST Partitioner on SLES 9, only the primary path to a physical volume appears. If the path fails, any process in the midst of an I/O operation will fail; however, any new I/O operation will transparently begin over an alternate path to the volume.
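
The same single-path view can be confirmed from the command line. A minimal sketch follows, with device names purely illustrative: if the failover-capable driver is masking the redundant path, each SAN LUN should be listed only once.

  # List the SCSI targets visible through the QLA2340 HBAs
  cat /proc/scsi/scsi

  # List disks and partition tables; with fail-over active, a SAN volume
  # appears once (e.g., only /dev/sdb) rather than once per physical path
  fdisk -l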

We then began our NAS testing on SLES 9 by building on the results for the HP StorageWorks NAS 9000 in our previous review. In that review, our goal was not to stress the HP NAS 9000 but, rather, to determine the best client access strategy for utilizing it as a SAN gateway.

For typical Fast Ethernet (100Mbps) connections, Windows clients clearly demonstrated the best throughput performance when running native CIFS shares. The picture was far less rosy when using Samba to connect Linux clients to CIFS shares on the Windows server, as throughput slowed to a sludge-like 3MBps to 4MBps. Meanwhile, the more efficient alternative for Linux clients, NFS, can easily turn into an administrative nightmare under Microsoft Services for NFS.
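
For context, connecting a Linux client to a CIFS share in this time frame typically meant the smbfs filesystem plus Samba's client tools. A minimal sketch follows; the server name, share, and account are hypothetical placeholders, not the lab configuration.

  # Mount a Windows/CIFS share on a Linux client via smbfs
  mkdir -p /mnt/nasshare
  mount -t smbfs -o username=labuser,workgroup=LABDOM //nas9000/public /mnt/nasshare

  # Spot-check throughput with a simple timed copy
  time cp /var/tmp/test-1.5GB.zip /mnt/nasshare/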

To handle the fundamental differences in user accounts and security between Unix and Windows, an administrator must create a mapping between Windows users and groups and Unix users and groups. In creating this mapping, Microsoft Services for NFS cannot handle anything other than a one-to-one mapping from Windows users to Unix users. A Unix user can be mapped to multiple Windows users, but not the other way around. Why a many-to-one mapping from Unix to Windows should make any difference is a mystery, or perhaps not if you take client-access licenses into consideration. Given this restriction, the bigger mystery is why Microsoft Services for NFS does not check for the violations that its automated mapping options can create.

Fortunately, the improved Network Services modules in YaST make setting up NFS sharing a trivial matter. We also added NFS v3 client capability to our Windows XP workstation by installing DiskAccess from Shaffer Solutions. In our tests, server I/O throughput over NFS shares was identical whether triggered by a Windows or a Linux client. We encountered none of the throughput asymmetry between Windows and Linux clients that we measured on both Windows Storage Server 2003 and SLES 9 servers using CIFS and Samba shares.
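
Under the covers, the YaST module is essentially a front end to the standard /etc/exports file, so a hand-built equivalent is worth showing. The sketch below is illustrative: the export path, address range, and mount options are assumptions rather than the settings used in the lab, and the Windows side (DiskAccess) has its own configuration dialogs that are not shown.

  # On the SLES 9 server: publish a directory over NFS
  # /etc/exports:
  #   /san/shared   192.168.1.0/24(rw,sync,no_subtree_check)
  exportfs -ra               # re-read /etc/exports and export the share
  showmount -e localhost     # confirm what is being exported

  # On a Linux client: mount the share over NFS v3
  mount -t nfs -o vers=3,rsize=32768,wsize=32768 dl580:/san/shared /mnt/shared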

We repeated our oblDisk tests with the HP DL580 G2 and, to avoid caching issues, added a simple copy test using a 1.5GB zip file. The results were not entirely what we expected. Samba 3 shares on SLES 9 servicing Windows XP clients provided faster throughput than CIFS shares on Windows Storage Server 2003. Remarkably, the same throughput degradation that affected Linux clients using Samba 2 to connect to CIFS shares on Windows resurfaced when we connected a Linux client running Samba 2 to our Linux server running Samba 3.
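
Nothing exotic is required on the server side to create such a share. The smb.conf sketch below is a minimal illustration; the workgroup, share name, path, and account are placeholders rather than the review's actual configuration.

  # Minimal Samba 3 file share on SLES 9 (illustrative /etc/samba/smb.conf)
  [global]
      workgroup = LABDOM
      security = user

  [sanshare]
      path = /san/shared
      read only = no
      browseable = yes

  # Activate the share and give the Windows user a Samba account:
  #   rcsmb restart      (SUSE init script for smbd)
  #   smbpasswd -a labuser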

Monitoring back-end SAN disk traffic with the facilities of the QLogic 5200 switches as we copied our 1.5GB zip file from clients to servers, we were able to explain the improved performance for Windows clients on a Samba share. It appears that Samba 3 exploits an inherent advantage Linux holds over Windows: the bundling of I/O requests into larger blocks.

Looking at byte traffic to our nStor disk array during a file copy from a Windows XP client to the HP StorageWorks NAS 9000, we saw I/O throughput to the disk remain reasonably steady at just under 10MBps. Repeating this test using a Samba 3 share on the HP DL580 G2 painted a different throughput picture. The I/O from the DL580 G2 did not arrive in a continuous flow but in rapid-fire staccato bursts averaging about 25MBps and frequently reaching 50MBps.

For traditional clients networked over 100Mbps Ethernet, SLES 9 clearly provided our server with a more efficient environment. We then scaled our LAN environment to Gigabit Ethernet connectivity, which is more typical of a back-end IT server network. In such an environment, fast networking is most likely the foundation for backup operations. When workstations do sit on a Gigabit LAN, they are typically running applications such as number crunching for complex mathematical models or video editing.

In our Gigabit Ethernet scenario, CIFS share throughput on the HP StorageWorks NAS 9000 doubled to just under 20MBps in our copy test. Samba 3 performance followed suit: we observed the same doubling of throughput, along with the distinct pattern of bundled disk writes from the server.

The best throughput scaling occurred with NFS 3 shares, and this was equally true for Windows and Linux clients. Using NFS 3 on SLES 9 for file sharing, throughput rose by a factor of four on Gigabit Ethernet connections. That was also more than double the performance of Microsoft Services for NFS as measured under Windows Storage Server 2003. Fundamentally, as the ROI arguments for moving storage onto a SAN keep getting stronger, so do the arguments in favor of SAN-NAS fusion. With the introduction of enterprise-ready Linux distributions based on the 2.6 kernel and capable of handling more-powerful servers, the advantage in this arena remains with the Penguin.

Jack Fegreus is technology director at Strategic Communications (www.stratcomm.com). He can be reached at jfegreus@stratcomm.info.


Editor's note: This article is an excerpt from a much larger review. To read the full review, which includes considerable detail on SLES 9 and specific hardware performance numbers, visit www.open-mag.com/0813548697.shtml.


InfoStor Labs scenario
UNDER EXAMINATION

Linux as a SAN-NAS fusion platform

WHAT WE TESTED

SuSE Linux Enterprise Server 9

  • Linux kernel 2.6 with support for:
    • 64GB RAM
    • 16TB file system size
    • 1 billion process IDs
    • Non-Uniform Memory Access (NUMA)
  • Enhanced YaST management
    • Mail server configuration
    • Full Samba 3 configuration
    • VPN configuration
    • User-mode Linux virtualization
HP ProLiant DL580 G2
  • Dual 2.7GHz Xeon CPUs
  • Dual Gigabit Ethernet NICs
  • Hot-plug PCI-X backplane
  • Integrated Lights-Out Management port
  • HP SmartStart Support Pack for SLES 9
    • NIC bonding driver
    • Lights-Out drivers
    • SNMP agents
QLogic QLA2340 HBAs
  • 133/100MHz PCI compatibility
  • Full-duplex 2Gbps Fibre Channel
  • Linux 2.6 kernel driver provides transparent fail-over

HOW WE TESTED

Shaffer Solutions' DiskAccess

  • NFS client software for Windows
  • Tools for discovering NFS services
  • Integration with NIS
Intel C++ Compiler for Linux
  • Free license for non-commercial development
  • Advanced optimization supporting Hyper-Threading
  • Source and object code compatible with GNU C
Two QLogic SANbox 5200 stackable switches
  • 16 2Gbps Fibre Channel ports
  • Four 10Gbps ISL ports (copper)
  • QLogic SANsurfer Management Suite
    • Host-based software for Linux, Solaris, and Windows
    • Wizards for configuration, zoning, and security
    • Performance monitoring
    • Fabric health alerts
nStor 4520 Storage System
  • Two WahooXP RAID controllers
  • Active-active configuration
  • nStor StorView Management Software
Benchmarks
  • oblCPU v3.0
  • oblMemBench v2.0
  • oblDisk
