Testing HP's SureStore NetStorage 6000 NAS

Posted on November 01, 2000


OpenBench Labs goes under the hood of Hewlett-Packard's NAS server, unveiling some interesting performance results.

By Jack Fegreus

The explosive growth in storage is redefining what constitutes "the system." One interesting option for system managers is attached storage, particularly network-attached storage (NAS).

Just a few years ago it was unthinkable to consider adding storage to a "system" without taking the system down, taking the hardware apart, and physically putting disks into an enclosure. Once the hardware was updated, the system-management tasks, including initializing and formatting drives, began.

Enter the realm of NAS and the Hewlett-Packard SureStore NetStorage 6000. This month, OpenBench Labs puts this NAS server through its paces and assesses how users and administrators can benefit from the convenience, expandability, and performance of the HP NetStorage 6000.

Whether users notice the NAS device at all depends on how familiar they are with their network environment and how well the operations staff has hidden device configuration from view. Linux users may notice a new file mount point, while Windows users may see a new or different drive letter. However, nobody needs to be told that the device can be actively shared between Linux and Windows systems.

Looking inside HP's NetStorage 6000 yields a different view. The NetStorage 6000 is a fully qualified file server with substantial power and capability. By default, there is 256MB of RAM (which can be upgraded to 512MB), a 700MHz Pentium III processor with 256KB cache, and an array of up to ten 36GB hot-swap drives, as well as tape drives to back up local and remote data. The NAS server is powered by a BSD-based network operating system (NOS) stored on FLASH ROM.

An embedded Ultra2 SCSI RAID controller manages the storage devices. The storage server is typically presented to the network via an integrated 10/100Mbps Ethernet card. The NetStorage 6000 also offers upgrades to Gigabit Ethernet and 1000Base-SX fiber to cover the need for faster network-transmission speeds. Most of the 256MB of memory is dedicated to serving as a cache for clients connecting to the system, and most of the CPU power is allotted to serving the disks across the Ethernet network and managing the cache.

On the initial boot, the administrator uses the front panel to set up TCP/IP with or without DHCP and to set the network name for the NAS device. Once TCP/IP settings are chosen, the HP storage server is rebooted and appears on the network. All other configuration options, including disk configuration, system monitoring and alerting, network interoperability, and security, are achieved by means of a Web interface.

Once the Web interface has started, there are four essential aspects to completing the setup of the HP NetStorage 6000: setting system and alert traps, configuring the disks, setting client connectivity, and setting up backup options. In particular, system and alert settings comprise setting the system time and date, contact information, administrator's password, and means by which notification of a critical system or hardware failure is to be broadcast.

Configuring the disks is easy, but can be time-consuming, as each new file volume must be formatted. In our test configuration of six 36GB disks, we configured two three-disk RAID-5 file volumes. The Web interface to file-volume management allows the administrator to set volume parameters such as capacity, name, and RAID type. Once file volumes are created, administrators can expand them by simply assigning more storage. Directories are subordinate to file volumes, and directories can be created, renamed, and deleted as needed.

Clients can access file volumes via NFS mount points or Windows LAN Manager volume shares. The NetStorage 6000 presents directories that can be mounted at mount points within the normal NFS structure, and Unix-style access permissions are set using the Web interface. For Windows clients, systems administrators can set the name under which the volume share will appear. As with native Windows sharing, password security can be set according to the access required. Once the share is created, it is exposed via Samba, and Windows clients can map it using the native "Map Network Drive" interface.

We've deliberately skipped over the details of how the NetStorage 6000 manages disk media and the way in which it maps physical media into logical disk space that can be assigned to file volumes. When you start looking at the raw disk subsystem through the NAS server's interface, you see a collection of physical (raw) drives. These physical drives can be joined together into RAID arrays or left as individual devices. The HP NetStorage 6000 interface refers to the disks by their slot number, which is clearly marked on the front of the unit. Individual drives carry different status levels: Empty, Rebuilding, Online, Unassigned, Hot Spare, or Dead.

The hot-spare mechanism presented by the NetStorage 6000 is impressive in its utility and simplicity. When you configure physical drives you assign one or more as a hot spare. If a physical drive that is part of a RAID array fails, the NAS server automatically assigns the first available hot spare to replace the failed drive.

HP's NetStorage 6000 cabinet holds ten 36GB disk drives. While those drives could be configured to deliver 360GB of storage, neither HP nor OpenBench Labs recommends that approach. Despite the market pressure, HP quotes the maximum capacity as 288GB. This capacity is based on a configuration having two RAID-5 volumes with five disks in each volume.
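The 288GB figure follows directly from RAID-5 parity overhead: each N-disk RAID-5 volume yields N-1 disks' worth of usable capacity. A quick sanity check (the helper function below is ours, purely illustrative):

```python
def raid5_usable_gb(disks_per_volume: int, disk_gb: int, volumes: int) -> int:
    """RAID-5 dedicates one disk's worth of capacity per volume to parity."""
    return volumes * (disks_per_volume - 1) * disk_gb

# HP's recommended maximum: two five-disk RAID-5 volumes of 36GB drives.
print(raid5_usable_gb(disks_per_volume=5, disk_gb=36, volumes=2))  # 288
```

By the same arithmetic, the two three-disk RAID-5 volumes in our test configuration work out to 144GB of usable space.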

Redundancy is essential to achieving the levels of reliability today's systems demand. HP ships the NetStorage 6000 with two power supplies, both of which are needed for normal operation. A third, hot-swappable power supply can be added so that the system will survive the failure of any single power supply. In addition, an uninterruptible power supply (UPS) can be connected to the NAS server, which will detect power-failure conditions and report them through its alerting mechanisms.

If the device is to hold critical corporate data, that data must be saved at least daily. If 10% of the data on a fully configured NetStorage 6000 were to change daily, that would require backing up 30GB of data each day. Fortunately, HP solves this backup problem by including an external SCSI port and providing for the installation of a tape drive in an empty disk slot. Software support for HP DLT tape is integrated into the NetStorage 6000. Moreover, backup and restore operations can be managed from the NAS server's Web interface.

In addition, the NetStorage 6000 provides what HP calls "disk checkpoints." On schedule or on demand, a system administrator can "draw a line" across a file volume. The NAS server responds by writing incoming data to a different place, which preserves the data state of the file volume. When a client writes data to the volume, the old data is copied to become a part of the checkpoint and the new data is flushed to disk, where it is not a member of the checkpoint.
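In effect, a disk checkpoint is copy-on-write snapshotting. The toy model below (our own sketch, with hypothetical names; not HP's implementation) captures the behavior just described:

```python
class CheckpointVolume:
    """Toy model of copy-on-write disk checkpoints.

    Blocks live in `active`; once a checkpoint is drawn, the pre-write
    contents of any overwritten block are copied into the checkpoint
    before the new data is flushed.
    """
    def __init__(self):
        self.active = {}        # block number -> current data
        self.checkpoint = None  # block number -> preserved old data

    def draw_checkpoint(self):
        # "Draw a line" across the volume: start preserving old blocks.
        self.checkpoint = {}

    def write(self, block, data):
        if self.checkpoint is not None and block not in self.checkpoint:
            # Preserve the old contents in the checkpoint first.
            self.checkpoint[block] = self.active.get(block)
        self.active[block] = data

    def read_checkpoint(self, block):
        # Checkpoint view: the preserved block if it changed, else current.
        if self.checkpoint and block in self.checkpoint:
            return self.checkpoint[block]
        return self.active.get(block)

vol = CheckpointVolume()
vol.write(0, "v1")
vol.draw_checkpoint()
vol.write(0, "v2")
print(vol.read_checkpoint(0), vol.active[0])  # v1 v2
```

Note that the checkpoint grows only as blocks change, which is exactly why rapidly changing data inflates checkpoint files.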

The checkpoint file, however, resides on disk along with all other data. If the disk is lost, the entire checkpoint is lost. Furthermore, disk checkpoints grow over time: if data changes at a rapid rate, the active checkpoint files grow at a correspondingly high rate. If you are going to use disk checkpointing, we recommend an automatic schedule that deletes out-of-date checkpoint files.

Because the HP NetStorage 6000 is a full-fledged NAS server, the implementation of security is fairly straightforward. HP has translated and stylized the security model of the NetStorage 6000 to make it familiar and functional for both Windows and Linux administrators. The security challenge for the NAS server is to recognize user identity, map those identities to a protection scheme, and then implement access criteria on that basis. Whenever there is a boundary between systems, such as a network connection, there is the opportunity for the malicious user to change the way that security codes and security settings are interpreted.

In a Linux and/or Unix environment, the NetStorage 6000 stores file permissions in native format (it is, after all, a BSD-based system). Each file is stored with its inode metadata, which includes file ownership and file permissions in the familiar read/write/execute form.

In the Windows model, device security has two parts. First, the owner or administrator of a device or volume must allow access. If access is permitted, then a deliberate step is required to "share" the device.

In the case of Windows NT and Windows 2000, devices are always created with a default administrator share, which grants access to members of administrator groups. In such a Windows domain-security scenario, a domain controller (a PDC/BDC under Windows NT 4 or an Active Directory domain controller under Windows 2000) authenticates users.

HP allows security to be set in both Windows and Unix style. As a result, both Windows and Unix clients can access file volumes without loss of security or protection. The NetStorage 6000 also creates a mechanism for mapping individual file protection between the two architectural schemes. As a result, the NetStorage 6000 integrates easily and almost seamlessly into both Windows and Unix environments.

Nonetheless, consider the case in which multiple NetStorage 6000 devices are scattered throughout the network. Managing each one individually would quickly become burdensome, so the NetStorage 6000 integrates support for two classes of software: network-management and remote-backup applications. From the NetStorage 6000 Web interface, systems administrators can monitor NIC summary data, network activity, status summary, environmental parameters, event-log contents, and CPU utilization.

For large heterogeneous sites, which are likely to have a mix of Windows NT, Solaris, and Linux systems, HP NetStorage 6000's tool set includes CA Unicenter TNG Framework and CA ARCserveIT, which run on all of these systems. NetStorage 6000 also integrates with HP OpenView Network Node Manager, HP OpenView OmniBack, and Veritas BackupExec. These packages recognize the NetStorage 6000; identify it by name, icon, and IP address; allow direct access to the NetStorage 6000 Web interface; and process event notifications from the storage subsystem. While the backup applications are explicitly integrated with the NetStorage 6000, most enterprise backup packages provide the facility to back up CIFS and NFS volumes.

Benchmarking the HP NetStorage 6000 was an interesting exercise that yielded equally interesting results. Our goal was to characterize the performance of the NetStorage 6000 and the rate at which it could deliver data to an application. The problem with NAS is the network: while internally the NetStorage 6000 is theoretically capable of pulling data off the disks at 80MBps, the default 100Mbps Ethernet connection (theoretically 12.5MBps) presents a much lower effective ceiling on data throughput.

We used a Dell 2400 server running Red Hat Linux 6.2, configured with 256MB of RAM (later reduced to 128MB), to run the OpenBench Labs disk benchmark and gauge achievable data throughput from the HP NetStorage 6000. All tests of the NAS server used a RAID-5 volume built from three 36GB disks.

Measuring actual disk throughput was complicated by the difficulty of isolating the effects of multiple data caches on the drives, RAID controller, HP NetStorage 6000, and Dell PowerEdge client system. These cache effects create orders-of-magnitude differences in disk-subsystem performance for differing workloads. By adjusting the size of the target file on the volume under test and adjusting the amount of memory available on the Dell server, we were able to defeat or enhance the effects of these caches on their respective systems.
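The core of this measurement can be sketched in a few lines of Python (a simplified stand-in for our ldisk benchmark, not the benchmark itself; the function name is ours):

```python
import os
import time

def sequential_read_mbps(path, block_size=8 * 1024):
    """Time an unbuffered sequential read of `path`; return effective MB/s.

    Run against a file larger than the client's RAM to defeat
    client-side caching, or against a file smaller than RAM on a
    warm second pass to measure cache-bound throughput instead.
    """
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while f.read(block_size):
            pass
    elapsed = time.perf_counter() - start
    return size / (1024 * 1024) / elapsed
```

Pointing a function like this at a file on the NFS-mounted volume, and varying the file size relative to client and server memory, is how we separated cache effects from true network and disk throughput.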

While the graphs tell the full story of the performance results, here's the summary: The maximum performance we could muster from the entire system was only achieved when fully utilizing the data cache of the Dell PowerEdge server, which was used to access the HP NetStorage 6000. Only when using a data file smaller than the Dell's system memory did we measure effective throughput in the range of 65MBps to 70MBps.

We began running one process, which read a 384MB file sequentially using different read-block sizes. We chose this file size to minimize cache effectiveness on the HP and Dell systems, each of which had been configured with 256MB of RAM.

Surprisingly, we did not measure any significant differences in throughput based on block size. At a 2KB block size, throughput for one process was 5.08MBps, while at a 128KB block size it was essentially unchanged at 5.11MBps. Normally, we would expect throughput to increase monotonically with block size.

Increasing the number of processes reading the file rapidly increased overall throughput, but varying the read block size never affected measured throughput. Using 8KB blocks, the most common I/O block size found in commercial applications, throughput ranged from 4.48MBps with one process up to 16.77MBps with 64 processes. While each process begins reading the file at a different location, increasing the number of processes increases the likelihood of inter-process cache hits on the Dell server and accounts for effective throughput reaching 30% higher than the theoretical network limit.
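The multi-process test can be sketched in the same spirit (threads stand in for the benchmark's separate reader processes in this simplified version; all names are ours):

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

def read_span(path, offset, length, block_size=8 * 1024):
    """One worker: read `length` bytes sequentially starting at `offset`."""
    with open(path, "rb") as f:
        f.seek(offset)
        remaining = length
        while remaining > 0:
            chunk = f.read(min(block_size, remaining))
            if not chunk:
                break
            remaining -= len(chunk)

def aggregate_mbps(path, readers=4, block_size=8 * 1024):
    """Aggregate throughput with `readers` concurrent sequential readers
    starting at staggered offsets, as in our multi-process test."""
    size = os.path.getsize(path)
    offsets = [(size // readers) * i for i in range(readers)]
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=readers) as pool:
        for off in offsets:
            pool.submit(read_span, path, off, size - off, block_size)
    elapsed = time.perf_counter() - start
    total_bytes = sum(size - off for off in offsets)
    return total_bytes / (1024 * 1024) / elapsed
```

Because the staggered readers revisit overlapping regions of the same file, a client-side cache can serve many of the requests, which is exactly the inter-process cache effect that pushed our measured throughput past the network limit.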

To establish a network-boundary condition, we defeated the cache in the Dell system by using a file significantly larger than the Dell's system memory, but smaller than HP NetStorage 6000 memory. In this configuration, we easily could saturate the network and achieve a stable data throughput of between 10MBps and 13MBps. In this test, we saw that the maximum load placed on the HP NetStorage 6000 CPU was only 20%. We expect that with so much headroom for performance scaling, throughput could be substantially improved with a faster fiber or Gigabit Ethernet interconnect.
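The 10MBps-to-13MBps plateau brackets what simple line-rate arithmetic predicts, ignoring Ethernet, IP, and NFS protocol overhead:

```python
def line_rate_mbytes_per_sec(megabits_per_second: float) -> float:
    """Convert a network line rate in Mb/s to a raw data ceiling in MB/s."""
    return megabits_per_second / 8

print(line_rate_mbytes_per_sec(100))   # 12.5  -> Fast Ethernet ceiling
print(line_rate_mbytes_per_sec(1000))  # 125.0 -> Gigabit Ethernet upgrade
```

At Gigabit speeds, the 80MBps internal disk subsystem, rather than the network, would become the binding constraint.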


The effective throughput of the HP SureStore NetStorage 6000 depends on caching on the client. With high client caching, effective throughput can reach 70MBps, which has little to do with the NAS server.

When client-side caching is defeated-in our tests we read a file too large to fit in cache-throughput is throttled by limits on network throughput.


With one process and no chance of caching, throughput hovered at 5MBps. This scaled perfectly with two processes as throughput doubled. As more processes were added, throughput rapidly converged on the network limit. As we significantly raised the number of simultaneous processes accessing the device from the same server, inter-process caching came into play and throughput exceeded network limits by 30%.


Benchmarks go open source


By Keith Walls
Benchmarks can simplify gathering performance profiles on your systems to gauge hardware and software requirements in metrics that relate to your needs. The goal of OpenBench Labs (www.openbench.com) is to deliver a fair and technically rigorous way to compare everything from hardware devices to operating systems. Each month we'll present product reviews based on a benchmark suite that has evolved through 15 years of development.

While device and software manufacturers have used our benchmarks in the past, our target audience is CTOs, together with the people who support the IT decision-making process. The CTO is charged with ensuring that all hardware devices and software purchased are the best available for the organization.

Our goal is to supply the CTO's team with a steady flow of practical information to use when choosing hardware, software, platforms, and technology on which to base IT operations. That practical aspect of the information is the most essential one. A tape drive manufacturer may produce a drive that accepts data at astonishing rates in the laboratory; however, theory matters far less than the practical data rate when that drive is installed in a system backing up actual data onto affordable media.

Ideally, the systems we choose as reference platforms (or reasonably close approximations to them) need to be instantly identifiable and have meaning for our readers. A tenth the speed of Deep Blue is no less obtuse than 10 times the speed of a PDP-11/45. The market, and not technological perfection, needs to guide the choice of reference platforms. For now, OpenBench Labs will focus on two operating systems on Intel-based platforms: Red Hat Linux (including compatible distributions) and Windows 2000.

More than any other software project, benchmark code is necessarily built on shifting sands. Baselines age and become irrelevant as the techniques used in devices are constantly updated. What's more, benchmarks are unusual in that the discovery of certain types of bugs can lead to wholesale revisions of the benchmark's logic. As a result, the OpenBench Labs benchmark suite has always been a continual work in progress. That's why OpenBench Labs is making its benchmark suite a set of open-source projects for the operating systems mentioned above.

The OpenBench Labs benchmarks fall into three broad categories based on the computer subsystem that they are designed to test: CPU, memory, and I/O. While CPU benchmarks are concerned primarily with the resulting useful processing power of the system and memory benchmarks are concerned with the performance of access to memory, the I/O category captures all the benchmarks that pertain to getting data into and out of the system from disks or tape. Here, differences in operating system architecture, particularly in the area of data caching, play a significant role in performance and hence significantly complicate code compatibility across platforms.

One of the major focuses in the I/O benchmarks is the effective load that a disk subsystem can support. Here the goal is to measure how quickly data can be accessed by a maximal number of users. To this end, the benchmark attempts to flood the disk with I/O requests, and then measures the response time for each I/O. When the average access time exceeds 100 milliseconds for any user process, the disk is deemed saturated and the benchmark terminates with a report to the user.
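The saturation logic can be sketched as follows (a simplified outline of the approach; the threshold matches the 100-millisecond rule above, while the function names and the stand-in latency model are hypothetical):

```python
SATURATION_MS = 100.0  # the benchmark's saturation threshold

def find_saturation_point(do_io, max_users=256):
    """Ramp up the simulated user count until the average I/O response
    time exceeds SATURATION_MS; return that user count, or None if the
    disk never saturates within the tested range.

    `do_io(users)` performs one I/O under a load of `users` concurrent
    requesters and returns its response time in milliseconds.
    """
    users = 1
    while users <= max_users:
        samples = [do_io(users) for _ in range(32)]
        if sum(samples) / len(samples) > SATURATION_MS:
            return users  # disk deemed saturated at this load
        users *= 2
    return None

# Stand-in latency model: response time grows linearly with load.
print(find_saturation_point(lambda users: 5.0 * users))  # 32
```

In the real benchmark, `do_io` would issue an actual disk request; the report to the user identifies the load level at which the threshold was crossed.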

Over the coming months, we will present the details of all these benchmarks in a series of Master Class articles. As we complete the first versions of each of the benchmarks, we will post them as open-source projects in the hope that we will attract contributors to our efforts. We are looking for contributions in all aspects of the benchmark series: new devices and software to test, new facets of products to investigate, improvements to the software, and reviews of software for completeness and correctness.


OpenBench Labs summary

Under examination:
NAS server performance

What we tested:

HP SureStore NetStorage 6000

  • (6) 36GB disk drives
  • 256MB RAM
  • 100Mbps Ethernet network

How we tested:

  • Dell PowerEdge 2400 running Red Hat Linux v6.2
  • 3Com SuperStack II Switch


The default HP NetStorage 6000 system configuration presented three 36GB disk drives configured as a single RAID-5 volume. For testing, we attached through an NFS mount-point over a private 100Mbps Ethernet network. A Dell 2400 server running Red Hat Linux v6.2 was used for all performance evaluation. The Dell 2400 server was configured with 256MB of RAM for some tests, and reduced to 128MB for other tests.

Key findings:

  • The HP NetStorage 6000 is easy to configure via its Web interface.
  • Provision for redundancy is excellent within the bounds of this class of system.
  • Security provisions are well designed and extremely well implemented.
  • CPU utilization peaked at only 20% due to network bandwidth limitations.

About our benchmark:
The OpenBench Labs ldisk benchmark was run against a file sufficiently large to defeat the caches on both the HP NetStorage 6000 and the Dell PowerEdge 2400.
