Benchmarking an NT Fibre Channel Cluster

Posted on October 01, 1998


From storage on Fibre Channel to NT 5.0 clusters, server scalability is rapidly becoming a non-issue.

By Jack Fegreus

Just one year ago, Microsoft released the Enterprise Edition of Windows NT Server, with built-in clustering based on open specifications and industry-standard hardware. With computer systems downtime estimated to cost $4 billion annually in the US alone, the importance of keeping systems running cannot be overstated. This is especially true in a client-server architecture, where a single server failure may cascade down to thousands of client systems.

With clustering and cluster-aware applications, IT can tie multiple servers together into a single logical system. Additional servers can be integrated into that system as usage expands. In particular, a cluster is a collection of closely coupled systems that provides four fundamental facilities and capabilities:

- Mechanisms to provide resource sharing at either the OS or application program/service level

- No single point of failure

- Investment protection, incremental system growth, load balancing, and single point of management

- Service fail-over protection

A good implementation of such a cluster will, in turn, provide the following three important capabilities:

- Cluster members can join together to make an overall task execute more quickly

- Cluster members can join together to make the overall application more available

- Transitions between any two cluster members preserve data integrity

In theory, Windows NT Server clustering technology offers the promise of high availability and scalability on relatively inexpensive platforms. In fact, the combination of SMP and clustering technologies under the Enterprise Edition of Windows NT Server provides the basis for a powerful system at a fraction of the cost of most alternatives.

The Dell PowerEdge Cluster tested by CTO Labs is built on two two-way Pentium II-based PowerEdge 4200 servers with 512MB of memory in each node. To maximize client and internal cluster communications, each node has two Intel Fast Ethernet NICs. In addition, Dell clusters are configured with a 3Com SuperStack II switch, which provides the interconnect for failover and failback traffic.

Storage is absolutely crucial in any cluster configuration. This has long been one of the stellar aspects of Dell PowerEdge servers. For high-end RAID, Dell servers feature an i960-based controller dubbed the PowerEdge Expandable RAID Controller (PERC). In our cluster, each node has a standard PERC II controller with a 32MB cache attached to three 4GB drives in a private internal RAID 5 configuration.

For external shared storage, our Dell cluster has two Scalable Disk Subsystems (SDS) pods. Each SDS houses eight drives--4GB each in our cluster--and can hold up to 72GB of shared storage per pod. These external drives are connected to specialized Cluster PERC (CPERC) cards, which were also configured with 32MB of cache. Currently, each cluster node can support up to two CPERCs, which translates into a potential of four SDS pods or 288GB of storage in a cluster. The CTO Labs Dell cluster configuration with 64GB of shared storage is available for less than $50,000.
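As a rough check on these figures, here is a minimal capacity-arithmetic sketch in Python. The drive counts and sizes are taken from the configuration described above; the function and variable names are purely illustrative, not part of any Dell tool.

```python
# Back-of-the-envelope capacity arithmetic for the Dell cluster described above.
# Drive counts and sizes come from the article; names are illustrative only.

def raid5_usable(drives: int, drive_gb: int) -> int:
    """RAID 5 reserves one drive's worth of space for parity: usable = (n - 1) * size."""
    return (drives - 1) * drive_gb

# Private internal storage per node: three 4GB drives on a PERC II in RAID 5.
internal_per_node_gb = raid5_usable(drives=3, drive_gb=4)   # 8GB usable per node

# Shared storage as tested: two SDS pods, eight 4GB drives each (raw capacity).
shared_tested_gb = 2 * 8 * 4                                 # 64GB as tested

# Ceiling: two CPERCs per node, two pods per CPERC, 72GB maximum per pod.
shared_max_gb = 2 * 2 * 72                                   # 288GB potential

print(f"internal (RAID 5, per node): {internal_per_node_gb}GB")
print(f"shared storage as tested:    {shared_tested_gb}GB")
print(f"shared storage ceiling:      {shared_max_gb}GB")
```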

Adding Fibre Channel for storage connectivity is the essential ingredient for MSCS to support clusters with more than two nodes. Fibre Channel is a high-speed serial communications medium that supports multiple protocols, including TCP/IP, ATM, and SCSI-3. The standard data rate for Fibre Channel is 100MB per second (200MB per second in a dual loop configuration). Unlike SCSI, which was designed strictly as a server-to-storage connection with no notion of multiple hosts, every Fibre Channel device that sends and receives is considered a peer node. As a result, a Fibre Channel network can be configured point-to-point, as a switched fabric, or as a Token Ring-like arbitrated loop (AL).

While Fibre Channel is not yet supported as a pathway for shared cluster storage, local Fibre Channel storage can easily be configured under either V4.0 or V5.0 of Windows NT. To get an early glimpse at the future of clustering, CTO Labs tested two Fibre Box Arrays from Box Hill Systems. Each Fibre Box Array is a pure implementation of a Fibre Channel Arbitrated Loop. As such, there is a limit of 126 nodes--each disk is a node--but no limitation on physical distance.

Each of the arrays tested by CTO Labs contained eight 9GB Fibre Channel disks from Seagate, putting the total storage capacity of our configuration at 144GB. Unlike the Box Hill array, many competing commercial arrays exploit Fibre Channel's support of SCSI to implement a lower-cost, lower-performing internal SCSI backbone with SCSI disks. Pricing for Box Hill's Fibre Box arrays generally falls in the range of $0.50 to $0.70 per megabyte. This price includes Adaptec PCI Fibre adapters, as well as Box Hill's Fibre Box Explorer and X/ORAID software modules.

In our test configuration, we used two Adaptec adapters to form a dual Arbitrated Loop, which pegged our theoretical maximum throughput at 200MBps. For maximum performance, Box Hill uses only RAID 10. In this configuration, streaming performance was beyond blazing: peak spiral throughput was an extraordinary 103MBps. For any site working with multimedia streams, such as those used in IP telephony, video conferencing, or NetShow for the web, I/O bandwidth requirements will be orders of magnitude greater than for typical transaction-based systems. For these sites, testing a Box Hill array should be considered de rigueur.
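To make the streaming numbers concrete, the sketch below shows one simple way to approximate a spiral-read measurement: read a large file sequentially in 64KB blocks and report aggregate MB/s. This is not the CTO Labs benchmark itself; the file path is a placeholder, and a fair disk test would also need to defeat operating system caching.

```python
# Illustrative approximation of a streaming-read test: sequential 64KB reads
# against a large test file, reporting throughput in MB/s.
# Not the CTO Labs benchmark; file path and parameters are placeholders.
import time

BLOCK_SIZE = 64 * 1024          # 64KB transfers, as in the sequential test above
TEST_FILE = "testfile.dat"      # placeholder: a large file on the volume under test

def streaming_read_mbps(path: str, block_size: int = BLOCK_SIZE) -> float:
    total = 0
    start = time.time()
    # buffering=0 disables Python-level buffering; OS caching is not bypassed here.
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.time() - start
    return (total / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    print(f"{streaming_read_mbps(TEST_FILE):.1f} MB/s sequential read")
```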

While streaming I/O performance lived in a world of its own, performance on the small random transfers that characterize the CTO Labs load benchmark was far more prosaic. In this realm, the Box Hill array was comparable to the CPERC and SDS pod combination. The maximum number of daemons that could be supported without exceeding a 100ms average access time was 175, and peak throughput at 75 daemons was 1620 I/Os per second.
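The load benchmark drives the subsystem with many concurrent "daemons" issuing small random transfers while tracking average access time and I/Os per second. The following is a hedged sketch of that approach using threads; the worker count, transfer size, file path, and duration are illustrative assumptions, not the actual CTO Labs parameters.

```python
# Hedged sketch of a random-access load test in the spirit of the CTO Labs
# benchmark: N concurrent workers issue small random reads against a large file,
# and we report average access time and aggregate I/Os per second.
# Worker count, transfer size, file path, and duration are illustrative only.
import os
import random
import threading
import time

TEST_FILE = "testfile.dat"      # placeholder: large file on the volume under test
IO_SIZE = 4 * 1024              # small random transfers (assumed 4KB)
DAEMONS = 75                    # number of concurrent worker "daemons"
DURATION = 30                   # seconds per run

def worker(results: list) -> None:
    size = os.path.getsize(TEST_FILE)
    latencies, end = [], time.time() + DURATION
    with open(TEST_FILE, "rb", buffering=0) as f:
        while time.time() < end:
            f.seek(random.randrange(0, size - IO_SIZE))
            t0 = time.time()
            f.read(IO_SIZE)
            latencies.append(time.time() - t0)
    results.append(latencies)      # list.append is thread-safe in CPython

if __name__ == "__main__":
    results: list = []
    threads = [threading.Thread(target=worker, args=(results,)) for _ in range(DAEMONS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    all_lat = [x for lats in results for x in lats]
    print(f"average access time: {1000 * sum(all_lat) / len(all_lat):.1f} ms")
    print(f"throughput: {len(all_lat) / DURATION:.0f} I/Os per second")
```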

MSCS support for new hardware such as Fibre Channel will extend the high-availability dimension of clustering by adding support for large, multiserver clusters. This will then make possible a virtual MPP system. For SQL Server, this will facilitate database partitioning and therefore parallel query searches, whose degree of parallelism depends directly on the number of nodes. As a result, when the overall load for any cluster-aware application or service exceeds the capabilities of the systems in the cluster, sites will be able to add incremental processing power to the cluster.

Phase II of MSCS will also provide additional services that should simplify the creation of dynamically scalable, cluster-aware applications.

NOTE: This article is excerpted from a longer review that ran in the October issue of BackOfficeCTO magazine, a sister publication of InfoStor. The original review included more data on the performance of the Dell cluster and storage subsystems. To view the complete review, visit www.backoffice.com.


While the Cluster PERC RAID controller provided a distinct advantage for cluster availability, its performance on high-speed sequential reads fell consistently short of the standard Dell PERC II controller. On this test, the performance of the Box Hill Fibre Channel array was outstanding, with throughput reaching 103MBps on 64KB data transfers.


The extraordinary throughput of the Box Hill array on sequential data transfers did not translate into an overall advantage in a random-access, transaction-oriented benchmark. In the CTO Labs load benchmark, performance of the Box Hill subsystem was better than that of the CPERC, but neither challenged the high-end throughput of the PERC II subsystem. Nonetheless, for fewer than eight simultaneous disk daemons, the Box Hill subsystem did hold a distinct advantage.

In Summary

Product: PowerEdge Cluster

Company: Dell

(800) 829-0550

www.dell.com

Price: to come

Product: Fibre Box Array

Company: Box Hill Systems

(800) 727-3863

www.boxhill.com

Price: to come

Bottom line:

High-availability, high-throughput solutions such as Dell's PowerEdge cluster and Box Hill's Fibre Box Array will continue to infiltrate mainstream enterprise IT configurations as data marts and business intelligence are joined by media-rich applications, such as video conferencing, on the growing list of strategic, mission-critical applications.

