Techniques for scaling NAS environments

Posted on October 01, 2002


By Stephen Terlizzi

Network-attached storage (NAS) is simple to use and cost-effective, and it allows multiple users to share the same data as if it were located on a local workstation. On the downside, NAS has availability and scalability issues, which can lead users into a nightmarish buying pattern: buying NAS devices one after another to meet performance and capacity requirements. This not only turns into a management headache, but also creates additional points of failure and potential bottlenecks.

In an attempt to address these scalability and availability issues, many vendors have introduced clustering techniques to improve performance and provide high availability. A cluster combines multiple devices and coordinates their efforts to achieve availability and performance goals.

For example, two devices could be clustered for redundancy (i.e., fail-over). In this type of configuration, if one device fails, the other takes over the work. To maintain 100% performance in fail-over mode, the second device in the cluster must serve as a hot standby, meaning it does not service any file requests. However, if users are willing to sacrifice some performance in fail-over mode, they can experience a performance boost in normal operation.

If this is acceptable, the second device in the cluster may be configured to provide additional file servicing during normal operations. To export the same files, the devices in the cluster must communicate tightly. For instance, to prevent two users from writing to the same file simultaneously, the normal locking mechanisms available in the CIFS and NFS protocols have to be enforced.
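To make that coordination concrete, here is a minimal sketch (plain Python; the `ClusterLockManager` class is hypothetical, and real CIFS/NFS lock managers are far more involved) of how two active nodes might consult a shared lock table before granting a write:

```python
import threading

class ClusterLockManager:
    """Toy cluster-wide lock table. In a real active-active cluster,
    this state would have to be replicated between nodes over the
    cluster interconnect on every operation -- which is exactly the
    communication overhead discussed below."""

    def __init__(self):
        self._locks = {}               # path -> owning node
        self._mutex = threading.Lock()

    def acquire_write(self, path: str, node: str) -> bool:
        with self._mutex:
            owner = self._locks.get(path)
            if owner is None or owner == node:
                self._locks[path] = node
                return True
            return False               # another node holds the write lock

    def release(self, path: str, node: str) -> None:
        with self._mutex:
            if self._locks.get(path) == node:
                del self._locks[path]

locks = ClusterLockManager()
print(locks.acquire_write("/home/alice/report.doc", "node-A"))  # True
print(locks.acquire_write("/home/alice/report.doc", "node-B"))  # False: A holds it
```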


[Figure: Point-to-multi-point file access has inherent performance and availability advantages.]

These locking mechanisms require constant communication among all devices in the cluster while they actively service file requests. The communication overhead within the cluster can lead to a 20% or more loss in the combined performance of the two devices.

This communication overhead grows worse as more devices are clustered. With a two-node cluster, there is only one communication path, between nodes A and B. Inserting a third node, node C, introduces not one new path but two (A-C and B-C). Similarly, a fourth node adds three paths (six total), and a fifth adds four (10 total); in general, a fully meshed n-node cluster requires n(n-1)/2 paths. Eventually, the communication overhead becomes overwhelming.
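The quadratic growth is easy to verify (plain Python, purely illustrative):

```python
# Pairwise communication paths in a fully meshed n-node cluster: n*(n-1)/2.
def cluster_paths(n: int) -> int:
    return n * (n - 1) // 2

for n in range(2, 9):
    print(f"{n} nodes -> {cluster_paths(n)} paths")
# 2 nodes -> 1 path, 3 -> 3, 4 -> 6, 5 -> 10, 6 -> 15, 7 -> 21, 8 -> 28
```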

An alternative approach is to create a cluster in which each device is assigned a portion of the file system to export. The administrator can change this assignment as needed through simple commands. Out of necessity, every device must be able to access all of the physical hard-disk assemblies in order to see its assigned portion of the data. This often requires a complex and expensive SAN-based storage system with multiple NAS head-ends, as opposed to a simple NAS appliance.
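Conceptually, the assignment is just a map from namespace subtrees to devices that the administrator edits; a minimal sketch (plain Python; the export paths and device names are illustrative):

```python
# Hypothetical partition map: each exported subtree is owned by one device.
partition_map = {
    "/export/engineering": "nas-1",
    "/export/finance":     "nas-2",
    "/export/marketing":   "nas-3",
}

def route(path: str) -> str:
    """Return the device responsible for a given path."""
    for prefix, device in partition_map.items():
        if path.startswith(prefix):
            return device
    raise LookupError(f"no device exports {path}")

print(route("/export/finance/q3.xls"))   # nas-2
# The administrator reassigns a subtree with a simple map change:
partition_map["/export/finance"] = "nas-3"
print(route("/export/finance/q3.xls"))   # nas-3
```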

One advantage of this type of configuration is n+1 fail-over, or the ability of a single hot spare to provide fail-over protection for multiple active devices. With n+1 fail-over, the hot spare assumes the identity of the failing device, should any device fail. Multiple hot spares can be provisioned as appropriate to maintain desired performance levels in degraded mode.
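The fail-over step itself can be sketched in a few lines, assuming the exports of each active device are tracked centrally (all names hypothetical); in practice the spare would also assume the failed device's network identity:

```python
# n+1 fail-over sketch: a hot spare takes over the partition assignments
# (and, in a real system, the name and IP) of whichever device fails.
active = {"nas-1": ["/export/engineering"],
          "nas-2": ["/export/finance"],
          "nas-3": ["/export/marketing"]}
spares = ["nas-spare-1"]

def fail_over(failed: str) -> str:
    exports = active.pop(failed)
    spare = spares.pop(0)        # raises IndexError if no spare remains
    active[spare] = exports      # spare assumes the failed device's role
    return spare

replacement = fail_over("nas-2")
print(f"{replacement} now serves {active[replacement]}")
```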

The downside to this approach is that there is no performance improvement in the delivery of any single file system. The administrator must assign each portion of the file system to an individual device, and if hot spots develop on a given portion, the device serving it becomes a bottleneck. Regardless of how idle the other devices in the fail-over cluster are, performance for any given file system is limited to that of the single device serving it.

Introducing the "file switch"

An alternative approach is to scale performance and capacity linearly by adding NAS devices as needed, while providing "any-to-any" fail-over capability to maintain maximum data availability.

The key here is to break the point-to-point, client-server approach to file services by introducing a new network element: one that sits between the client and server and switches file transactions at wire speed. Leveraging high-speed Gigabit Ethernet networks, such a switch can load-balance a single file across multiple NAS devices, and because the devices are aggregated linearly, each device added contributes its full performance.

Each NAS device delivers a portion of the file, which is aggregated by the switch and delivered to the client. By changing the two-tiered clustered architecture into a new three-tiered switched architecture, the point-to-point file services approach is converted into a point-to-multi-point approach (see figure).
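The striping-and-reassembly idea can be sketched as follows (plain Python; the fixed stripe size and round-robin layout are assumptions for illustration, not any vendor's actual algorithm):

```python
STRIPE_SIZE = 64 * 1024  # assumed stripe unit; a real switch would tune this

def stripe(data: bytes, devices: list) -> dict:
    """Deal fixed-size stripe units round-robin across the NAS devices
    behind the switch."""
    layout = {d: [] for d in devices}
    for i in range(0, len(data), STRIPE_SIZE):
        device = devices[(i // STRIPE_SIZE) % len(devices)]
        layout[device].append(data[i:i + STRIPE_SIZE])
    return layout

def reassemble(layout: dict, devices: list) -> bytes:
    """Gather stripe units back in round-robin order, as the switch
    would when presenting one contiguous file to the client."""
    queues = {d: list(parts) for d, parts in layout.items()}
    chunks, i = [], 0
    while queues[devices[i % len(devices)]]:
        chunks.append(queues[devices[i % len(devices)]].pop(0))
        i += 1
    return b"".join(chunks)

devices = ["nas-1", "nas-2", "nas-3"]
data = bytes(200 * 1024)                    # a 200KB file of zeros
assert reassemble(stripe(data, devices), devices) == data
print("file striped across", len(devices), "devices and reassembled intact")
```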

Inherent in the point-to-multi-point approach is the ability to build in both scalability and high availability. From the vantage point of the client, the switched architecture looks like a single virtual NAS device. As new NAS devices are added behind the switch, the client simply sees additional performance and storage capacity. However, unlike simple file-virtualization techniques, a switched architecture allows aggregation of the performance and capacity of the individual NAS devices, removing the scalability bottleneck of any single NAS device. Since the individual NAS devices are independent of one another, there is no communication overhead (see above).

Also, since the switches can use the inherent locking mechanisms within the CIFS and NFS protocols, multiple switches can be added in a mesh-like configuration to grow the aggregated bandwidth of the switches while maintaining access to a single file system. This mesh-like configuration allows for any-to-any fail-over, which means any switch can take over for any other switch and deliver the same file system.

In short, by implementing a switched architecture for NAS environments, IT organizations can build scalable, highly available NAS infrastructures. The introduction of switching into the architecture allows true NAS aggregation by creating a single virtual NAS device out of standard NAS devices. By removing the scalability and availability issues of a single NAS device through aggregation, the promise of true data leverage can be achieved—access to data anytime and anywhere.

Stephen Terlizzi is vice president of marketing at Z-force (www.z-force.com) in Santa Clara, CA.

