NAS Market Overview: Issues and Trends

Posted on February 01, 2004

By Brad O'Neill

The network-attached storage market is on the verge of another wave of innovations that will at least in part solve issues such as NAS management, scalability, and convergence with SANs.

Network-attached storage (NAS) has become an increasingly robust platform for enterprise storage. And the NAS market is in the early stages of several advances in functionality that will bring this technology into every corner of the data center, from easily deployed and managed departmental appliances to enterprise-class filers and file services linked to storage area networks (SANs).

This article covers several topics, including NAS management, databases on NAS, file systems, Microsoft's role in the NAS market, NAS-SAN convergence, and benchmarks.

Managing NAS

As NAS becomes a mainstay technology throughout all levels of the enterprise, management becomes a critical issue. The following are elements of NAS management to focus on:

Advanced file management

Whether you're dealing with more than 100 departmental NAS appliances or fewer than 50 data-center filers, simple yet flexible file- and device-management tools are increasingly important. Based on our conversations with users, even seemingly simple features such as centralized device-level views, auto-mounting, and log-in capabilities for all client systems are highly desirable. At the file level, users are exploring ways to access and manage the creation and control of file shares and to reduce labor (and automate where possible) associated with file migration processes.

One approach to providing better file management is to establish a means of creating a unified logical namespace, essentially an aggregate file-level view abstracted from the physical locations of the files. This allows administrators to achieve significant levels of flexibility in the following areas:

  • Migrating file-level data;
  • Replicating file-level data;
  • Creating virtual mount-points;
  • Extending file shares to new users; and
  • Establishing flexible user groupings.

This type of logical namespace aggregation for a NAS environment can be accomplished through a management software layer integrated over an existing distributed file system (DFS) deployed on each file server. For administrators, the result is a user interface that enables "drag-and-drop" functionality for any file in any directory, regardless of where that data physically resides in the storage environment—in other words, a true "virtualized" and accessible view of the logical data set.

If a user wants to create logical namespace aggregation without a DFS deployed across the entire NAS environment, another option is to manage the metadata associated with the file-level data, preserving the native file system of each server. In this approach, rather than managing a single file-system image of the data, dedicated software manages the relationship of file metadata in terms of accessibility and location. This removes any issues of file system and client compatibility, but does introduce non-trivial engineering issues regarding user permissions management across NFS and CIFS environments.
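
To make the metadata approach concrete, the following Python sketch shows a namespace manager that maps logical paths to physical locations. The class, method, server, and path names are purely illustrative assumptions, not any vendor's API; a real implementation would also have to reconcile NFS and CIFS permissions, as noted above.

```python
# Minimal sketch of metadata-based namespace aggregation (illustrative only).
# The manager keeps a mapping from logical paths to physical (server, path)
# locations, so files can be migrated without changing the paths clients see.

class NamespaceManager:
    def __init__(self):
        # logical path -> (server, physical path on that server)
        self.metadata = {}

    def publish(self, logical_path, server, physical_path):
        """Expose a file under a location-independent logical path."""
        self.metadata[logical_path] = (server, physical_path)

    def resolve(self, logical_path):
        """Clients resolve logical paths; the physical location stays hidden."""
        return self.metadata[logical_path]

    def migrate(self, logical_path, new_server, new_physical_path):
        """Move the data, then update only the metadata entry.

        Clients keep using the same logical path ("drag-and-drop").
        """
        # ... copy the file data from the old to the new location here ...
        self.metadata[logical_path] = (new_server, new_physical_path)


ns = NamespaceManager()
ns.publish("/eng/specs/design.doc", "filer01", "/vol/vol2/specs/design.doc")
ns.migrate("/eng/specs/design.doc", "filer07", "/vol/vol0/archive/design.doc")
print(ns.resolve("/eng/specs/design.doc"))  # ('filer07', '/vol/vol0/archive/design.doc')
```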

You can expect significant developments in this segment of NAS management this year, although it is too early to predict which approach companies will favor. The need for unified file and device management will become critical over the coming years, especially as more and more enterprises find themselves running increasingly large, mixed CIFS-NFS environments.

In addition to managing files, the related issue of managing NAS capacity is increasingly critical.

Users are starting to ask for more-complex volume-management functionality to support data migration, snapshots, and replication. The leading NAS vendors already have solutions for creating and managing both file- and volume-level copies of data, including mirroring data on one NAS device and then replicating it to another, using snapshot copies for testing, making remote copies, and balancing capacity. These features will be necessary for new entrants in the enterprise NAS market.
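
As a rough illustration of that pattern, here is a Python sketch of a snapshot-then-replicate workflow. Every function, filer, and volume name below is a hypothetical stand-in for vendor-specific tooling, not a real product interface.

```python
# Hypothetical sketch of the snapshot-then-replicate pattern the leading
# NAS vendors support; none of these functions map to a real product API.

import time

def create_snapshot(filer, volume):
    """Create a point-in-time, read-only copy of a volume."""
    snap_name = f"{volume}.snap.{int(time.time())}"
    print(f"[{filer}] snapshot {volume} -> {snap_name}")
    return snap_name

def replicate(src_filer, snap, dst_filer, dst_volume):
    """Ship the snapshot to another device, e.g. for testing or remote copy."""
    print(f"copy {src_filer}:{snap} -> {dst_filer}:{dst_volume}")

# Mirror a production volume, then push the copy to a second filer:
snap = create_snapshot("filer01", "db_vol")
replicate("filer01", snap, "filer02", "db_vol_mirror")
```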

Products that provide these capabilities only at the file level will not be sufficient for enterprise NAS customers. End users will increasingly demand truly heterogeneous, interoperable, volume-level control for their NAS environments. Simply moving data between NAS appliances from the same vendor for scaling purposes will not suffice. Like SAN managers today, NAS users have a significant interest in moving large amounts of data from expensive platforms to inexpensive platforms on a routine basis, and they want the freedom to leverage multiple vendors' technologies.

Flexible file services

Over the long run, capacity controls will become increasingly automated and policy-based, and when integrated with file-aggregation schema, they will lead to more-flexible file services analogous to hierarchical storage management (HSM) functionality. Administrators will be able to migrate volumes, trees, and files to less-expensive, lower-performance appliances based on a variety of policies, such as frequency of use, file-system size, age, associated applications, or dynamic user groupings. The benefits of more-flexible file services on the back-end of NAS environments include nearline archiving, simplified backup, granular restores, reduced management costs, and increased capacity utilization.
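
The following Python sketch illustrates one such policy: files not accessed within a set window are migrated to a cheaper tier. The mount points and 90-day threshold are assumptions chosen for the example; a production HSM engine would also leave stubs or namespace entries behind so the files remain reachable.

```python
# Illustrative sketch (not any vendor's engine) of policy-based file
# migration: files idle longer than a threshold move to a cheaper tier.

import os
import shutil
import time

AGE_LIMIT_DAYS = 90           # example policy: migrate if idle > 90 days
PRIMARY = "/mnt/filer_fast"   # hypothetical mount points
NEARLINE = "/mnt/filer_cheap"

def migrate_cold_files(src_root, dst_root, age_limit_days):
    cutoff = time.time() - age_limit_days * 86400
    for dirpath, _, filenames in os.walk(src_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:   # last access time
                rel = os.path.relpath(path, src_root)
                dst = os.path.join(dst_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(path, dst)            # migrate to nearline tier

migrate_cold_files(PRIMARY, NEARLINE, AGE_LIMIT_DAYS)
```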

NAS and databases

Not long ago, the concept of running databases on NAS filers was dismissed as a joke. However, relational databases will increasingly find a welcome home in NAS environments. Although constraints still exist, the outlook is bright. Some of the positives and negatives of running databases on NAS include the following:

Positive: Ease of use, lower costs

Compared with the complexity and costs of establishing even a simple Fibre Channel SAN for a two-node database environment, the ease of deploying a NAS appliance for the same task is compelling. Users might sacrifice some of the software tools and functionality common to SAN environments, but in smaller database deployments—say, less than 2TB—this functionality might not be missed. Pressure on IT budgets has been a boon for NAS in relatively "lightweight" departmental relational database deployments.

Positive: NAS + SAN

A number of vendors are converging file- and block-level functionality in hybrid "SAN-NAS appliances," and an increasing number of database administrators will take advantage of these solutions. This trend may put an end to the database-on-NAS debate. Especially when you consider the push behind iSCSI adoption and the early support that leading NAS vendors (such as Network Appliance) are giving to iSCSI, this seems to be an increasingly strong bet—and good news for NAS users.

Positive: Next-gen interconnects

In a 10Gbps Ethernet world, many of the performance pains associated with running databases on NAS will be eliminated. For those users with an immediate need for increased NAS performance for databases, filers that take advantage of the Direct Access File System (DAFS) are available from Network Appliance. DAFS was originally designed to leverage the memory-to-memory functionality of InfiniBand, but has more recently been implemented over the Virtual Interface (VI) protocol on Gigabit Ethernet. The data-sharing capabilities of DAFS make it a high-performance platform for running databases on NAS. In fact, DAFS-based NAS systems have logged benchmark results equivalent or superior to Fibre Channel raw disk access in certain OLTP database workloads.

Negative: Lack of scalability

In order for databases on NAS to be taken seriously in data-center environments, NAS vendors will need to present proof of scalability at higher node counts. Although small, two-node database configurations do not pose scalability issues, four- and eight-node clusters will be necessary in data-center deployments. Fortunately, the leading NAS vendors are working on solutions to these challenges.

Negative: Management software

Another challenge for running databases on NAS is achieving the same level of management flexibility that is available with SANs. For example, the range of cluster management tools available to SAN users enables them to manage not just databases, but all applications with a unified platform. This is the bar against which NAS will be judged. Today, database-on-NAS limits users to cluster management solutions that accompany the filer and the clustering solutions specific to the database vendor. In addition to overall management frameworks, the immaturity of software technology in the NAS market makes it difficult to resolve key issues that are readily resolved today by database administrators with SANs, such as moving hot tables and balancing bandwidth utilization. This functionality must be developed if NAS is to drive deeper into the enterprise database market.

NAS and file systems

No area of NAS technology creates as many questions as those stemming from file-system architecture, which relates directly to scalability. What makes the most sense: single, distributed, or cluster file systems?

Single file systems

NAS with a single file system per device is the standard architecture of a "filer" (see Figure 1). The file system resides in the filer, and the file-layout and file-size parameters are set by the administrator against some set capacities bound by the NAS filer's physical disk capacity. This architecture works well when the NAS filer is operating within its originally specified parameters, but when administrators need to expand the filer to accommodate a growing environment, the problems with a single file system become evident.


Figure 1: NAS is typically implemented with a single file system per appliance, creating "islands" of storage capacity.

Typically, a single file system that expands to the limits of a filer's physical capacity requires a time-consuming and complex "forklift upgrade" to a larger filer. To accomplish this, the file system must be taken offline and re-partitioned on a new filer with adequate capacity, the user data must be repopulated on the new device, and the device must be rebooted. Only then can the new NAS device be brought online and the client shares remounted.

The easiest solution to this problem is simply to deploy another NAS device, which itself eventually becomes a "data island" when the next NAS device is added to the line.

When the addition of capacity is multiplied by a large number of NAS devices that may be doubling in capacity every year, it is easy to see the limitations of a single file system in NAS environments. In this scenario, administrators must manage each filer, each with its own file system and each requiring independent manual attention for both file-system and capacity management issues.

Distributed file systems

The primary goal of a distributed file system (DFS) is to provide a single file-system image across a large number of devices. DFSs are typically used in high-performance computing (HPC) environments in the sciences and academia. In recent years, the DFS approach has been applied to NAS with the goal of solving the file-system scalability issue. In the case of NAS, the immediate advantage of a single file-system image is the ability to scale the environment beyond a single device, eliminating the problems associated with a single file system per device.

Using Ethernet as a networking protocol between nodes, a DFS allows a single file system to span across all nodes in the DFS cluster, effectively creating a unified logical namespace for all files. The result is an environment where file shares are available from any server node for any client node. This eliminates the physical restrictions associated with the "NAS island" problem.


Figure 2: A distributed file system (DFS) usually includes a two-tiered software architecture with a metadata layer and a storage layer.

Architecturally, a DFS typically has a "two-tiered" software architecture with a "metadata" layer and a "storage" layer that both sit behind the NAS heads on a filer (see Figure 2). The metadata layer is responsible for parsing every file request that comes into the NAS head and determining where the requested file physically resides, using a routing table local to that filer. If the data is local, the filer's own storage node serves it directly. If the data is not local, the routing table forwards the request into the DFS cluster, where the appropriate storage node is contacted and the data is served to the user.
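
That request path can be summarized in a few lines of Python. The node and routing-table structures below are invented for illustration and deliberately ignore networking, caching, locking, and failure handling.

```python
# Sketch of the two-tier DFS request path: a shared routing table
# (metadata layer) locates files across per-node storage layers.

class DFSNode:
    def __init__(self, name, local_files, routing_table):
        self.name = name
        self.local_files = local_files      # storage layer: path -> data
        self.routing_table = routing_table  # metadata layer: path -> node

    def read(self, path):
        """Metadata layer parses the request and locates the file."""
        owner = self.routing_table[path]
        if owner is self:
            # Data is local: serve it straight from this filer.
            return self.local_files[path]
        # Data is remote: forward into the cluster to the owning node.
        return owner.read(path)


table = {}
node_a = DFSNode("A", {"/share/a.txt": b"alpha"}, table)
node_b = DFSNode("B", {"/share/b.txt": b"beta"}, table)
table["/share/a.txt"] = node_a
table["/share/b.txt"] = node_b

# Any client can mount any node and still reach every file in the namespace:
print(node_a.read("/share/b.txt"))  # b'beta', served via node B
```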

DFS-based NAS is a very complex architecture that has been tackled by many NAS start-ups in recent years (both successfully and unsuccessfully). In theory, the scalability of a DFS-based NAS solution is infinite. In reality, workload and network constraints create limits on the functionality of DFS-based NAS solutions, most of which remain untested today.

For example, in a 1,000-node environment, a user's throughput may appear more than adequate, but how will overall DFS-based performance of the entire environment measure up? Will the DFS become stressed and fail to produce linear scalability with the addition of resources? These are valid questions that remain largely unanswered today. However, DFS-based approaches will eventually become an accepted means of delivering NAS scalability in a variety of application environments.

Clustered file systems

Most discussions do not distinguish between a distributed file system and a cluster file system (CFS). However, DFS and CFS are different architectures, and they lead to different kinds of NAS solutions.

A true CFS, unlike a DFS, does not rely on a multi-tiered architecture, but instead replaces the server file system with a new native file system that has the ability to span multiple servers and provide a unified file-system view across all nodes. The CFS coordinates data requests between server nodes and a back-end storage networking interface via a distributed lock manager (DLM). Depending on the CFS, the DLM may or may not be distributed on every node of the cluster. Additionally, the CFS utilizes server cache management across all of the nodes in the cluster to determine if data requests are resident in local or remote storage on the network.
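
The coordination role of the DLM can be sketched in Python as follows. Threads stand in for cluster nodes here, which is only an analogy: a real DLM coordinates lock ownership across the network, and may itself be distributed across nodes, as noted above.

```python
# Toy sketch of a distributed lock manager's role in a CFS: every node
# must acquire the lock for a shared resource before touching it.

import threading

class LockManager:
    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()

    def lock(self, resource):
        """Return the single lock object governing a shared resource."""
        with self._guard:
            return self._locks.setdefault(resource, threading.Lock())


dlm = LockManager()
shared_storage = {}

def write_block(node, block_id, data):
    # All nodes go through the DLM, so two nodes can never
    # update the same block concurrently.
    with dlm.lock(("block", block_id)):
        shared_storage[block_id] = (node, data)

threads = [threading.Thread(target=write_block, args=(f"node{i}", 42, f"data{i}"))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_storage[42])  # last serialized writer wins
```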

CFS and DFS approaches are used primarily in academic and scientific environments, but a number of software vendors are making strides toward transitioning CFS to mainstream environments.

The potentially high performance of a CFS in NAS environments will make it an appealing technology, particularly if vendors can also deliver higher-level cluster management functionality with the CFS. The primary roadblock to CFS adoption in NAS environments is the complexity of integrating the technology with vendors' existing NAS product lines.

Microsoft and NAS

Microsoft's traction in establishing a NAS beachhead over the past 18 months is nothing short of spectacular. Microsoft has achieved more than a 40% share of all NAS units shipped. Microsoft's Windows Storage Server 2003 has quickly become a mainstay of departmental filers and is taking over the lower end of the NAS market. As would be expected, Storage Server 2003 is designed to ensure deeper penetration of Microsoft's CIFS-based file serving, but Microsoft has its eye on a bigger prize and reported a 50% increase in NFS performance in last year's release.

The features of Storage Server 2003 include Volume Shadow Copy Service (VSS) to coordinate backups between applications and hardware; Multi-path I/O (MPIO), which enables larger and more-flexible deployments; Virtual Disk Service (VDS) to automate LUN management; and iSCSI initiator support.

Many large NAS sites are beginning to give Windows Storage Server 2003 a hard look for duty beyond departmental file serving. In many cases, the economics of deploying a Windows-based NAS appliance are so compelling that users with significant NAS requirements are willing to forgo some of the features, performance, and reliability of other solutions in favor of cost savings. And Microsoft is adding features to Storage Server 2003 that it hopes will make it a "no-brainer" choice in any price/performance equation.

Looking forward over the next 18 months, with volume management, disk management, and SAN tools already on its plate, it seems that Microsoft and its channel partners only need to add better file management capabilities to begin moving up the NAS food chain into the midrange market. At that point, Microsoft will undoubtedly begin exploring means of building truly scalable NAS appliances, perhaps leveraging scalable file-system technology.

For end users, the rise of Microsoft in the NAS market means more choices and increasing features at drastically lower price points. In response, vendors of proprietary NAS filers will continue to add value through higher-performance solutions, sophisticated NAS management software, and specialized appliances with differentiating features (e.g., regulatory compliance, archiving, remote file caching, etc.).

NAS-SAN convergence

One of the more hyped topics of the past two years has been the convergence of NAS and SAN. Are we there yet? Conceptually, the answer may be "yes," but in deployed reality the answer is still "no." While solutions exist for co-managing block and file data services, most announced products from major vendors lack SAN interoperability support. In other words, vendors have said to end users, "You can run a NAS gateway in front of any SAN as long as it's ours." Another factor restricting NAS-SAN convergence is the fact that most data-center managers still separate NAS and SAN management, each requiring different disciplines and resources.

Nonetheless, in higher-end SAN environments, many end users do have plans for converged file and block services. In order for NAS-SAN convergence to become more widespread, vendors will have to demonstrate benefits in management, scalability, and ROI.

Eventually, a plethora of solutions will be available at all levels of the market. Expect to see a range of high-end and midrange NAS gateways and associated software for file service management entering the market over the next year or two. These solutions will be scalable and easily managed and will have enough intelligence and fabric support to interoperate with most SAN environments. This will bring a significant improvement over existing NAS gateways that are optimized for performance on a particular SAN vendor's arrays.

IP SANs + NAS

A key enabler of file-block convergence will be iSCSI SAN adoption, which will bring high-performance, block-level storage to companies that may not already have Fibre Channel SANs but that are likely users of NAS. Converged iSCSI block and file solutions will become common over the next couple of years.

NAS in the switch

There has been some excitement recently regarding NAS blades deployed on SAN switches, much of it driven by start-ups. However, many data-center managers wonder what advantages they would gain from this approach that they do not get through traditional NAS gateways. In any case, NAS blades on fabric switches will provide some benefits. Envision a scenario where base NAS functionality comes on a fabric switch, with software for aggregated file and capacity management integrated into higher-level storage management tools.

NAS benchmarking

In the NAS market, the three most commonly cited benchmark tests are SPEC SFS, Iometer, and NetBench.

SPEC SFS (Standard Performance Evaluation Corporation Server File System) is a de facto standard benchmarking suite designed for NFS servers. The test suite runs NFS servers through a variety of workloads that simulate a range of real-world applications.

Iometer measures disk I/O performance for storage subsystems. While a useful measure of disk I/O performance, Iometer is not designed to measure real-world enterprise workloads for NAS devices.
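
As a concrete illustration of that gap, the following Python fragment runs an Iometer-style random-read test. It measures raw I/O rates but captures none of the metadata operations, mixed file sizes, or NFS/CIFS protocol behavior of a real NAS workload; the file path and parameters are arbitrary assumptions for the example.

```python
# Iometer-style microbenchmark sketch: raw random-read rate on one file.
# (OS caching will inflate these numbers; real tools bypass the cache.)

import os
import random
import time

PATH = "testfile.bin"   # hypothetical test file
BLOCK = 4096            # 4KB random reads
OPS = 10000

with open(PATH, "wb") as f:          # create a 100MB test file
    f.write(os.urandom(100 * 1024 * 1024))

size = os.path.getsize(PATH)
start = time.time()
with open(PATH, "rb") as f:
    for _ in range(OPS):
        f.seek(random.randrange(0, size - BLOCK))
        f.read(BLOCK)
elapsed = time.time() - start
print(f"{OPS / elapsed:.0f} random 4KB reads/sec")
```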

NetBench is a measurement tool for 32-bit CIFS file server performance that goes far beyond Iometer in terms of simulating realistic workloads for NAS. Originally an analysis tool for departmental file servers and now in its seventh release, NetBench receives less exposure than SPEC SFS, in part because it is time-consuming to run (requiring a minimum of 60 clients to simulate true enterprise workloads). However, with the traction of Microsoft and CIFS in enterprise-level NAS deployments, NetBench will increasingly become a standard performance measurement tool, alongside SPEC SFS.

How important are these benchmarks and what do they really mean about a NAS product's performance? Benchmarks can be used as general guidelines for evaluating performance, but the results may mean little outside of the context of your real-world workloads.

Brad O'Neill is a senior analyst with The Taneja Group. This article was excerpted from a larger report. To read the full report, which includes vendor profiles, visit www.tanejagroup.com.

