By Josh Coates
Most IT managers are aware of the benefits of network-attached storage (NAS), including relatively low cost, heterogeneous file sharing, and simple installation and maintenance. The concept is simple: By separating the storage server from the application server over a network, storage can be managed from a central location and shared by many applications. Although NAS is widely implemented, scaling NAS servers remains a problem due to limitations in NAS protocols and file-system technology.
The Network File System (NFS) was the first network protocol used to implement NAS. NFS was developed in 1984, and a year later, NFSv2 became the prevalent method of sharing networked storage across Unix platforms. NFS was standardized in 1989.
Server Message Block (SMB) was the PC world's answer to NFS. The most widely used implementation of SMB is the Common Internet File System (CIFS).
All popular operating systems come with NFS and/or CIFS client and server software. No hardware is required beyond what is normally found in a networked environment, which means that practically any system in an enterprise can serve files to any other system.
Simplicity of NAS
The simplicity of NAS comes primarily from the file-based, client-server model. A NAS system exports a persistent file system governed by a single server, becoming a convenient resource for storage administrators. For NAS to continue to succeed, these fundamentals must be preserved.
With the proliferation of NAS servers, storage administrators have attempted to use NAS for environments and applications that push the limits of conventional NAS technology. The same NAS architecture that was once used for departmental file serving is being used to service CPU farms made up of hundreds of systems, some with multi-gigabyte files. As a result, file-system structures are now made up of hundreds of millions of files.
The main problem facing NAS is lack of scalability. The traditional NAS architecture consists of one server, attached to a string of disks, on a network. This architecture is good for relatively simple applications in somewhat small departments.
However, scalability is hampered by the complexities that arise when servers are added to a NAS environment. NFS and CIFS do not give administrators the ability to share file systems between NAS servers. As a result, each addition of storage capacity means another disparate file system that must be created and maintained for its own server, and the management burden grows out of proportion to the capacity gained.
Clustering is a potential solution to the management issues. Could clustering principles be applied to NAS servers? The short answer is "yes," but it will be no mean feat.
The challenge lies in software. NAS server software is made up of several components: the protocols (e.g., NFS and CIFS), the file system (e.g., NTFS, FFS, and ext2), and the disk subsystem (e.g., volume managers and software RAID).
Most of these software systems are "stateful," which makes them extremely complicated to "parallelize" (see sidebar).
For example, if a storage administrator attempted to cluster NAS servers behind a single virtual IP address, many separate protocol servers, file systems, and block managers would co-exist. A client would connect to this "virtualized NAS server," open a file, read from or write to it, and close it. However, when the client came back for another transaction, it would likely find that there was no record of its previous visit, and it would not be able to access any files it had previously written. Those files would probably be on another file system, on another block device, somewhere else in the cluster. The difficulty of parallelizing file-system software is the last barrier to pooling NAS at a local level.
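The failure mode described above can be sketched in a few lines of Python. This is a toy model, not any real NAS implementation: the class and server names are invented, and the "virtual IP" is simulated by round-robin routing between two servers that each keep a private, unshared file system.

```python
# Toy model of two independent NAS servers behind one virtual IP.
# Each server has its own private file system, so a client whose
# requests are load-balanced across them loses access to files it
# wrote through the other server. All names here are illustrative.

import itertools

class NasServer:
    def __init__(self, name):
        self.name = name
        self.files = {}          # private, unshared file system

    def write(self, path, data):
        self.files[path] = data

    def read(self, path):
        if path not in self.files:
            raise FileNotFoundError(f"{self.name}: no record of {path}")
        return self.files[path]

# A naive "virtualized NAS server": round-robin over stateful back ends.
servers = [NasServer("nas-a"), NasServer("nas-b")]
route = itertools.cycle(servers)

next(route).write("/home/report.txt", "q3 numbers")   # lands on nas-a
try:
    next(route).read("/home/report.txt")              # routed to nas-b
except FileNotFoundError as e:
    print(e)    # nas-b has no record of the client's earlier write
```

The second request fails precisely because the file system and its state live on one server rather than being shared across the cluster.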
It is important to note that the problem of NAS growth does not end at the firewall. Unable to share file systems across geographically distributed facilities, storage administrators find that they need to deploy redundant NAS appliances in each corporate campus, driving up costs exponentially and disrupting workflow to the point that file-system updates are often done at night via FTP.
The barrier to wide-area NAS is latency. Sending a packet round trip from California to New York takes 50 milliseconds or more, approximately 100 times longer than it takes on a LAN. Because opening an empty file in a simple console editor requires dozens of file-system calls to complete, it is easy to see how latency becomes prohibitive to scaling NAS across the WAN.
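The arithmetic behind that claim is easy to check. The sketch below uses the figures from the text (a 50-millisecond coast-to-coast round trip, roughly 100 times a LAN round trip) and an assumed count of 36 file-system calls standing in for "dozens."

```python
# Back-of-the-envelope cost of a chatty file-system workload over
# the WAN versus a LAN. Round-trip times come from the text; the
# call count is an illustrative assumption.

WAN_RTT_MS = 50.0               # California <-> New York round trip
LAN_RTT_MS = WAN_RTT_MS / 100   # roughly 100x faster on a LAN
FS_CALLS = 36                   # "dozens" of calls to open one file

wan_total = FS_CALLS * WAN_RTT_MS   # 1800 ms -- nearly two seconds
lan_total = FS_CALLS * LAN_RTT_MS   # 18 ms -- imperceptible

print(f"LAN: {lan_total:.0f} ms, WAN: {wan_total:.0f} ms")
```

A delay that is invisible on a LAN becomes a nearly two-second stall on the WAN for something as trivial as opening an empty file.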
Work done in academic research labs over the past decade is yielding the answer to these and other problems. Parallel systems technology is allowing storage administrators to pool NAS on a local level and connect NAS file systems on a global level.
This research has yielded the first parallel file systems, which allow network clients to access files from clustered NAS servers as if they were one large NAS appliance. This increases capacity utilization, boosts throughput, and provides high availability.
In addition, the problem of scaling NAS across the WAN is being addressed with aggressive file-system caching and flexibility in locking semantics. File-system caching is similar to Web caching, except that it tracks the notoriously complicated state associated with file systems.
Keeping file-system caches coherent is also extremely difficult. These issues are complex, but they are further complicated if the design intends to retain the convenience and simplicity of NAS systems (i.e., no change to client software, standard file-system semantics, etc.).
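One common coherence strategy is write-invalidate: a write at any site invalidates cached copies everywhere else before completing. The sketch below shows only that single step, under the simplifying assumption of two sites sharing one authoritative copy; real protocols must also track locks, open handles, and failures.

```python
# Minimal write-invalidate coherence sketch between two WAN sites.
# "Site" and its methods are illustrative names, not a real protocol.

class Site:
    def __init__(self, name, origin):
        self.name = name
        self.origin = origin       # authoritative copy (path -> data)
        self.cache = {}
        self.peers = []            # other sites in the WAN

    def read(self, path):
        if path not in self.cache:         # miss: fetch over the WAN
            self.cache[path] = self.origin[path]
        return self.cache[path]

    def write(self, path, data):
        for peer in self.peers:            # invalidate remote caches
            peer.cache.pop(path, None)
        self.origin[path] = data
        self.cache[path] = data

origin = {"/doc": "v1"}
west, east = Site("west", origin), Site("east", origin)
west.peers, east.peers = [east], [west]

east.read("/doc")            # east caches "v1"
west.write("/doc", "v2")     # invalidates east's stale copy
print(east.read("/doc"))     # miss again, refetches "v2"
```

Even this toy version hints at the difficulty: every write pays a round trip to every peer, which is exactly the WAN latency cost the caching was meant to hide.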
As parallelism and caching technologies are fine-tuned for file-system applications, storage administrators will soon be able to keep file systems coherent across the WAN in real-time without the need for client-side software. This will result in global file systems.
NAS has become an invaluable resource for storage administrators. At the workgroup level, NAS appliances with NFS/CIFS support will continue to be popular. As demand on an enterprise's file storage infrastructure increases, administrators can make use of parallel systems technology to ease the growing pains of storage, while still using familiar NFS/CIFS protocols. And parallel file systems will enable next-generation NAS architectures to maintain convenience and ease of use.
In the future, ongoing advances in file-system technology will enable global file systems that can streamline workflow and increase enterprise connectivity. The next generation of NAS will take advantage of these advances and increase the value of networked storage to any organization.
Josh Coates is chief technology officer at Scale Eight (www.s8.com) in San Francisco.
'Stateful' versus 'stateless' systems
A stateful system is one that maintains "session data" (also known as state) that must be preserved for the system to continue functioning properly. An interruption in the session usually results in a fatal error (i.e., the session must start over from scratch).
A stateless system is one that does not maintain any "session data," and so it can be disrupted or reset and still continue to function properly.
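The two definitions can be contrasted directly in code. This is a deliberately simplified sketch; the class names and the in-memory "session" dictionary are invented for illustration, not drawn from any real protocol implementation.

```python
# A stateful server loses its sessions on restart and fails the
# client; a stateless server carries no session data, so a restart
# changes nothing. All names here are illustrative.

class StatefulServer:
    def __init__(self):
        self.sessions = {}                 # session id -> open path

    def open(self, sid, path):
        self.sessions[sid] = path

    def read(self, sid):
        if sid not in self.sessions:       # state lost => fatal error
            raise RuntimeError("unknown session; start over")
        return f"data from {self.sessions[sid]}"

    def restart(self):
        self.sessions.clear()              # all session state is gone

class StatelessServer:
    def read(self, path):                  # request is self-contained
        return f"data from {path}"

    def restart(self):
        pass                               # nothing to lose

stateful = StatefulServer()
stateful.open("s1", "/etc/motd")
stateful.restart()
try:
    stateful.read("s1")
except RuntimeError as e:
    print(e)                               # session must start over

stateless = StatelessServer()
stateless.restart()
print(stateless.read("/etc/motd"))         # unaffected by the restart
```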
CIFS is stateful: when a client connects to a CIFS server, the server records various authentication information about the client, creating a "virtual circuit" between the two. If this circuit is interrupted, it must be completely rebuilt, which invalidates the old state and replaces it with new session state. In the context of a parallel CIFS server, this is a problem, because any client session should be able to be serviced by any server, yet the servers don't share this authentication information (state) with each other. If a client attempts to communicate with a server that doesn't have its state, the server will either reject the session or create an entirely new session, confusing a client that is now stuck with multiple concurrent sessions.
NFS is stateless, which means that there is no special session information that the client and server maintain about each other. Instead of maintaining a "virtual circuit" between the client and server (as CIFS does), the client sends a series of requests, each completely independent of the last. If a client sends requests to one server and then switches to another server, the system will continue to function.
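That independence can be sketched as follows. In the spirit of an NFS read, each request carries all the context the server needs (a file handle, an offset, and a byte count), so either of two servers holding replicas of the same file system can answer it. The function name, handle format, and data are all illustrative.

```python
# Each NFS-style request names the file handle, offset, and count
# explicitly, so any server with the same file system can answer it;
# nothing depends on which server handled the previous request.

def nfs_read(server_files, fhandle, offset, count):
    # Stateless: all context travels with the request itself.
    return server_files[fhandle][offset:offset + count]

shared = {"fh-42": b"hello, world"}        # replicas of one file system
server_a, server_b = dict(shared), dict(shared)

first = nfs_read(server_a, "fh-42", 0, 5)      # b"hello" from server A
second = nfs_read(server_b, "fh-42", 7, 5)     # b"world" from server B
print(first, second)
```

Note the contrast with the clustering problem in the article: statelessness alone helps only if the servers actually share the same file system, which is exactly what parallel file systems provide.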