The future of SAN file systems

Posted on September 01, 2000


File systems designed for storage area networks already exist, but more functionality is due by the end of the year.

By Paul Rutherford

SAN file systems, some shipping today and some still in development, are well on the way to offering the IT community a very different model of information use from the one found in today's enterprise computing environments. Instead of managing local storage attached to a particular server or host, IT organizations will share data transparently among large numbers of users in many different locations. Instead of duplicating files and moving them between servers, IT managers will be able to share single file sets among many users. Bandwidth increases will effectively make all storage appear to be local, and the grouping of storage into easily managed pools will sever the link between a single storage resource and a single CPU.

The storage industry is not all the way there yet, although a few IT organizations are surprisingly close to this vision. This article addresses where we are today, looks at the progress that you can expect over the next year or so, and gives some advice to end users about storage area network (SAN) implementation strategies and the role that SAN file systems will have in those implementations.

The first question to address is why we need SAN file systems. After all, thousands of Fibre Channel SAN installations are up and running, supported only by the standard Unix and Windows file systems that have been around for the past two decades. These users are enjoying many of the benefits associated with SANs, without any special SAN file systems. So why do we need something more?

The answer is directly linked to the progression of benefits that Fibre Channel SANs are delivering to IT organizations increasingly overwhelmed by the volumes of data they have to manage.

The first need that Fibre Channel technology addressed was high-bandwidth connection to discrete storage components. This was accomplished with a direct connection between a single Fibre Channel disk device and a single workstation or server. This direct connection provided high-bandwidth connectivity and high-speed access to data, potentially supporting greater distances between data and hosts. It probably still represents the most common use of Fibre Channel technology today.


Figure 1: In a co-location model, disk systems are partitioned and allocated to several servers as dedicated resources.

One step beyond direct connection is the co-location model, where disk systems are partitioned and allocated to several servers as dedicated resources (see Figure 1). These configurations can use Fibre Channel hubs, but they are typically implemented with switches. Mirrors are commonly used in these environments: duplicate data provides a high measure of protection, and Fibre Channel technology allows mirroring to span longer distances without loss of bandwidth. That is a real advantage, and one that is also relatively widespread today.
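A rough way to picture the co-location model is as a static assignment table: each partition of a disk system is owned by exactly one server, and a mirrored partition may sit in a second, more distant cabinet. The sketch below is purely illustrative; the server names, arrays, and sizes are invented for this example.

# Illustrative sketch of the co-location model: storage is partitioned and each
# partition is statically assigned to a single server. A mirror partition can
# live in a remote cabinet, which Fibre Channel distances make practical.
# All names and sizes are hypothetical.

from dataclasses import dataclass

@dataclass
class Partition:
    array: str        # which disk system the partition lives on
    size_gb: int
    site: str         # "local" or "remote" cabinet

# Static allocation: one owner per partition is the "hard edge" of this model.
allocation = {
    "mail-server": [Partition("array-A", 200, "local")],
    "db-server":   [Partition("array-A", 400, "local"),
                    Partition("array-B", 400, "remote")],   # long-distance mirror
    "web-server":  [Partition("array-B", 100, "local")],
}

def owners_on(array, site):
    """Return the servers that own partitions on a given array at a given site."""
    return [srv for srv, parts in allocation.items()
            if any(p.array == array and p.site == site for p in parts)]

print(owners_on("array-B", "remote"))   # ['db-server']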

The third, and so far most useful, application for Fibre Channel SANs is to get the large data movements associated with backup off the LAN and onto a backbone storage network through a switched fabric. Backup is the first leveraged SAN application, one that provides benefits beyond simple bandwidth. When it is executed in the right way, and supported by software that takes advantage of the SAN environment, it allows IT departments to consolidate the backup data from many local disks onto a single backup device. Most importantly, it dynamically allocates the read/write devices, allowing a few of them to be used to back up data from many different servers. This dynamic resource sharing, along with LAN-free operation, is the key advantage of SAN backup.
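The dynamic allocation described above is essentially a small scheduling problem: a handful of fabric-attached drives get handed out to whichever server needs one, then returned to the pool. The sketch below illustrates only that idea; the class, server, and drive names are invented and do not correspond to any particular backup product.

# Minimal sketch of dynamic drive sharing in SAN backup: a few tape drives on
# the fabric are allocated on demand to many servers, instead of each server
# owning a dedicated drive. Names and the scheduling policy are hypothetical.

from collections import deque

class DrivePool:
    def __init__(self, drives):
        self.free = deque(drives)      # drives visible to every server on the fabric
        self.in_use = {}               # drive -> server currently writing to it

    def acquire(self, server):
        """Hand the next free drive to a server, or return None if all are busy."""
        if not self.free:
            return None
        drive = self.free.popleft()
        self.in_use[drive] = server
        return drive

    def release(self, drive):
        """Return a drive to the pool once the server's backup job finishes."""
        self.in_use.pop(drive, None)
        self.free.append(drive)

pool = DrivePool(["tape0", "tape1"])               # two drives shared by many servers
for server in ["mail", "db", "web", "files"]:      # four servers queued for backup
    drive = pool.acquire(server)
    print(f"{server} backing up over the SAN to {drive}")
    # ...blocks stream disk -> fabric -> tape without ever crossing the LAN...
    pool.release(drive)                            # drive is free for the next server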

Server-less backup

The next stage of SAN backup, server-less backup, is just beginning to become reality. It allows data to be moved from disks in the switched fabric directly to backup devices without using servers to move the data (see Figure 2). Instead, data is moved by active agents in the fabric, which reside in hubs, switches, or in the backup devices themselves. Server-less backup is important because it separates active data movement from server resources.


Figure 2: Server-less backup allows data to be moved from disks in the switched fabric directly to backup devices without using servers to move the data.
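In practice, fabric agents of this kind are typically driven by a small copy request from the server (the SCSI "extended copy," or third-party copy, approach), after which the agent reads the disk blocks and writes them to tape on its own. The sketch below only simulates that control/data split; the names and data structures are invented, not a real SCSI implementation.

# Sketch of the control/data split behind server-less backup. The server issues
# only a small copy request (an "extended copy" style descriptor); the data
# mover embedded in the fabric reads the disk blocks and writes them to tape.
# This simulates the idea only and is not a real SCSI implementation.

from dataclasses import dataclass

@dataclass
class CopyDescriptor:
    source_lun: str
    start_block: int
    block_count: int
    target_device: str

def server_request_backup(extent):
    """Runs on the server: builds metadata only, a few hundred bytes on the wire."""
    lun, start, count = extent
    return CopyDescriptor(source_lun=lun, start_block=start,
                          block_count=count, target_device="tape0")

def data_mover_execute(desc):
    """Runs in the fabric agent (hub, switch, or backup device): moves the data."""
    print(f"agent: reading {desc.block_count} blocks from {desc.source_lun}"
          f" at {desc.start_block}, writing to {desc.target_device}")
    # the actual block I/O happens here, entirely off the server's CPU and bus

for extent in [("lun3", 0, 4096), ("lun3", 4096, 4096)]:
    data_mover_execute(server_request_backup(extent))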

All of these steps bring us a little closer to the vision of virtualized storage, but the true advance is the idea of managing consolidated active data, rather than just backup data, via a SAN. This is just beginning to happen, and the progress is occurring in stages. By combining Fibre Channel switches and disk volume management software, users today are sharing centralized disks, or a pool of centralized disk resources, and making them available to several servers in a virtualized storage environment. This use of SAN technology pools resources, which can then be reassigned dynamically to different servers over a switched fabric. Next-generation software will be able to change the pools automatically based on specific requirements, and will use existing file systems.
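Conceptually, the volume management layer in this model tracks free capacity in the pool and assigns it to servers as volumes, and the next-generation behavior mentioned above amounts to adjusting those assignments automatically when a server runs short. A toy sketch of that bookkeeping, with invented names, sizes, and a deliberately naive growth policy:

# Toy sketch of SAN-style storage pooling: capacity lives in a shared pool and
# is assigned to servers as volumes over the switched fabric. The policy below
# (grow a volume once it passes 80% full) stands in for the "automatic"
# behavior next-generation software is expected to add. All numbers are
# hypothetical.

class StoragePool:
    def __init__(self, total_gb):
        self.free_gb = total_gb
        self.volumes = {}                          # server -> allocated GB

    def assign(self, server, gb):
        if gb > self.free_gb:
            raise RuntimeError("pool exhausted; add disks to the fabric")
        self.free_gb -= gb
        self.volumes[server] = self.volumes.get(server, 0) + gb

    def rebalance(self, usage):
        """Grow any server's volume that is more than 80% full."""
        for server, used_gb in usage.items():
            allocated = self.volumes.get(server, 0)
            if allocated and used_gb / allocated > 0.8:
                self.assign(server, allocated // 2)    # grow the volume by 50%

pool = StoragePool(total_gb=2000)
pool.assign("db-server", 400)
pool.assign("web-server", 200)
pool.rebalance({"db-server": 350, "web-server": 60})   # db-server is 87% full
print(pool.volumes, "free:", pool.free_gb)             # db-server grew to 600 GB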

What else is required? Ease of management and data sharing. Pooled disks that are divided among servers via switch ports and volume management software are always working against the hard edges of those volumes. In other words, individual servers are limited to their assigned volumes, and some level of management is always required to make more resources available when needs change. This version of a SAN is still locked into the older network model, in which a given storage resource is dedicated to a single CPU. There are two important differences from that model. First, the central resources are shared over a high-bandwidth fabric, so distant resources behave as if they were local. Second, the resources can be shifted quickly between hosts as storage needs change. But fundamentally, storage is still attached to a single host, and data must be duplicated, and multiple copies managed, for it to be available to multiple hosts. This approach offers significant advantages, but it is inherently limited.

SAN file systems

To get all the way to the vision of shared resources, we need a file system underlying the storage resource that allows true data sharing down to the file level. Why a new file system? The real reason is that existing file systems were developed with those hard edges firmly embedded. In other words, they behave as if disk volumes are dedicated to servers, and they have no mechanism for dealing with the inevitable issue that comes with shared files: arbitrating access to data.

The new SAN file systems take the next step by allowing multiple hosts to access and share the same files through a switched fabric, while the data appears to be local to all hosts (see Figure 3).


Figure 3: SAN file systems allow multiple hosts to access and share the same files through a switched fabric, while the data appears to be local to all hosts.
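One common way to structure a file system like this is to split metadata from data: hosts ask a metadata service where a file's blocks live and what they are allowed to do, then perform the block I/O themselves, directly over the fabric. The sketch below shows only that split; the component and file names are invented and do not describe any specific product.

# Sketch of the metadata/data split many SAN file systems use: a metadata
# service answers "where are this file's blocks, and may I touch them?", and
# the host then performs the block I/O itself, directly across the Fibre
# Channel fabric. Component names are hypothetical.

class MetadataService:
    def __init__(self):
        # file path -> (list of block extents, set of hosts that have it open)
        self.files = {"render/scene42.dpx": ([(0, 8192), (8192, 8192)], set())}

    def open(self, host, path, mode):
        extents, holders = self.files[path]
        holders.add(host)
        return extents        # a small message over the control path (often the LAN)

class Host:
    def __init__(self, name, mds):
        self.name, self.mds = name, mds

    def read(self, path):
        for start, count in self.mds.open(self.name, path, "r"):
            # direct block I/O over the fabric -- the data never passes through
            # a file server, so it behaves like a local disk
            print(f"{self.name}: fabric read of blocks {start}..{start + count - 1}")

mds = MetadataService()
for h in (Host("unix-ws1", mds), Host("nt-ws2", mds)):
    h.read("render/scene42.dpx")     # both hosts see the same single file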

To a user, if files are simply being viewed, or made available by a host to other clients, everything looks exactly as it did when files weren't being shared. The same file is simply seen by as many hosts as are given permission to access it. The fact that it is one file rather than many duplicates is transparent to the users accessing it. But the added value is significant: instead of managing many files, duplicating them for different servers when demand changes, and then changing every copy when they are updated, the administrator manages and updates one file.

Multiple access to a single file also enables collaborative workflow. Several users, each running a different operating system and different application software, can work on one file at the same time. This is why some of the early adopters of this technology are digital video and special-effects editors. In those specialized applications, the files are immense (one second of digital motion-picture footage can require a gigabyte of space) and several specialists need to perform different tasks on the same file.
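The arithmetic behind that gigabyte figure is straightforward once you assume uncompressed film-resolution frames; the resolution and bit depth used below are illustrative assumptions, not figures from any particular facility.

# Back-of-the-envelope check of the "a gigabyte per second of footage" figure,
# assuming an uncompressed 4K film scan at 16 bits per color channel. The
# specific resolution and bit depth are illustrative assumptions.

width, height = 4096, 3112          # 4K full-aperture film scan
bytes_per_pixel = 3 * 2             # RGB, 16 bits per channel
frames_per_second = 24

frame_bytes = width * height * bytes_per_pixel
second_bytes = frame_bytes * frames_per_second
print(f"one frame:  {frame_bytes / 2**20:.0f} MB")      # ~73 MB
print(f"one second: {second_bytes / 2**30:.1f} GB")     # ~1.7 GB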

Without SAN file systems, files were copied and moved from workstation to workstation to let different editors work on them. With SAN technology and a SAN file system, a single copy on a central disk can be accessed and edited by many users. The high bandwidth of the SAN allows the data on that central disk to be accessed by all users at speeds as fast as a local disk. As a result, network traffic is lower, disk capacity requirements are reduced, work gets completed much faster, and editors can see each other's work in real time.

It is easy to see the extension of this technology into applications where access is more important than collaboration. One site, for example, first uses SAN file sharing to allow multiple Linux machines to perform high-speed parallel processing on large files; in that first stage, collaboration is critical. The same files are then turned into graphical output on high-performance Unix workstations. In that second stage, network traffic and file copying are minimized, but the real advantage is fast access, not multiple users editing content.

Other users are evaluating SAN file system technology to share data among many hosts, each of which in turn serves data out to other clients. One obvious application is Web hosting, where many Web servers can access a single set of files, eliminating the need to manage duplicate copies. Another is a back end to network-attached storage (NAS) filers, where several filers serve files over an Ethernet connection to network clients but share file sets in a pool of common Fibre Channel disks rather than on local disks.

The development effort on SAN file systems has been significant. These file systems need to operate on a par with other local and distributed file systems, so that a file located on a central disk device can be accessed transparently whenever it is needed. Interfaces to the other file systems are also required, so that applications can see the files and use them. Finally, a system for permissions and block-level arbitration of requests is needed, so that files can be shared without the same block of data being accessed at the same time by multiple users.
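That arbitration requirement boils down to a lock manager: before a host touches a range of blocks it must be granted a lock, shared for reads and exclusive for writes, so two hosts can never update the same blocks at once. The sketch below shows only that conflict rule; real SAN file systems layer caching, leases, and crash recovery on top of it.

# Minimal sketch of block-range arbitration in a shared file system: readers
# can overlap, but a writer gets exclusive access to the blocks it touches.
# This shows only the core conflict rule; everything else is omitted.

class BlockLockManager:
    def __init__(self):
        self.locks = []                    # (start, end, host, exclusive)

    def _conflicts(self, start, end, host, exclusive):
        for s, e, h, excl in self.locks:
            overlap = start < e and s < end
            if overlap and h != host and (excl or exclusive):
                return True                # another host holds a conflicting lock
        return False

    def request(self, host, start, end, exclusive):
        """Grant a lock on blocks [start, end), or refuse if another host conflicts."""
        if self._conflicts(start, end, host, exclusive):
            return False                   # caller must wait or retry
        self.locks.append((start, end, host, exclusive))
        return True

mgr = BlockLockManager()
print(mgr.request("ws1", 0, 1024, exclusive=False))   # True: shared read
print(mgr.request("ws2", 0, 1024, exclusive=False))   # True: reads can overlap
print(mgr.request("ws3", 512, 2048, exclusive=True))  # False: blocks 512-1023 are being read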

Critical features are being added gradually, as with any new file system. Fairly recent additions include quality of service controls, a robust system for distributing the file location information among multiple distributed hosts, concurrent maintenance, and a failover system. More work is needed both on security and on migration issues. Those issues are being worked on now, and solutions are expected before the end of this year.

What is also needed is agreement on standards. If vendors develop many different approaches, application support will be slowed and the end-user community will suffer. All file system developers need to work together in groups such as the Storage Networking Industry Association (SNIA) to make sure we don't end up with incompatible approaches. Everyone, especially the IT community, will benefit from a coordinated development effort.

Paul Rutherford is vice president of technology and software at Advanced Digital Information Corp. (ADIC; www.adic.com) in Redmond, WA. He can be reached at paul.rutherford@adic.com.

