DAFS versus SAFS

Posted on March 01, 2001


Direct Access File Systems (DAFSs) and SAN-attached file systems (SAFSs) promise a number of improvements, particularly in performance. Here's the case for SAFS.

BY CHRIS STAKUTIS

End users are finding it increasingly difficult to deploy large file stores efficiently alongside high-performance, front-end application servers. As a result, technologies such as Direct Access File Systems (DAFSs) and SAN-attached file systems (SAFSs) are being used to help file servers and network-attached storage (NAS) devices achieve higher performance and scalability, while reducing the impact on host CPUs.

Storage basically comes in two types: file system-based and raw blocks. High-end RAID systems are typically raw-block devices (although the blocks are usually quite large). Users carve up the terabytes of data into hundreds of gigabyte-sized chunks, or LUNs (logical unit numbers), and assign those for private and exclusive use by individual servers. Each server, in turn, places a file system on the LUN, but the server's view of the storage is block-level.
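
To make the distinction concrete, here is a minimal sketch (in Python, with a hypothetical device name) of what block-level access looks like from the server's point of view: the server addresses numbered blocks and must bring its own file system.

```python
# A sketch of block-level access, assuming a hypothetical LUN exposed
# to this server as the raw device /dev/sdb (device name is illustrative).
import os

BLOCK_SIZE = 512  # classic SCSI block size; RAID LUNs often use larger blocks

fd = os.open("/dev/sdb", os.O_RDONLY)  # the server sees numbered blocks, not files
try:
    # Read block number 1024 by seeking to its byte offset.
    data = os.pread(fd, BLOCK_SIZE, 1024 * BLOCK_SIZE)
finally:
    os.close(fd)
```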

The other type of storage is based on a file system: an embedded engine located close to the disks presents them as a "network file system" instead of a set of raw blocks. Hosts see a network file system, not a block device. For simplicity, we'll call this class of devices network-attached storage (NAS) servers, which are becoming increasingly popular.
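
By contrast, a host using a NAS device simply opens files on a mounted network file system; mapping files to disk blocks happens inside the NAS server. A minimal sketch, assuming the export is mounted at the hypothetical path /mnt/nas:

```python
# A sketch of file-level access, assuming a NAS export mounted at the
# hypothetical path /mnt/nas (e.g., via NFS or CIFS).
with open("/mnt/nas/projects/report.dat", "rb") as f:
    data = f.read(4096)  # the NAS server, not this host, maps the file to disk blocks
```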

The network-attached model has a number of benefits, such as ease of administration, and it can mean fewer file systems for an enterprise because each file system can be shared simultaneously by many servers. Disk space is always a tight resource, and a network view of storage is inherently more flexible than hard-mounted LUNs, which don't stretch or shrink easily. In a network-attached model, an administrator could, for example, have a single file system on a terabyte-size NAS device, with the entire terabyte potentially available to every server at any time.


Physically, NAS servers offer a number of other benefits. They are tightly packaged and integrated with a rack of physical storage, redundant power supplies, high-end RAID technology, LAN interfaces, and simplified administration.

Limitations of file system storage

With all of these advantages, it is surprising that the NAS model isn't dominant. The fact is, limitations in the areas of performance, host-processing impact, scalability, and "unsettled writes" prevent NAS devices from being effective in some markets or applications.

  • Performance: The name of the game in storage is performance. Nothing is faster than direct-attached storage, and while 100BaseT and Gigabit Ethernet offer enough total throughput for some applications, they do not for all. Application servers that are I/O-bound have a difficult time exploiting back-end network file systems, and too often raw throughput alone prevents NAS servers from meeting application-server requirements.
  • Host impact: Host impact refers to how much of an application host's CPU is consumed by processing LAN traffic. Moving one megabyte of data to or from a LAN wire is considerably more expensive than moving it over a direct-attached wire (SCSI, Fibre Channel, etc.). Application servers can wind up spending 50% of their CPU cycles on LAN-access activities alone, decreasing the resources available for application processing.
  • Scalability limitations: The scalability limitations of NAS are due primarily to the CPU impact issue. LANs have two endpoints, and it is expensive to process LAN traffic on both the application host and the NAS end. High-end NAS servers are extremely sophisticated, parallel processing systems with many CPUs, network interface cards (NICs), host adapters, and complicated internal buses and switching mechanisms. These are necessary to address the performance and scaling requirements, but they add considerably to the total cost of the system.
  • Unsettled writes: "Unsettled writes" refers to the fact that LAN-based writes are not 100% ensured to be settled out to real storage at the moment they are acknowledged. Instead, once the data is in transit, the application is typically told it is free to continue its job (gaining some performance through parallelism). However, mission-critical applications such as databases must have 100% assurance that their journal records are settled; if not, they cannot guarantee recovery. The sketch after this list shows what a settled write looks like.
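
For illustration, here is a minimal sketch (assuming a hypothetical local journal path) of a settled journal write on direct-attached storage: the write is not considered durable until fsync() returns, a guarantee a LAN-based write cannot give at the moment of acknowledgment.

```python
# A sketch of a settled journal write; the path and function name are hypothetical.
import os

def write_journal_record(record: bytes) -> None:
    fd = os.open("/var/db/journal.log",
                 os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o600)
    try:
        os.write(fd, record)
        # fsync blocks until the data is settled on real storage; only
        # then may a database consider the transaction recoverable.
        os.fsync(fd)
    finally:
        os.close(fd)
```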

DAFS promises

The NAS industry is well aware of these limitations and is working on solutions, with DAFS currently the best hope. The DAFS approach uses a faster, lower-error wire over short distances. It is based on a new, lighter-weight transmission protocol and uses new NICs that transfer data directly to memory, completely offloading CPU processing.

DAFS may be a couple of years away from widespread implementation, and it is not clear how much will have to change to realize its benefits. There is talk of applications having to be re-coded to exploit new APIs, and of changes to the general LAN networking model (security, namespaces, and access rules). And not all vendors support the DAFS initiative, which may cause standards issues.

What are SAFSs?

SAN-attached file systems, which have been available for a few years, exploit all of the existing network protocols and can achieve high performance with zero impact on host CPU cycles via established SAN technologies. SAFSs have been deployed predominantly in niche industries and have only recently been touted as a possible solution for NAS and general-purpose file servers.

The SAFS premise is that LANs are good at many things, but data transfer is not one of them, while SANs are good at data transfer but poor at file coordination and security. So why not combine them?

In a SAFS architecture, one server (or set of servers) performs a different set of functions from the rest; think of it as the NAS file server in the conventional sense. Ordinarily, the other servers would have only a LAN interface to the NAS file server, and all requests and data would have to funnel through that single server. The back-end of the NAS (file) server attaches to the storage elements, typically via Fibre Channel or SCSI.

In a SAFS approach, some of the hosts can also be wired directly to the block storage. Applications on the hosts believe they are talking to the file server because, in general, they still are. The file server performs authentication and lays out the file on the storage, along with all the other NAS-type functions. But when it is time to actually read or write the data, the SAFS host issues those I/Os directly to the storage device.
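
Conceptually, the flow looks like the following sketch. All names here are hypothetical illustrations of the split LAN/SAN path, not any vendor's actual API: metadata and authorization travel over the LAN, while the data itself moves over the SAN.

```python
# A conceptual sketch of the SAFS split path; every name is hypothetical.
import os

def safs_read(metadata_server, path: str, offset: int, length: int) -> bytes:
    # 1. Control traffic over the LAN: the file server authenticates the
    #    request and maps (path, offset, length) to a byte range on the
    #    shared storage device.
    device, device_offset = metadata_server.lookup(path, offset, length)

    # 2. Data traffic over the SAN: the host reads directly from the
    #    storage device through its own HBA, bypassing the file server's
    #    LAN interface entirely.
    fd = os.open(device, os.O_RDONLY)
    try:
        return os.pread(fd, length, device_offset)
    finally:
        os.close(fd)
```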

This split LAN/SAN architecture has been available for several years from such vendors as ADIC, Avid, EMC, IBM/Tivoli, SGI, and Veritas Software. Let's now test the SAFS approach against the four main issues with NAS: performance, host impact, scalability limitations, and unsettled writes.

  • Performance: SAFS performs direct I/O out of a Fibre Channel (or SCSI) host bus adapter (HBA) to the storage pool, providing higher speed than traditional NAS architectures. (With a NAS or file server, data goes out the same type of channel but is throttled by the LAN and network protocols.) SAFS implementations can provide more than 90MBps, far faster than most NAS implementations.
  • Host impact: Fibre Channel and SCSI protocols are lightweight, and the wiring medium is relatively error-free. Typically, there is near-zero host CPU impact. HBA vendors have been doing DMA transfers for years, and every host operating system supports them. By contrast, DMA transfers for LANs are new, or will be when DAFS becomes available.
  • Scalability limitations: Scalability should be very high with a SAFS approach because no data goes through the file server or NAS server. In contrast, data in a DAFS approach still "hits" the NAS server.
  • Unsettled writes: An application "write" in a SAFS approach goes directly out of the HBA, eliminating the unsettled writes problem. As a result, SAFS is suitable for applications such as databases and transaction processing.

So why isn't SAFS pervasive and taking the NAS market by storm? The reasons are related to evolution. SAFS grew up in the media world, where movie-creation and graphic-arts professionals needed to process huge files between peer machines. After a number of years and technology innovations, SAN software vendors began developing methods of file-level sharing on direct-attached storage devices. Slowly, the concept of a hybrid LAN/SAN became accepted. Still, the technology was limited primarily to high-end media and data-warehousing sites.

At the same time, NAS was becoming increasingly popular, but in different market segments-mainly the general enterprise computing space. The two camps were basically unaware of each other. Today, however, most NAS players are aware of how simple it is for them to expose their storage to SAFS hosts, and we may see many new product offerings hitting the streets. Stay tuned.

Chris Stakutis is chief technology officer for the SANergy product line at Tivoli Systems (www.tivoli.com/sanergy) in Austin, TX. He can be contacted at cstakutis@tivoli.com.


For more information on the Direct Access File System, see InfoStor, February 2001, p. 70.

