Tape Backup Strategies for NAS

Posted on December 01, 2000


Library vendors look to solve NAS backup issues with NDMP-centered architectures.

By Heidi Biggar


With the total network-attached storage (NAS) market expected to exceed 125,000 units this year and more than one million units by 2004, the NAS industry may be headed for a backup crisis of multi-terabyte proportions.

Though touted for their powerful file-serving capabilities, NAS appliances are criticized for their lack of integrated backup support. "Unfortunately, the reality is that backup and recovery for NAS appliances is a fairly difficult and serious challenge," explains Marc Farley in Building Storage Networks (McGraw-Hill, 2000).

While NAS devices have been optimized for client/network integration, they have not been designed with backup in mind. "As a result, it is very difficult, if not impossible, to correctly back up and restore a complete NAS system," according to Farley.

To work around the backup/restore issue and provide users with at least a degree of protection, NAS vendors have used various techniques. Some have developed proprietary backup programs, others have built proprietary backup agents, and still others recommend mapping techniques to users.

While these alternatives let users back up and restore NAS-resident data, each has significant drawbacks, and none adequately addresses users' total backup requirements. For example, while proprietary software can enable full, high-performance backup and restore, it typically does not integrate well, if at all, with leading backup applications (e.g., ARCserve, NetBackup, and Backup Exec).

Similarly, developing customized backup agents can be complicated, and the agents may not interface well with NAS devices or operating systems. And, among other things, mapping techniques, in which NAS devices are backed up as mapped drives over the local network, are limited to the speed of local I/O channels and require users to re-create important security information after each restore, making them a poor choice for large-scale NAS environments.
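
That security caveat is easy to see in practice. Below is a minimal sketch of the mapped-drive approach in Python, assuming a hypothetical NFS or CIFS mount point on a backup host: every byte crosses the local network, and anything the archiver cannot represent, such as the filer's native security descriptors, is silently dropped and must be re-created after a restore.

```python
# Minimal sketch of the "mapped drive" backup approach. Paths are
# hypothetical; the NAS share is assumed to be already mounted.
import tarfile

NAS_MOUNT = "/mnt/netapp_share"   # hypothetical NFS/CIFS mount point
ARCHIVE = "/backup/nas_full.tar"  # hypothetical archive on the backup host

with tarfile.open(ARCHIVE, "w") as tar:
    # tar preserves POSIX ownership and permission bits, but not the
    # filer's native security descriptors; those must be re-created
    # by hand after a restore, as noted above.
    tar.add(NAS_MOUNT, arcname="nas_share")
```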

Enter NDMP

With the objective of solving the NAS backup/restore problem, in 1996, Network Appliance teamed up with Intelliguard (which was subsequently acquired by Legato) to co-develop the Network Data Management Protocol (NDMP). An open-standard protocol, NDMP was designed to provide efficient, high-performance backup to local tape without involving network traffic.

"The reason this all came about [referring to NDMP] is that NAS devices started to get too big to back up over Ethernet," says Bob Covey, vice president of marketing at Qualstar, a tape library manufacturer. "The answer was to locally attach tape devices right to the filer."

Rather than cluttering the NAS operating system with additional code and making the NAS device responsible for hosting the backup application, NDMP facilitates the backup process by acting as a "pass-through" or "third-party" device.

"NDMP facilitates the communication between the NDMP client [i.e., the backup software] and the NDMP server [i.e., the NAS device] over IP or some other protocol," explains Jay Desai, product marketing manager at Network Appliance. "Whether it's a Gigabit Ethernet network or Fibre Channel network, it's all transparent to the NAS device."

Control information and metadata are passed to and from the NDMP-compliant backup application via the NDMP interface. The backup application, meanwhile, handles all traditional backup services (e.g., file history, scheduling, packaging, backup/restore control, etc.), leaving the NAS device to do what it does best: serve files.
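
For the curious, here is a minimal Python sketch of what the control side of that conversation looks like on the wire. It assumes a filer listening on the standard NDMP port (TCP 10000) and follows the message framing and common header layout of the published NDMP specification; the filer address is hypothetical, and the details should be verified against the NDMP version your device supports.

```python
import socket
import struct

FILER = "192.168.1.50"  # hypothetical filer address

def recv_exact(sock, n):
    """Read exactly n bytes from the socket or raise."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("NDMP server closed the connection")
        buf += chunk
    return buf

sock = socket.create_connection((FILER, 10000), timeout=10)

# Messages are framed with an ONC-RPC-style record mark: the high bit
# flags the last fragment, the low 31 bits give the fragment length.
mark, = struct.unpack(">I", recv_exact(sock, 4))
payload = recv_exact(sock, mark & 0x7FFFFFFF)

# Every NDMP message starts with a common XDR header of six 32-bit
# fields; on connect, the server sends an unsolicited notification.
sequence, timestamp, msg_type, msg_code, reply_seq, error = struct.unpack(
    ">IIIIII", payload[:24])
print(f"NDMP message 0x{msg_code:x} (type {msg_type}), error {error}")
sock.close()
```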

While NDMP-based tape backup benefits users with improved backup performance, better interoperability, and greater freedom of choice for backup software, it has certain requirements and limitations. For example, both the backup application and NAS device must be NDMP-compliant, for the most part restricting current implementations to NetApp filers running Legato or Veritas backup software.

While it is possible to back up NAS filers with leading non-NDMP-compliant backup applications such as Computer Associates' ARCserve, it's not as fast, and the restore is not as secure as it is with NDMP-compliant software, according to Network Appliance's Desai. "The disadvantage is that you would have to go through CIFS [or NFS] protocols, which are not the most efficient way to do backup."

Texas Instruments (TI), for example, is currently backing up about 20TB of its NAS-resident data using the NFS protocol.

"This was the fastest method we had without directly connecting tape drives to each file server, but it is somewhat slow and hard to manage," says Steve Meadows, info specialist/analyst at Texas Instruments. "That's why we decided to look into other options."

TI is evaluating an Ethernet-based ATL NDMP architecture, which includes Veritas' NetBackup software, ATL P3000 tape libraries, and NetApp F760 filers (see below).

Support aside, metadata protection is also an issue. "...NDMP can provide fast, system-independent backup functionality, but it cannot provide complete backup functionality," writes Farley. "A NAS vendor that wants to provide security and metadata protection will need to provide its own proprietary method in files that store the metadata."

Now in its third version, NDMP has been adopted by Network Appliance, as well as other NAS, library, software, and component vendors, and is the focus of a Storage Networking Industry Association (SNIA) sub-working group. Besides NAS backup, the protocol also plays a role in other applications, such as server-less backup in storage area networks (SANs). (For more information, see "The role of data movement devices," on p. 24 and "Server-less backup to take center stage," InfoStor, March 2000, pp. 24-28.)

Vendor venue

As evidence of users' growing need for more efficient NAS backup, tape storage vendors have begun to team up to bring various architectures to market. Both ATL and Exabyte, for example, have announced NDMP-based backup models, and others are expected to follow suit shortly.

Both vendors' configurations involve backing up NetApp filers over Fibre Channel or Gigabit Ethernet to tape, demonstrating local and filer-to-filer NDMP backup (see "NDMP: Four scenarios," p. 22).

The primary architectural difference between the two implementations is connectivity, says Gene Nagle, ATL product manager. The local configuration uses Fibre Channel to back up NetApp filers to Fibre-Channel-attached libraries, while the Ethernet implementation backs up NetApp filers over the network to an Ethernet-attached tape library.

In the Ethernet example, connectivity is provided by an embedded network interface card (ATL's MC-100 Prism Management Card), which acts as an NDMP server, handling NDMP processing (see "ATL supports NDMP/NAS/GbE," InfoStor, September 2000, p. 10). Protocol conversion is provided by Vixel and Cisco switches/routers, respectively.

Similarly, Exabyte is working with LAND-5 and Vixel to bring dedicated Fibre Channel- and Ethernet-based backup appliances to market.

The Fibre Channel appliance, the SaveStor 800, includes LAND-5's StoragePod Plus NAS device, Exabyte's X80 library, and Vixel's 7100 switch. The Ethernet-based SaveStor 400 features a LAND-5 thin server, proprietary management software (iceNAS), and an Exabyte 430 Mammoth-2 library.

Is one configuration better than the other? That depends on your network, what technology you have on the servers, and how big an operation you have, says Curt Mulder, market development manager at Exabyte.

"Fibre Channel offers bigger pipes and involves less overhead because you don't have to go through the TCP/IP stack," he explains.

But to its credit, Ethernet offers lower cost, has less of a learning curve than Fibre Channel, doesn't have interoperability issues, and allows for dynamic sharing of tape drives (see "SAN fabrics: Ethernet, Fibre Channel, InfiniBand," InfoStor, September 2000, pp. 48-50).

Addressing both camps, ATL says it will offer a hybrid library (half Fibre Channel, half Ethernet) in the next quarter. Also, as part of its Open Storage Networking Initiative (OSN), ATL plans to broaden its support beyond partners Cisco, Foundry, Legato, Veritas, and Vixel to provide better interoperability.

TAOS tales

Recognizing the importance of network-attached tape backup and the shifting interconnect landscape, Spectra Logic is gearing up for a February launch of its Tape-IP Consortium and Tape Appliance Operating System (TAOS) architecture.

"The ultimate goal of the consortium is to provide network-attached storage solutions-based on our TAOS architecture-that work in iSCSI, SoIP, OSN, NDMP, DAFS, VI, Jumbo Frames, and InfiniBand environments," says Britt Terry, director of marketing at Spectra Logic.

Initial consortium members represent networking, tape, systems, and software vendors.


How does TAOS differ from existing network-attached backup architectures? "It really comes down to the difference between an appliance and a server-based architecture," says Terry. "What we want to do is distill [our appliance] down to the point where we only get out of it the things we need to do IP-based tape transfers." Stripping the appliance down to its bare bones, he says, will provide optimum performance and scalability.

TAOS currently operates on all Spectra Logic controllers and in Fibre Channel and Ethernet environments via NDMP. The company expects to offer TAOS as a standalone device enclosure and will license the architecture to drive and library vendors.


NDMP: Four scenarios

By Heidi Biggar

There are four different ways to back up NAS devices to tape: local, filer to filer, filer to server, and server to filer. Each of the models targets specific user needs and applications. In all the scenarios, the backup server and the NAS filer are NDMP-compliant. The backup application continues to control the backup/restore process and handles file and scheduling information.


1 Local
In this scenario, a tape device is physically attached to the NAS device, or filer. Data from the filer is backed up over a local SCSI or Fibre Channel link. While some users may opt to attach a library to each filer, others may choose to "split" a library between two filers. Take a four-drive library, for example: two of the drives could be wired to Filer A and two to Filer B.

2 Filer to filer
In this scenario, as in the local configuration, a tape device (in this case, a library) is attached to a filer over a local SCSI or Fibre Channel connection. The second filer, however, gets backed up to the tape library that is attached to Filer A. So, Filer A gets backed up locally, while Filer B gets backed up over the network.

This model is often found in environments with a lot of filers, which can take advantage of a library's multiple drives. Consider a 10-drive library. A user could back up five of its large filers over SCSI or Fibre Channel to two locally attached tape drives, while backing up smaller filers over the network.

Users get the benefits of tape sharing, as well as the benefits of local performance for big filers and network-based backup for smaller filers.
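
As a sketch of how a backup application might sequence this filer-to-filer scenario, the Python below models the control flow only: the session class and its methods are placeholder stand-ins, not a real NDMP client library, and the device and volume names are illustrative. The order of operations, however, mirrors the NDMP tape, mover, and data interfaces.

```python
class NdmpSession:
    """Placeholder for one NDMP control connection to a filer."""
    def __init__(self, host):
        self.host = host
    def tape_open(self, device):
        print(f"{self.host}: open tape device {device}")
    def mover_listen(self):
        print(f"{self.host}: mover listening for a data connection")
        return (self.host, 10001)  # hypothetical data port
    def start_backup(self, mover_addr, filesystem):
        print(f"{self.host}: dump {filesystem} -> {mover_addr}")

filer_a = NdmpSession("filer-a")  # owns the tape library
filer_b = NdmpSession("filer-b")  # holds the data to back up

filer_a.tape_open("nrst0a")              # 1. ready a drive on Filer A
addr = filer_a.mover_listen()            # 2. Filer A's mover waits
filer_b.start_backup(addr, "/vol/vol0")  # 3. Filer B dumps over the LAN

# The backup application carries only control traffic and file history;
# the backup data itself flows directly from Filer B to Filer A's tape.
```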

3 Filer to server
In this scenario, all tape drives are attached to the backup server, and all filers get backed up over the network to the tape drives attached to the server. This configuration, which enables users to back up hundreds of NDMP clients or NDMP servers to a central set of tape libraries, targets data-center-scale operations.

4 Server to filer
In this scenario, tape drives are attached to the NAS filer, and the data on the backup server gets backed up to the tape drives on the filers. So, users with NetApp filers or other NDMP-compliant devices could hook up a tape library to the filer and then back up one of their clients to the attached tape library. However, since most users need to back up multiple clients, this configuration is the least-popular option.


Making tape libraries more intelligent

By Britt Terry

The evolution of tape libraries has gone from simply providing a robotics mechanism to providing a complex set of value-added features that meet changing IT requirements. Some of the key IT trends include:

  • Consolidation: Companies are moving from many servers and pools of data to a more centralized data pool with connected views into that data. The same trend is driving the backup world: larger devices are responsible for more and more systems.
  • Centralization: System administrators have been stretched too thin by the "drop-in-a-new-server" approach of the past five years. They would rather have all storage (primary and secondary) in one location, where it can be added and maintained centrally rather than across many sites, as in distributed models.
  • Lower maintenance and overhead: While truly "lights-out" environments are the goal, few IT organizations ever achieve this level of service; media rotations and other day-to-day interactions are still required. However, more-powerful backup applications have significantly reduced the interactions needed to accomplish backups in complex environments.
  • 100% uptime: Primary disk and system vendors face an ever-increasing need to drive downtime to an absolute minimum. Higher availability is often a requirement; most companies are online 24/7.

Tape library vendors are meeting the requirements of those trends, and built-in intelligence is the key.

100% uptime

The requirement for continuous availability is different for disk and tape systems and is a reflection of the differences between primary and secondary storage. Most companies spend a significant amount of money ensuring that their primary storage, and access to that storage, is up at all times (e.g., via mirroring, snapshots, and other replication techniques).

Tape library manufacturers are adding more built-in connectivity and maintenance uptime features, such as hot-swappable components, redundant power supplies, and redundant interface control paths.

Logical partitioning

The ability to logically partition a tape library now allows for shared library resources in heterogeneous hardware/software environments. With the option to create logical libraries, users can custom-configure multiple logical libraries from one physical library, and each logical library is recognized by the host it is attached to as a distinctly owned resource. The consolidation of backup resources into a single library lowers the purchase price considerably, compared to buying multiple small libraries with the same number of drives and slots.

With the ability to logically partition tape libraries, operators can collect tapes that need to be moved off-site from one centralized location, instead of separate locations. New media can be added at the same time, resulting in one instead of multiple trips.

With ever-increasing data growth, logical partitioning also allows for future configurability. If one logical library needs more slots or drives, users have the option of taking them from a less-utilized partition or, if the unit is not fully populated, adding more slots or drives, thereby expanding the physical library and distributing the additional capacity and throughput in the best manner.

Finally, the consolidation of multiple tape devices into a single library offers reduced management costs and lower overhead. One larger library is less expensive and less time-consuming to maintain and operate than multiple small libraries.
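
The bookkeeping behind logical partitioning is straightforward, as the Python sketch below illustrates: a physical library's drives and slots are assigned to named logical libraries, and capacity can be reassigned from a less-utilized partition to a busier one without adding hardware. All names and sizes are illustrative, not drawn from any particular product.

```python
# Hypothetical physical library: 10 drives, 200 slots in total.
physical = {"drives": 10, "slots": 200}

# Two logical libraries carved out of the one physical unit; each
# host sees only its own partition as a distinctly owned resource.
partitions = {
    "unix_backup": {"drives": 6, "slots": 120},
    "nt_backup":   {"drives": 4, "slots": 80},
}

def move_slots(src, dst, count):
    """Reassign slots from a less-utilized partition to a busier one."""
    assert partitions[src]["slots"] >= count, "not enough slots to move"
    partitions[src]["slots"] -= count
    partitions[dst]["slots"] += count

# The NT partition fills up: borrow 40 slots from the UNIX partition
# instead of buying a second library.
move_slots("unix_backup", "nt_backup", 40)
print(partitions)
```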

Storage density

With the trend toward centralization and consolidation, many corporate data centers and collocation facilities now require space-efficient tape libraries, because floor space is critical in most environments. The vast growth of the Internet, along with the emergence of collocation centers, has increased demand for condensed form factors. High storage densities let administrators consolidate large amounts of data in fewer tape libraries, reducing overhead costs.

Britt Terry is director of marketing at Spectra Logic (www.spectralogic.com) in Boulder, CO.


The role of data movement devices

By Cassia Glass

Fibre Channel-to-SCSI bridges are key to protecting investments in SCSI tape libraries and disk subsystems. However, as storage technologies and storage area networks (SANs) mature, the need for bridges will lessen. Many primary storage devices already offer Fibre Channel interfaces and drives, and secondary storage devices (e.g., routers) are beginning to follow suit.

As storage vendors look to solve more complex issues such as data movement, routers will play an increasingly important role. These devices promise to provide a foundation for advanced data movement (in particular, server-less backup), improved restore functions, and virtual volume protection.

The so-called SAN "killer app," server-less backup enables direct data movement from disk to tape. Bridges, routers, and other SAN interconnects that support the extended copy function become the engines for moving data from disk to tape, leaving application servers out of the backup loop. However, simply having devices that support extended copy does not necessarily solve all backup and restore issues.
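
Conceptually, what the backup application hands to such a data mover is a SCSI EXTENDED COPY command (opcode 0x83) whose parameter list names the source and destination devices and the block ranges to move. The Python below is a simplified model of that request, not the real byte-level encoding; the field and device names are stand-ins for the actual target and segment descriptors.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    src_lba: int   # starting logical block on the disk target
    blocks: int    # number of blocks to stream to tape

# Hypothetical device handles standing in for target descriptors.
targets = ("disk_lun_0", "tape_drive_0")
segments = [Segment(0, 65536), Segment(131072, 65536)]

# The bridge or router that receives the command moves the blocks
# itself; the application server never touches the data path.
for seg in segments:
    print(f"move {seg.blocks} blocks from LBA {seg.src_lba}: "
          f"{targets[0]} -> {targets[1]}")
```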

Tying data movement functions to devices such as switches introduces potential bottlenecks and raises issues of fabric scalability. So, what about Fibre Channel-to-SCSI bridges, which connect SCSI tape or disk products to the SAN fabric? Bridges are not necessarily optimized to manage and buffer the data now streaming from disk to tape. A high density of Fibre Channel and/or SCSI ports is needed for improved performance, and intelligence is needed to manage data flow.

Also, bridges often do not provide upgrade paths to new technologies (e.g., Fibre Channel-based libraries). High-performance, high-density, modular devices are needed not only to keep data moving, but also to enable seamless upgrades to Fibre Channel devices and to 2Gbps Fibre Channel fabrics and beyond.

Hardware aside, there is still the need for intelligence to handle the translation of data blocks to file systems and operating systems for individual file or directory restore in heterogeneous SAN environments. Server-less backup does not translate into "server-less restore," revealing the need for advanced, open SAN platforms to allow data movement from tape to disk (and disk to disk) for quick online restores.

Optimized, intelligent data routers offer a point within the SAN to manage these complexities. In addition, as SAN virtualization technology drives storage toward a utility model, where virtual volumes are allocated by capacity on demand (How much do you want today?) and attribute (Should this be fast storage?), rather than physical location within a subsystem, backup and restore functionality will need to keep pace. SAN routers, coupled with SAN management devices, will offer a future platform for seamlessly restoring files in a virtual volume in the event of data loss.

For users building SANs today, choosing to integrate a backup solution with routers enables high-performance SAN-based backup today, while providing a foundation for advanced data movement solutions tomorrow.

As SAN application technology matures, and as new Fibre Channel products enter the market, having flexible, intelligent, performance-oriented data movement devices will not only offer investment protection, but also provide an easier migration path to virtualized storage.

Cassia Glass is a product manager, enterprise backup solutions, at Compaq Computer (www.compaq.com).

