Is disk-based backup all that it's cracked up to be?

Posted on July 01, 2003


Q: A few years ago, we tried to improve our backup performance by staging to disk, but our performance actually got worse. Has disk-to-disk backup technology improved over the last few years? Should we look at it again?

Does disk-to-disk (D2D) backup deliver as advertised? That's a question a lot of you have been asking me lately. The short answer is that, while it does have its benefits, D2D is not always as simple or as beneficial as some claim it to be.


Jacob Farmer
Cambridge Computer

The story goes that disk is faster than tape, but that disk has traditionally been more expensive than tape. Now that the cost of disk capacity has approached that of tape, you can accelerate your backups by backing up to disk and then perhaps forwarding the data to tape, right?

Well, that all sounds good, but it's not quite that simple. In fact, well-designed tape systems still out-perform garden-variety D2D systems, often significantly. (Note the qualifiers: few backup systems today are truly well-designed; they are a rare bird. And while there are a few cutting-edge D2D systems that can out-perform and out-maneuver the best tape systems, most D2D systems are extremely expensive, very proprietary, or come from relatively unheard-of start-ups.)

The most common way to implement disk staging is for a network backup server to write data locally to a disk array and then copy the data from disk to tape in a batch process. Alternatively, the backup clients could write data to a shared drive volume on the network (either to a file server or to a network-attached storage [NAS] device) and then the backup server would migrate data from the network share to tape.

In both cases, the process of sending data over the network is the real bottleneck. In a well-optimized network backup system, you might get 30MBps to 50MBps of aggregate throughput. In comparison, a single LTO-2 drive with a modest rate of compression can handle about 45MBps. Now, imagine that you have multiple tape drives and/or highly compressible data. A multi-drive tape system could be several times faster than a disk-to-disk-staging system.

Tape drives are, in fact, pretty fast, especially when you have multiple drives. LTO-2 and SAIT tape drives can stream data at a rate of about 30MBps (native), or roughly 100GB per hour. If the data compresses well (e.g., 2:1) and you have five drives streaming, you can back up a terabyte in about an hour. Factor in a thoughtful backup schedule and it's imaginable that a relatively low-end tape library could handle a multi-terabyte environment in a four-hour backup window.
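As a sanity check, the arithmetic above works out as follows. This is a quick sketch using the column's own figures; the rates are nominal, and real-world throughput depends on keeping the drives streaming:

```python
# Back-of-envelope backup-window math using the column's figures:
# ~30 MB/s native per drive, 2:1 compression, five drives streaming.

NATIVE_MBPS = 30        # per-drive native streaming rate
COMPRESSION = 2.0       # assumed 2:1 compressible data
DRIVES = 5

effective_mbps = NATIVE_MBPS * COMPRESSION * DRIVES  # aggregate MB/s
terabyte_mb = 1_000_000                              # 1 TB expressed in MB
hours = terabyte_mb / effective_mbps / 3600

print(f"Aggregate rate: {effective_mbps:.0f} MB/s")
print(f"Time to back up 1 TB: {hours:.1f} hours")
```

At five drives and 2:1 compression the aggregate is 300MBps, which moves a terabyte in just under an hour.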

The keyword here is imaginable. The challenge is designing a network infrastructure and job schedule that will keep the drives streaming. Most tape drives do a nasty thing called "shoe-shining," whereby a drive cyclically starts, stops, rewinds, and repositions because data isn't being fed to it fast enough to keep it streaming. One workaround has been to SAN-attach as many backup clients as possible, but this approach has its own issues. For example, not all hosts can send data to the tape drive fast enough to avoid the shoe-shining phenomenon, so you end up with a slowpoke host hogging a high-performance tape drive while other backup clients wait the night away for a free tape drive.
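The shoe-shining penalty can be illustrated with a toy model. The buffer size and reposition delay below are assumptions for illustration, not vendor specifications; the point is that once the feed rate drops below the streaming rate, the repositioning overhead makes the effective rate worse than the feed rate itself:

```python
# Toy model of the shoe-shining penalty (illustrative numbers only).
# Assumptions: the drive streams at 30 MB/s, writes in 64 MB buffer-sized
# bursts, and each stop/rewind/reposition cycle wastes ~3 seconds.

STREAM_MBPS = 30.0     # drive streaming rate
REPOSITION_SEC = 3.0   # assumed dead time per start/stop cycle
BUFFER_MB = 64.0       # assumed buffer drained per burst

def effective_rate(feed_mbps: float) -> float:
    """Effective write rate when the drive is fed at feed_mbps."""
    if feed_mbps >= STREAM_MBPS:
        return STREAM_MBPS            # drive streams continuously
    # Otherwise the drive writes a buffer's worth, stops, repositions,
    # and waits for the buffer to refill at the (slow) feed rate.
    write_time = BUFFER_MB / STREAM_MBPS
    refill_time = BUFFER_MB / feed_mbps
    cycle = max(write_time, refill_time) + REPOSITION_SEC
    return BUFFER_MB / cycle

for feed in (5, 15, 30):
    print(f"fed at {feed} MB/s -> effective {effective_rate(feed):.1f} MB/s")
```

A disk target has no such minimum streaming rate, which is exactly the compensation described in the next paragraph.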

One of the benefits of disk staging is that it can compensate for this shoe-shining effect. Disk systems do not care whether the data rate varies or not. They accept data at any speed up to their maximum throughput. But, before you get all excited, bear in mind that there are many other factors that affect backup system performance, including the size of the files or objects being backed up, the nature of your file systems, other activity on the backup clients, your network, the backup server, and your back-end storage channels.

So, staging to disk certainly simplifies the process of designing a backup system, but it's not a panacea. A simpler solution is to select tape drives that do not shoe-shine!

Another possibility is to do disk staging with "virtual tape." A virtual tape system is a disk-based system that emulates a tape device at the command level—that is, you plug a virtual tape system into a storage channel (e.g., SCSI, Fibre Channel) just like you would plug in a tape device, but because your backup software believes it is talking to a tape device, it does not have to have specific support for disk-enabled backup.

Virtual tape systems can compensate for poor performance in SAN-based backup systems. On a SAN, a tape drive services only one data stream at a time (vs. parallel data streams in LAN backup), so a single slow backup client can hog a tape drive for many hours. A virtual tape system sidesteps this problem by presenting enough virtual tape drives to give every client a drive of its own.
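To see how costly a slow client can be, consider a hypothetical 200GB host that can feed only 2MBps (both figures assumed for illustration):

```python
# A slow SAN client monopolizing a physical tape drive.
# Assumed figures: a 200 GB host feeding data at only 2 MB/s.

slow_client_gb = 200
slow_rate_mbps = 2
hold_hours = slow_client_gb * 1000 / slow_rate_mbps / 3600

print(f"One slow client holds the drive for {hold_hours:.0f} hours")
```

With one physical drive, every other client queues behind this host for more than a day; with a virtual drive per client, the slow host ties up only its own virtual drive.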

A few warnings about virtual tape, however: Virtual tape systems can have licensing implications with your backup software vendor. For instance, most vendors charge a fee for each tape drive in use; if you conjure up 100 virtual tape drives, you will have to open your wallet wide. Also, note that virtual tape systems offer few, if any, benefits over simple disk staging if you are backing up over the LAN. Finally, virtual tape systems, though commonplace in the mainframe world, are new to the open-systems market, so many of the nice things said about them stem from mainframe applications.

In short, disk is just another ingredient in the complex world of backup systems. Disk is great when your backup system is really unhealthy, but if your backup system is that unhealthy, odds are that the disk system won't be the secret to scalability. If your backup system is healthy, D2D may or may not offer any benefits.

The best bet is to get a thorough understanding of how your backup system really works. Do that through analysis, education, and/or consultation. Once you know how your data moves, you can apply D2D technology to make the process work better.

Jacob Farmer is the CTO of Cambridge Computer. He can be reached at jacobf@cambridgecomputer.com.


