Is serverless backup your best bet?

Posted on November 01, 2002


Serverless backup has made great strides over the past few years, but in many cases traditional backup techniques work better.

By Winston W. Hait

Only a few months ago, serverless backup was viewed as a silver bullet of networked storage. It was expected to be the first in a series of developments that would help fulfill the promise of storage area networks (SANs): to simplify and centralize storage. It would do so by enabling large-volume data transfers (for backup) to take place with little or no impact on the main network or on the servers attached to it.

Has serverless backup lived up to these expectations? The answer is, yes and no. While a number of hardware and software vendors now offer serverless backup options, it is debatable whether these products actually address real end-user problems.

A quick review

Serverless backup works by moving data directly from disk to tape or from disk to disk in a SAN environment. The backup data travels from the source (disk) to the target (disk or tape) through a specially equipped data mover or intelligent agent running the SCSI Extended Copy command (or "Xcopy"). The data mover could be a switch, a storage router, or a dedicated Unix server.


Using Xcopy, the intelligent agent produces an image of the disk by making a snapshot of pointers, which indicate the location of the data. The intelligent agent then communicates with the backup application, via Network Data Management Protocol (NDMP), and data is backed up.

A slightly different approach would be to use the intelligent agent to make a block list of the data to be moved, manage the movement of the data from disk to tape, and then ensure the commands have been properly executed. Once the block list is built, the data is moved directly from disk through a switch/storage router to tape.
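To make that flow concrete, here is a rough Python sketch of the block-list approach. The BlockExtent structure, the data_mover.extended_copy() call, and the completion check are illustrative assumptions for this article, not any vendor's actual interface.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BlockExtent:
    lba: int      # starting logical block address on the source disk
    count: int    # number of contiguous blocks in this extent

def build_block_list(file_extents: List[Tuple[int, int]]) -> List[BlockExtent]:
    """Translate file-system extents (start, length) into raw block extents."""
    return [BlockExtent(lba=start, count=length) for start, length in file_extents]

def serverless_copy(block_list: List[BlockExtent], data_mover, tape_target) -> bool:
    """Hand each extent to the data mover for a disk-to-tape copy, then verify."""
    for extent in block_list:
        # The data mover issues the Extended Copy on our behalf; the backup
        # server never touches the data itself.
        data_mover.extended_copy(src_lba=extent.lba,
                                 blocks=extent.count,
                                 dest=tape_target)
    # Ensure every command completed before the backup is cataloged.
    return all(data_mover.status(extent.lba) == "complete" for extent in block_list)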

Contrast either approach with a conventional backup environment, in which backup data moves from disk through a switch/router to a server (which monitors and manages the data flow) and then back through a switch to tape.

Serverless technology allows administrators to back up data from disk to tape with minimal server involvement, which means CPU cycles can be saved for production applications. This can also lower associated backup costs by eliminating the need for additional servers or equipment to help with backups.

It should be noted that when "serverless" backup was first introduced, Unix servers were sold as part of the application. The server was used to build and manage the block list and monitor the data movement, so the process wasn't truly "serverless." Today, that intelligence can be embedded in the data mover. The data mover then functions as the manager of the backup process.

Does it work?

In some cases, administrators have seen CPU utilization cut by nearly 80% with a serverless implementation. For example, in a test involving a Compaq GS320 server running Tru64 v5.1A, with 32 SuperDLT drives attached to the server and running in parallel, CPU utilization was reduced from 90% to 20%, a relative drop of roughly 78%.
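For clarity, the arithmetic behind that example works out as follows (a drop from 90% to 20% is 70 percentage points, or roughly a 78% relative reduction):

before, after = 0.90, 0.20                    # CPU utilization before and after serverless backup
reduction = (before - after) / before         # relative reduction
print(f"Relative CPU reduction: {reduction:.0%}")   # prints "Relative CPU reduction: 78%"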

So, yes, in certain situations serverless backup can have a substantial impact, but the real question that should be asked is, "Is serverless backup useful in my environment? Can it help me get my backups done and give me the type of restore capability I need?"

As with implementing any good backup system, backup requirements need to be spelled out explicitly. How much data is there to back up, and how quickly is it growing? What applications, if any, are involved (e.g., database, ERP, CRM)? What is the current server utilization, and how fast is it climbing? What availability is required of the data (e.g., is there any downtime in which to back it up)?

Tracking all of this can be tricky, but there are tools that can help you monitor data volumes and perform trend analysis to determine how your data is growing. Armed with this information, you can determine if your current process is sufficient and if, or when, it is best to implement a serverless backup architecture.
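As a rough illustration of that kind of trend analysis, the following Python sketch projects when a growing backup volume will outgrow a fixed backup window. The sample figures, window size, and throughput are made-up assumptions, not measurements.

monthly_tb = [2.0, 2.2, 2.5, 2.7, 3.0, 3.3]   # observed volume of each monthly full backup, in TB
window_hours = 8                               # nightly backup window
throughput_tb_per_hour = 0.5                   # aggregate tape throughput available today

# Simple linear growth estimate (average month-over-month increase).
growth = (monthly_tb[-1] - monthly_tb[0]) / (len(monthly_tb) - 1)
capacity_tb = window_hours * throughput_tb_per_hour

months_left = max(0.0, (capacity_tb - monthly_tb[-1]) / growth)
print(f"Growth: {growth:.2f} TB/month; window supports {capacity_tb:.1f} TB")
print(f"Current process outgrows the window in ~{months_left:.0f} months")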

But before you make the move to serverless backup, you'll want to make sure that the applications you are running will work well in a serverless environment. For example, can your application run in hot mode in a serverless environment? Many cannot. In these situations, you'll either have to use mirroring and do a cold backup of the mirror or you'll have to close whatever application(s) you're running and then do a cold backup.

Other options

If taking an application down to back it up is your only option, then depending on your networking environment and your backup window, you may be better off without serverless backup. Taking the application down also lets you funnel all CPU power into the backup, improving its speed.

Back up folders/partitions

One simple approach is to back up the folder or partition that contains the files you want protected. For example, when you back up a Website, you may have thousands to hundreds of thousands of small files (which can be as small as 10KB to 20KB each). All backup products log the files that are backed up (e.g., their locations, file dates, and the tapes they have been written to) in a central database, which is used to determine the location of a file in the event of a restore.
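As an illustration of what that central database tracks, here is a minimal sketch of a per-file catalog record and a restore lookup. The field names are assumptions made for this example, not any product's actual schema.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CatalogEntry:
    path: str          # original location of the file
    backup_date: str   # ISO date of the backup that captured this version
    tape_label: str    # which tape holds it
    offset: int        # position on that tape, used at restore time

def find_for_restore(catalog: List[CatalogEntry], path: str, as_of: str) -> Optional[CatalogEntry]:
    """Return the newest version of `path` backed up on or before `as_of`."""
    candidates = [e for e in catalog if e.path == path and e.backup_date <= as_of]
    return max(candidates, key=lambda e: e.backup_date) if candidates else None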

The problem with this file-by-file approach is that you are forcing the system to back up individual files. This consumes CPU cycles and hurts the performance of your tape drives. The key is to keep data streaming to the drives: if a tape drive's buffer is not kept full, you lose valuable time while the drives stop, reposition, and wait for their buffers to refill before they can resume writing to tape.
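A back-of-the-envelope check makes the streaming problem clear: if the source cannot feed the drive at its native rate, the drive stalls and repositions. The rates below are assumed example figures.

drive_native_mb_s = 11    # assumed native write rate of the tape drive
source_rate_mb_s = 4      # assumed rate at which millions of small files can be read

if source_rate_mb_s < drive_native_mb_s:
    print("Drive will stall and reposition; backup time is dominated by restarts.")
else:
    print("Drive keeps streaming at its native rate.")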

Alternatively, if you back up the partition, you will get faster throughput because you are backing up one very large file, i.e., the drive itself. Doing so also allows for faster restores in disaster-recovery situations in which entire drive contents need to be restored. The backup application only sees the partition that was backed up, not the individual files in it.

Physical backup/logical restores

Another option is to do physical backup/logical restores. While this process overcomes the speed issue by backing up all files as one partition, its main advantage is allowing you to restore particular files, not whole drive volumes. Serverless technology can't do this, and support is years away.

Parallelism

Another option is parallelism, where the database itself or the partition that it resides on is split into any number of equal pieces and streamed simultaneously to any number of tape drives in parallel. This can help you speed up your backup process and it can increase restore speeds.
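A minimal sketch of that idea in Python follows: the partition is split into equal slices, and each slice is streamed to its own tape drive concurrently. The device paths, partition size, and drive count are assumptions for illustration only.

from concurrent.futures import ThreadPoolExecutor

SOURCE = "/dev/sdb1"                  # partition to back up (example path)
PARTITION_SIZE = 400 * 2**30          # 400 GB (example figure)
NUM_DRIVES = 8                        # tape drives available for parallel streams
PIECE = PARTITION_SIZE // NUM_DRIVES  # each drive gets one equal-sized slice
BLOCK = 256 * 1024                    # transfer block size

def backup_piece(drive_id: int) -> None:
    """Copy one slice of the partition to its own tape drive."""
    offset = drive_id * PIECE
    with open(SOURCE, "rb") as src, open(f"/dev/nst{drive_id}", "wb") as tape:
        src.seek(offset)
        remaining = PIECE
        while remaining > 0:
            chunk = src.read(min(BLOCK, remaining))
            if not chunk:
                break
            tape.write(chunk)
            remaining -= len(chunk)

with ThreadPoolExecutor(max_workers=NUM_DRIVES) as pool:
    # All slices stream at once; elapsed time approaches 1/N of a single stream,
    # at the cost of more CPU and I/O contention.
    list(pool.map(backup_piece, range(NUM_DRIVES)))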

Serverless technology has this potential although no vendors have implemented it. The main focus of serverless technology thus far has been on a single stream of data going to a single drive. Vendors do have plans to support multiple data streams, but again, that support is at least a year away.

Dual backup copies

Another option to consider, if you have the CPU and network bandwidth available, is creating two sets of tapes at the time of backup. This is particularly useful if you have a large backup that you need copies of for on-site and off-site storage. Creating both sets at the time of backup (by writing two separate, but identical, data streams) saves you a second copy step.
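A simple sketch of the dual-copy approach looks like this: every block read from the source is written to both tape targets in the same pass. The device paths are example assumptions.

def backup_with_duplicate(source: str, primary: str, offsite: str,
                          block: int = 256 * 1024) -> None:
    """Write two identical tape copies in a single pass over the source."""
    with open(source, "rb") as src, \
         open(primary, "wb") as tape_a, \
         open(offsite, "wb") as tape_b:
        while True:
            chunk = src.read(block)
            if not chunk:
                break
            tape_a.write(chunk)   # on-site copy
            tape_b.write(chunk)   # off-site copy, written in the same pass

backup_with_duplicate("/dev/sdb1", "/dev/nst0", "/dev/nst1")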

Serverless technology can create dual copies if the data mover supports multicasting (i.e., taking a single backup stream from disk and broadcasting it to two or more tape drives). The main problem with this approach is that it places a high load on the data mover: the device receiving the data stream must expend its own CPU cycles and use its own RAM to create one or more additional copies of the data and then stream it out again. The other problem is that the technology is not yet reliable enough for prime time.

Restore: The real kicker

Not only is it important to back up data in a way that least impacts your systems, but it is also critical to know how that data will be restored.

Using parallelism to stream data to 10 drives simultaneously during a backup means you can also stream from that many drives during the restore. Doing so can increase restore speeds by an order of magnitude compared with backing up and restoring through a single stream to a single drive.

The downside is that parallelism can significantly impact CPU cycles. Serverless backup has the potential to relieve this stress (on backups and restores), but is not yet capable of doing so.

There are two types of restores to consider: individual files/folders and entire partitions/systems. Restoring an individual e-mail or PowerPoint file takes a lot of administrative time relative to the amount of data being restored. However, it is normally fairly easy to do, doesn't require a lot of CPU cycles, and can typically be accomplished using a single drive.

Restoring a terabyte of data, in contrast, is not as straightforward. In this type of situation, speed is the biggest concern. (Of course, if you have a mirror, this is not an issue since the application will automatically fail over to another server.)

But if you don't mirror or cluster your servers, one of the fastest ways to restore is to have multiple data streams going at once. Some applications can support dozens of drives in parallel.

As the data is streamed back to the system, the software doing the restore takes the various data streams and reconstructs the data to the single point in time at which the data was backed up.
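Conceptually, that reassembly step looks something like the following sketch, in which each restore stream delivers (offset, data) pieces and the restore software writes every piece back to its original location on the volume. The stream contents here are purely illustrative.

def reassemble(target_path: str, streams) -> None:
    """Merge pieces from several restore streams back into one volume image."""
    with open(target_path, "wb") as volume:
        for stream in streams:          # in practice these arrive concurrently
            for offset, data in stream: # each piece knows where it belongs
                volume.seek(offset)
                volume.write(data)

# Example: two streams, each carrying interleaved pieces of the same volume.
stream_1 = [(0, b"AAAA"), (8, b"CCCC")]
stream_2 = [(4, b"BBBB"), (12, b"DDDD")]
reassemble("/tmp/restored.img", [stream_1, stream_2])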

Serverless backup can support this concept, but vendors have not yet implemented it. The concept is almost there, just at a slightly different level: Instead of restoring multiple files, serverless technology will restore multiple blocks. In both cases, the application helping with the restore assembles the data back to a specific point in time.

Using serverless technology could help reduce the CPU cycles, but in the event of disaster recovery, that may not be the primary concern.

It is important to make sure that a vendor's application can fully exploit the available speed of a tape drive. You don't want to find out that data can be backed up well within a backup window, but that when it comes to restoring the data, it takes two to three times as long.

Conclusion

Serverless technology has come a long way over the last few years but has a lot more ground to cover. A lot hinges on the functionality and reliability of NDMP, on enhancements to data mover technology, and on its ability to support parallelism. Developments in the coming year will give us a much better sense of the timetable for serverless growth.

The Storage Networking Industry Association (SNIA) now oversees the functionality and development of NDMP. Working with multiple backup vendors and hardware manufacturers, the association is setting industry standards to expand the applications and capabilities of serverless backup technology. These standards will help customers use the new enhancements across a broad range of software and hardware. As these advancements are implemented, the functionality of serverless backup will continue to expand, offering significant performance gains.

Winston W. Hait is the senior product manager at Syncsort (www.syncsort.com) in Woodcliff Lake, NJ.

