Backup with BRU and Mammoth-2

Posted on October 01, 2000



OpenBench Labs fires up EST's Backup and Restore Utility for a Linux-based backup test.

By Jack Fegreus

The moment you start to spin a disk, you start the clock ticking on that nasty spec that everyone tries to ignore: mean time between failures. Entrusting disks with valuable data without a proven backup strategy is the best way to guarantee that CIO is the acronym for Career Is Over.

Having recently examined the raw performance of the latest helical-scan tape drives (InfoStor, July 2000, p. 52, "Next-gen tape battles: Exabyte's Mammoth-2"), OpenBench Labs now turns its attention to the most common real-world application of tape: backup and restore. With respect to backup/restore operations, there has been a longstanding argument over whether saving or restoring is the more critical phase of the backup cycle, particularly from a performance perspective.

On the more complex restores, which require numerous searches of the tape, BRU was still able to provide a solid 5MBps.

Speed is important when saving data on a tightly scheduled basis, particularly when other maintenance tasks or users are queued up. On the other hand, file restoration is often done in a panic, with important operations grinding to a halt due to missing or corrupt corporate files. The consensus at OpenBench Labs is that save speed and data safety are of the utmost importance, and restore speed, while not insignificant, is of less importance.

Enhanced Software Technologies (EST) has built a number of significant data-verification features into its Backup and Restore Utility (BRU) and XBRU (BRU's graphical user interface) to ensure the ability to accurately restore data. Typically, backup software relies on bit-level comparisons of data to maintain data integrity. The software compares file attributes such as modification times, dates, file sizes, data changes, status changes, and link changes, then creates an execution summary of the differences and uses it to determine the integrity of the file system. However, this scheme adds an enormous amount of overhead. BRU can do bit-level comparisons; however, it also implements an alternative, which EST dubs Autoscan. A checksum is calculated for each block of data and is stored in the header for that block. When an archive is completed, BRU rewinds the tape and verifies the integrity of the recorded blocks against those checksums. While this adds an extra step to the backup process, it enables BRU to accurately verify the integrity of each data block.
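The per-block checksum idea behind Autoscan can be sketched with standard Unix tools. The following is a simplified illustration of the technique, not BRU's actual on-tape format: carve a data stream into fixed-size blocks, record a checksum per block, and later verify each block without any access to the original file system.

```shell
# Simplified sketch of per-block checksumming (not BRU's real format).
# Create a sample "archive" of four 64KB blocks.
dd if=/dev/urandom of=archive.dat bs=64k count=4 2>/dev/null

# Record one checksum line per 64KB block.
blocks=$(( $(stat -c%s archive.dat) / 65536 ))
: > archive.sums
for i in $(seq 0 $((blocks - 1))); do
  dd if=archive.dat bs=64k skip=$i count=1 2>/dev/null | cksum >> archive.sums
done

# Verification re-reads each block and compares checksums;
# a corrupted block is pinpointed by its index.
for i in $(seq 0 $((blocks - 1))); do
  want=$(sed -n "$((i + 1))p" archive.sums)
  got=$(dd if=archive.dat bs=64k skip=$i count=1 2>/dev/null | cksum)
  [ "$want" = "$got" ] && echo "block $i OK" || echo "block $i CORRUPT"
done
```

Because the checksums live alongside the data rather than in the file system, this style of verification can be run at any time, which is exactly what makes the deferred, off-line check described above possible.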

What's more, because the checksums do not require access to the file system, administrators can use them to determine archive integrity at any time. If system downtime is critical, the Autoscan check can be disabled and the data verified later off-line.

Beyond data verification, BRU has a number of performance-tuning options to maximize backup throughput. These include shared memory tuning, double buffering of data to help ensure tape streaming, user-definable buffer sizes, sparse file handling, and user-definable compression.

OpenBench Labs concentrated its testing on the XBRU interface. XBRU requires a non-rewinding tape device, one that does not automatically rewind to the beginning of the tape (BOT) after an operation completes. While running BRU from the command line offers a much higher degree of functionality, all of the systems-administration tasks required for operational backups and restores can be accomplished with the X Window-based GUI.

As a result, operations staff members can perform their necessary tasks, including scheduling backups, verifying backup integrity, and file restorations, with minimal training.

The XBRU interface provides a simple, if not always intuitive, means of running BRU effectively. After clicking into the main screen, the administrator is presented with a split window with the current directory (CD) on the left and the backup device on the right. Using the GUI, all of the essential parameters that would otherwise be scripted in the /etc/brutab file can be set. For our tests, we set the data buffer to 64KB and disabled software data compression.
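For readers who prefer the command line, the same device parameters live in /etc/brutab. The fragment below is a hypothetical illustration of the kind of entry involved; the exact keywords for a given drive should be taken from the BRU documentation.

```
# Hypothetical /etc/brutab-style entry (keywords are illustrative only):
# the Linux no-rewind SCSI tape device with a 64KB I/O buffer
# and software compression left off.
/dev/nst0  bufsize=64k
```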

Software compression should be enabled only with older drives that lack hardware data compression. When software compression is enabled on a device with hardware compression, such as the Exabyte Mammoth-2, the result is a throughput quagmire: the software burdens the server CPU and produces an essentially incompressible data stream that the drive's hardware will then attempt to compress further. Overall, this dysfunctional scheme can increase backup time on the order of 300% to 400%.
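The double-compression penalty is easy to demonstrate with gzip standing in for both compressors. Compressing an already-compressed (i.e., incompressible) stream yields no savings and can even grow the data slightly, while still paying the full CPU cost of the second pass.

```shell
# Demonstrate that re-compressing compressed data gains nothing.
# /dev/urandom stands in for an already-compressed, incompressible stream.
dd if=/dev/urandom of=stream.raw bs=64k count=16 2>/dev/null

gzip -c stream.raw > stream.gz          # first pass ("software" compression)
gzip -c stream.gz  > stream.gz.gz       # second pass ("hardware" compression)

# The second pass produces output at least as large as its input.
ls -l stream.raw stream.gz stream.gz.gz
```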

To define a backup process, which can be saved to run again and again, the administrator simply clicks on the disk directory structure and adds it to the tape device. In addition, the systems administrator can apply filters to exclude certain files. To test BRU's backup and restore capabilities, we ran it on our Dell PowerEdge 2400 server under Red Hat Linux 6.2. For our tests, we used a 5GB collection of C and HTML code along with a surfeit of Web site data, which resided on a RAID-0 array of Seagate X15 drives. As noted, our backup drive was an Exabyte Mammoth-2.

Last month, we set the throughput boundary conditions for this tape subsystem at between 27MBps with highly compressible data (not the case for the jpg, executable, tar, and zip files in our mix) and 11MBps with no compression. Backup throughput on our 5GB data set with BRU consistently averaged more than 15MBps.
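Those averages put the wall-clock cost of each operation in perspective. A back-of-the-envelope calculation, treating 5GB as 5,120MB:

```shell
# Approximate wall-clock time for the 5GB (5,120MB) test data set.
size_mb=5120
backup_rate=15    # MBps, measured backup throughput
restore_rate=5    # MBps, measured restore throughput

echo "backup:  $(( size_mb / backup_rate )) seconds"    # 341s, under 6 minutes
echo "restore: $(( size_mb / restore_rate )) seconds"   # 1024s, about 17 minutes
```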

When it comes time to restore files to disk, BRU once again lists the data in the left-hand pane as the current directory. This listing, however, is the disk directory as currently stored on the tape, which can seem a bit counterintuitive at first use. This current-directory mechanism, however, enables the same procedures used in selecting a backup to be used to restore a directory hierarchy or a single file. At this point, rules can also be set governing whether to overwrite existing files, rename files as they are restored, or restore files relative to a different directory. While not quite the blazing speed of backup, restore throughput on our test system averaged 5MBps.

In addition, XBRU supports the process of writing multiple archives to a single tape. This option is only available for tapes and does not function with disks. The scheme can save time in restore operations because BRU reads an entire archive during a restore; several smaller archives mean less data to scan than one monolithic archive. With fast tape drives like the Mammoth-2, this is not a particular problem; however, with older technology it could turn into significant overhead.


1. XBRU presents systems administrators with an easy-to-learn split screen interface with the current directory (CD) on the left and the backup device on the right.

2. During a backup, XBRU provides a live progress display with the number and size of the files backed up.

3. When a tape is loaded, XBRU provides a simple means to read the tape's header file. In addition, the integrity of the current archive can be assured by running a checksum on all of the data blocks.

4. & 5. While not as complete as the backup progress display, the live restore window does keep a running list of all of the files restored.

The benchmarks are available free and can be downloaded from

Labs scenario


Test software

  • Red Hat Linux 6.2
  • Enhanced Software Technologies' BRU v16.0 and XBRU v1.17

Test hardware

  • Dell PowerEdge 2400 server running Red Hat Linux 6.2
  • Exabyte Mammoth-2 external tape drive

Key findings

  • Backup throughput with BRU consistently averaged just over 15MBps.
  • While not quite the blazing speed of backup, restore throughput on our test system averaged 5MBps.
  • Using the XBRU interface, operations staff can perform their necessary tasks, including scheduling backups and verifying backup integrity.
