Super tape shoot-out: SDLT vs. LTO

Posted on February 01, 2004


With the introduction of the SDLT 600, the tables have turned in performance and manageability, at least for the time being.

By Jack Fegreus

Starting with the first generation of LTO Ultrium tape drives, the performance gap between Ultrium and SDLT was, to say the least, dramatic. Even when Quantum released the SDLT 320, which barely managed to catch up to the first generation of LTO Ultrium drives, the Linear Tape Open consortium (led by Hewlett-Packard, Certance, and IBM) answered with the second generation of Ultrium, once again leaving SDLT trailing far behind in performance.

That was then and SDLT 600 is now. With the introduction of its SDLT 600 drive, Super DLTtape II media, and DLTSage management software, Quantum has taken a, well, quantum leap forward and left the competition on the wrong side of the performance curve.


Figure 1: The oblTape v2 benchmark was used to peg the probable performance envelopes for the SDLT 600 and Ultrium 460 tape drives. While the lower bound for both drives was approximately 30MBps, the SDLT 600 extended up to 70MBps.

To assess the state of what storage analysts dub the "super-tape" market, we set up a comparison of two drives: the HP Ultrium 460 and the Quantum SDLT 600. For this assessment, we examined throughput performance with homogeneous streams of compressible and non-compressible data, as well as with heterogeneous streams of data in which we varied the percentage of compressible versus non-compressible data.


Figure 2: When we compared the performance of the SDLT 600 and Ultrium 460 drives normalized to their respective native throughput rates, differences in how each drive's electronics improve performance became obvious as the ratio of compressible to non-compressible data changed.

The use of homogeneous data provides a means to set upper and lower boundaries for probable performance. More importantly, the use of heterogeneous data streams provides a means to test the capability of the drive's electronics to maintain the drive in a data-streaming condition.

As data becomes more variable in a backup scenario, a tape drive becomes prone to halting, which necessitates repositioning the tape (the so-called "shoe-shining" effect) before writing can resume. For drives in the new super-tape class, repositioning is especially costly in terms of average throughput. As a result, performance measurements using data streams of varying compressibility are far more important for predicting real-world performance.
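
To get a feel for why repositioning is so costly, consider a back-of-the-envelope model. The streaming rate, stall count, and per-stall penalty below are illustrative assumptions, not measured values:

# Back-of-the-envelope model of how repositioning stalls drag down
# average tape throughput. All figures are illustrative assumptions.

def average_throughput(stream_mbps, gb_written, stalls, stall_secs):
    """Average MBps for a job that streams at stream_mbps but incurs
    `stalls` repositioning events of `stall_secs` each."""
    mb = gb_written * 1024
    streaming_time = mb / stream_mbps              # seconds spent writing
    total_time = streaming_time + stalls * stall_secs
    return mb / total_time

# A 50GB job at 35MBps: zero stalls vs. one stall every 256MB,
# assuming a 3-second repositioning penalty per stall.
print(average_throughput(35, 50, 0, 3))    # ~35.0 MBps
print(average_throughput(35, 50, 200, 3))  # ~24.8 MBps

Even a modest 3-second penalty, repeated often enough, costs nearly a third of the drive's streaming rate, which is why the electronics that keep the drive streaming matter so much.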

Since our main goal was to examine the drive's suitability for backup in an enterprise scenario, all tests were performed using benchmarks running in user mode in order to incur all of the operating system overhead that any enterprise application would incur. In addition, it was essential that our test hardware not present any throughput bottlenecks that would not be present in an enterprise-class data center.

Testing was done on an HP ML350 G3 server sporting dual Xeon CPUs, DDR memory, and PCI-X expansion slots. For our tests, the server ran SuSE Linux Enterprise Server (SLES) version 8 and Windows 2003 Server. We installed an Adaptec Ultra320 SCSI host bus adapter (HBA) that was attached to four Maxtor Ultra320 Atlas drives in a RAID-0 stripe set for disk I/O. For tape I/O we installed an Adaptec Ultra160 SCSI HBA that was attached to the Quantum SDLT 600 drive and HP Ultrium 460 drive.

To verify our results, we installed an off-the-shelf enterprise backup package, NetVault version 7.1 from BakBone Software. We then performed a series of backup-and-restore operations using a 5GB data set. This data set contained a mix of predominantly Microsoft Office data files along with a mix of HTML and image files from a number of Websites created by Strategic Communications.


Figure 3: When we compared the actual performance of the SDLT 600 and Ultrium 460 drives on both the oblTape benchmark and backup jobs with NetVault software, performance closely paralleled benchmark predictions, with the SDLT 600 having a 16% performance edge.

For large enterprise backup jobs, an important aspect of the SDLT 600 is the introduction of a new class of media: Super DLTtape II. The SDLT drive can read Super DLTtape I and DLTtape VS-1 cartridges, but to record you will need the new medium. This new tape is key to the increase in throughput speed and the 50% leap in cartridge recording capacity vs. an LTO-2 tape cartridge.

Super DLTtape II features a new recording layer dubbed eMP60, which contains ultra-fine ceramic-armored metal particles that are 40% smaller than the particles found in the AMP++ recording layers of Super DLTtape I and DLTtape. The smaller particles produce two very positive magnetic effects. First, they lower the recording layer's coercivity, which is a measure of the energy required to write a bit. At the same time, these smaller, denser particles raise the remanence, which is a measure of the residual strength of a magnetic bit once it has been written. The net result is that it is easier to write bits that will archive for longer periods of time with Super DLTtape II.

That also makes it easier for a tape drive to write smaller, denser bits in the same area on the tape. As a result, more data can be written or read more quickly using Super DLTtape II cartridges. In particular, the SDLT 600 drive writes 40% more tracks and lays down 20% more bits on a track than the previous-generation SDLT 320 drive. This translates into an 87.5% increase in the native (uncompressed) capacity of a Super DLTtape II cartridge, which holds a minimum of 300GB, versus 160GB for Super DLTtape I.

We used our oblTape benchmark with compression turned off on the SDLT 600 drive to peg native throughput at 35.4MBps. We then used oblTape to generate a stream of synthetic data distributed in a pattern that was designed to produce a nominal 1.8-to-1 compression ratio on a DLT 7000 drive. This test produces an anticipated high-water mark for performance. Here we measured throughput at 69.8MBps.

For our worst-case scenario, we generated a stream of purely random (non-compressible) data and left the drive's compression on during the test. This case represents the problems encountered when backing up a highly compressed file. During such instances, a drive can waste cycles as it attempts to compress non-compressible data. During this test, throughput fell to 30.5MBps.
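
The oblTape benchmark itself is proprietary, but the idea behind these homogeneous streams is easy to sketch. The following minimal illustration (our own, not oblTape's actual algorithm) builds one highly compressible buffer and one non-compressible buffer and checks each with zlib; note that a simple repeating pattern compresses far more readily than the nominal 1.8:1 target, which is why tuning a pattern to a specific ratio takes more care:

import os
import zlib

BLOCK = 64 * 1024  # 64KB test buffer

# Compressible data: repeating structured text compresses readily.
compressible = (b"record=0001;status=OK;payload=ABCDEFGH\n" * 2048)[:BLOCK]

# Non-compressible data: bytes from the OS entropy pool.
random_data = os.urandom(BLOCK)

for name, buf in (("compressible", compressible), ("random", random_data)):
    ratio = len(buf) / len(zlib.compress(buf))
    print(f"{name}: {ratio:.2f}:1 compression")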

In contrast, our oblTape benchmark pegged the native throughput of an HP Ultrium 460 drive at 29.2MBps. The upper and lower ends of the performance envelope for best- and worst-case scenarios were pegged at 53.6MBps and 28.9MBps.

While these tests provide an important performance envelope for throughput, the widening range in performance (which in the case of the SDLT 600 ranged from 30MBps to 100MBps for individual files in backup runs using NetVault) makes real-world performance highly dependent on the ability of the drive's electronics to handle fluctuations in throughput while continuing to write data.

To solve this problem, the Quantum SDLT 600 uses pure digital circuitry in a technique dubbed Digital Data Rate Agent (DDRA). One of the key characteristics of DDRA is a minimization of command overhead to maximize bandwidth for data traffic.

In contrast, the HP Ultrium drive uses a very different hybrid approach, dubbed Adaptive Tape Speed (ATS), in attempting to solve the same problem. Unlike DDRA, ATS employs both digital and analog circuitry. One of the unique aspects of ATS is its ability to sense an incoming stream of non-compressible data and turn the drive's compression circuitry off until it can again play an effective role.

To test the effectiveness of DDRA and ATS in keeping their respective tape drives streaming during write operations, we introduced variability into our synthetic data stream of optimally compressible data. In essence, our tests to characterize a best- and worst-case performance envelope can be considered as special end-points of the performance curve, where the amount of random (non-compressible) data within the test stream is 0% and 100%, respectively.

By first partitioning the data stream generated by the oblTape benchmark into 1,000-block segments and then subsetting these segments into compressible and non-compressible components, we were able to stress these performance optimization schemes and measure their effects on overall throughput.
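
A minimal sketch of that partitioning scheme follows; the block size and helper names are our own conventions for illustration, not oblTape's:

import os

BLOCK = 512            # bytes per block (an assumption for illustration)
SEGMENT_BLOCKS = 1000  # the 1,000-block segments described above

def make_segment(pct_compressible):
    """Build one 1,000-block segment in which pct_compressible percent
    of the blocks hold repeating (compressible) data and the remainder
    hold random (non-compressible) bytes."""
    n_comp = SEGMENT_BLOCKS * pct_compressible // 100
    comp_block = (b"0123456789ABCDEF" * (BLOCK // 16))[:BLOCK]
    blocks = [comp_block] * n_comp
    blocks += [os.urandom(BLOCK) for _ in range(SEGMENT_BLOCKS - n_comp)]
    return b"".join(blocks)

# Sweep the mix from all-random to all-compressible in 10% steps.
for pct in range(0, 101, 10):
    segment = make_segment(pct)
    print(pct, len(segment))
    # In the real tests, each segment would be written to the tape
    # device (e.g., open("/dev/nst0", "wb") on Linux) and timed.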

Using the raw performance measured over a range of compressibility ratios, probable backup performance can be projected given a reasonable estimate of the number and size of compressed data files within a given backup job. More importantly, by first normalizing each drive's raw performance to its native throughput speed, it is possible to characterize and directly compare how well schemes such as DDRA and ATS improve throughput performance.
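
The normalization itself is simple arithmetic. Using the envelope figures measured earlier in this article:

# Normalize each drive's best-case throughput to its native rate,
# using the figures measured earlier in this article.
sdlt_native, sdlt_best = 35.4, 69.8        # MBps, SDLT 600
ultrium_native, ultrium_best = 29.2, 53.6  # MBps, Ultrium 460

print(f"SDLT 600:    {sdlt_best / sdlt_native:.2f}x native")        # ~1.97x
print(f"Ultrium 460: {ultrium_best / ultrium_native:.2f}x native")  # ~1.84x

On this normalized scale, the SDLT 600's compression electronics nearly double its native rate in the best case, which is the kind of difference that raw MBps figures alone can obscure.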

Comparing the normalized results of the Quantum SDLT 600 with those of the HP Ultrium 460 yields a number of interesting insights into the issues surrounding DDRA and ATS. Furthermore, such a comparison also provides a number of important implications for the creation and implementation of backup policies that will directly affect the ability to maximize the throughput performance associated with certain backup jobs.

Looking first at the normalized data, two distinctly different regions, in which DDRA and ATS performance diverge, immediately become evident. The first region covers data streams that were generated with only 0%-to-30% compressible data. The second region covers data streams that were generated with 70%-to-100% compressible data.

Not surprisingly, when our compressible data stream was substantially interrupted with non-compressible data, the ATS optimization scheme on the HP Ultrium 460 drive provided the better overall throughput. With less than 30% of the data stream compressible, turning the compression circuitry off at the drive proved to be the best solution. Under these conditions, throughput on the HP Ultrium 460 drive slowly degraded to the benchmarked native speed—approximately 29MBps.


Figure 4: Running on Windows 2003 Server, DLTSage xTalk proved to be a very easy wizard-driven tool to run diagnostic tests, upgrade firmware, and retrieve device statistics.

In contrast, throughput on the SDLT 600 had already converged to its native (uncompressed) speed when 30% of the data being streamed to the drive remained highly compressible. As the percentage of compressible data dropped below 30%, the average throughput on the SDLT 600 drive was slower with compression turned on than it would have been had we turned the drive's compression off.


Figure 5: The DLTSage HealthCheckup Test provides a wealth of information for both the SDLT 600 drive and Super DLTtape II cartridge.

It is important to note that such a level of non-compressibility is characteristic only of inactive files that have been archived. Even compressed image file formats rarely, if ever, reach this level of compression, because of the image degradation that further data loss would cause. As a result, any backup job characterized by a large enough volume of data to make it less than 30% compressible very likely includes compressed archives residing in a specialized directory or volume.

For system administrators, the clear implication is to isolate such a directory or volume and create a separate backup job that has compression on the SDLT 600 turned off. It should be noted, however, that the cost in raw performance of not following this prescription is negligible with the HP Ultrium 460 drive. Even with totally non-compressible data, the difference in native throughput speeds between the two drives leaves throughput in such a scenario in a dead heat.
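
One way an administrator might automate that policy is to sample files for compressibility before defining the jobs. The following is a rough sketch under our own assumptions; the 95% cutoff and 64KB sample size are arbitrary heuristics, not tested values:

import zlib
from pathlib import Path

SAMPLE = 64 * 1024  # test only the first 64KB of each file

def looks_compressed(path):
    """Heuristic: a file whose first 64KB gains little from zlib is
    probably already compressed (zip, jpeg, and so on)."""
    data = Path(path).read_bytes()[:SAMPLE]
    if not data:
        return False
    return len(zlib.compress(data)) / len(data) > 0.95

def split_backup_set(root):
    """Partition a directory tree into files for a compression-on job
    and files for a separate compression-off job."""
    compress_on, compress_off = [], []
    for f in Path(root).rglob("*"):
        if f.is_file():
            (compress_off if looks_compressed(f) else compress_on).append(f)
    return compress_on, compress_off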

At the other end of the compression spectrum, the SDLT 600's DDRA circuitry boosts overall throughput to a greater degree than HP's ATS circuitry when compressible data makes up 70% or more of the data stream. In other words, a typical backup job, characterized by occasional encounters with highly compressed files such as zip archives and more frequent encounters with moderately compressed data such as image files, will continue to stream better and exhibit a higher overall compression level using the SDLT 600 drive.

To verify these results in an operational setting, we installed BakBone Software's NetVault version 7.1 on our Linux server. We then proceeded to make a series of backups using a 5GB test archive that contained a large number of data files generated by Microsoft Office, including Word documents, Excel spreadsheets, and Access databases. This archive also included a significant volume of image files associated with a number of Websites created by Strategic Communications.

Overall compressibility for our test archive during the NetVault backup jobs was maintained within a fairly narrow range: 1.74:1 to 1.82:1. Nonetheless, peak compression rates on individual files reached 2.86:1, which translates into an effective throughput of 100MBps on the SDLT 600 drive. Taking the geometric mean throughput over a series of backup jobs, results for the two drives were totally consistent with the results of our synthetic benchmark. Backup throughput on the SDLT 600 was pegged at 57.1MBps, while throughput on the HP Ultrium 460 was pegged at 49.2MBps.
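
The effective-throughput figure follows directly from the native rate and the per-file compression ratio, and the geometric mean damps the influence of outlier runs. Both calculations are easy to check (the job rates in the list below are hypothetical; the real series averaged 57.1MBps):

import math

# Effective throughput = native rate x compression ratio.
print(35.4 * 2.86)  # ~101 MBps, matching the 100MBps peak cited above

def geometric_mean(rates):
    """Geometric mean of per-job throughput rates (MBps)."""
    return math.exp(sum(math.log(r) for r in rates) / len(rates))

# Hypothetical series of backup-job rates for illustration.
print(geometric_mean([55.0, 58.5, 57.9, 57.0]))  # ~57.1 MBps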

While performance is one important metric in choosing a tape drive—or more likely the kind of drives that will be used in an automated library—the ability to monitor and control the operational status of the drives is equally important in data centers packed with servers, switches, printers, and storage devices. For tape storage devices and tape media in particular, failures are highly dependent on age. As a result, monitoring usage patterns is a highly effective way of predicting any risk of failure. To this end there have been a number of proprietary efforts to provide tape manageability centered on the embedding of smart chips in tape cartridges. Such efforts, however, have shown limited results because storage solutions vendors have been slow to adopt these incompatible schemes.

To provide for the ability to manage DLT and SDLT devices and media, Quantum has introduced DLTSage, which facilitates the exchange of management information among tape drives, media, and host systems. DLTSage uses a combination of open standards (the SCSI Log Page and ANSI T-10 MAM standards) along with proprietary drive diagnostics to build device-level management information that is very rich in usage-pattern data. Statistics are available as total numbers over the life of the devices (drives and media cartridges), as well as counts for the last two instances of both read and write jobs. These statistics include megabytes written or read, the number of read or write retries incurred, and the number of media loads.
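
Because those counters ride on standard SCSI log pages, they can also be read without DLTSage. On Linux, for example, the sg_logs utility from the sg3_utils package dumps a drive's log pages; a minimal wrapper might look like the following (the /dev/sg1 device path is an assumption and will vary by system):

import subprocess

# Dump all SCSI log pages from the tape drive's generic SCSI device.
# Requires the sg3_utils package; /dev/sg1 will vary by system.
result = subprocess.run(
    ["sg_logs", "--all", "/dev/sg1"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)

A monitoring daemon could wrap a call like this in a polling loop and watch the retry counters for early signs of drive or media trouble.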

For the IT end user, there are two wizard-driven applications for tapping into this data. The DLTSage program xTalk can be run on the host server to which the device is connected using SCSI protocols—making it useful for both SCSI- and Fibre Channel-connected drives. There is another DLTSage program dubbed iTalk that communicates with tape drives via an infrared port built into the front of each drive, so it can be run on a laptop or PDA.

These programs provide access to a wealth of operational data, as well as a means to test devices and upgrade firmware on drives. Using xTalk on a host running Windows 2003 Server, we were easily able to run diagnostic software that would test, analyze, and isolate drive and cartridge problems. Nonetheless, the real promise of DLTSage is in the ability to predict and prevent problems from occurring, rather than analyzing them after they occur.

However, this sort of task is not easily accomplished manually; it is best done through continuous monitoring and analysis by a service or daemon process. For that reason, DLTSage also provides a management protocol interface for third-party hardware vendors (e.g., tape libraries) and software vendors (e.g., backup packages). While IT users will find xTalk and iTalk useful as tools for data-center operations, the real power of DLTSage will come to fruition through its adoption by third-party vendors.

The SDLT 600 represents a significant step forward in the evolution of SuperDLT technology. In basic performance, it more than doubles the native throughput rate of the previous generation of SuperDLT drive: the SDLT 320. More importantly, its native throughput rate exceeds that of the HP Ultrium 460 by 16.67%. Furthermore, Quantum's DDRA on-board circuitry improves throughput on the SDLT 600 more effectively than HP's ATS when non-compressible data in a job stream represents less than 30% of the total data.

Jack Fegreus is technology director at Strategic Communications (www.stratcomm.com). He can be reached at JFegreus@StratComm.info.


Lab scenario

Under examination
Performance of super tape drives

What we tested

  • Quantum SDLT 600 tape drive
  • Hewlett-Packard StorageWorks Ultrium 460 tape drive

How we tested

  • HP ML350 G3 server
  • SuSE Linux Enterprise Server (SLES) 8
  • Microsoft Windows 2003 Server
  • Adaptec Ultra320 SCSI and Ultra160 SCSI HBAs

Key findings

  • Benchmark measurements pegged the native throughput rate of the SDLT 600 as exceeding that of the HP Ultrium 460 by 16.67%.
  • The SDLT 600's DDRA circuitry boosted overall throughput to a greater degree than HP's ATS circuitry when the amount of compressible data was greater than 70% of the data stream.
  • Using DLTSage xTalk on a host running Windows 2003 Server, we were able to run diagnostic software to test, analyze, and isolate drive and cartridge problems.

