By Michele Hope
Disk-to-disk (D2D) and disk-to-disk-to-tape (D2D2T) backup architectures are rapidly gaining acceptance among IT organizations constrained by their reliance on tape-based backup and restore.
To better understand real-world D2D issues, InfoStor asked several integrators, resellers, and end users to share their experiences implementing disk-based backup. A common picture began to emerge.
Clearing up customer misconceptions about D2D is often the first line of business for many VARs and integrators.
"End users tend to think that D2D will automatically solve their backup performance problems," says Rich Baldwin, chief executive officer of San Diego-based Nth Generation, a storage-related consulting firm and solutions provider. "They think, 'If you want to go fast, just go disk-to-disk.' "
Not always true. Throughput can be impacted by a variety of factors that may still exist after disk-based backup is introduced. CEO Tom Mumford of Auburn, MA-based solution provider TriAxis puts it this way: "Speed is a function of the infrastructure in addition to the software." Mumford has helped implement a variety of disk-based backup architectures for customers, including virtual tape libraries from vendors such as Diligent Technologies.
W. Curtis Preston, vice president of service development at GlassHouse Technologies, cites certain backup-related problems that a disk-based solution won't cure. These can include insufficient bandwidth to back up servers, an unreliable network, or client software that "doesn't cooperate with the application or operating system, such as many Exchange mail software packages or clients for clustered NetWare servers," says Preston, who is also an author of books on backup.
Reiterating this point is Kelly Small, a product manager for Computer Upgrade Corp. (CUC), a storage/systems integrator in Corona, CA. "Customers must be told that D2D is not a fix for slow transfer speed created by physical network or software configuration problems," says Small. "Unfortunately, this is usually a post-installation 'gotcha.' We have been brought into a few problem sites after the fact."
Backup bottlenecks can even be caused by a large number of files in a given directory or by the existence of several hundred (or thousand) small, one- or two-block files. Nth Generation CTO Dan Molina says the number of disk spindles in use on the backup target's disk devices will also impact backup throughput speeds.
VARs and integrators usually place the most common disk-based solutions into two categories: the backup-direct-to-disk approach and some type of virtual tape library (VTL), or tape emulation, method.
In the first case, customers typically use the backup-to-disk (or disk "staging") features found in their backup software in conjunction with low-cost disks, such as Serial ATA (SATA) drives. In contrast, VTLs present themselves to the backup software as if they were physical tape libraries, but use some form of virtual disk-pooling software to write incoming backup data to random-access disk first, migrating it later to a back-end physical tape library.
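The staging pattern behind both approaches can be summarized as: recent backups land on a fixed-size disk pool, and older jobs are moved off to tape. A minimal sketch of that policy follows; the names (`BackupJob`, `StagingPool`, `migrate_to_tape`) are illustrative, not any vendor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class BackupJob:
    name: str
    size_gb: int
    age_days: int

@dataclass
class StagingPool:
    """Disk staging area that holds recent backups before tape migration."""
    capacity_gb: int
    jobs: list = field(default_factory=list)

    def used_gb(self):
        return sum(j.size_gb for j in self.jobs)

    def stage(self, job):
        # New backups must fit in the disk pool; otherwise migrate first.
        if self.used_gb() + job.size_gb > self.capacity_gb:
            raise RuntimeError("staging pool full; migrate to tape first")
        self.jobs.append(job)

    def migrate_to_tape(self, max_age_days):
        """Move jobs older than the retention window off to tape."""
        to_tape = [j for j in self.jobs if j.age_days > max_age_days]
        self.jobs = [j for j in self.jobs if j.age_days <= max_age_days]
        return to_tape
```

In a real product this policy runs continuously and transparently; the point is simply that disk holds the recent, restore-likely data while tape absorbs the rest.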
(In the case of one user we spoke with, a third approach emerged: STORServer's all-in-one software/hardware "backup appliance," which includes Tivoli Storage Manager [TSM] software to perform what STORServer calls "incremental forever" backups and rapid restores to its own virtual storage pool. Appliance functionality also includes TSM's policy-based management features to help assign and automatically apply retention rules to related backup data.)
Scott Robinson, CTO of Chanhassen, MN-based Datalink, an integrator specializing in information storage architectures, says his company educates customers about D2D by ascribing the use of disk to one of four architectures: backup to disk, tape emulation, off-host backup (originating from the use of a third mirror or replicated copy of primary data), and replication-based backup (involving some form of replication, snapshot, or point-in-time copy).
(A Datalink white paper, "Four disk-based approaches to enhancing data recovery," available from the company Website, covers the pros and cons of each architecture.)
Regardless of which D2D architecture you favor, VARs and integrators say it pays to look closely at whether your current backup software will even handle disk-based backup and how it implements this functionality. "There's a misconception out there that any [backup] software supports D2D, or that you can put any low-cost disk in there and it will do the job," says Dave Holloway, chief operations officer at Aliso Viejo, CA-based West Coast Technology. West Coast Technology is a storage VAR that has implemented virtual tape libraries from multiple vendors, including several REO series VTLs from Overland Storage.
Restore issues take center stage
According to solution providers like Nth Generation's Baldwin, most customers deploy D2D to help them speed up unacceptably slow restore times for end-user files. Streamlining backup windows usually comes in a close second to rapid restores.
Take the case of Alpharetta, GA-based Per-Se Technologies, a third-party healthcare company that provides back-office billing and claims processing for hospitals and physicians. Per-Se's IT group is in the first phase of implementing a three-phase, disk-based backup upgrade.
According to Eric Chester, Per-Se's director of systems engineering and support, this phase has involved moving all of the company's Intel/Windows-based gear to a D2D backup architecture. With the help of Duluth, GA-based solution provider VeriStor, Chester and his team decided on a direct-to-disk backup architecture that employs Veritas NetBackup 5.0 software along with four SATA-based ATAboy storage sub-systems from Nexsan Technologies.
"Very tenuous" is how Chester described the prior state of restores available for files previously stored on the company's Windows-based departmental servers. According to Chester, the disk-based backup upgrade is part of a larger server consolidation effort the company has undertaken to provide more-centralized data protection.
To aid in restores, Chester decided Per-Se would need an initial 23TB of SATA disk in order to keep the last two weeks of data available on disk. He says they now routinely fill up the ATAboy subsystems during their nightly or weekly backups, along with immediately sending the data to tape for off-site disaster-recovery purposes.
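Sizing a disk pool for a retention window like Per-Se's is straightforward arithmetic: count the full and incremental backups that must stay online at once. The sketch below uses illustrative figures (an assumed 8TB weekly full and 600GB nightly incremental, chosen only to land near the 23TB ballpark); they are not Per-Se's actual volumes.

```python
def disk_needed_gb(full_gb, daily_incr_gb, retention_days, fulls_per_week=1):
    """Rough disk capacity to keep `retention_days` of backups online.

    Assumes one weekly full plus daily incrementals. All inputs are
    hypothetical planning numbers, not any vendor's or customer's data.
    """
    weeks = -(-retention_days // 7)          # ceiling division
    fulls = weeks * fulls_per_week
    incrementals = retention_days - fulls    # remaining days run incrementals
    return fulls * full_gb + incrementals * daily_incr_gb

# e.g. 8TB fulls, 600GB nightly incrementals, two-week window:
# 2 fulls + 12 incrementals = 23,200GB, i.e. roughly 23TB of SATA disk.
```

Real sizing also has to account for growth and for the RAID overhead of the subsystem, so a working estimate is usually padded well beyond the raw sum.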
Taking a different road to D2D is Joe Ambrosino, a networking operations manager for Philadelphia-based Sovereign Bank. Ambrosino and his team worked with Unicom, a consulting firm in Woonsocket, RI, to implement SAN-based backups at a few of the company's data centers.
Taking advantage of 2Gbps rates
Sovereign Bank began looking at D2D as a way to take advantage of the SAN's fast, 2Gbps transfer rates when performing backups. According to Ambrosino, the bank originally considered connecting a tape library to the SAN but ruled out that option because it might have required streaming backups to as many as four tape drives simultaneously to get the transfer rates the bank wanted.
Unicom helped Ambrosino explore the use of a virtual tape library solution, whose purported ease of implementation and minimal impact on his existing backup system were what initially drew Ambrosino to the technology.
Sovereign eventually decided on a VTL from Sepaton ("no tapes," spelled backwards), whose road map included software features that appealed to Ambrosino, such as replication, synthetic full backups, and the ability to perform user-driven restores. Ambrosino reports that it only took a few hours to set up the VTL and resume backup operations.
Ambrosino says that part of the reason for Sovereign Bank's subsequent smooth-sailing experience with D2D had to do with how well the bank sized the VTL to accommodate keeping the last two weeks of data online for restores. This translated into 50TB in Sepaton libraries now spread across four data centers.
Also drawn to D2D for its promise to improve on the speed of both backups and file restores was Alex Schmauss, a system administrator at Northern Inyo Hospital in Bishop, CA. "The pivotal reason we looked at disk-to-disk was because we were really challenged to try to ensure the backup we advertised [was what] we were offering. We were not meeting the requirements we had to meet, so we really didn't have a choice [about upgrading]," he says.
Schmauss and his team are responsible for backing up close to 2TB of data, much of it scanned into the hospital's document management system.
The rapid restore capability of STORServer's backup appliance was a key factor that influenced Schmauss' ultimate decision to implement the appliance. "I saw how easy it was to get a user file back, in about two minutes," says Schmauss.
In all, most customer and solution-provider experiences lend credence to the original promises of D2D backup/restore. Early problems with unreliable SATA drives or software that did not recognize disk as a backup medium appear to have been resolved. However, VARs and integrators caution users not to get so carried away by D2D that they forget the important role tape still plays in applications such as off-site disaster recovery and remote replication.
Michele Hope is a freelance storage writer and owner of TheStorageWriter.com. She can be reached at email@example.com.
ADIC integrates disk, tape
By Heidi Biggar
With the 2.0 release of its Pathlight VX disk-based backup product next month, Advanced Digital Information Corp. (ADIC) claims it is taking steps toward addressing some of the problems with existing disk-based backup products, including issues with performance, scalability, management, cost, and disaster-recovery support.
ADIC has integrated CLARiiON ATA RAID disk subsystems with ADIC Scalar and StorageTek L-Series tape libraries in a unified system that appears as a virtual tape library to applications. Access to data is automated and managed according to user-defined policies.
The policy-based management capability is enabled by ADIC's StorNext data management software. The Pathlight VX system also integrates ADIC's iPlatform library technology and storage networking connectivity technology to create the SAN appliance.
Users determine the mix of disk and tape storage according to recovery time objectives (i.e., how fast the data needs to be restored) and cost issues. For example, users with shorter RTOs may choose to keep more weeks of data on disk than those with longer RTOs.
Similarly, budget-constrained users may opt to back up only several days' worth of data to disk to keep acquisition costs down. In this case, the idea is to keep only data that needs to be recovered quickly on disk and all other data on tape, which is still significantly less expensive than disk on a per-megabyte basis.
For example, a 45TB system configured with all disk would cost about $15.80/GB, while a 45TB system configured with only 3.8TB of disk would cost about $7.50/GB. Configure the system with 300TB of tape and just 23TB of disk and the price drops to about $2/GB, according to ADIC.
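The blended price is just a capacity-weighted average of the two media costs, which is why adding tape pulls the figure down so sharply. A quick sketch, using hypothetical per-GB prices (disk at $15.80/GB from the all-disk figure above, tape assumed at $1/GB; these are not ADIC's actual list prices):

```python
def blended_cost_per_gb(disk_tb, tape_tb, disk_cost_gb, tape_cost_gb):
    """Blended $/GB of a mixed disk/tape configuration.

    Prices are illustrative assumptions, not ADIC's actual figures.
    """
    total_gb = (disk_tb + tape_tb) * 1000
    total_cost = disk_tb * 1000 * disk_cost_gb + tape_tb * 1000 * tape_cost_gb
    return total_cost / total_gb

# All-disk 45TB system at $15.80/GB disk:
all_disk = blended_cost_per_gb(45, 0, 15.80, 1.00)      # 15.80
# 23TB disk + 300TB tape: the tape-heavy mix lands near $2/GB,
# consistent with the pattern in ADIC's published figures.
mostly_tape = blended_cost_per_gb(23, 300, 15.80, 1.00)
```

The exercise shows the shape of the economics rather than exact pricing: the more of the retention window that can live on tape, the closer the blended cost gets to tape's per-GB price.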
With the 2.0 release of Pathlight VX, ADIC also claims to have doubled throughput to the CLARiiON arrays, to 2TB/hour.
"What's interesting about the Pathlight announcement is its performance," says Dianne McAdam, a senior analyst and partner at the Data Mobility Group consulting firm. "ADIC cites customer examples [specifically, JetBlue] where the disk-based backup using Pathlight is twice as fast as tape and the restore rate is more than twice as fast as tape."
For disaster recovery, Pathlight VX 2.0 exports application-readable media, which can be read in any standard tape drive or library. This also allows users to create multiple copies of data on different media types for long-term data protection.
Pathlight VX 2.0 scales from 3.8TB to more than 2.8PB.