By Michele Hope
When disk-to-disk (D2D) backup first became part of the storage industry lexicon, a handful of intrepid users volunteered to put the first wave of disk-based backup products through its paces. Still others decided to wait and see how the emerging technology would ultimately play out.
Since then, many users have successfully incorporated D2D into part, or sometimes all, of their backup routine. Although some users have replaced tape with disk entirely, most have taken a middle-ground approach, using disk as a short-term staging area for backup data that is subsequently streamed to tape.
The following case studies illustrate how some end users are taking advantage of D2D backup/recovery, with an emphasis on some of the new architectures.
De-dupe and compress
Several organizations use D2D systems that offer high compression ratios for backup data. This technology, also known as de-duplication, single-instance storage, or content-optimized storage (COS), typically eliminates the backup of duplicate data by identifying and storing just the block-, byte-, or bit-level changes or additions made to data sets since the prior backup. As a result, users can back up their data using a fraction of the disk storage they would otherwise require.
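None of the vendors mentioned here publish their algorithms, but the general block-level approach can be sketched in a few lines. The fixed block size and in-memory hash-table "store" below are simplifying assumptions for illustration, not any product's actual design:

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; commercial products often use variable-size chunking

def dedupe_backup(data: bytes, store: dict) -> list:
    """Split data into blocks, store each unique block once, and
    return the list of block hashes (the 'recipe' needed to restore)."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:        # only new or changed blocks consume space
            store[digest] = block
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its block recipe."""
    return b"".join(store[d] for d in recipe)

# Two nightly backups that differ in only one block
store = {}
monday = b"A" * 8192 + b"B" * 4096
tuesday = b"A" * 8192 + b"C" * 4096
monday_recipe = dedupe_backup(monday, store)
tuesday_recipe = dedupe_backup(tuesday, store)
print(len(store))   # 3 unique blocks stored for 6 blocks' worth of backups
```

Because the second night's backup shares most of its blocks with the first, only the changed block adds to physical capacity, which is where the dramatic ratios cited below come from.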
One such user is Paul Scheib, director of IS operations at Children’s Hospital in Boston, a non-profit Harvard University teaching hospital and one of the largest pediatric hospitals in the country. For their first foray into disk-based backup, Scheib and his team tried virtual tape library (VTL) technology to address the growing tape hardware issues and errors they had begun to experience when they were attempting to back up 80TB of data from the hospital’s estimated 400 Unix and Windows servers.
After less-than-satisfactory results with the VTL and no significant increase in backup speed, Scheib concluded the fault lay not so much with VTL technology as with the hospital's backup architecture, which needed an overhaul.
After completing the re-architecting, which involved consolidation to larger media servers, Children’s Hospital took a new look at D2D with Data Domain and its claim of 20:1 compression ratios. Scheib figured 20:1 sounded a lot better than the 3:1 compression ratios some VTL solutions had claimed when he first ventured into D2D.
The hospital wanted to save as much as two months of backup data on disk, so compression ratios would play a key role in finding an affordable D2D solution. Needing to back up somewhere between 80TB and 100TB of production data each week (and a two-month equivalent of more than 800TB of traditional backup data), Scheib and his team projected they would need only 50TB of physical disk capacity in a Data Domain DD400 Enterprise Series system. This estimate has since played out, thanks to sustained compression ratios of approximately 20:1 across the board, which surprised Scheib because he had expected more fluctuation in compression ratios for different types of data.
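The capacity math behind that estimate is a simple division. The figures below are the ones quoted above; the calculation itself is just a back-of-the-envelope restatement:

```python
# Back-of-the-envelope capacity math using the figures quoted above
logical_backup_tb = 800     # roughly two months of traditional backup data
compression_ratio = 20      # Data Domain's claimed 20:1

physical_tb = logical_backup_tb / compression_ratio
print(physical_tb)          # 40.0TB, comfortably inside the 50TB purchased
```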
Says Scheib, “When you look at the usable space we get with compression, it comes out to at least half the cost of a Clariion or EVA-class disk array, and probably greater than that. If you buy into getting a 20:1 compression ratio, it’s just hard for anything to compete from a price standpoint.”
Scheib now has six Data Domain systems plugged into the hospital’s Ethernet network, with plans to move them to a secondary location where backup jobs will be subsequently redirected over a WAN. Although his team currently replicates nightly to tape, he looks forward to moving to just monthly tape backup once backups to the secondary location are up and running.
D2D reduces stress
Virgil Vaclavik, technical services manager at oil industry manufacturer Hydril LP, was also impressed with the compression rates he began seeing from his company’s recently implemented EVault InfoStage software. The software uses EVault’s DeltaPro technology to compress and encrypt backup data prior to sending it to a local or remote “vault” also managed by the customer. It identifies and transmits only new or changed data blocks.
Being in hurricane territory, Vaclavik and his team were prompted by Hydril LP’s board of directors to come up with a more rapid disaster-recovery plan than Hydril’s previous tape-based infrastructure allowed. After witnessing the several days of traffic gridlock that ensued from the previous season’s hurricane warnings, the company was concerned it might not have such easy access to off-site tapes if a disaster ever struck the corporate headquarters.
Vaclavik decided to search for a disk-based backup/recovery solution that could encompass the backup needs of all of his company’s systems, including an AS/400 with close to 1TB of production data. EVault’s InfoStage turned out to be the answer.
At Hydril, InfoStage is installed in a dual-vault configuration: The primary InfoStage vault is at Hydril’s main data center in Houston and is based on a Dell 1750 server with 4.5TB of usable disk space provided by a back-end EMC AX100 system. A matching Dell/EMC system located about 35 miles away acts as Hydril’s InfoStage remote vault. To ensure fast recovery if both sites are affected by a widespread power outage, Vaclavik figures the company could even ship a few portable vault servers, equipped with the company’s data, to the company’s colocation facility in the northeastern US.
With the compression functionality, Vaclavik claims that backing up the 1TB of AS/400 data to the remote vault now requires only 150GB of disk space. In fact, backing up a total of 12TB of production data from Hydril’s Windows and Linux servers, along with the AS/400, requires less than 2.5TB of disk space for the compressed backup copies on either the local or the remote vault.
“The compression ratio is phenomenal,” says Vaclavik. “What’s nice is that the system does bit-wise changes. If you have an SQL Server database and just two records in the database change, the only thing replicated to the vaults is those two records.”
Phil Jay, a senior network technician at the Gates Chili School District, in Rochester, NY, is quite happy with the 20:1 backup compression rates offered by his ExaGrid NAS-based InfiniteFiler system, which includes a SATA-based Intelligent Data Repository.
But what really gets him excited is not the compression. Jay used to spend as much as 45 minutes a day monitoring, managing, and troubleshooting the multiple backup jobs that had been scheduled to run the previous night. The staggered backup jobs were designed to protect data on the 14 servers that support the school district’s administrative staff applications, as well as high school and middle school staff and students involved in creating large files for various art, graphics, and technical classes.
Jay’s headaches involved managing local and remote tape rotations among the various servers, running between servers to view tape activity, and battling backup window overruns from the middle school and high school data that was starting to max out nightly tape backup capacities.
The ExaGrid system now frees him of these types of headaches. Jay applauds the unified console from which he can now view all disk and tape activity across all 14 servers. Instead of 45 minutes a day, he estimates he now spends two to three minutes at the console reading through logs. Since installing the ExaGrid system, he has moved completely away from the use of tape and estimates it will take the school district three to four years to recoup the cost of the ExaGrid system, based on yearly tape savings alone.
Jay now plans to deploy a second ExaGrid system for remote disaster recovery at one of the school district’s elementary schools farthest removed from the main campus.
Keep the tape
Some D2D users are not sold on the concept of disk replacing tape entirely and choose to use disk for backup in only specific instances.
For Travis McCulloch, systems engineer at Orlando-based Hilton Grand Vacations, situations lending themselves to disk-based backup include applications with short backup windows or rapid recovery needs, and remote office backups that traverse the WAN and benefit from the throughput speed of random disk access at the corporate data center.
McCulloch uses CommVault’s Galaxy software for disk-based backup in these cases, with some disks and “a fast group of spindles” he’s allotted from both an HP MSA1000 system and an EMC CX100. The Galaxy software offers both direct-to-disk backup functionality and VTL functionality to emulate a tape library. McCulloch appreciates how easy the system makes it to manage and restore backup data, without needing to send a file back to a “fake-tape” staging area first, as he had to do with a prior VTL solution.
One of McCulloch’s main D2D uses is to support the backup needs of the company’s remote offices in Las Vegas and Hawaii. Instead of backing up locally, Hilton Grand Vacations has chosen to centralize and encrypt remote backups by sending the backup data over the company’s existing WAN link.
Here, Galaxy’s disk-based backup functionality, together with a Riverbed Technology Steelhead WAN accelerator that sends only new or changed data, is used to transmit the nightly data more quickly. Once a data set arrives at the primary data center, the Galaxy software copies it to tape. Data sets stored on disk are soon overwritten by new backup data, making the disk a short-term staging area for backups waiting to be offloaded to tape.
McCulloch figured he could transmit only about 1GB of data per hour to tape over his current WAN connection, and he’d end up with a lot of tape troubles in the bargain. Rather than tie up one of his Orlando tape drives for nine hours to back up 10GB of data, McCulloch opted for disk backup instead. “It’s much faster to run that to disk, and much easier on the equipment. Plus, when I back up that 10GB locally to tape, it only takes 10 minutes,” McCulloch explains.
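The trade-off McCulloch describes is easy to quantify. The throughput figures below are the ones he cites; the comparison itself is illustrative:

```python
# Figures quoted by McCulloch; the comparison itself is illustrative
wan_rate_gb_per_hr = 1.0      # observed rate backing up to tape over the WAN
backup_size_gb = 10

hours_over_wan = backup_size_gb / wan_rate_gb_per_hr
print(f"Over the WAN to tape: ~{hours_over_wan:.0f} hours")  # on the order of the 9 hours he cites
print("Locally to tape: ~10 minutes")                        # his quoted local figure
```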
More disk staging
Another CommVault Galaxy user is Matt Pittman, director of enterprise systems at Penson Financial Services, an organization that handles much of the back-end work to clear transactions for brokerage houses. Pittman recently acquired two Xiotech Magnitude 3D 3000 disk arrays to store primary data and to tackle both his data-protection and tiered-storage agendas. One of the disk arrays is at the company’s Dallas headquarters, while the other is at an off-site disaster-recovery location.
Pittman knew how critical it was for the company to maintain access to Microsoft Exchange e-mail transactions at all times. He also knew his tape-based backup environment might require as much as a full day to restore the system in the event of an outage. He went looking for a disk-based solution that would let him recover Exchange from the point of an actual outage in just two to four hours.
He eventually went with a multi-pronged approach to data protection and data migration. Pittman now uses CommVault’s Galaxy to perform local backups to a group of SATA drives in the Xiotech Magnitude array (the total chassis comprises 50% Fibre Channel disks and 50% SATA disks). The local backup data is then streamed directly to an LTO-3 tape drive, at which point it is overwritten daily on disk. To ensure rapid recovery, Pittman also uses Xiotech’s DataScale Geo-Replication software to perform synchronous mirrors of data between the primary disk array and the secondary, off-site system.
This multi-pronged approach has led to a number of positive changes: Local backup times have been cut in half since the company switched from tape to disk-based backups. Pittman says the change now allows his group to do full backups of Exchange each night, as opposed to once a week. He also claims that they can restore Exchange in less than 10 minutes if needed, including failover to remote servers, well under his original goal of restoring the system in two to four hours.
Pittman has chosen to tackle the growing volume of backup data more proactively at Penson Financial Services via an emerging information lifecycle management (ILM) strategy that also incorporates Xiotech disk arrays, in conjunction with CommVault’s Data Migrator software. He now uses this solution to identify and migrate certain e-mail from primary storage to SATA disks if it has not been accessed in a certain number of days. This helps him maintain a more manageable size for the overall production data requiring backup.
D2D + ILM
Often touted as a nearline source of affordable disk storage, SATA-based storage subsystems are cropping up at a number of sites as a destination both for disk-based backup data and for data being migrated “downstream” from primary storage as part of an ILM strategy.
K.C. Tomsheck, senior director of IT operations at Vernon Hills, IL-based reseller CDW, chose to piggyback a D2D architecture on top of a larger ILM project that now consists of 14 individual SANs on the network, including EMC Clariion CX700s, EMC Celerra, and an EMC Centera archive.
Tomsheck decided that an ILM approach would allow him to ultimately buy less storage over time by better using the storage CDW already had.
Tomsheck and his team needed a disk-based solution that could successfully back up the hundreds of terabytes currently in production. They decided to bring in a few EMC Clariion Disk Libraries (CDLs), a type of VTL that uses Legato NetWorker software in conjunction with large Clariion CX700 disk arrays on the back end. One CDL is used at the Vernon Hills data-center location, with another in use several miles away at a remote disaster-recovery site.
Tomsheck is pleased with the local backup speeds he’s getting from server to CDL: approximately 2.8GB to 3.2GB per minute, compared with prior backups to tape that ran at 200MB to 300MB per minute. Backups of SQL Server databases also now complete in just one to one-and-a-half hours, as opposed to the previous eight hours.
Another organization that decided to tackle a variety of storage issues, including disk-based backup, within one storage solution is the NYU School of Medicine. According to director of systems Jeff Berliner, avoiding the pain of potential downtime to the school’s e-mail server became the impetus behind the school’s ultimate move to networked storage in the form of two Compellent Storage Center SAN solutions.
Unfortunately, it took a painful, weeklong e-mail outage to first convince the organization that networked storage with disk-based backup and replication was a better way to go. Now, all e-mail data resides on one SAN at the primary data center. Using Compellent’s disk-based snapshot technology, which the vendor calls Data Instant Replay, Berliner and his team have scheduled an “instant replay” of specific production volumes to be taken every 15 minutes.
The instant replays are saved on another volume on the SAN for two hours before being overwritten by newer instant-replay images.
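That schedule implies a fixed number of recovery points on disk at any moment. A quick sketch using only the interval and retention period quoted above:

```python
# Recovery points implied by the 15-minute interval and 2-hour retention above
interval_minutes = 15
retention_minutes = 2 * 60

replays_on_disk = retention_minutes // interval_minutes
print(replays_on_disk)   # 8 recovery points, the newest at most 15 minutes old
```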
The Compellent SAN uses a variety of drive types, including SCSI, Fibre Channel, and nearline “Fibre ATA,” which the vendor claims offers performance characteristics close to Fibre Channel at approximately one-third the cost. This mixture works well for Berliner’s needs, allowing him to use the primary SAN and now a second Compellent system, located a few blocks away, to host primary production data as well as perform remote asynchronous replication between the two systems via Compellent’s Remote Instant Replay functionality.
Berliner recognizes his use of D2D has fundamentally changed the role of tape. “We can now have a fully mirrored copy of our data redundant not just locally but also three to four blocks away,” he explains. “This is really where tape comes in now. If any disaster takes out both of these sites, the recovery strategy at that point is to use our off-site tapes to perform a complete data-center recovery.”
Michele Hope is a freelance storage writer. She can be reached at email@example.com.