Fabric-based replication: new option for BC/DR

Posted on January 01, 2005

Running applications such as replication from network-based devices provides a number of benefits, including lower costs and increased flexibility.

By Heidi Biggar

An increasing number of users are implementing fabric-based replication for disaster recovery and business continuity. Some of these users are doing replication for the first time, while others are using fabric-based replication in conjunction with, or even in lieu of, the host- or array-based replication technologies they have used for years.

Either way, analysts say fabric-based replication in various forms is garnering much-deserved attention from both large and small users because of its obvious benefits: It allows users to safeguard data in a flexible, cost-effective, and easy manner.

Although a number of vendors offer fabric-based replication technologies (and many more are, or soon will be, offering switch-based implementations), this article focuses on users’ experiences with products from four vendors: FalconStor, Troika (with StoreAge software), DataCore, and Kashya. We’ll take a look at switch-based alternatives in a future issue.

Thompson Hine LLP
Implementation: FalconStor

Thompson Hine, a large business law firm, is in the midst of a three-year, three-phase disaster-recovery project that is atypical for an organization of its type and size. Replication, based on FalconStor’s IPStor, is one of the key components of the project.

“Thompson Hine didn’t have a rock-solid disaster-recovery plan in place,” says Greg Knieriemen, vice president of marketing at Chi Corp., which was instrumental in Thompson Hine’s disaster-recovery efforts, including the decision to implement FalconStor’s IPStor for replication.

The firm, which recently completed phase 2 of its disaster-recovery project, is currently using FalconStor’s IPStor to replicate data between its Cleveland headquarters and offices in Cincinnati, New York City, and Washington, DC.

“Our original focus was to replicate all our e-mail and document data out of Cleveland to Cincinnati, but it’s worked so well that we also applied it to New York and Washington, DC, last year and now plan to take it to our other offices,” says Ed Carroll, associate director of technology operations at Thompson Hine.

The firm is currently replicating 5GB to 6GB of document data and 60GB to 170GB of e-mail data daily over a dedicated T1 connection between Cleveland and Cincinnati, and about 300MB to 700MB of document data and 2GB to 7GB of e-mail data from its Washington, DC, office to its Cleveland headquarters.
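For context, a quick back-of-the-envelope calculation (an illustrative assumption, not a figure supplied by Thompson Hine): a standard 1.544Mbps T1 moves at most about 16GB per day, which suggests the larger daily e-mail totals represent the data being protected rather than full copies crossing the wire, and underscores why block-level, changed-data replication and compression matter on a link this size.

# Back-of-the-envelope T1 capacity check (illustrative assumption: a
# standard 1.544Mbps T1 running continuously with no protocol overhead).
T1_BITS_PER_SECOND = 1.544e6
SECONDS_PER_DAY = 86_400

bytes_per_day = T1_BITS_PER_SECOND * SECONDS_PER_DAY / 8
gb_per_day = bytes_per_day / 1e9
print(f"T1 daily ceiling: ~{gb_per_day:.1f} GB")   # ~16.7 GB

# Reported peak daily volumes on the Cleveland-Cincinnati link
for label, high_gb in [("documents", 6), ("e-mail", 170)]:
    note = "fits" if high_gb <= gb_per_day else "exceeds the raw line rate"
    print(f"{label}: up to {high_gb} GB/day ({note})")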

As part of the replication installation, Thompson Hine also implemented Nexsan ATABeast disk arrays in Cleveland and Cincinnati, as well as Nexsan ATAboy disk arrays in New York and Washington, which it is using as shared tier-two storage. The firm is also leveraging FalconStor’s IPStor to do synchronous mirroring at its Cleveland office (between a Hitachi-based SAN and a Nexsan ATABeast array) for redundancy.

“We found that we could get more out of IPStor than just replication, and we didn’t want any single points of failure,” explains Carroll. “We wanted to put something in place so that if anything catastrophic happened with our Hitachi array, users would see no difference.” Carroll says he is currently running production data off Nexsan ATABeast arrays at two of the firm’s offices and has seen no performance problems.

Thompson Hine also reports improved flexibility and better storage efficiency as a result of its IPStor virtualization implementation. In particular, Carroll says it gave him the flexibility to resize LUNs as needed, which allows him to make better use of the Hitachi disk array by reducing the amount of wasted disk space. Also, Carroll says that IPStor is easy to use and does not require separate staff to manage it.
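The flexibility Carroll describes comes from the virtualization layer sitting between hosts and physical arrays: logical LUNs are carved from a shared pool of capacity and can be grown on demand instead of being over-provisioned up front. The following sketch is a generic illustration of that pool-and-allocate model, not IPStor code:

# Illustrative-only model of virtualized LUN allocation; not IPStor code.
# A shared capacity pool backs logical LUNs that can be grown on demand,
# so volumes no longer have to be over-provisioned up front.
class StoragePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.luns = {}                      # name -> allocated GB

    @property
    def free_gb(self):
        return self.capacity_gb - sum(self.luns.values())

    def create_lun(self, name, size_gb):
        if size_gb > self.free_gb:
            raise ValueError("pool exhausted")
        self.luns[name] = size_gb

    def resize_lun(self, name, new_size_gb):
        delta = new_size_gb - self.luns[name]
        if delta > self.free_gb:
            raise ValueError("pool exhausted")
        self.luns[name] = new_size_gb

pool = StoragePool(capacity_gb=2000)
pool.create_lun("exchange_logs", 100)       # start small...
pool.resize_lun("exchange_logs", 250)       # ...grow only when needed
print(pool.free_gb)                         # 1750 GB left in the pool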

Prior to implementing FalconStor’s IPStor, Thompson Hine had been using Veritas’ Volume Replicator for replication. Carroll says he was less than happy with the product for service and support reasons and therefore was receptive to Chi’s initial suggestion of implementing IPStor.

By phasing in the disaster-recovery project over three years, Thompson Hine says it was able to “sell” the idea to management. The firm declined to disclose pricing information for any of the technologies involved, including FalconStor’s IPStor.

Hunterdon Medical Center
Implementation: DataCore

For Hunterdon Medical Center, a non-profit community hospital in Flemington, NJ, the decision to bring replication technology in-house was part of a larger effort to overhaul the center’s hospital information system (HIS).

“We were looking to refresh the HIS hardware, and we saw it as the perfect opportunity to also bring in new technology for our disaster-recovery and SAN efforts,” says Alberto Cruz-Natal, technical manager at Hunterdon Medical Center. The hospital had recently completed construction of a physical disaster-recovery facility but had not yet put the technologies in place to safeguard the data being generated at the hospital.

Quadramed, a provider of HIS systems, was instrumental in bringing DataCore’s SANsymphony product to Hunterdon. Early in the project, the hospital’s disaster-recovery and SAN efforts were centered around the HIS implementation; over time, however, the DR/SAN component was spun out on its own.

Hunterdon considered IBM for replication (it had IBM Sharks installed on-site), but for a variety of reasons, including cost, lack of virtualization support (at the time), and complexity, the hospital decided to go with DataCore.

“We couldn’t afford all the bells and whistles IBM’s replication offered,” says Cruz-Natal. “And DataCore gave us the flexibility to leverage any type of disk storage on the back-end.”

Hunterdon uses DataCore’s Asynchronous IP Mirroring (AIM) capability, a component of SANsymphony, to replicate data across five T1 lines from its primary data center at Hunterdon Medical Center to a second facility about 15 miles away. The hospital also takes snapshots every 24 hours using the AIM module for additional data protection; the snapshots, or point-in-time copies, are also used for analysis and testing purposes.

Explains George Teixeira, president, CEO, and co-founder of DataCore: “Hunterdon is mirroring [replicating] over IP so that both sites have the same copy. Then, in order to avoid having any impact or interruption on either site, the hospital takes a snapshot of a disk at a particular point-in-time during the day. Production and AIM keep moving along doing their thing [i.e., copying data to the remote site]; meanwhile, the hospital can use the point-in-time copy for analysis and testing.”
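The pattern Teixeira describes, asynchronous mirroring over IP plus a point-in-time copy that can be read without disturbing production or replication, can be sketched generically as follows. The class and method names are illustrative only and are not DataCore’s API, and a real implementation would use copy-on-write rather than a full copy for the snapshot:

# Vendor-neutral sketch of asynchronous mirroring plus a point-in-time
# snapshot. Names are illustrative only; this is not DataCore's API.
from collections import deque

class AsyncMirror:
    def __init__(self):
        self.primary = {}          # block -> data at the production site
        self.replica = {}          # block -> data at the remote site
        self.pending = deque()     # writes queued for the WAN link

    def write(self, block, data):
        """Application write: lands on the primary immediately and is
        queued for later transmission to the replica."""
        self.primary[block] = data
        self.pending.append((block, data))

    def drain(self, max_blocks=100):
        """Background task: push queued writes over the (slow) WAN link."""
        for _ in range(min(max_blocks, len(self.pending))):
            block, data = self.pending.popleft()
            self.replica[block] = data

    def snapshot(self):
        """Point-in-time copy of the primary; production and replication
        keep running while the copy is used for analysis or testing."""
        return dict(self.primary)

if __name__ == "__main__":
    m = AsyncMirror()
    m.write(0, "record-v1")
    snap = m.snapshot()              # frozen view for testing
    m.write(0, "record-v2")          # production keeps moving
    m.drain()                        # replication catches up asynchronously
    print(snap[0], m.primary[0], m.replica[0])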

Currently, Hunterdon is replicating about 1GB per day, although this number is expected to increase as more systems and applications are added to the environment over time. Data is currently replicated between two IBM Shark arrays, one at the primary data center and another at the disaster-recovery facility. The hospital may replicate to lower-end disk arrays in the future.

Two DataCore SANsymphony storage domain servers (SDSs) are currently installed in the hospital’s primary data center, with a single SDS at the disaster-recovery location. The hospital plans to add a second SDS at the recovery site this summer.

Cruz-Natal says the hospital’s SANsymphony implementation has been nearly flawless. “The simplicity of its use can’t be overstated,” he says. The cost of the implementation was not disclosed.

American Institute of Physics
Implementation: Kashya

For the American Institute of Physics (AIP), a scientific industry service provider with about 140 online journals, the mantra “it is daytime somewhere, any time” governs the way it does business. And that means anything less than 99.98% uptime is unacceptable, even during scheduled maintenance.

AIP is one of those rare non-financial institutions that actually tests its disaster-recovery plans on a regular basis, three times a year. “If it’s not 100% successful, we’re not happy,” says James Wonder, manager of the online-systems division at AIP.

To ensure data availability in a disaster situation and to meet its recovery time objective (RTO) of 48 hours, AIP built a disaster-recovery site about 20 miles away from its primary location on Long Island. It also began to look at products that would enable it to replicate data between the two sites. AIP evaluated a variety of products, including FalconStor’s IPStor and Veritas’ Volume Replicator, but none of the products fit the bill, according to Wonder.

“Veritas Volume Replicator is a good product, but it just didn’t meet our requirements,” says Wonder. In particular, he didn’t like the fact that it [and other available options] was software-based, and he was concerned about its ability to perform up to his tough standards. “We’re a high-transaction environment [last month AIP recorded 52 million hits], and I wanted nothing that would interfere with our system at all,” he explains.

AIP then decided to build its own replication product in-house. Although the homegrown product worked well and met all the performance criteria, the amount of money the institute was spending on people to keep the site going ultimately became its undoing. AIP renewed its search for a replication product and discovered Kashya’s replication appliances.

“Again, we couldn’t find exactly what we needed, until a friend told me about Kashya’s KBX4000 Data Protector Appliance,” says Wonder. “We decided to do a ‘try and buy’ because we were skeptical, but it worked.”

Wonder was not only able to get the product installed within two hours, but was also able to run it without any performance impact.

On average, AIP is getting 8:1 compression with the Kashya appliance; with textual PDF files, however, compression has been as high as 20:1 to 30:1, according to Wonder.

The Kashya appliance provides near-synchronous replication, which means there is a slight lag between the two sites. However, the appliance allows users to set policies to determine the lag times for various types of application data.
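Conceptually, such a policy amounts to a per-application bound on how far the remote copy is allowed to fall behind. The sketch below is a generic illustration of that idea; the names and values are hypothetical and are not Kashya’s actual configuration syntax:

# Generic illustration of per-application replication-lag policies;
# hypothetical names and values, not Kashya's configuration syntax.
MAX_LAG_SECONDS = {
    "journal_database": 5,     # near-synchronous: tight lag bound
    "web_content": 60,         # can tolerate a larger gap
    "build_artifacts": 600,    # loosest bound
}

def lag_violations(observed_lag):
    """Return the applications whose replication lag exceeds policy."""
    return [app for app, lag in observed_lag.items()
            if lag > MAX_LAG_SECONDS.get(app, 0)]

# Example: current lag, in seconds, for each replicated application
print(lag_violations({"journal_database": 12, "web_content": 30}))
# -> ['journal_database']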

AIP has replicated about 3TB of data to date using the Kashya platform. Its environment consists of 14 Sun servers and StorageTek D178 disk arrays. Two Kashya KBX4000 Data Protector Appliances are currently installed, in a clustered configuration, at both the primary and secondary data sites for redundancy.

Even though cost wasn’t the deciding factor (people time and distance were), it was a consideration. “We looked at everything, and Kashya was priced much lower than [the big-name alternatives], such as EMC’s SRDF,” says Wonder.

