IP storage solutions for disaster recovery

Posted on July 01, 2003


Using IP for remote storage applications may offer cost advantages and, in many cases, can leverage existing infrastructure.

By Gary Orenstein

IP storage transport protocols, combined with conventional remote storage solutions, can be implemented across a wide selection of IP networking options, each with its own strengths and weaknesses.

This article clarifies the network options available for remote storage solutions and makes a few suggestions for achieving effective deployment.

Competitive pressures now mandate remote recovery sites for virtually all businesses. While many companies can accept the time required to restore from tape backup in return for a low-cost solution, others cannot. Businesses relying on mission-critical applications need remote, online storage solutions to guarantee availability. These solutions often start as simple backup operations and can graduate to more-sophisticated replication and clustering, depending on uptime requirements. In every scenario, ultimate success hinges on an intense focus on recovery.

Until the advent of IP storage protocols such as iSCSI, iFCP, and FCIP, remote solutions required expensive, dedicated hardware and connections. Because they typically relied on dedicated channels, these deployments were restricted to the most mission-critical applications and to companies with large IT budgets; in such cases, an entire optical circuit was required to provide the appropriate remote storage connectivity. With IP- and Ethernet-based storage communication, however, most of the existing network infrastructure can be used for remote storage, and the provisioning flexibility of IP makes these solutions accessible and affordable to a broader customer base.

Understanding the makeup of existing corporate data networking infrastructure allows storage architects to properly design systems involving local and remote network connectivity. Since traditional storage networking products were separate from mainstream data networking products, a new category of IP storage networking products has emerged. These products bridge data-center storage networking with metropolitan- and wide-area connectivity and foster a new class of IP storage business continuity solutions.

MAN/WAN networking options

Although the percentage of data centers using storage area networks (SANs) is growing, the majority of storage devices are still attached directly to servers via SCSI cables. Some direct-attached storage (DAS) has been converted from SCSI to Fibre Channel, even though SANs have not yet been deployed. Direct-attached disk arrays can be backed up to other arrays and tape libraries by sending the data back through the attached server, across the LAN, to the target storage device through its attached server.

DAS has the advantage of being simple, but it places an added load on the servers and the LAN because they handle the same data twice (once to write it and again to replicate it). Performance may not meet application requirements, since the servers must convert the more-efficient block-mode data into file format for transmission across the LAN, then back to block mode for storing on the target disk array or tape library. Still, if the applications can tolerate the reduced performance, this approach also enables simple remote backups, because data converted into file format uses Ethernet and IP as its network protocols. In most cases, as shown in Figure 1, file-based storage data can simply share the organization's existing data network facilities for transmission to other data centers over metropolitan or wide area networks.

It is important to note that almost all metro- and wide-area network topologies include an underlying optical infrastructure, frequently SONET (Synchronous Optical Network) or its international equivalent SDH (Synchronous Digital Hierarchy), using wavelength-division multiplexers (muxes). In some areas, this is being complemented or replaced by Ethernet.

In the case of SONET networks, routers are used to convert variable-bandwidth protocols such as Ethernet to fixed-bandwidth services. How traffic connects to this underlying network infrastructure has a dramatic effect on overall circuit utilization and cost: using a router consolidates various types of IP traffic onto a single optical connection, whereas a direct link consumes an entire optical circuit. This distinction becomes clearer in the examples that follow.

Data centers outfitted with SANs not only enjoy the benefits of any-to-any connectivity among servers and storage devices, but also have better options for local and remote backup. LAN-free and serverless backups are enabled by the addition of a SAN, and those functions can be extended within the metro area via dedicated fiber, as shown in Figure 2.

Dedicated fiber is generally costly, requires a lengthy installation process, and is not available at all locations. In those cases, it may be possible to run fiber a much shorter distance to the multiplexer that is nearest to each data center and send the storage data over an existing SONET/SDH backbone network.


Figure 1: File-based backup can be accomplished locally via the LAN, and remotely via the existing data network.


Figure 2: Block-based backup can take place locally via the SAN, and remotely via dedicated fiber.


Figure 3: Block-based backup can take place locally via the SAN, and remotely via an existing SONET network using IP storage protocols.

This approach became possible with the advent of Fibre Channel interface cards for multiplexers. These cards are similar to those developed for Gigabit Ethernet and, like the Ethernet cards, encapsulate Fibre Channel frames to make them compatible with the SONET/SDH format for transmission between locations. In this case, the network demarcation point is Fibre Channel, so subscribers can simply connect their Fibre Channel SANs to each end of the network, with no need for protocol conversion (see Figure 2).

There are, however, some drawbacks to this approach. As transmission distance increases, it takes more time to move control messages and data between locations, as dictated by the laws of physics. Fibre Channel switches were originally designed for use within data centers, where transmission delays are measured in microseconds rather than milliseconds; over long links, the rate at which they can transmit data is approximately inversely proportional to the distance between switches.

Performing operations in parallel can overcome some of the performance impact of long transmission delays. It is possible to keep additional data in flight while waiting for acknowledgements from earlier transmissions, but that requires large data buffers, and most Fibre Channel switches have relatively small ones. Consequently, the buffers fill quickly in long-distance applications, and the switches stop transmitting while they wait for replies from the distant devices. Once a write-complete acknowledgement and additional flow-control credits are received, the switches can clear the successfully transmitted data and fill their buffers again, but overall performance drops off dramatically beyond 20 to 30 miles because the buffers are not large enough to keep the links full.
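
To make the buffer-credit bottleneck concrete, the following Python sketch estimates the throughput ceiling of a credit-limited Fibre Channel link at various distances. The credit count, frame size, and link rate are illustrative assumptions, not figures from this article.

# Estimate the throughput ceiling of a Fibre Channel link when only a
# fixed number of frames (buffer-to-buffer credits) may be
# unacknowledged at once. All parameter values are assumptions.

KM_PER_SEC_IN_FIBER = 2.0e5   # light travels roughly 200,000 km/s in glass

def fc_throughput_gbps(distance_km, credits=16, frame_bytes=2112,
                       link_gbps=2.0):
    """Throughput limit imposed by the credit pool, capped at line rate
    (switch processing delays are ignored)."""
    rtt_s = 2 * distance_km / KM_PER_SEC_IN_FIBER
    in_flight_bits = credits * frame_bytes * 8
    return min(link_gbps, in_flight_bits / rtt_s / 1e9)

for km in (1, 10, 50, 100):
    print(f"{km:>4} km: {fc_throughput_gbps(km):.2f} Gbps")

With these assumed values, the link runs at full rate over data-center distances but falls well below line rate past 30 to 50 km, the same falloff pattern described above.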

Standard SONET/SDH transmission rates:

SONET     SDH       Line rate
OC-1      STM-0     51.84 Mbps
OC-3c     STM-1     155.52 Mbps
OC-12c    STM-4     622.08 Mbps
OC-48c    STM-16    2,488.32 Mbps
OC-192c   STM-64    9,953.28 Mbps

Inefficiency is another drawback. The channels, or logical circuits, used by SONET and SDH multiplexers have fixed bandwidth, so whatever is not used is wasted. Gigabit Ethernet and Fibre Channel circuits on these multiplexers have similar characteristics, but dozens of applications use Ethernet while only storage uses Fibre Channel. This means the cost of a Fibre Channel backbone circuit cannot be shared among applications, as it can with Ethernet.

Besides sharing a circuit, efficiency also can be improved by subdividing a large circuit into multiple smaller ones. For example, a service provider's multiplexer with a single OC-48c backbone link can support, say, three OC-12c subscribers and four OC-3c subscribers. That would limit the subscribers' peak transmission rates to 622Mbps or 155Mbps, in return for significant cost savings, as the service provider can support many subscribers with a single multiplexer and pass some of the savings on to their subscribers.
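
A quick calculation confirms that the subdivision above fills the backbone exactly, since an OC-n circuit is simply n STS-1 channels of 51.84Mbps each:

# Verify that three OC-12c circuits plus four OC-3c circuits exactly
# fill an OC-48 backbone (48 STS-1 timeslots of 51.84 Mbps apiece).

STS1_MBPS = 51.84

def oc_rate_mbps(n):
    """Line rate of an OC-n (or concatenated OC-nc) circuit."""
    return n * STS1_MBPS

subscribers = [12, 12, 12, 3, 3, 3, 3]   # three OC-12c, four OC-3c
used = sum(subscribers)

print(f"OC-48 capacity: {oc_rate_mbps(48):.2f} Mbps")               # 2488.32
print(f"Allocated:      {oc_rate_mbps(used):.2f} Mbps, {used}/48")  # 2488.32

This particular mix consumes all 48 timeslots, so nothing is wasted; other subscriber mixes may leave timeslots stranded.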

For smaller data centers that don't require full gigabit peak rates, IP storage networks work fine at sub-gigabit data rates. All standard routers and switches have buffers that are large enough for speed matching between Gigabit Ethernet and other rates.

The table shows the standard transmission rates for SONET and SDH. The lowercase "c" suffix indicates that the OC-1 channels are concatenated to form a single higher-speed channel. For example, OC-3c service provides a single circuit at 155Mbps, not three logical circuits at 52Mbps.

Today, most mid-sized and high-end data subscribers use 155Mbps or 622Mbps services. The higher rates are very costly and generally are used only by service providers for their backbone circuits.

However, as the demand for broadband network capacity eventually resumes its upward march, and as 10Gbps Ethernet becomes more common in LAN and IP storage network backbones, high-end subscribers will ask their service providers for more bandwidth on these types of circuits.

Other types of multiplexers are used to deliver services at rates below 155Mbps. In North America, 45Mbps (DS-3 or T3) is popular and the equivalent international service is 34Mbps (E3). 1.5Mbps (T1) is the next step down. The bandwidth ultimately can be subdivided down to hundreds of circuits at 64Kbps, which is the amount traditionally used to carry a single digitized voice call.
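
The decomposition of these services into 64Kbps timeslots is straightforward to tabulate; the short sketch below lists the standard figures (line rates include framing overhead, which is why a T1 carries 24 x 64Kbps = 1.536Mbps of payload on a 1.544Mbps line):

# Tabulate the sub-155Mbps services discussed above as counts of
# 64 Kbps timeslots. Line rates include framing overhead.

services = {            # name: (line rate in Mbps, 64 Kbps timeslots)
    "DS-0": (0.064,   1),
    "T1":   (1.544,  24),
    "E3":   (34.368, 512),
    "DS-3": (44.736, 672),
}

for name, (mbps, slots) in services.items():
    print(f"{name:>4}: {mbps:7.3f} Mbps = {slots:3d} x 64 Kbps timeslots")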

In all cases, the services operate at the same speed in both directions. This is called full-duplex operation. For storage networking applications, it means that remote reads and writes can be performed concurrently.

Because Fibre Channel requires 1 or 2Gbps of backbone bandwidth, Fibre Channel interface cards are designed to consume half or all of an OC-48c multiplexer circuit. Many potential subscribers to metro-area Fibre Channel services currently have only a single OC-3c connection handling all of their non-storage data services, so adding Fibre Channel would increase their monthly telecom fees significantly.

At least one multiplexer vendor has tried to overcome this Fibre Channel bandwidth granularity problem by adding buffers and a speed step-down capability to its Fibre Channel interface cards. That allows Fibre Channel to run over a less costly OC-3c circuit. There are, however, some practical limitations to this approach. Since there are no standards for this technology, these solutions are proprietary and the speed step-down cards on each end of the circuit must be the same brand. That makes it difficult to find a global solution or to change service providers, once it has been deployed.

Fortunately, most of the problems with long-distance Fibre Channel performance and efficiency have been solved, within the laws of physics, by IP storage switches and routers. Because they can convert Fibre Channel to IP and Ethernet, the switches can be used to connect Fibre Channel SANs to the standard IP network access router, as shown in Figure 3.

By connecting to the service provider network via the router, rather than directly, users may be able to get by with the existing network bandwidth—especially if they already have 155Mbps or Gigabit Ethernet service. Depending on the application requirements and frequency of data change, remote IP storage solutions can operate at speeds as low as 1.5Mbps. In any case, using an IP-related service results in significant cost savings, and the storage network performance can be tuned using familiar network design practices. This approach also uses standards-based interfaces and protocols end-to-end, so there's no risk of obsolescence or compatibility problems.
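
Whether a low-rate link suffices depends on how much data changes between replication cycles. The back-of-the-envelope sizing sketch below illustrates this; the 20GB daily change volume and the 70% effective link efficiency are assumptions for illustration, not figures from the article.

# Estimate how long a replication run takes on links of various
# speeds. The daily change volume and the effective efficiency
# (protocol and encapsulation overhead) are assumed values.

LINK_MBPS = {"T1": 1.5, "DS-3": 45, "OC-3c": 155, "GbE": 1000}

def hours_to_replicate(changed_gb, link_mbps, efficiency=0.7):
    """Hours needed to move `changed_gb` of changed data."""
    bits = changed_gb * 8e9
    return bits / (link_mbps * 1e6 * efficiency) / 3600

for name, mbps in LINK_MBPS.items():
    print(f"{name:>6}: {hours_to_replicate(20, mbps):6.1f} hours")

On these assumptions, a T1 would need roughly 42 hours to move 20GB, so it only fits applications whose daily delta is a few hundred megabytes, while a DS-3 finishes the same job in under two hours.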

IP storage also has the flexibility to be used with the new Ethernet local exchange carrier (ELEC) services. Subscribers may choose to keep the same network topology, with a router at each network access point. In that case, the link between the router and the network would be Fast Ethernet or Gigabit Ethernet rather than SONET or SDH, but all other connections, including the storage links, would remain unchanged.

Should subscribers choose instead to eliminate the routers, the IP storage link would be connected to a Gigabit Ethernet LAN switch or directly to a Gigabit Ethernet port on the ELEC's access multiplexer. Connecting to a LAN switch has the economic advantage of sharing the existing backbone connections (assuming there is sufficient bandwidth), while the direct ELEC link would provide a second, dedicated circuit for higher-volume applications.

As the preceding examples show, even though several options exist for tapping into metropolitan and wide area network cores for remote storage solutions, IP storage networking can deliver the most efficient bandwidth use. By consolidating traffic on IP and Ethernet, a variety of storage and networking applications can be provisioned individually, each with its own bandwidth requirements. This is not the case with, for example, Fibre Channel-centric optical solutions, where an entire circuit must be dedicated to a single channel.

Consider the implications for overall infrastructure flexibility. If storage applications need more bandwidth, available resources can be reassigned from network applications; likewise, an increased need for network bandwidth can be met with unused storage network capacity. This flexibility lowers costs for network equipment and bandwidth and consolidates network management.

Gary Orenstein is the author of IP Storage Networking: Straight to the Core (Addison-Wesley) and is vice president of marketing at Compellent Technologies.



This article is excerpted and adapted from a chapter in the recently released book, IP Storage Networking: Straight to the Core, published by Addison-Wesley Professional. The book provides guidance for evaluating, architecting, and implementing IP-based storage technologies. For more information on this book, as well as other Addison-Wesley and Prentice Hall Professional Technical Reference titles related to storage networking, visit www.awprofessional.com and www.phptr.com.

