IT managers should evaluate a number of technologies, including remote mirroring, replication, snapshots, and IP storage.
By Bill Margeson
When critical business data is lost, every minute that goes by means missing information, lost revenue opportunities, and perhaps even the closing of a department or company. Whatever the cause of the data loss—disk crash, power outage, virus, or accidental deletion—the usual result is that gigabytes of files as well as weeks, or even months, of work are lost.
An organization survives data loss in two main ways: by developing and implementing a business continuity plan, and by deploying the technology needed to support it.
A business continuity plan documents which organizational functions are critical, what steps are necessary to survive without them, and what resources are available to help recover lost or damaged systems. Without a suitable continuity plan, lost data may be irrecoverable.
In an increasingly volatile environment, institutions that want to stay efficient cannot afford to have their systems down—even for a short time. A recent report by Gartner Inc. says that organizations will need to reduce the time it takes to recover critical processes and application systems to 24 hours by 2003. According to the report, even non-critical systems will need to be backed up within four days.
It's a simple choice: You can back up your disks or risk losing your data. Whatever method you choose, it's critical that you put a business continuity plan in place and use it.
Continuity planning can be an onerous task, requiring extensive planning and implementation time. Each plan must be designed to meet the specific needs and requirements of the particular company.
The first step is to identify the organization's most important assets and information. During this phase of planning, you should identify threats and vulnerabilities, propose and refine solutions, clarify corporate policies, assign responsibilities, and develop standards and training.
The second step is to create a security plan, including procedures, budget, and implementation timetable. Once those steps are complete, new architectures can be rolled out and new procedures put in place. The new system should then be tested from the outside for any remaining weak points. Finally, security should be audited regularly to keep pace with both internal changes and evolving external threats.
Senior management must support the project and demonstrate their involvement. Business and technical experts must be involved on an ongoing basis. Individual business units within the organization must take responsibility for their own security assessments.
Behind the business continuity plan is the technology. If protecting the organization's data is the fundamental priority, suitable backup technologies must be put in place.
Uninterrupted availability is most commonly achieved by redundant components and fail-over capabilities. IT managers must ensure all critical components are duplicated, so that system and application availability is maintained in the event of any single component failure.
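The fail-over idea can be sketched in a few lines. This is a toy illustration, not any vendor's implementation: the names (`Disk`, `failover_read`) are invented, and a real system would also handle failback and replication of writes.

```python
# Minimal sketch of single-component fail-over: reads are served by the
# primary until it fails a health check, then a standby copy takes over.

class Disk:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.blocks = {}            # block_id -> data

    def read(self, block_id):
        if not self.healthy:
            raise IOError(f"{self.name} is down")
        return self.blocks[block_id]

def failover_read(primary, standby, block_id):
    """Try the primary first; on failure, transparently switch to the standby."""
    try:
        return primary.read(block_id)
    except IOError:
        return standby.read(block_id)

primary = Disk("primary")
standby = Disk("standby")
primary.blocks[0] = standby.blocks[0] = b"payroll-record"

assert failover_read(primary, standby, 0) == b"payroll-record"  # via primary
primary.healthy = False                  # simulate a component failure
assert failover_read(primary, standby, 0) == b"payroll-record"  # via standby
```

The application sees the same answer either way, which is the point: with every critical component duplicated, a single failure is invisible to users.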
Remote data mirroring is the most common approach to minimizing system downtime and facilitating resumption of business in the event of a disaster. With this approach, an exact copy of production data is maintained at a remote site. Should a failure or disaster occur in the original production site, remote mirroring is used to restart the system and applications at the remote site.
Remote data mirroring can take one of two forms: physical mirroring, based on a replication of the original site's hardware, or logical mirroring, based on a replication of the original system's file structure. Each has its own advantages and trade-offs and should be chosen according to specific requirements.
Physical mirroring is the appropriate choice when performance, data currency, and ease of management are the most important factors. Because physical mirroring is disk-based, it does not consume host CPU cycles, requires only a single I/O per mirroring operation, works independently of the underlying disk technology, and can improve read performance by serving reads from multiple devices.
Logical mirroring, on the other hand, may be more appropriate when transactional consistency is the more important factor. With a logical mirroring solution, transactions, not data blocks, are mirrored, which makes remote data corruption less likely, although resynchronization usually requires manual intervention. Logical mirroring also offers slightly lower performance than physical mirroring.
When you are considering a remote mirroring solution, a key question must be answered: will the mirroring operation be synchronous or asynchronous?
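The trade-off behind that question can be illustrated with a toy in-memory model. The class and the queue-based flush are assumptions for illustration only, not a description of any particular mirroring product:

```python
# Synchronous mirroring acknowledges a write only after the remote copy
# is updated; asynchronous mirroring acknowledges immediately and ships
# the write later, trading durability for latency.

from collections import deque

class MirroredVolume:
    def __init__(self, synchronous=True):
        self.local = {}
        self.remote = {}
        self.synchronous = synchronous
        self.pending = deque()      # writes not yet shipped to the remote site

    def write(self, block_id, data):
        self.local[block_id] = data
        if self.synchronous:
            # Sites never diverge, but every write pays the network round trip.
            self.remote[block_id] = data
        else:
            # Fast acknowledgment; recent writes can be lost in a disaster.
            self.pending.append((block_id, data))

    def flush(self):
        while self.pending:
            block_id, data = self.pending.popleft()
            self.remote[block_id] = data

sync_vol = MirroredVolume(synchronous=True)
sync_vol.write(1, b"order-1")
assert sync_vol.remote[1] == b"order-1"   # already safe at the remote site

async_vol = MirroredVolume(synchronous=False)
async_vol.write(1, b"order-1")
assert 1 not in async_vol.remote          # a crash now would lose this write
async_vol.flush()
assert async_vol.remote[1] == b"order-1"
```

In practice, the choice hinges on distance and tolerance for data loss: synchronous mirroring bounds loss at zero but limits distance and throughput, while asynchronous mirroring scales over long links at the cost of a small window of unreplicated writes.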
If the ongoing availability of data is key to business survival, a remote mirroring solution, whether physical or logical, can guarantee that essential information is accessible when needed. Similarly, data replication technologies ensure that a current copy of the data exists at a remote site should the primary site fail. Such technologies replicate, or shadow, the organization's data in real time, writing it to alternate sites or systems over a network.
IP-based storage offers another way of ensuring data security. IP storage (iSCSI, FCIP, and iFCP) sends block-level data over an IP network, enabling servers to connect to SCSI storage devices and treat them as if they were directly attached, regardless of their location. This greatly simplifies remote data backup, tape vaulting, and remote disk mirroring.
There are two different processes for IP storage: storage tunneling and native IP-based storage, each with advantages and disadvantages.
Storage tunneling provides a dedicated, point-to-point link between two storage area networks (SANs) by encapsulating Fibre Channel SAN frames in IP packets and carrying them across an IP network, generally over existing MANs and WANs. However, tunneling cannot make full use of standard IP network management and control tools, such as directory services, traffic management, and quality of service.
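Encapsulation itself is simple in principle, as the following toy sketch shows. The 12-byte header layout here is invented for illustration; the real FCIP framing is defined by RFC 3821, and the Fibre Channel frame bytes are placeholders:

```python
# Toy storage tunneling: a Fibre Channel frame is carried opaquely as the
# payload of a packet between two SAN gateways, so the SANs on either end
# never see the IP network in between.

import struct

def encapsulate(fc_frame: bytes, src: int, dst: int) -> bytes:
    # Invented header: source gateway, destination gateway, payload length.
    header = struct.pack("!III", src, dst, len(fc_frame))
    return header + fc_frame

def decapsulate(packet: bytes) -> bytes:
    src, dst, length = struct.unpack("!III", packet[:12])
    return packet[12:12 + length]

frame = b"\x22\x00\x00\x01SCSI-WRITE-PAYLOAD"   # placeholder FC frame bytes
packet = encapsulate(frame, src=0x0A000001, dst=0x0A000002)
assert decapsulate(packet) == frame   # the tunnel is transparent to the SANs
```

Because the Fibre Channel frame rides through untouched, the IP network sees only opaque payloads, which is exactly why tunneling cannot exploit IP-level services such as directory services or per-flow quality of service.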
Native IP-based storage integrates existing storage protocols with the IP protocol, so that storage traffic can be managed with existing software applications and tools for bandwidth provisioning, traffic management, and overall network management. Native IP-based storage devices allow data to be stored and accessed anywhere on the network.
Data snapshots, another technology for ensuring redundancy, are read-only views of the file system. After a snapshot is taken, subsequent changes to files update only the live set of pointers; the snapshot itself remains a frozen picture of the data as it looked at that moment. Snapshots can help organizations minimize costs, because they take up very little disk space and can be sent to remote sites for disaster recovery, reducing the need for fully redundant storage.
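The pointer mechanism can be sketched as a copy-on-write toy model. The class and method names are illustrative, not a real file-system API:

```python
# Toy copy-on-write snapshot: a snapshot is a frozen copy of the pointer
# table into shared data blocks. Later writes allocate new blocks and
# update only the live table, so the snapshot stays small and read-only.

class FileSystem:
    def __init__(self):
        self.blocks = {}        # block_id -> data (shared by all views)
        self.pointers = {}      # filename -> block_id (the live view)
        self.next_id = 0

    def write(self, name, data):
        # Never overwrite a block a snapshot may reference: allocate a new one.
        self.blocks[self.next_id] = data
        self.pointers[name] = self.next_id
        self.next_id += 1

    def snapshot(self):
        # Copying the pointer table is cheap; the data blocks are shared.
        return dict(self.pointers)

    def read(self, name, snap=None):
        table = snap if snap is not None else self.pointers
        return self.blocks[table[name]]

fs = FileSystem()
fs.write("report.txt", b"v1")
snap = fs.snapshot()
fs.write("report.txt", b"v2")   # new block; the snapshot is untouched

assert fs.read("report.txt") == b"v2"          # current view
assert fs.read("report.txt", snap) == b"v1"    # snapshot view
```

Because the snapshot holds only pointers, its cost is proportional to the pointer table rather than the data, which is why snapshots consume so little disk space until the live data diverges.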
In a business environment that's increasingly characterized by globalization, the Internet, and vulnerability to intrusion, solid continuity planning—and appropriate underlying technology—has become a critical element in business organizations at all levels.
Bill Margeson is president of CBL Data Recovery Technologies (www.cbltech.com) in Armonk, NY.