Four steps to high availability


By Steve Duplessie - Enterprise Storage Group

A high-availability (HA) IT environment requires more than clustered servers. HA encompasses anything that improves the overall availability of IT resources. There are four steps to building an HA environment: 1) make RAID the storage foundation, 2) install multiple pipes to storage, 3) bulletproof servers with clustering, and 4) add agents to safeguard applications. You can complete as many of these steps as you need, and each step will improve your uptime.

Step 1

Begin with RAID

Finding an inferior RAID system is like finding a lemon among BMWs. While some RAID arrays are better than others, the most expensive RAID system doesn't always offer the best bang for the buck. A midrange RAID storage system may be all you need.

RAID 5 or RAID 1 is usually the best choice; just make sure you use hardware RAID. If you choose RAID 1 (mirroring), split the pairs between different I/O controllers and, ideally, place the mirrors in different cabinets.

If you're mirroring just because you had a bad RAID-5 experience, get over it. For years, database vendors have told their customers to avoid using RAID-5 configurations because of the performance penalty associated with the write algorithms. As it turns out, almost all intelligently designed arrays improve I/O transaction rates and throughput in database applications, as well as other types of applications. Hardware-based RAID 5 will most likely offer the greatest cost/performance benefits.
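The capacity side of that cost/performance trade-off is simple arithmetic. Here is an illustrative sketch (the disk counts and sizes are hypothetical, chosen only to show the overhead difference):

```python
def usable_capacity(disks: int, disk_gb: float, level: str) -> float:
    """Usable capacity of a simple single-level array (illustrative only)."""
    if level == "raid1":           # mirroring: half the raw capacity
        return disks * disk_gb / 2
    if level == "raid5":           # striping with parity: one disk's worth lost
        return (disks - 1) * disk_gb
    raise ValueError(level)

# Eight 18GB drives, a plausible 2000-era shelf:
print(usable_capacity(8, 18, "raid1"))  # 72.0 GB usable
print(usable_capacity(8, 18, "raid5"))  # 126.0 GB usable
```

With the write penalty largely hidden by a mirrored hardware cache, RAID 5's extra usable capacity is what drives the cost/performance argument above.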

The RAID system you select should have no single point of failure. To this end, make sure your system has dual redundant controllers, along with other redundant components, such as n + 1 power supplies, AC and DC cooling, hot-spare disks, and mirrored write caches, all of which should be hot-pluggable. For example, if a power supply fails, you should be able to replace it without having to shut down the system.

Opt for a multi-port RAID system, which can plug into at least two different servers. Multi-port RAID reduces the cost of subsequently adding storage to other servers.

On a final note, if you choose to go with SCSI, make sure the upgrade to Fibre Channel is a board swap, not a forklift upgrade. You'll be on Fibre Channel sooner or later.

Step 2

Multiple pipes

If you can't get at the data on the array, then even the most fault-tolerant RAID system can prove useless. Once you have your RAID system, add multiple paths (or pipes) to access the array from your server or servers.

Multi-pathing software is fairly inexpensive. Sometimes it comes as an embedded feature in the operating system, such as Solaris' Dynamic Multi-Pathing (DMP). Most RAID vendors support one or more multi-pathing offerings.

In the event of a failed path, multi-pathing software automatically redirects I/O traffic to another path. (A failed path could include a cable, interface, or RAID controller.) Some software actually allows dynamic performance improvements. For example, EMC's PowerPath automatically routes traffic in real-time based on I/O patterns. At the very least, you can manually dictate which LUNs can be accessed through which paths to provide rudimentary load balancing.
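The core behavior is easy to picture in code. This is a minimal sketch of path failover in the spirit of what multi-pathing software does, not any vendor's implementation; the class and device names are hypothetical:

```python
class MultipathDevice:
    """Sketch of path failover: try the active path, fall back to the
    next healthy one if it errors. (Names are hypothetical.)"""
    def __init__(self, paths):
        self.paths = list(paths)       # e.g. two HBAs to the same LUN
        self.failed = set()            # paths marked dead

    def submit(self, io, send):
        """Dispatch one I/O; send(path, io) raises IOError on a bad path."""
        for path in self.paths:
            if path in self.failed:
                continue
            try:
                return send(path, io)
            except IOError:
                self.failed.add(path)  # redirect future I/O elsewhere
        raise IOError("all paths to the array have failed")

# Simulate a pulled cable on the first path:
dev = MultipathDevice(["c1t0d0", "c2t0d0"])
def flaky_send(path, io):
    if path == "c1t0d0":
        raise IOError("path down")
    return f"{io} completed via {path}"
print(dev.submit("write", flaky_send))  # write completed via c2t0d0
```

Real products do this below the file system, transparently to applications; the static LUN-to-path assignment mentioned above is the manual equivalent of choosing the ordering of `paths`.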

Step 3

Bulletproof servers

Once you have your fault-tolerant disk array with multiple pipes, you need to add a safeguard server to the mix. If the primary server fails, the server's functions need to migrate to an alternate server. You most likely will have one or more servers performing other tasks, such as NFS or CIFS file serving. These servers may be able to handle the incremental load of a primary database server in a catastrophe. Many users dedicate a "spare" server within a cluster to fail-over. However, if you don't have a lot of money, any other server running the same operating system will do.

An HA cluster requires that any node designed to take over a function from another node have direct physical access to the failed node's storage. Thus, server-to-storage connectivity becomes the primary key to building an HA cluster. This connectivity is easy to achieve in two- or three-node clusters; going beyond three nodes may require a storage area network (SAN). HA for large-scale heterogeneous SANs has only just started to move into prime time.

Clusters can be built with as many nodes as the storage architecture can support. With SCSI, practical limitations still make two-node configurations the sensible choice.

HA clustering depends on two components working together: a service and the HA software. A service usually comprises an IP address and an exported file system, which sits on either a logical or physical disk volume; the application itself is another attribute of the service. Most HA software products accomplish the same thing: sensing a failure and migrating services to a surviving node or path. The accompanying sidebar explains how HA software works.
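As a sketch of that service concept, a service is just a bundle of resources that the HA software can move as a unit. The field and function names below are illustrative, not any HA product's schema:

```python
from dataclasses import dataclass

@dataclass
class Service:
    """A clustered service: an IP address, an exported file system on a
    disk volume, and the application that uses it. (Names are
    hypothetical, for illustration only.)"""
    ip_address: str
    filesystem: str
    application: str
    node: str = "node1"              # node currently hosting the service

def migrate(service: Service, surviving_node: str) -> Service:
    """What HA software does on failure. A real cluster would mount the
    volume, plumb the IP, and start the application on the new node."""
    service.node = surviving_node
    return service

db = Service("10.0.0.5", "/export/oradata", "oracle")
migrate(db, "node2")
print(db.node)  # node2
```

The point of bundling IP, storage, and application into one object is that clients keep addressing the same IP and file system, regardless of which physical node is serving them.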

Step 4

Add agents

Most HA software comes with standard recovery functions for things such as NFS or CIFS file serving. Stateless applications don't require client intervention and, thus, are easier to deal with. By using HA agents for specific applications, you can increase the intelligence level of your environment.

Many vendors offer canned agents for popular applications, such as Oracle, Exchange, or SAP. These agents usually include a toolkit that lets you write a custom agent, often a Perl script, to monitor and control any application. You can configure these rule-based packages to act the way you want.

For example, you may want certain services to come up on a surviving machine, while other services stay dormant until you scrub the database. The agents will allow you to ping your applications and make sure they respond. If your application doesn't respond after a certain number of tries, you can automatically halt and re-start the application on the same node. If that doesn't work, the agent could be directed to cause a hard fail-over and bring the application up on the surviving node.
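That probe, retry, restart, fail-over sequence can be sketched as a single decision step. All names here are hypothetical, and a real agent would run this from a timed loop rather than as a bare function:

```python
def handle_probe(misses, alive, restart, fail_over, max_retries=3):
    """One iteration of the agent logic described above: count missed
    pings, halt/restart the application locally after max_retries, and
    force a fail-over only if the local restart fails. Returns the
    updated miss count. (Illustrative sketch, not a product's API.)"""
    if alive:
        return 0                     # application answered the probe
    misses += 1
    if misses < max_retries:
        return misses                # keep retrying before acting
    if restart():                    # try a local halt and restart first
        return 0
    fail_over()                      # last resort: move to surviving node
    return misses

# Application stays down and won't restart: expect one hard fail-over.
events = []
m = 0
for _ in range(3):
    m = handle_probe(m, False, lambda: False, lambda: events.append("failover"))
print(events)  # ['failover']
```

The ordering matters: a local restart is far cheaper than a fail-over, so the agent escalates only when the cheaper remedy has demonstrably failed.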

You can write very complex agents to monitor a myriad of applications or hardware conditions. You have to decide whether you want to tackle writing agents or contract with a systems integrator to handle your initial HA set up. Spending the money for agent writing allows you to get the task done right the first time and to learn the technologies faster.

Look for systems integrators that have worked with all of the various components in your computer room and can configure things the way you want. Unless you have a lot of experience, you may be better off hiring an outside integrator.

Steve Duplessie is a senior analyst with The Enterprise Storage Group in Milford, MA.

Two nodes acting as independent file servers

Each server exports file systems, or, in NT environments, shares, to the network. To clients, each server appears independent, with its own file systems and volumes. When the HA software senses a failure, it begins a sequence of events to recover. Meanwhile, the client keeps sending stateless NFS requests, unaware that the server has failed. The HA software usually has a predefined set of rules for retrying the command. If that doesn't work, the HA software closes the service.

Then the HA software begins its service migration. The remaining server is notified that a cluster member's service has failed. It mounts the failed server's volumes and file systems, takes over the failed server's IP address or addresses, and begins responding to requests on those addresses.

If you took Steps 1 and 2, each server would be connected to a common disk array with multiple pipes. Each server, in turn, brings to the network a different service with a unique IP address. If a hard failure occurs somewhere in the path, the HA software dynamically reroutes the request via a known good path and, if necessary, fails over the entire service to the alternate node.

This article was originally published on May 01, 2000