High availability for Windows 2000

Posted on September 01, 2001

Storage management techniques can boost Windows 2000 environments to rival the high availability of Unix environments.

BY CRAIG HUBER

Availability of applications and data is critical in today's Web-based, e-commerce-driven business environments. To provide high availability, stability must be built into the key components of the operating environment, including the operating system, platform hardware, and server applications. Even momentary downtime, planned or unplanned, can result in serious loss of revenue and customer frustration. When IT managers think of creating highly available environments, they think of reducing planned and unplanned downtime.

IT professionals operating in Windows environments share the same requirement for creating highly available environments as their Unix counterparts. The Unix operating system is touted as being enterprise proven for creating highly available environments. The reasoning is that the base operating system provides the required flexibility. However, this flexibility is no longer limited to Unix environments.


Figure 1: The key building blocks to increase levels of availability are in the areas of backup/restore, clustering, volume management (including RAID), and replication.

The good news for IT organizations is that these same levels of flexibility found on Unix are now available on Windows. This means reducing both planned and unplanned downtime. The key building blocks to increase levels of availability are in the areas of backup/restore, clustering, volume management such as RAID, and replication. IT departments can combine these building blocks in a flexible fashion to create highly available environments. Figure 1 shows how these solutions for Windows can be used to increase system availability.

CAUSES OF DOWNTIME

Figure 2 shows the causes of unplanned downtime, according to a Gartner/Dataquest study. When one thinks of downtime, images of smoking servers and failed hardware components immediately come to mind. Hardware failure, however, accounts for only a fraction of degraded system availability, and while we often make the supporting network a scapegoat, network failure is even less of a culprit. The vast majority of downtime is actually a combination of system software problems and planned maintenance. Organizations can have a dramatic positive impact on their system uptime with a little careful advance planning and deployment of the right system design, managed by the right set of tools.

PLANNED DOWNTIME

The obvious objective with planned downtime is to minimize the time that any service is offline. This can be achieved through redundancy of services, compartmentalized dependencies, and good tools and procedures for making the necessary changes efficiently.

UNPLANNED DOWNTIME

Software downtime

Difficulties with software account for the bulk of unplanned system downtime. This is not surprising, given the number of interactions between software components, not to mention the variables that today's software must deal with. Some downtime is caused by poor data management, such as volumes running out of disk space. The obvious advice is to plan for these occurrences, with the right tools, before the first application is installed. Volume management tools that can expand volumes after they are initially configured, without taking the system down, allow users uninterrupted access to data. Another key software component used to minimize downtime is Microsoft Cluster Service (MSCS) for Windows, which is covered in more detail later in this article.

Hardware downtime

The second largest class of downtime, hardware-related failures, accounts for about 23% of unplanned downtime. The conventional method of reducing hardware downtime is to provide redundancy to the point where a single component failure will not cause a system outage. Key areas of server hardware to consider are storage, processing, and power.


Figure 2: Software problems account for the bulk of unplanned system downtime.

Storage is made redundant through a combination of RAID configuration techniques. In addition to hardware RAID, host-based volume management can be used to virtualize storage and create a software mirror across hardware arrays that may already have mirroring internally applied.

Application processing is made redundant through clustering. Dual power supplies and/or battery backup with hot-swappable power supplies will handle most power-related incidents. Another way of protecting against a single point of hardware failure is to use a multi-pathing solution, which protects against host bus adapter and cabling failures. Multi-pathing is an integral part of some host-based volume management products.

People downtime

The most preventable cause of unplanned downtime is human error, which accounts for about 18% of all downtime. Unintentionally subjecting the system to a virus, configuration errors, and poor procedures and policies are the usual culprits. Comprehensive backups and a recovery plan are your best defense against human error.

DEFINING AVAILABILITY LEVELS

There are different levels of availability, each of which has its appropriate implementation. In a perfect world, all components of system hardware and software would work perfectly, leading to zero downtime. This, of course, is the "Holy Grail" of computing, and few organizations can afford the enormous expense associated with building the necessary infrastructure to achieve the five nines (99.999%) or more of uptime. Given this, the real question becomes not "What is availability?", but "What is the appropriate level of availability for the application?" As the degree of mission criticality for the application rises, so should the effort placed on planning for the appropriate level of availability.

Since there is always a business justification involved in building infrastructure, the first step in this process is to determine the level of acceptable downtime (risk), which can be weighed against the expense of mitigating that risk through infrastructure investment. The second step is designing the system to meet the specified availability index, which defines the following levels of availability:

  • Basic availability (including good backup and recovery);
  • Enhanced availability (including basic-availability components, advanced backup/restore, basic volume management, RAID 0, 1, 5, hardware redundancy, etc.);
  • High availability (including high-availability components and advanced online volume management, replication, clustering, third-mirror break-off, etc.); and
  • Fault tolerance (including high-availability components and the addition of 4-node clustering, further hardware redundancy, etc.).

Basic availability

Basic availability begins at the 95% uptime range, which translates to an average of about 8.4 hours of downtime per week. Basic availability is achieved by being able to recover quickly from those failures that would otherwise result in a loss of data. A good recovery plan allows ordinary systems to accommodate the software and hardware failures and occasional disasters that reduce system availability.
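
To make the arithmetic behind these uptime percentages concrete, the following short Python sketch (purely illustrative, not part of any product mentioned in this article) converts an availability percentage into allowed downtime per week and per year:

    # Convert an availability percentage into allowed downtime.
    # Illustrative only; assumes a 168-hour week and an 8,760-hour year.
    HOURS_PER_WEEK = 24 * 7      # 168
    HOURS_PER_YEAR = 24 * 365    # 8,760

    def downtime(availability_pct):
        """Return (hours of downtime per week, minutes of downtime per year)."""
        down_fraction = 1.0 - availability_pct / 100.0
        return down_fraction * HOURS_PER_WEEK, down_fraction * HOURS_PER_YEAR * 60

    for pct in (95.0, 99.0, 99.9, 99.999):
        per_week, per_year = downtime(pct)
        print(f"{pct:>7}% uptime -> {per_week:6.2f} hours/week, {per_year:9.1f} minutes/year")

    # 95% works out to about 8.4 hours/week, 99% to about 1.7 hours/week,
    # and 99.999% to roughly 5 minutes per year.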

All systems should be covered by basic availability. This means implementing backup-and-recovery functionality. While Windows NT and Windows 2000 provide basic backup applets, any organization requiring robust data protection should use full-featured backup solutions for Windows NT/2000 servers. Adding particular agents for Microsoft Exchange Server, SQL Server, and other key applications allows for a complete customized solution.

Enhanced availability

Enhanced availability raises system uptime to 99%, or about 1.7 hours of downtime per week, on average. Almost every organization strives to achieve this number as the minimum bar for mission-critical system availability.

Meeting this level of availability means reducing the effects of storage failure through the introduction of features like multi-pathing, mirroring, and other RAID configurations.

With respect to storage subsystems, there are two primary objectives to enhanced availability: Accommodate growth in file systems or databases without causing service disruption, and achieve the desired amount of tolerance to hardware failure associated with storage systems.

By aggregating physical disks into logical volumes, you can accomplish these goals. To assist in understanding this, some basic conventional disk definitions follow:

Simple disks: The most basic drive configuration, where partitions are contained within the confines of a single physical disk.

Spanned or concatenated disks: One or more logical volumes span more than one physical disk. Without additional protection, the failure of any one disk results in the loss of all data on the spanned volume.

Striped disks (RAID 0): Data blocks are distributed in parallel among several disks to gain performance.

Mirrored disks (RAID 1): Data is written simultaneously across two disks to provide redundancy.

RAID-5 arrays: Data is striped across several disks, with parity added to protect against a single drive failure.
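
To show the parity idea behind RAID 5, here is a minimal Python sketch (a simplified model, not a driver or any vendor's implementation): parity is the XOR of the data blocks in a stripe, so any single missing block can be rebuilt from the survivors.

    # Simplified RAID-5 parity: the parity block is the XOR of the data
    # blocks in a stripe, so any one lost block can be reconstructed.
    def xor_blocks(blocks):
        """XOR a list of equal-length byte strings together."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    # One stripe spread across three data disks plus one parity disk.
    data_blocks = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(data_blocks)

    # Simulate losing the second disk: rebuild its block from the rest.
    rebuilt = xor_blocks([data_blocks[0], data_blocks[2], parity])
    assert rebuilt == data_blocks[1]
    print("Rebuilt block:", rebuilt)   # b'BBBB'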

A summary of how various disk configurations impact availability and performance (an aspect of availability) follows:

  • Simple disks do not increase overall availability;
  • Spanned disks increase availability by allowing a larger file system or database to be accessed by users;
  • Striped disks enhance performance and thus availability;
  • Mirrored disks add directly to high availability in the event of a disk failure; and
  • Mirrored and striped (with parity) volumes can tolerate the loss of one disk per plex.

High availability

High availability stretches from 99% uptime to around 99.999% uptime; at the upper end, this translates to about five minutes of downtime per year. In this region, avoiding failures and minimizing time-to-recovery become critically important.

However, it's often the case that technology is implemented through a series of compromises. To gain higher levels of availability, we give up some performance. To gain performance, we often sacrifice high availability or at least fault tolerance. Traditionally, this is the case in conventional disk configurations. Should performance be a priority, striped disks offer excellent performance but with no tolerance to disk failure. Mirrored sets, on the other hand, provide protection against failure at the expense of performance during write operations.


Figure 3: A mirrored-stripe set consists of a striped set of disks and a mirror of those stripes.

Advanced features in host-based volume management can add flexibility to storage management, allowing for some useful, non-compromising configurations. This makes it possible to have performance and high availability at the same time. Performance and availability are actually linked quite closely: If users are not seeing their data arrive in a timely manner, in essence, it is not available to them. To obtain the best of both the availability and performance worlds, mirrors and stripes can be combined. To achieve this, you would first establish a striped set of disks and then establish a mirror of those stripes, gaining the performance advantage of the striped set as well as the redundancy of a mirror. This configuration is usually referred to as a mirrored-stripe set and is illustrated in Figure 3.
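
As a rough illustration of the mirrored-stripe layout in Figure 3, the Python sketch below (a toy model with an assumed four-column stripe, not a real volume manager) maps each logical block to a stripe column and writes every block to both plexes:

    # Toy model of a mirrored-stripe volume: logical blocks are striped
    # across columns, and every write lands on both plexes.
    STRIPE_COLUMNS = 4   # disks per plex (assumed for illustration)

    # Two plexes, each a list of per-disk block maps.
    plex_a = [dict() for _ in range(STRIPE_COLUMNS)]
    plex_b = [dict() for _ in range(STRIPE_COLUMNS)]

    def write_block(logical_block, data):
        column = logical_block % STRIPE_COLUMNS          # striping
        row = logical_block // STRIPE_COLUMNS
        for plex in (plex_a, plex_b):                    # mirroring
            plex[column][row] = data

    def read_block(logical_block, from_plex=plex_a):
        column = logical_block % STRIPE_COLUMNS
        row = logical_block // STRIPE_COLUMNS
        return from_plex[column][row]

    write_block(5, b"payload")
    # If a disk in plex A fails, the same block is still readable from plex B.
    print(read_block(5, from_plex=plex_b))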

Advanced volume management features also enable dynamic online growth of high-availability volumes, including striped, mirrored, mirrored-striped, and RAID-5 volumes. The key to obtaining the highest levels of availability is keeping the data online while necessary configuration changes are made.


Figure 4: In an n-way mirroring configuration, administrators can create up to 32 mirrors.

Further, without advanced volume management, mirrored volumes are inherently limited to two physical disks. Some products provide the ability to build n-way mirrors with up to 32 mirrors, and there are significant benefits to this capability. While the slight impact on write performance remains, read performance is increased by directing concurrent reads to all disks of the mirror simultaneously. Figure 4 shows how n-way mirroring could be done in an advanced volume management solution.
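
The read-performance benefit of an n-way mirror can be sketched with a simple round-robin read scheduler; the Python below is a conceptual model only, not the scheduling policy of any particular volume manager:

    # Conceptual n-way mirror: every write goes to all plexes, while reads
    # are spread round-robin across the plexes to raise read throughput.
    from itertools import cycle

    class NWayMirror:
        def __init__(self, n_plexes):
            self.plexes = [dict() for _ in range(n_plexes)]
            self._next_reader = cycle(range(n_plexes))

        def write(self, block, data):
            for plex in self.plexes:                     # slight write cost: n copies
                plex[block] = data

        def read(self, block):
            plex = self.plexes[next(self._next_reader)]  # distribute reads
            return plex[block]

    mirror = NWayMirror(n_plexes=4)
    mirror.write(0, b"data")
    print([mirror.read(0) for _ in range(4)])            # each read hits a different plex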

The next primary benefit comes into play with imaged backups. Whether working with standard mirrors or a striped volume, an advanced volume management product should provide the ability to use a mirror to perform a backup. This is known as mirror break-off, and there are two primary scenarios:

  • One volume of a two-volume mirror can be broken off and mounted by itself, on the same host. This mirror can then be backed up to reflect a consistent state representing an exact point in time. After the backup, the data on the mirror can be erased and the disk space reused for another backup.
  • A third-mirror break-off uses three mirrored volumes, or plexes. The process is the same as in the first scenario, except that the volume remains fault-tolerant during the break-off because two mirrored volumes remain in place while the third is mounted for backup.

Third-mirror break-off (see Figure 5) can be used for a variety of tasks on the same host, such as solving the shrinking backup window problem; data mining; and providing faster access to data that can be located closer to a server for performance-demanding users.
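
Conceptually, a third-mirror break-off detaches one plex as a frozen, point-in-time image while the remaining plexes stay in service. The Python sketch below is an illustration of that idea only and does not reflect any vendor's break-off commands:

    # Conceptual third-mirror break-off: three plexes stay in sync until one
    # is detached as a frozen, point-in-time copy for backup or data mining.
    import copy

    class MirroredVolume:
        def __init__(self, n_plexes=3):
            self.plexes = [dict() for _ in range(n_plexes)]

        def write(self, block, data):
            for plex in self.plexes:
                plex[block] = data

        def break_off(self):
            """Detach the last plex; the volume stays mirrored with the rest."""
            snapshot = self.plexes.pop()
            return copy.deepcopy(snapshot)     # frozen image for backup

    vol = MirroredVolume()
    vol.write(0, b"before-breakoff")
    backup_image = vol.break_off()             # two plexes remain fault-tolerant
    vol.write(0, b"after-breakoff")            # live volume keeps changing
    print(backup_image[0], len(vol.plexes))    # b'before-breakoff' 2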


Figure 5: A third-mirror break-off configuration uses three mirrored volumes, or plexes. The mirrored volumes remain fault tolerant during the break-off due to the presence of two mirrored volumes while the third mirror is mounted for backup.

Two other key features of advanced volume management are critical to applications with high-availability requirements: clustering support and dynamic multi-pathing. Each of these features provides for higher availability in the form of redundancy, so that in the event of certain failures, the application data is still accessible.

Fault tolerance: clustering

Clustering products for the Windows platform improve both availability and manageability to reduce planned and unplanned downtime. Deploying clustering improves application availability by providing fail-over capability in the event of a hardware or software error. MSCS supports 2-node fail-over on Windows 2000 Advanced Server and 4-node fail-over with Windows 2000 Datacenter Server. Other clustering products provide similar functionality and support up to 32-node cluster fail-over.

The use of clustering in conjunction with volume management provides benefits beyond the "sum of the parts." Advanced volume management supports multiple disk groups and can be effectively used in a clustering environment.

With host-based volume management, in the event of a failure the Cluster Service will automatically fail over the storage required for a specific application to another node. Additionally, the volume management solution should allow mirroring of the quorum drive to ensure even higher availability for clustered servers.
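
The fail-over pattern itself can be reduced to a heartbeat check and a resource move. The Python sketch below is a generic, two-node illustration with assumed node names and a simulated health probe; it is not the MSCS resource API:

    # Generic two-node fail-over sketch: when the active node misses its
    # heartbeat, the disk group and application move to the standby node.
    # Node names and the health probe are assumptions for illustration.
    import time

    nodes = {"node-a": True, "node-b": True}   # True = healthy
    active, standby = "node-a", "node-b"

    def heartbeat_ok(node):
        return nodes[node]                     # stand-in for a real health probe

    def fail_over():
        global active, standby
        print(f"{active} missed heartbeat; moving disk group and application to {standby}")
        active, standby = standby, active

    for tick in range(3):
        if tick == 1:
            nodes["node-a"] = False            # simulate a failure on the active node
        if not heartbeat_ok(active):
            fail_over()
        time.sleep(0.1)                        # short poll interval for the demo

    print("Active node is now", active)        # node-b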

For MSCS, a volume management solution for Windows 2000 should provide a number of features to enable storage migration with MSCS, including:

  • The ability to create multiple, application-specific disk groups;
  • A dynamic-link library (DLL) that defines these logical volume resources for MSCS; and
  • A client extension to the MSCS graphical user interface (GUI).

DYNAMIC MULTI-PATHING

Dynamic multi-pathing (DMP) allows unlimited paths between individual servers and attached storage arrays, providing increased availability in either active-active or active-passive configurations if one path becomes unavailable, and increased performance in active-active configurations by spreading I/O across multiple paths. Whether the environment is clustered or non-clustered, DMP can make the data more available to users.
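
In outline, DMP keeps a list of paths to the same array, spreads I/O across the paths that are alive, and reroutes to a surviving path when one fails. The Python sketch below is a conceptual model with invented path names, not the DMP driver itself:

    # Conceptual multi-pathing: round-robin I/O over healthy paths, with
    # automatic failover to a surviving path when one goes away.
    class MultiPath:
        def __init__(self, paths):
            self.paths = {p: True for p in paths}      # path -> healthy?
            self._turn = 0

        def healthy_paths(self):
            return [p for p, ok in self.paths.items() if ok]

        def send_io(self, request):
            alive = self.healthy_paths()
            if not alive:
                raise IOError("no path to storage")
            path = alive[self._turn % len(alive)]      # active-active balancing
            self._turn += 1
            return f"{request} via {path}"

    dmp = MultiPath(["hba0-port0", "hba1-port0"])      # invented path names
    print(dmp.send_io("write block 7"))
    dmp.paths["hba0-port0"] = False                    # cable or HBA failure
    print(dmp.send_io("write block 8"))                # rerouted, no outage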

ONLINE RECONFIGURATION

Online monitoring and tuning of storage enables the system administrator to identify storage bottlenecks and move data to correct or prevent performance problems. Using advanced volume management, I/O activity can be tracked at the system, volume, logical disk, physical disk, or disk region level. Average I/O activity is typically tracked at the logical subdisk level, and each subdisk can be assigned a predetermined threshold percentage. When any subdisk reaches its threshold, it is flagged as a "hot spot," and the high-activity region can be relocated to alleviate the I/O bottleneck without disrupting users' access to data during relocation.
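
The hot-spot logic amounts to a threshold check over per-subdisk I/O counters. The Python below is only an illustration of that idea, with made-up subdisk names, counts, and threshold:

    # Illustration of hot-spot detection: track I/O per subdisk and flag any
    # region whose share of total I/O crosses a configured threshold.
    io_counts = {"subdisk-1": 120, "subdisk-2": 900, "subdisk-3": 80}  # sampled I/Os
    THRESHOLD_PCT = 50.0                       # assumed threshold

    total = sum(io_counts.values())
    hot_spots = [name for name, count in io_counts.items()
                 if 100.0 * count / total >= THRESHOLD_PCT]

    for name in hot_spots:
        # In a real volume manager this region would be migrated online to a
        # less busy disk; here we only report it.
        print(f"{name} is a hot spot; candidate for online relocation")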

REPLICATION

Another critical consideration for providing the highest availability is replication. Real-time data protection for Windows servers can be attained by replicating critical data in real time from a primary server to a remote secondary server. Typically, replication products work over any TCP/IP-based LAN/WAN. In the event of a failure at the primary site, the replicated data on the secondary server can be brought into service for primary use.
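
At its simplest, this kind of replication ships each write to the secondary over a TCP connection. The Python sketch below is a bare-bones illustration on the loopback address, with no logging, ordering, batching, or failure handling; it is not a description of any shipping replication product:

    # Bare-bones write replication over TCP: the primary forwards each write
    # to a secondary server, which applies it to its own copy of the data.
    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 5001             # assumed secondary address

    def secondary():
        """Receive one replicated write and apply it to a local copy."""
        replica = {}
        with socket.socket() as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                key, value = conn.recv(1024).decode().split("=", 1)
                replica[key] = value
                print("Secondary applied:", replica)

    threading.Thread(target=secondary, daemon=True).start()
    time.sleep(0.2)                            # let the secondary start listening

    # Primary side: write locally, then ship the same write to the secondary.
    primary_copy = {"block42": "payload"}
    with socket.socket() as client:
        client.connect((HOST, PORT))
        client.sendall(b"block42=payload")

    time.sleep(0.2)                            # give the secondary time to report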

Creating a highly available environment is like walking a tightrope: Each step counts, and one false step could result in unnecessary downtime. Managers need to decide what level of availability is best for their environment and use the appropriate software and hardware to construct that environment. Products from hardware, operating system, and data-availability companies work in concert to improve overall system availability.


Craig Huber is a senior marketing manager at Veritas Software (www.veritas.com) in Mountain View, CA.

