Best practices for storage environments

Posted on March 01, 2001


A discussion of data-management life cycles, NAS and SAN options, I/O requirements, and modular versus frame-based disk arrays.

BY MARK TETER

Understanding all aspects of today's storage technology is crucial to selecting the best overall solution for your organization, yet IT departments are finding it increasingly difficult to determine which disk solutions and which Web-commerce computing architectures they should use. The SAN (storage area network) versus NAS (network-attached storage) debate is often a stumbling block, as is the question of whether to purchase modular or frame-based disk arrays.

Organizations are generally in a state of flux with their data infrastructures. IT managers typically need to upgrade existing storage capacity, improve data availability, or increase application I/O performance, and often all three at once.

To address IT growth problems, it's important to understand the fundamentals of the data-management life cycle. As illustrated in Figure 1, organizations move continuously around a wheel of data-management activities. The stages in the wheel can be characterized as follows:

Assessment: Determine organization's current storage investment; define acceptable data storage risks; identify applications' storage footprints; characterize application and database I/O behavior.

Design: Classify storage requirements between SAN and NAS solutions; determine best storage technology for application environments; consider both near-term and long-term requirements.

Deployment: Submit request for proposal (RFP); select modular or frame-based storage; negotiate best vendor proposal; configure and implement solution; document maintenance and support information.

Service functions: Provide secondary storage (near-line and offline) requirements; implement backup, recovery, replication, distribution, retention, and storage management policies.

Management: Deploy management tools; monitor, manage, and administer the storage environment; develop system-administration procedures and run books.

Validation: Verify storage requirements; identify I/O bottlenecks, storage use patterns and consumption; perform capacity planning.

These data-management activities help companies gain competitive advantage by enabling them to deploy business solutions quickly and reliably across the enterprise. IT departments are continually focused on these activities, trying to control and manage their data as well as the associated equipment, management, and environmental costs; physical space requirements; and time constraints.


Figure 1: IT departments are constantly at different stages in the data-management life cycle.

In addition, IT departments can improve deployment and management of their data resources by giving vendors a clear vision of their application storage needs. The best way is through a discovery, assessment, and design process, which identifies application storage footprints, quantifies storage capacities, determines acceptable risks and vulnerabilities, and defines performance characteristics. Once this is performed, business-specific specifications can be incorporated into an RFP to solicit valid storage solutions.

A tale of two technologies

The next step is to determine which storage solution, NAS or SAN, is better suited for the applications. There are many considerations to take into account before such decisions can be made. Figure 2, which illustrates a three-tier computing infrastructure, shows a general rule of thumb for deploying disk resources.

NAS solutions (including both NFS and CIFS services) are generally implemented for IP-oriented application networks and for servers being used as replaceable modules, such as load-balanced Web farms. SANs are typically used for application and database servers that require predictable, low-latency block I/O. However, NAS can host databases and SANs can provide network-based storage; there are limits in both cases, depending on the application. (With the emergence of clustered file systems and storage domain controllers, SANs can also provide shared data access.)

One consideration is to use NAS for database environments where there is no database administrator (DBA) to manage and lay out the database storage environment. NAS servers stripe data over many disks (usually more than 10 drives). This provides a very simple but effective way of dealing with the complexities of organizing data tables, indexes, logs, and rollback segments to mitigate I/O contention or hot spots. Caution should be used, however, when pairing NAS with databases that have high numbers of concurrent users or high transaction rates, or with databases larger than 100GB.

Due to the operational activities of large database environments, management can be difficult using NAS. DBA maintenance activities, such as importing/exporting, rebuilding large indexes, and dropping/rebuilding data tables, generate large-block I/O that, due to TCP inefficiency, puts heavy stress on database servers and application networks. Even with Gigabit Ethernet, 30MBps is typically the maximum throughput per interface. Consider Ethernet trunking and Layer-3 switching to get the performance characteristics necessary for storage networks.
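As a rough illustration of why trunking matters, the back-of-the-envelope sketch below estimates how long a large data movement would occupy the application network at the roughly 30MBps-per-interface figure cited above. The 100GB export size and link counts are hypothetical, and ideal scaling across trunked links is assumed.

# Back-of-the-envelope estimate of how long a large DBA operation ties up
# the application network, using the ~30MBps effective Gigabit Ethernet
# figure cited above (wire speed is roughly 125MBps, but TCP overhead
# cuts this sharply). The 100GB export size and link counts are
# hypothetical illustrations.

EFFECTIVE_MBPS = 30  # observed per-interface throughput cited above

def transfer_hours(data_gb, links=1, per_link_mbps=EFFECTIVE_MBPS):
    """Hours to move data_gb over N trunked links, assuming ideal scaling."""
    total_mbps = links * per_link_mbps
    return (data_gb * 1024) / total_mbps / 3600.0

for n in (1, 2, 4):
    print(f"100GB export over {n} GigE link(s): {transfer_hours(100, n):.1f} hours")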

Note that NAS comes in two varieties: storage appliances, or NAS-only devices that include embedded software for data mirroring, replication, and backup; and "build-your-own" devices that use third-party software from vendors such as Veritas.

SAN disk resources provide IT departments with the speed and simplicity of a direct I/O channel and the flexibility of a network. However, SANs are difficult to implement with I/O-intensive file-sharing applications or applications shared by a large number of clients-even in light of clustered file systems and storage domain controllers. The drawback to shared file access over SANs is that metadata operations are gated by the speed of the Ethernet network and by the performance of the domain controller itself.

These issues can be resolved through optimization of file-system layouts, but this requires a high level of staff expertise. Storage virtualization can simplify storage-management complexity; however, it too requires IT expertise. Since no official standards are expected in the near term, there is no easy way to determine which virtualization solution is best for your organization.

The best way to decide which storage solution is optimal is to understand the application's specific I/O requirements. Typical questions that should be answered include the following (a simple measurement sketch appears after the list):

  • Where is the primary I/O bottleneck?
  • Is the bottleneck with small block or large block I/O?
  • What are the performance characteristics of the application?
  • Does the application mainly generate random I/O or sequential I/O?
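
One way to start answering these questions is to measure the workload directly. The following is a minimal sketch, not a benchmark tool: it times large-block sequential reads and small-block random reads against a hypothetical test file, and it ignores file-system cache effects that would inflate the numbers on a second pass.

# Minimal sketch for characterizing a file's I/O profile: sequential
# large-block reads vs. random small-block reads. The path and sizes are
# hypothetical; clear the file-system cache between runs for honest numbers.
import os, random, time

PATH = "/data/testfile"    # hypothetical test file, much larger than SEQ_BLOCK
SEQ_BLOCK = 256 * 1024     # large block for sequential transfer rate
RAND_BLOCK = 4 * 1024      # small block for random IOPS
RAND_OPS = 2000

def sequential_mbps(path, block=SEQ_BLOCK):
    fd = os.open(path, os.O_RDONLY)
    total, start = 0, time.time()
    while True:
        buf = os.read(fd, block)
        if not buf:
            break
        total += len(buf)
    os.close(fd)
    return total / (time.time() - start) / 1e6

def random_iops(path, block=RAND_BLOCK, ops=RAND_OPS):
    size = os.path.getsize(path)          # assumes size >> block
    fd = os.open(path, os.O_RDONLY)
    start = time.time()
    for _ in range(ops):
        os.lseek(fd, random.randrange(0, size - block), os.SEEK_SET)
        os.read(fd, block)
    os.close(fd)
    return ops / (time.time() - start)

print(f"sequential: {sequential_mbps(PATH):.1f} MB/s, "
      f"random 4KB: {random_iops(PATH):.0f} IOPS")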

NAS and SAN exhibit different performance characteristics that may have a big effect on some applications, but not others. To understand these issues, it's important to understand the premise of random and sequential I/O.

The maximum I/Os per second (IOPS) rate is an effective measure of random I/O performance. For sequential I/O performance, data transfer rates, measured in MBps, are used. IOPS is a good measure of small random reads and writes (the maximum number of commands that can be serviced), while data transfer rates indicate how much overall data can be moved. The two are opposing measures of performance: a small-block random workload never lets the disk subsystem get up to streaming speed, while transfer rates depend on the sustainable rate of sequential disk access.
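The two metrics are tied together by a simple relationship: throughput is just IOPS multiplied by I/O size. The illustrative figures below (assumed, not measured) show how the same subsystem can look strong by one measure and weak by the other.

# Throughput (MBps) = IOPS x I/O size; the workload numbers are illustrative only.
def mbps(iops, io_size_kb):
    return iops * io_size_kb / 1024.0

print(mbps(5000, 8))    # 5,000 random 8KB ops    -> ~39 MBps
print(mbps(200, 256))   # 200 sequential 256KB ops -> 50 MBps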

Storage solutions cannot concurrently deliver high IOPS and high transfer rates without significant thought to the file-system layout, stripe alignment, RAID level, RAID chunk size, and RAID disk-group configuration. Storage systems with large cache configurations only partially mask this trade-off. This is not to say that read/write caching does not improve I/O performance, but caching helps some, not all, application behavior. Note that for NAS-specific solutions, the SPEC SFS97 benchmark is an effective way to evaluate I/O performance.
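As a rough sketch of why the RAID choices above matter, the example below applies common rule-of-thumb write penalties (RAID 0 = 1, RAID 1/10 = 2, RAID 5 = 4; these are general assumptions, not figures from this article) to estimate the front-end IOPS a disk group can sustain for a given read/write mix.

# Sketch of how RAID level and read/write mix gate deliverable IOPS.
# Write penalties are common rules of thumb, not article figures.
WRITE_PENALTY = {"raid0": 1, "raid10": 2, "raid5": 4}

def effective_iops(drives, iops_per_drive, read_pct, raid="raid5"):
    raw = drives * iops_per_drive
    write_pct = 1.0 - read_pct
    # Back-end operations per host I/O: reads cost 1, writes cost the penalty.
    return raw / (read_pct + write_pct * WRITE_PENALTY[raid])

# Ten drives at ~120 IOPS each, 70% read workload (assumed values):
print(round(effective_iops(10, 120, 0.70, "raid10")))  # ~923
print(round(effective_iops(10, 120, 0.70, "raid5")))   # ~632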


Figure 2: A three-tier infrastructure might consist of both modular and frame-based disk arrays.

It is difficult to compare performance statistics across vendor product lines because of the possibility of benchmark manipulation (such as reading/writing blocks to and from cache rather than generating separate, unique reads/writes from disk, or using large numbers of clustered subsystems to completely parallelize all I/O operations). Even disk manufacturers are guilty of miscalculating I/O measurements. Some manufacturers calculate disk access times from revolutions per second alone (counting only the rotational delay) and do not include the average seek and settling times required for the head to reach a sector for an actual read/write operation. The best way to gauge I/O performance is to measure it empirically through direct application testing. This activity is a necessary part of the data-management life-cycle process, both in the assessment phase (before storage acquisition) and in the validation phase (during storage growth).
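To make that miscalculation concrete, the sketch below compares an IOPS figure derived from rotational delay alone with one that also includes an assumed average seek time; the 10,000rpm and 5ms figures are hypothetical drive parameters, not vendor specifications.

# Rotational-delay-only IOPS vs. IOPS including average seek time.
# Drive parameters (10,000rpm, ~5ms average seek) are assumed examples.
def iops_rotation_only(rpm):
    avg_rotational_ms = (60000.0 / rpm) / 2       # half a revolution
    return 1000.0 / avg_rotational_ms

def iops_with_seek(rpm, avg_seek_ms):
    avg_rotational_ms = (60000.0 / rpm) / 2
    return 1000.0 / (avg_rotational_ms + avg_seek_ms)

print(round(iops_rotation_only(10000)))     # ~333 "optimistic" IOPS
print(round(iops_with_seek(10000, 5.0)))    # ~125 more realistic IOPS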

Management opportunities

Organizations can deploy either modular or frame-based disk arrays. The accompanying table points out the differences between these two architectures. Both approaches have unique advantages, and through data-management assessment and design, IT can determine which approach is better for near- and long-term needs.

In Figure 2, disk storage is deployed with modular storage arrays and a centralized data-frame architecture. Both types of disk resources meet the requirements of enterprise computing architectures; they just address the solution with different approaches. Modular disk arrays take a "building-block" approach, providing application-specific storage units to the computing tiers that need them. They are best described as "stack-and-rack" architectures that provide flexible, pay-as-you-grow storage-management solutions.

Frame-based arrays, on the other hand, approach the solution with the idea that storage should be physically centralized, providing disk resources throughout the entire system architecture. They are monolithic product lines designed for disk consolidation and centralized data stores shared by a variety of hosts and applications. Where frame-based storage provides hundreds of RAID volumes, modular disk arrays provide only a handful of virtual disks for application-specific configurations.

Management issues with disk storage are a significant concern in determining the best solution. It is imperative to have centralized monitoring and management for the storage environment, ideally through a Web browser. Even though the modular storage arrays in Figure 2 are distributed across the environment, it is more than likely that they are all colocated in centrally managed rack enclosures. Centralized management is accomplished either by using Jiro-compliant software or by acquiring storage systems from a single vendor. In terms of storage management, frame-based arrays help IT consolidate company-wide data into single storage repositories, avoiding the proliferation of multiple vendor solutions (each requiring its own maintenance, support, and training). In one stroke, IT departments can have bulletproof disk capacity across their enterprise application environment.

Table: Key differences between modular and frame-based disk arrays.

As illustrated in Figure 2, NAS can be provided throughout the infrastructure via a centralized, back-end data-frame array. Either by building NAS clusters (using third-party software) or by buying the solution directly from a vendor (such as EMC's Celerra product line), large data-frame array configurations permit IT to dynamically allocate disk resources between SAN and NAS as needed, all under one roof. This allows "read-only copies" of production data to be broken off and made immediately available to the Web servers, a benefit over deploying multiple NAS appliances, which can only provide NAS resources (and cannot lend spare capacity to other SAN applications).

The best approach in addressing storage requirements for the computing infrastructure is to develop good data-management practices that help identify, define, deploy, manage, and validate storage requirements. Organizations are beginning to look at their data as valuable company assets that require audit, control, and management like any other investment. By performing data-management life-cycle activities, organizations can provide cost-effective, long-term storage solutions. Storage costs are becoming a bigger slice of the IT budget and consequently deserve due diligence like any other business investment.

Mark Teter is director of enterprise storage solutions at Advanced Systems Group (www.virtual.com), an enterprise computing and storage consulting firm in Denver.

