Moving toward storage re-centralization

Posted on January 01, 2001


The trend is toward architectures such as NAS and SAN, but improvements in software management and testing are still required.

BY RICHARD BRECHTLEIN


The amount of data generated by business, government, and academia is growing at an exponential rate. Economic globalization and the expansion of e-commerce are forcing enterprises to undergo dramatic changes in the way they manage data to maintain their competitive edge. In a recent survey of 30 Fortune 1000 companies, Forrester Research found that the average annual growth rate of data storage was about 100%, with the data storage requirements of some firms increasing by as much as a factor of three.

Not surprisingly, the methods of storing, protecting, and managing this data have assumed paramount importance; the data is often a corporation's most valuable asset. Recent surveys indicate that more than 50% of corporate capital equipment investment now goes to the procurement of data storage systems. With data storage playing such a central role, it's time to re-evaluate the traditional model that pictures a computer system as a wheel, with the CPU at the hub and the peripheral devices as nodes at the ends of the spokes. In an updated model that reflects the critical role of data, the storage devices (or the storage network) become the hub of the wheel, and the CPU joins the other peripheral devices at the rim as a data-crunching engine.

The methods of data storage are undergoing evolutionary change as the overall computing environment progresses. Before the 1970s, most enterprise systems were mainframe computers using centralized data storage, where disk and tape devices the size of refrigerators resided in special cleanrooms. Big, slow, and expensive by today's standards, these corporate resources were nonetheless relatively easy to manage by the IS department. As computing networks evolved in the 1980s, data storage became increasingly decentralized and dispersed, and data management began to take on a new urgency in the minds of many IS managers.


[Figure: IT organizations are moving from server-centric architectures to storage-centric configurations.]

The advent of the PC gave rise to the now-ubiquitous client/server network model. As data storage demands grew, the most common solution was simply to add more storage to the network. Complex systems evolved to contain various, and often incompatible, hardware platforms, operating systems, and storage management utilities, further complicating the task of data management. All too often, these systems suffer from compromised server and network performance and are vulnerable to data unavailability and system downtime. Finally, the Internet, with its insatiable hunger for data, has come into wide acceptance, creating the ultimate data-management nightmare for many IS managers.

The real cost of data storage to a corporation is not simply the initial purchase price of the storage equipment, but the ongoing data management costs, which can run up to 10 times the initial equipment cost, according to a recent survey conducted by the Storage Networking Industry Association (SNIA). To put this into perspective, consider that this year, corporations will likely spend more than $50 billion on storage-related capital equipment and another $500 billion to manage it. Costs associated with system failure can run to millions of dollars per hour for large enterprises, which serves as powerful motivation for optimizing network reliability and performance.

Return to centralized storage

The increasing demands for high data availability and high-performance networks have continued to place stresses on the server-centric approach to network development. In response, a variety of new architectures for storage systems are evolving, including storage area networks (SANs) and network-attached storage (NAS). These architectures can collectively be termed storage networks, and they all exemplify the critical central role that data storage plays. The underlying architecture common to storage networks is the separation of data storage from servers, logically if not physically, resulting in centralized storage that can be more readily managed through a single common management interface.

Centralizing data storage provides several benefits:

  • Centralized storage saves disk capacity because storage is no longer captive (where only applications on the same server as the storage can access it), and reserve capacity requirements are reduced (see the sketch following this list);
  • Administrative costs are reduced by the consolidation of storage repositories for data warehousing, backup, archives, and other shared resources;
  • Network performance is improved by externalizing storage, resulting in faster applications and improved productivity; and
  • Centralized storage is more efficient, more available, and easier and cheaper to manage and maintain.
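
The reserve-capacity point above can be made concrete with a little arithmetic. The figures in the sketch below are purely illustrative assumptions, not numbers from the surveys cited earlier; they simply show how holding one shared reserve in a central pool requires less total headroom than giving each server its own captive cushion.

```python
# Illustrative arithmetic only: all figures below are hypothetical assumptions.
servers = 10                 # application servers, each with captive storage today
used_per_server_gb = 500     # data actually stored on each server
captive_headroom = 0.40      # each server over-provisions its own reserve
pooled_headroom = 0.15       # a shared pool can carry one common reserve

captive_total = servers * used_per_server_gb * (1 + captive_headroom)
pooled_total = servers * used_per_server_gb * (1 + pooled_headroom)

print(f"Captive storage provisioned:  {captive_total:,.0f} GB")
print(f"Centralized pool provisioned: {pooled_total:,.0f} GB")
print(f"Capacity saved by pooling:    {captive_total - pooled_total:,.0f} GB")
```

With these assumed figures, pooling saves 1,250 GB of provisioned capacity; the actual saving in any installation depends entirely on how much headroom each captive server would otherwise carry.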

Challenges of storage networks

While storage networks are making great advances, they will only realize their full potential with the development of fully centralized and integrated storage network management utilities. However, many storage network management standards in development by groups such as SNIA are years away from being finalized. Current storage networks are commonly assembled with servers, switches, storage devices, and other equipment from various vendors, with management software from each vendor that is not necessarily designed to easily integrate with other vendors' management software.

System-wide testing to ensure interoperability, system uptime, performance, and reliability is required for any storage network. Integrators need to confirm migration and integration of storage network components as well as ensure protocol compliance. To optimize a storage network, end users must conduct performance testing to monitor traffic and test link utilization. Error-recovery and fail-over testing must be performed regularly to minimize downtime.
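
As a small illustration of the kind of recurring performance check described above, the following sketch times a sequential write-and-read pass against a mounted storage volume and flags a drop below a threshold. The mount point, test-file size, and alert threshold are assumptions chosen for illustration; a real traffic-monitoring or link-utilization suite would do far more, and would bypass the operating system's cache, which can inflate the read figure here.

```python
"""Minimal sketch of a recurring storage performance probe (illustrative only)."""
import os
import time

MOUNT_POINT = "/mnt/san_volume"            # hypothetical mount of a SAN/NAS volume
PROBE_FILE = os.path.join(MOUNT_POINT, "probe.tmp")
BLOCK = b"\0" * (1 << 20)                  # 1 MB block
BLOCKS = 256                               # 256 MB test file
MIN_MB_PER_SEC = 20.0                      # illustrative alert threshold

def probe_throughput() -> float:
    """Write then read back a test file, returning read throughput in MB/s."""
    with open(PROBE_FILE, "wb") as f:
        for _ in range(BLOCKS):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())               # force the writes out to the device
    start = time.time()
    with open(PROBE_FILE, "rb") as f:      # note: OS caching may inflate this read
        while f.read(1 << 20):
            pass
    elapsed = time.time() - start
    os.remove(PROBE_FILE)
    return BLOCKS / elapsed                # MB read divided by seconds elapsed

if __name__ == "__main__":
    rate = probe_throughput()
    status = "OK" if rate >= MIN_MB_PER_SEC else "DEGRADED"
    print(f"{time.ctime()}  read throughput {rate:.1f} MB/s  [{status}]")
```

Run on a schedule and logged centrally, even a probe this simple gives administrators a baseline against which error-recovery and fail-over exercises can be judged.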

However, the development of testing software and hardware has lagged behind the development of storage networks themselves and is commonly a bottleneck to maximum performance and reliability. Until recently, testing products consisted of a narrow range of incompatible, difficult-to-use software and hardware from a multitude of vendors. The conventional means of storage system testing suffers from numerous limitations, analogous to many of the problems of managing client/server systems.

The majority of existing storage test hardware and software is extremely complicated to understand and use effectively, requiring a high level of technical expertise to operate and to interpret the results, at a time when highly skilled workers are becoming a scarce commodity. Test applications are commonly product- or vendor-specific and are not portable across applications, interfaces, processors, product configurations, and operating systems, a serious limitation in today's mixed-vendor, heterogeneous data storage network environments. Most test applications are not automated and require a custom application to be developed for each test case, which is time-consuming and expensive. Also, remote operation of testing tools from a central administrative console in globally dispersed networks has been almost nonexistent.

A new generation of testing, monitoring, and analytical software and hardware tools that addresses these shortcomings will allow storage networks to fulfill their promise of improved performance, availability, and reliability, while at the same time significantly reducing ongoing data-management costs. New products that offer these benefits are now beginning to enter the market. Automated, cross-platform, adaptable testing and analysis routines designed from the ground up to accommodate the variation in storage networks are now available.
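
To make the idea of automated, adaptable test routines more tangible, here is a minimal sketch of a parameterized test matrix: one generic write/read/verify case driven across several storage targets and transfer sizes, rather than a custom application per test case. The target paths and sizes are hypothetical assumptions, and this does not represent any particular vendor's product.

```python
"""Sketch of an automated, parameterized storage test matrix (illustrative only)."""
import hashlib
import itertools
import os

TARGETS = ["/mnt/san_volume", "/mnt/nas_share"]       # hypothetical storage mounts
TRANSFER_SIZES = [4 * 1024, 64 * 1024, 1024 * 1024]   # bytes written per test case

def write_read_verify(target: str, size: int) -> bool:
    """One generic test case: write a random pattern, read it back, compare digests."""
    path = os.path.join(target, f"testcase_{size}.dat")
    pattern = os.urandom(size)
    with open(path, "wb") as f:
        f.write(pattern)
        f.flush()
        os.fsync(f.fileno())
    with open(path, "rb") as f:
        data = f.read()
    os.remove(path)
    return hashlib.md5(data).digest() == hashlib.md5(pattern).digest()

if __name__ == "__main__":
    # The same routine covers every combination; adding a platform or size is one line.
    for target, size in itertools.product(TARGETS, TRANSFER_SIZES):
        try:
            ok = write_read_verify(target, size)
            print(f"{target}  {size:>8} bytes  {'PASS' if ok else 'FAIL'}")
        except OSError as err:
            print(f"{target}  {size:>8} bytes  ERROR: {err}")
```

The design point is simply that the test logic is written once and the matrix of targets and parameters is data, which is what makes a routine portable across heterogeneous configurations and easy to drive from a central console.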

This new approach to testing and analysis can minimize the cost and complexity of data management, reducing staffing and training requirements. It will also allow systems administrators to concentrate on system optimization and planning for an enterprise's most valuable and strategic asset: its information database.

Rick Brechtlein is president and chief executive of Shugart Technology Inc. (www.shugarttech.com) in Irvine, CA.

