Storage virtualization: Waiting for nirvana

Posted on January 01, 2003


Storage virtualization has failed to live up to its initial promise.

By Chuck Hollis

Just a few years ago, a number of storage vendors announced their virtualization strategies with great fanfare, promising virtual pools of inexpensive storage attached to inexpensive networks. The goal was to drive down the cost of storage while increasing utilization.

Part of the initial enthusiasm around virtualization was the promise of making "dumb" storage more intelligent. The argument rested on the assumption that although intelligent disk arrays provided rich management functionality, they were too expensive. However, the cost of intelligent disk arrays has followed the same downward curve as other hardware components, such as processors and memory.

At the same time, storage virtualization products were not free. They required additional hardware and software, not to mention the costs of implementation, training, and support. The net result of declining costs for intelligent storage, coupled with additional costs to implement virtualization, has made for a questionable business case.

Virtualization, or better management?

Perhaps the most useful outcome of all the attention paid to storage virtualization is that both users and vendors now have a clearer understanding of the real challenge: open storage management (OSM).

Users want to control and manage all their storage arrays, regardless of vendor. They want to take the complexity out of managing large storage networks and be able to automate and simplify tasks such as performance monitoring, allocation, utilization, and backup, while leveraging their existing investments. Basically, they want software that can take all their hardware assets and use them to deliver better service for less money.

Storage administrators want a single management console and the ability to integrate all storage management functions. This includes applications that simplify and automate complex or repetitive tasks; middleware that allows any storage device, switch, or server to be controlled and coordinated; and a repository that ensures management applications work together seamlessly and effectively.

Virtualization and NAS

Most virtualization techniques are based on the vendors' assumption that all data should be represented as disk volumes. This assumption implies that the basic "unit of management" should be a disk, or something that appears to be a disk.
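
To make that volume-based model concrete, here is a minimal sketch, in Python with entirely illustrative names, of how a virtualization layer might present scattered physical extents as one logical disk:

# Minimal sketch of block-level (volume) virtualization: the layer
# presents one contiguous logical disk, while an extent map redirects
# each logical block to a physical (array, LUN, offset) location.
from dataclasses import dataclass

@dataclass
class Extent:
    array: str      # back-end array holding this extent
    lun: int        # LUN on that array
    offset: int     # starting block within the LUN
    length: int     # number of blocks in the extent

class VirtualVolume:
    """Presents scattered physical extents as one logical disk."""
    def __init__(self, extents):
        self.extents = extents

    def resolve(self, logical_block):
        """Map a logical block address to (array, lun, physical block)."""
        base = 0
        for ext in self.extents:
            if base <= logical_block < base + ext.length:
                return ext.array, ext.lun, ext.offset + (logical_block - base)
            base += ext.length
        raise ValueError("block address beyond end of volume")

# A 2,000-block virtual disk stitched together from two arrays:
vol = VirtualVolume([
    Extent("array-a", lun=3, offset=0, length=1000),
    Extent("array-b", lun=7, offset=5000, length=1000),
])
print(vol.resolve(1500))  # -> ('array-b', 7, 5500)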

Virtualization vendors hedged their bets when they found that disk volumes could be used for both structured (database) and unstructured (file system) data. But these products introduced additional complexity, along with a new management model for defining and managing logical volumes.

It wasn't long before many storage administrators saw that they could consolidate and pool storage (and servers) using network-attached storage (NAS). The products weren't explicitly called storage virtualization, but they achieved almost the same result—without the added complexity.

Users who implemented large-scale NAS consolidation were able to raise utilization by pooling their available storage. This gave them an easy management model, based on file systems.
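
The appeal of the file-based model can be sketched in a few lines of Python. The filer names and the most-free-space placement policy below are hypothetical; real NAS systems handle placement internally, but the point is that the management unit becomes the file system rather than the volume:

class NasPool:
    """Sketch of file-based pooling: place each new file on the
    filer with the most free capacity, evening out utilization."""
    def __init__(self, filers):
        self.filers = dict(filers)   # {filer name: free GB}
        self.placement = {}          # {file path: filer name}

    def create(self, path, size_gb):
        """Place a file on whichever filer has the most free space."""
        target = max(self.filers, key=self.filers.get)
        if self.filers[target] < size_gb:
            raise OSError("pool exhausted")
        self.filers[target] -= size_gb
        self.placement[path] = target
        return target

pool = NasPool({"filer1": 120, "filer2": 40})
print(pool.create("/projects/q1.db", 30))  # -> 'filer1' (most free at 120 GB)
print(pool.create("/projects/q2.db", 30))  # -> 'filer1' (still ahead at 90 GB)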

Better still, users were able to implement NAS without designing and deploying a new storage network or another layer of management. All administrators had to do was extend what was already in place.

As NAS and storage area networks (SANs) continue to converge over the next few years, storage virtualization may become an extension to existing NAS technology, rather than a new layer of software.

The future of virtualization

The buzz surrounding storage virtualization continues to encourage even more vendors to enter the market. For example, switch vendors are putting storage intelligence in the switch, enabling the switch to perform LUN masking, volume management, and replication. Although this may be an interesting variation on traditional storage virtualization, it raises additional questions when compared to NAS devices that provide similar functions.
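
As an illustration, the sketch below models LUN masking as a table resident in the fabric: the switch tracks which host initiators, identified by WWPN, may discover which LUNs. The WWPNs and functions are invented for the example and do not reflect any real switch's interface:

# Hypothetical fabric-resident LUN-masking table: initiator WWPN ->
# the set of LUNs that initiator is permitted to see.
masking_table = {
    "10:00:00:00:c9:2b:aa:01": {0, 1},
    "10:00:00:00:c9:2b:aa:02": {2},
}

all_luns = {0, 1, 2, 3}

def visible_luns(initiator_wwpn):
    """Filter discovery: return only the LUNs this initiator may see."""
    allowed = masking_table.get(initiator_wwpn, set())
    return sorted(all_luns & allowed)

print(visible_luns("10:00:00:00:c9:2b:aa:01"))  # -> [0, 1]
print(visible_luns("10:00:00:00:c9:2b:aa:99"))  # -> [] (unknown hosts see nothing)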

Just as NAS and SAN are converging, volume- and file-based virtualization will converge as well. NAS-SAN convergence will likely produce a single network that merges the two approaches to virtualization into one environment, combining the strengths of each.

There won't be a "winner-take-all" result in the debate over where storage intelligence should reside. Already, virtualization occurs at all levels: storage arrays, networks, appliances, and servers—and this will continue. The challenge will be to control and manage the capabilities at each level.

Over time, virtualization won't be seen as a specific product; it will be seen as a set of underlying storage capabilities: layered functionality at each level that helps management tools do a better job of managing.

The concept of storage virtualization (as initially defined) may cease to exist in the near future. In its place will emerge an entirely new category, which I'm referring to as open storage management. OSM will combine underlying functionality with lower-level abstractions that simplify the management of complex storage environments. An OSM infrastructure will be able to harness functionality at each level and make the user-visible storage management tools even more powerful.

The role of OSM will be to combine everything in the stack, including path management, volume management, provisioning, replication, and I/O redirection, into a single capability.
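
One way to picture that single capability is as a single programmatic surface spanning every function in that list. The interface below is purely speculative, a sketch of what an OSM layer might expose rather than any shipping product's API:

from abc import ABC, abstractmethod

class OpenStorageManager(ABC):
    """Speculative sketch: one control point across path management,
    volume management, provisioning, replication, and I/O redirection."""

    @abstractmethod
    def provision(self, size_gb: int, tier: str) -> str:
        """Allocate capacity somewhere in the pool; return a volume id."""

    @abstractmethod
    def replicate(self, volume_id: str, target_site: str) -> None:
        """Set up replication for a volume, wherever it lives."""

    @abstractmethod
    def set_path_policy(self, volume_id: str, policy: str) -> None:
        """Apply a multipathing policy (e.g., 'round-robin')."""

    @abstractmethod
    def redirect_io(self, volume_id: str, new_backend: str) -> None:
        """Transparently move a volume's I/O to a new back end."""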

Chuck Hollis is vice president of markets and products at EMC Corp. (www.emc.com) in Hopkinton, MA.

