By Michael Keeler
The concept of storage virtualization has been around for years, but only recently has it gained widespread acceptance. As with many innovative concepts, it takes time for companies to become comfortable with placing another device in the heart of the data path between their storage devices and their servers.
The promises of virtualization are many, but so are the risks: an incorrect or inadequate implementation would negatively impact the entire data infrastructure. Acceptance of virtualization has therefore followed the same path as SANs, which were discussed for years in the storage industry before end users finally embraced them. Now it is hard to imagine a medium or large data infrastructure without a SAN. The same is becoming true of storage virtualization. It is the logical next step in storage management.
The potential benefits of storage virtualization are many: a single point of management for all storage, increased capacity utilization, standardized copy services, ease of data migration between storage devices, and a common set of multi-path drivers and tools. Virtualization is also a key enabler for information lifecycle management (ILM) strategies, assisting with data movement between storage tiers. Storage virtualization works by adding a management layer between the servers and the storage.
Servers "see" the virtualization engines as their storage device, while each storage device sees the virtualization engines as its server. Once the virtualization layer is in place, it becomes the primary management interface for communicating with both servers and storage. It is easy to group storage devices, even storage devices from different vendors, into tiers or by common usage. Typically, the entire capacity of an array is mapped to the engines in large increments, which are placed into a storage pool. Virtual disks are then allocated out of these pools for assignment to a host server.
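The pooling and allocation flow described above can be sketched in a few lines of Python. This is a minimal illustrative model, not any vendor's API; the Array, StoragePool, and extent-size names are assumptions chosen for clarity.

```python
class Array:
    """A physical array whose capacity is mapped to the engine in fixed-size extents."""
    def __init__(self, name, capacity_gb, extent_gb=16):
        self.name = name
        # Carve the array's capacity into (array, index) extents.
        self.extents = [(name, i) for i in range(capacity_gb // extent_gb)]

class StoragePool:
    """Groups extents from arrays -- even different vendors -- into one tier."""
    def __init__(self, tier):
        self.tier = tier
        self.free = []

    def add_array(self, array):
        self.free.extend(array.extents)

    def allocate(self, n_extents, host):
        """Carve a virtual disk of n extents out of the pool for a host server."""
        if n_extents > len(self.free):
            raise ValueError("pool exhausted")
        vdisk_extents = self.free[:n_extents]
        del self.free[:n_extents]
        return {"host": host, "extents": vdisk_extents}

# Pool two arrays from different vendors into a single tier,
# then allocate a virtual disk to a host.
pool = StoragePool(tier="tier-1")
pool.add_array(Array("vendorA-array1", capacity_gb=1024))
pool.add_array(Array("vendorB-array2", capacity_gb=512))
vdisk = pool.allocate(n_extents=8, host="app-server-01")
print(len(pool.free))  # remaining free extents in the pool
```

The host is assigned the virtual disk without ever knowing which physical array, or which vendor's array, backs its extents.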
The presence of this layer shields servers and applications from changes to the storage environment. A storage device can easily be replaced with another unit and data copied in the background from one unit to the other without application downtime. The ability to move data at will means that lightly used or outdated data can be easily moved to less-expensive storage devices.
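Because the host addresses only the virtual disk, the layer can swap its backing storage underneath it. A minimal sketch of that remapping, with illustrative names (the real mechanism copies extents in the background while serving I/O):

```python
class VirtualDisk:
    """The identity a host sees; the backing array behind it can change."""
    def __init__(self, name, backing, data):
        self.name = name          # identity presented to the host -- never changes
        self.backing = backing    # current physical array
        self.data = data

def migrate(vdisk, new_backing):
    """Copy the data to the new array, then switch the mapping.

    Simplified: a real engine copies in the background while the host
    keeps reading and writing, with no application downtime.
    """
    copied = list(vdisk.data)     # background copy of the blocks
    vdisk.backing = new_backing   # remap the virtual disk to the new array
    vdisk.data = copied

vd = VirtualDisk("lun-42", backing="tier1-array", data=[b"block0", b"block1"])
migrate(vd, new_backing="tier2-sata-array")  # move cold data to cheaper storage
print(vd.name, vd.backing)  # host still sees "lun-42"
```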
Since the virtualization engines appear as the storage device to the servers, only the multi-path driver associated with the engine manufacturer needs to be used. This reduces the management complexity and interoperability issues associated with running numerous multi-path drivers.
Copy services are also managed at the virtualization layer, which means that point-in-time (PIT) and disaster-recovery data replication need only be purchased at the virtualization layer, and they share a common management interface. It becomes easier to replicate data or create PIT scripts that copy data from one tier to another for backup and data-recovery purposes.
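A common way such PIT copies are implemented is copy-on-write: the snapshot is instant, and old blocks are preserved only when the live volume later overwrites them. A minimal sketch under that assumption (a simple block-dictionary model, not any product's replication API):

```python
class Volume:
    """A live volume supporting copy-on-write point-in-time snapshots."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # address -> data
        self.snapshots = []

    def snapshot(self):
        """Record a PIT view; no data is copied until blocks are overwritten."""
        snap = {"origin": self.blocks, "preserved": {}}
        self.snapshots.append(snap)
        return snap

    def write(self, addr, data):
        # Copy-on-write: preserve the old block for each snapshot first.
        for snap in self.snapshots:
            if addr not in snap["preserved"]:
                snap["preserved"][addr] = self.blocks.get(addr)
        self.blocks[addr] = data

    @staticmethod
    def read_snapshot(snap, addr):
        """Read a block as it existed at snapshot time."""
        if addr in snap["preserved"]:
            return snap["preserved"][addr]
        return snap["origin"].get(addr)

vol = Volume({0: b"A", 1: b"B"})
snap = vol.snapshot()          # instant point-in-time copy
vol.write(0, b"A2")            # the live volume moves on...
print(Volume.read_snapshot(snap, 0))  # ...the snapshot still reads b"A"
```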
There are two basic approaches to virtualization: a stand-alone appliance or a blade within a SAN director or switch. Both approaches have merit. The appliance has the advantage of scalability. To grow the virtualization environment you simply add more appliances, which are managed as a single entity.
Blades are potentially more cost effective since they already reside within the fabric, and there is no need to connect external cabling to attach them to the fabric. Blades are often protected by highly available, director-class hardware.
Like SANs, storage virtualization has taken time to gain widespread acceptance, but its time has arrived. The cost benefits of having a single administrator control hundreds of terabytes, or petabytes, of storage capacity are simply too substantial to ignore.
Michael Keeler is a storage architect at Evolving Solutions Inc., a data-on-demand and storage solutions provider (www.evolvingsol.com).