Why hasn't storage virtualization taken off?

Posted on June 01, 2002


The problems may lie in the management, reporting, monitoring, and delivery systems.

By Bob Rogers

What is hindering the adoption of storage virtualization? Possibilities include end users' deployment concerns, scalability issues, reliability fears, or something as simple as unfamiliarity with some virtualization vendors.

According to analysts, there are more than 1,000 instances of virtualization products scattered throughout customer environments. But why haven't we seen any high-visibility successes? There are probably many reasons, but one of the primary ones may be that all data is not equal. Every installation has application data that ranges in importance from "loved one" to "pond scum." It is just not good enough to build a pool of storage for applications if the "pond scum" manages to infiltrate the locale of the "loved ones."

Think back to why stand-alone servers came about in the first place: Isolation protected an application and its workload, and direct-attached storage perpetuated that isolation. Now that storage administrators are getting crushed by the rapid proliferation of servers and storage systems, and the resulting management nightmares, something else has to be done.

Storage area networks (SANs) helped alleviate some of the pressure associated with the tens, hundreds, or thousands of server "storage islands." SANs consolidate storage resources into fewer, more-manageable entities. So now, rather than thousands of servers with impenetrable walls around them, we have dozens of arrays sprouting LUNs everywhere. We've moved the problem from one place to another, but we've done little to actually solve it.

Network-attached storage (NAS) helped alleviate the problem, too. The file system on a NAS server is owned by that server, so you get the SAN-like benefits of storage consolidation as well as consolidation of the underlying LUNs. Of course, nothing is free: You now have file-level, rather than block-level, access to data.

Virtualization, when combined with policies that dictate which applications share which storage pools, can solve many of these problems. Storage administrators no longer have to monitor hundreds of servers or LUNs and can focus on more-productive work.
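To make that concrete, a placement policy can be as simple as a mapping from application classes to isolated pools, so the "pond scum" never lands next to the "loved ones." The sketch below is purely illustrative; the pool names and the pool_for() helper are hypothetical, not any vendor's interface:

    # Hypothetical placement policy: each application class is confined
    # to its own storage pool, preserving the isolation that stand-alone
    # servers used to provide.
    PLACEMENT_POLICY = {
        "loved_one": "pool_tier1",   # mission-critical data, premium pool
        "workhorse": "pool_tier2",   # everyday application data
        "pond_scum": "pool_tier3",   # low-value data, kept well away
    }

    def pool_for(app_class: str) -> str:
        """Return the pool an application's data may be provisioned from."""
        return PLACEMENT_POLICY[app_class]

    print(pool_for("pond_scum"))     # -> pool_tier3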

If you're a storage administrator, you probably spend the majority of your day returning phone calls and e-mails. The incoming message in perhaps 90% of these little distractions is "gimme" or "find for me." If you create a pool of storage, point users to it, and automatically provision it as it gets depleted, then you've addressed a large chunk of those requests. That doesn't mean you'll have free time on your hands; more likely, you will now be able to get farther down your "to-do" list before you again run out of time.
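In outline, "automatically provision it as it gets depleted" is just a monitoring loop with a threshold. Here is a minimal sketch; get_pool_usage() and expand_pool() are hypothetical stand-ins for whatever management interface a real virtualization engine exposes:

    import time

    GROW_THRESHOLD = 0.85    # expand once the pool is 85% full
    GROW_STEP_GB = 100       # capacity to add per expansion

    def get_pool_usage(pool: str) -> tuple[int, int]:
        """Hypothetical stand-in for a virtualization engine query.
        Returns (bytes used, bytes total)."""
        return 900 * 2**30, 1024 * 2**30   # canned numbers for the sketch

    def expand_pool(pool: str, step_gb: int) -> None:
        """Hypothetical stand-in: ask the engine for more capacity."""
        print(f"{pool}: added {step_gb} GB")

    def autoprovision(pool: str, polls: int = 3) -> None:
        """Watch a pool and grow it before users hit the wall."""
        for _ in range(polls):             # a real daemon would loop forever
            used, total = get_pool_usage(pool)
            if used / total >= GROW_THRESHOLD:
                expand_pool(pool, GROW_STEP_GB)
            time.sleep(1)                  # a real poll interval: minutes

    autoprovision("pool_tier2")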

What's next on your to-do list is hard to predict. However, it is a sure thing that somewhere near the top of that list is tracking who is eating you out of house and home. In terms of virtualization, this means that if you make it easier for your users to chew up space, you also have to make it easier to hunt down who "ate the space." If you're fortunate enough to use quotas to constrain users, that's great; however, quotas are not a panacea, because enterprising users will try to circumvent them and will lose data in the process.
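Hunting down who "ate the space" boils down to a per-user consumption report, sorted from biggest eater on down. A minimal sketch, assuming usage data arrives as (user, bytes) records from whatever accounting your file system or virtualization engine provides:

    from collections import defaultdict

    def top_consumers(records, n=10):
        """Aggregate (user, bytes) records and report the n biggest eaters."""
        totals = defaultdict(int)
        for user, nbytes in records:
            totals[user] += nbytes
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

    # Example: who ate the space this week?
    records = [("alice", 50 * 2**30), ("bob", 200 * 2**30), ("alice", 30 * 2**30)]
    for user, nbytes in top_consumers(records):
        print(f"{user}: {nbytes / 2**30:.0f} GB")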

This takes us back to virtualization. If storage administrators have more time to monitor and manage space, finer control over whose files go where, and a way to manage quality of service, their efficiency improves significantly. However, a substantial obstacle to deploying virtualization today is that the management services, and the policies to apply those services, are not yet good enough.

Consider how virtualization affects database management systems (DBMSs). Most DBMSs try to outfox the operating system, the file system, and anything else that gets between the database and the disks.

In most cases, the only quality of service (QoS) applied is "fastest." However, for the sake of business continuity, other factors such as replication or remote copy might intrude. The DBMS has no way to communicate its needs to the virtualization engine: Setting up and managing these types of management services is an entirely manual process. In truth, the DBMS doesn't know the needs of the application either, so again we have a "knowledge gap."
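Because nothing in the stack can express those needs automatically, an administrator ends up maintaining the mapping by hand, something like the hypothetical table below, where each volume is tagged with the service attributes the virtualization engine should honor:

    # Hypothetical, hand-maintained QoS map: since the DBMS cannot tell
    # the virtualization engine what it needs, an administrator records
    # each volume's promised services here.
    QOS_MAP = {
        "db_orders":  {"tier": "fastest",  "replication": True,  "remote_copy": True},
        "db_reports": {"tier": "fast",     "replication": True,  "remote_copy": False},
        "scratch":    {"tier": "capacity", "replication": False, "remote_copy": False},
    }

    DEFAULT_SERVICES = {"tier": "capacity", "replication": False, "remote_copy": False}

    def services_for(volume: str) -> dict:
        """Look up the services an administrator promised a given volume."""
        return QOS_MAP.get(volume, DEFAULT_SERVICES)

    print(services_for("db_orders"))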

The management services and policies applied to virtualization have to fulfill the needs of applications and users. There are tradeoffs that only the application and business process owners can evaluate and make. For example, which application is labeled a "loved one" versus "pond scum" is a decision that only those owners, not the IT organization alone, can make.

Service-level objectives have long been the vehicle for expressing application and IT requirements, yet how many organizations can point to storage-related service-level objectives? It is time to write them. Even the most sophisticated and intelligent storage solutions being built today are inadequate to give unlimited service and resources to every application. And even if storage were free, someone would have to manage it, because the services needed to use the storage are definitely not free.
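A storage-related service-level objective need not be elaborate: a named set of measurable targets, checked against observed behavior, is enough to start. The fields below are illustrative assumptions, not an industry schema:

    from dataclasses import dataclass

    @dataclass
    class StorageSLO:
        """A hypothetical storage service-level objective."""
        name: str
        max_latency_ms: float    # worst acceptable I/O latency
        min_availability: float  # e.g., 0.999 for "three nines"
        max_util_pct: float      # provision more space above this

        def violated(self, latency_ms: float, availability: float,
                     util_pct: float) -> bool:
            """True when any observed metric misses its target."""
            return (latency_ms > self.max_latency_ms
                    or availability < self.min_availability
                    or util_pct > self.max_util_pct)

    orders = StorageSLO("orders-db", max_latency_ms=10.0,
                        min_availability=0.999, max_util_pct=85.0)
    print(orders.violated(latency_ms=12.4, availability=0.9995, util_pct=70.0))  # True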

Why hasn't virtualization taken off? Perhaps the management, reporting, monitoring, and delivery systems aren't ready yet.

Bob Rogers is the chief storage technologist at BMC Software (www.bmc.com) in Houston, TX.

