By Kevin Komiega
Despite all the hype and debate, storage virtualization is not a priority for more than half of the end users recently surveyed by research firm TheInfoPro. According to the survey, 51% of the Fortune 1000 users interviewed have no plans to implement block-level virtualization, while a mere 15% of these large companies are currently using or evaluating it (see figure). However, users who have deployed virtualization are quick to attest to its benefits.
About 31% of the Fortune 1000 firms surveyed by TheInfoPro will be using block-based virtualization for structured data by year-end.
Dave Samic, senior network analyst at FirstMerit Bank, an Ohio-based bank and financial services provider, is responsible for the design and implementation of FirstMerit’s SAN infrastructure. He is using IBM’s SAN Volume Controller (SVC) virtualization platform, an appliance that combines the capacity from a set of disk arrays into one virtual storage pool that can be centrally managed. SVC also allows for advanced copy services across heterogeneous storage systems from multiple vendors.
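Conceptually, an appliance such as SVC sits in the data path and remaps virtual block addresses onto the physical arrays beneath it, so servers see one pool rather than individual boxes. The following is a minimal illustrative sketch of that address-translation idea only; the class and names are hypothetical and do not reflect IBM's actual implementation or API.

```python
class VirtualPool:
    """Toy model of a block-virtualization layer: capacity from several
    arrays is concatenated into one pool, and virtual block addresses
    are translated to (array, physical block) pairs."""

    def __init__(self):
        self.extents = []       # list of (array_name, capacity_in_blocks)
        self.total_blocks = 0

    def add_array(self, name, capacity_blocks):
        """Fold a physical array's capacity into the single virtual pool."""
        self.extents.append((name, capacity_blocks))
        self.total_blocks += capacity_blocks

    def translate(self, virtual_block):
        """Map a virtual block address to (array_name, physical_block)."""
        if not 0 <= virtual_block < self.total_blocks:
            raise ValueError("virtual block out of range")
        offset = virtual_block
        for name, capacity in self.extents:
            if offset < capacity:
                return name, offset
            offset -= capacity

# Two arrays from different vendors appear to servers as one 1,500-block pool.
pool = VirtualPool()
pool.add_array("array_a", 1000)
pool.add_array("array_b", 500)
print(pool.translate(1200))   # ('array_b', 200)
```

Because servers address only the virtual pool, the layer can grow capacity or migrate data between arrays without the hosts noticing, which is the property Samic describes below.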
“Because of our growth, I’ve had to focus a lot of my time building a scalable solution that can dynamically grow on demand,” says Samic. “When we first got into using SANs we had a lot of stand-alone storage, but SANs only solved about half of our problems and didn’t do all that we thought they would.”
FirstMerit’s goal was to have an IT infrastructure based on blade servers and networked storage, but early in the process, Samic and his co-workers experienced a number of bumps in the road, including throughput lags and problems booting directly from the SAN. It was clear to him that a virtual storage environment would be the only way to realize the full promise of SANs.
A key goal for FirstMerit was the ability to manage more than 19TB of data and 170 servers with relatively few people. Given that his networked storage was primarily from IBM, Samic began evaluating the SVC virtualization platform.
“It’s added up to a recipe for success that I think others have struggled to find. Either they don’t want to take the chance with the technology or they don’t have the resources to evaluate it,” says Samic. “Virtualization is a concoction for failure if you’re not careful about what you do.”
By installing IBM’s SVC, Samic has boosted FirstMerit’s throughput and can now boot more than 30 blade servers simultaneously from the SAN. It sounds simple enough, but Samic believes IBM and other vendors need to better educate end users. “When you say ‘virtualization’ people automatically think VMware, but now you have server, application, and storage virtualization technologies out there. If people could take the time to learn to use the products they would have a 100% success rate,” Samic says.
Block virtualization isn’t just for huge IT shops. ZIRmed, a Louisville, KY, firm that processes healthcare claims for medical facilities, adopted virtualization in the form of a XIOtech Magnitude 3D storage system after having problems expanding volumes and moving data.
ZIRmed CTO Chris Schremser says investing in a virtualization solution has solved a variety of problems, most notably the time spent on storage management.
Schremser’s team keeps every transaction processed by ZIRmed online and accessible to its clients. The amount of online data totals approximately 400GB and is growing at a rate of about 1.5GB per week.
“We used to spend a lot of time managing storage: 8 to 10 hours per month. That’s significant if you’re a relatively small organization without dedicated storage administrators,” says Schremser. “Now we spend almost no time on storage management.”
IT organizations of any size can employ block-level virtualization in a variety of ways, including host-based software, network-based intelligent switches or appliances, or in storage arrays. But in some situations, it’s just not necessary.
As the technical lead for exploration and production at Amerada Hess Corp.’s technical computing group in Houston, Jeff Davis supports his company’s scientific research efforts by managing storage, servers, and security. And he doesn’t need block virtualization.
Virtualization isn’t on Davis’ radar simply because he sees no need for it. “We considered virtualization for things like load-balancing and being able to move data around without impacting users, but we’re just looking for maximum performance,” says Davis.
To that end, he runs a file clustering solution from PolyServe to attain maximum performance. “A lot of the reasons behind using [block] virtualization are related to overcoming technical limitations in storage software, but now we have 64-bit technologies that let you create larger amounts of storage,” he adds.
Davis points out that his hesitancy to implement storage virtualization is related to concerns about integrating legacy hardware with a virtualization solution, as well as concerns about the ability to migrate data: “We have 300TB to 400TB of data online in a non-virtualized environment. How do you migrate that from one storage vendor to another?”
Davis is not alone on the proverbial fence about block virtualization. According to John Sloan, senior research analyst at the Info-Tech Research Group, virtualization products are still not up to par.
“Vendors have not delivered on the promise of virtualization so far, but in the last year we’ve seen some positive movement with vendors providing management across different devices,” says Sloan. “But virtualization is still very much a proprietary play, and if virtualization translates into vendor lock-in you’re going in the wrong direction.”
The goal of virtualization is a cloud of many different storage products that all talk to each other and present pools of virtual storage available to all servers. The hardware should not matter but, according to Sloan, as long as vendors continue to be proprietary there will be skepticism about when, how, and why users should virtualize their SANs.