Users attest to the benefits of virtualization

By Kevin Komiega

The marketing onslaught behind block-level virtualization technologies that besieged the storage industry five years ago has yet to be matched. Many vendors heralded block virtualization as the “killer app” for SANs. The hype has since tapered off, but the promises of simplified SAN management, on-the-fly provisioning, and non-disruptive data migration have actually been realized by many users.


It may not be the flavor of the month, but block-level virtualization has quietly become a staple in many storage environments.

“People have started to forget about virtualization, and that is a terrific comment on the value of the technology. It basically becomes like wallpaper. We don’t think about it anymore,” says Mike Karp, a senior analyst with Enterprise Management Associates. “The important issue is that block-level virtualization clearly works. It’s not really a product. No one wants to go out and buy virtualization. It’s an enabling technology.”

Bill Cureton, manager of midrange systems and storage for global IT services provider Atos Origin, started using block-level virtualization in 2004, when his company was looking for ways to consolidate its underutilized storage infrastructure. Atos Origin provides business consulting, systems integration, and managed operations services to clients in 40 countries.

“Our issues were no different from those of any other outsourcing company or large corporation. We had islands of storage dedicated to different clients that couldn’t be connected or shared and were only 50% utilized. We also had multiple hardware and software solutions for replication,” says Cureton. “We wanted a solution that had growth potential and enabled us to lower our human cost in managing storage: basically, anything we could do to lower the cost to our clients.”

Many of Atos Origin’s clients have their own storage. As a result, Cureton’s infrastructure has to be flexible in terms of how data is migrated and provisioned. “Our clients can own the storage, or we can, but we need to be able to move data around very quickly without disrupting our main storage infrastructure,” he says.

Atos Origin kicked the tires on several virtualization offerings from vendors such as EMC, IBM, and others before implementing Hitachi Data Systems’ TagmaStore Universal Storage Platform (USP), which enables controller-based virtualization, and Hitachi’s Thunder 9585V modular storage system.

Atos Origin uses the asynchronous and synchronous replication capabilities of Hitachi’s TrueCopy Remote Replication software to provide continuous, non-disruptive, platform-independent remote data replication for disaster recovery and data migration over distances.
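The trade-off between the two replication modes the article mentions can be sketched in a few lines. The model below is purely illustrative (the class and method names are invented, not Hitachi TrueCopy’s actual interface): a synchronous pair updates the remote copy before acknowledging a write, while an asynchronous pair acknowledges immediately and ships updates in the background, accepting a short lag at the remote site.

```python
# Illustrative sketch of synchronous vs. asynchronous remote replication.
# Names and structure are hypothetical, not Hitachi TrueCopy's actual API.

from collections import deque

class ReplicatedVolume:
    def __init__(self, mode):
        self.mode = mode                # "sync" or "async"
        self.local, self.remote = {}, {}
        self.pending = deque()          # async writes not yet shipped

    def write(self, block, data):
        self.local[block] = data
        if self.mode == "sync":
            # Synchronous: the remote copy is updated before the write is
            # acknowledged, so both sites stay identical (at a latency cost).
            self.remote[block] = data
        else:
            # Asynchronous: acknowledge immediately and ship the update
            # later, accepting a window where the remote copy lags.
            self.pending.append((block, data))
        return "ack"

    def drain(self):
        # Background task that catches the remote site up (async mode).
        while self.pending:
            block, data = self.pending.popleft()
            self.remote[block] = data

vol = ReplicatedVolume("async")
vol.write(0, b"payroll")
lagging = vol.remote.get(0)   # still None: remote has not yet caught up
vol.drain()                   # remote copy now matches the local one
```

In practice the choice hinges on distance: synchronous replication caps out at metro distances because every write waits on the round trip, which is why products offer both modes for disaster recovery over longer links.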

Cureton uses the Hitachi systems and software as a central point of management for all the storage systems in his data center. These capabilities help Atos Origin manage more than 290TB of client data on multiple platforms from EMC, Hewlett-Packard, Hitachi, and IBM.

One item on Cureton’s wish list is a better way to classify data. “Classification still has a way to go. We have clients that want to have storage classification automatically rendered; in other words, the ability to classify storage automatically based on policies and to put that storage on the right tier,” he says. “On a block-level system that’s difficult to do.”

Better provisioning

Palm Beach Community College stuck its toe in the virtual pool two years ago when the school needed to replace an aging IBM mainframe. The school has more than 49,000 students and operates five campuses and centers throughout Palm Beach County, FL.

“Our server storage was all inside of the box. We had 100 different servers with storage on them and our ERP system had its own IBM Shark,” says Tony Parziale, chief information officer at Palm Beach Community College. “That’s when we had an opportunity to look at an enterprise-wide storage solution.”

The project started as a server consolidation effort, but Parziale decided to take it a step further and look at virtualization as a solution to a hodge-podge of individual silos of storage for applications and servers.

The school ended up with an IBM eServer zSeries 890 mainframe, which is virtualized to run five SuSE Linux partitions that consolidate financial, human resources, and facilities management applications for the college’s 2,000 employees, as well as its entire student registration and tuition system.

Connected to the mainframe is 10TB of storage housed on an IBM TotalStorage DS6800 Enterprise Storage Server with IBM’s TotalStorage SAN Volume Controller (SVC) for block-level virtualization. The storage virtualization software aggregates data from multiple disk storage systems into a single pool of centrally managed storage.
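The core idea behind the aggregation the article describes can be sketched as a mapping layer: the virtualization appliance carves capacity from heterogeneous backend arrays into fixed-size extents, pools them, and builds virtual volumes out of whichever extents are free. The sketch below is a toy model under those assumptions; the names (`StoragePool`, `provision`, `locate`) are invented for illustration and are not IBM SVC’s actual interface.

```python
# Minimal sketch of block-level virtualization: a virtualization layer
# presents virtual volumes whose extents are mapped onto capacity pooled
# from several backend arrays. All names here are illustrative.

EXTENT_MB = 256  # fixed-size allocation unit

class StoragePool:
    def __init__(self, backends):
        # backends: dict of array name -> capacity in extents
        self.free = [(name, i) for name, n in backends.items()
                     for i in range(n)]
        self.volumes = {}  # volume name -> list of (array, extent) pairs

    def provision(self, name, size_mb):
        """Carve a virtual volume out of the shared pool."""
        needed = -(-size_mb // EXTENT_MB)  # round up to whole extents
        if needed > len(self.free):
            raise RuntimeError("pool exhausted")
        self.volumes[name] = [self.free.pop(0) for _ in range(needed)]

    def locate(self, name, offset_mb):
        """Translate a virtual offset to its (backend array, extent) home."""
        return self.volumes[name][offset_mb // EXTENT_MB]

# Two hypothetical backend arrays contribute extents to one pool.
pool = StoragePool({"array_a": 8, "array_b": 8})
pool.provision("erp_vol", 1024)   # one volume, four 256MB extents
first_home = pool.locate("erp_vol", 0)
```

Because hosts only ever see the virtual volume, the mapping table can be changed underneath them, which is what makes the non-disruptive migration and on-the-fly provisioning quoted throughout the article possible.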

“We can provision storage on-the-fly, which is a great benefit, especially on the server side,” says Parziale. “In the past, if you wanted to test something out it was a [chore]. Now we can provision servers and storage immediately.”

Like other users, Parziale says there is one drawback to block-level virtualization: the pricing. “The only downside is that you have to reconcile how much storage you’re putting under the SVC with the IT budget. There is an ongoing internal discussion because now that people see the benefits they want everything to be virtual,” he says.

Centrally managed pool

One of the U.S. Department of Energy’s 10 national labs, the Pacific Northwest National Laboratory (PNNL) was an early adopter of block-level virtualization. As a national lab, it performs research for DOE offices, as well as for other government agencies, universities, and industry.

PNNL currently has approximately 240TB of SAN storage, about 60TB of which is managed under block-level virtualization. But back in 2001, the lab was tasked with managing and protecting 2TB of SAN storage.

“We had a SAN dedicated just to backups, but decided we were going to use it for [primary] storage as well. The biggest driver for virtualization for us was management. We also wanted to reduce downtime to our servers,” says Daryl Anderson, PNNL’s service manager for application and data hosting.

Anderson, who leads the SAN team and is involved in the organization’s day-to-day architectural decisions, began looking at block-level virtualization to aid in managing data growth and achieving high availability.

PNNL chose virtualization technology from StoreAge Networking Technologies (which was recently acquired by LSI Logic). StoreAge’s Storage Virtualization Manager (SVM) platform aggregates disk capacity into centrally managed pools and can dynamically provision storage and reclaim unused space.

“We immediately began utilizing the ability to dynamically extend disk. Now we can give our users just a little more storage than they need when they ask for it,” says Anderson.
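Anderson’s policy of granting “just a little more storage than they need” amounts to extending a virtual disk on demand rather than over-allocating up front. A minimal sketch of that idea, assuming a simple headroom policy (the class and parameter names are hypothetical, not StoreAge SVM’s actual interface):

```python
# Sketch of non-disruptive volume extension: grow virtual capacity on
# demand instead of over-allocating. Hypothetical model, not StoreAge's
# actual SVM interface.

class VirtualDisk:
    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.used_gb = 0

    def extend(self, extra_gb):
        # Grow the virtual capacity in place; hosts see a bigger disk with
        # no downtime because only the mapping table changes underneath.
        self.size_gb += extra_gb

    def request_space(self, gb, headroom_gb=5):
        # Policy from the quote: grant "just a little more than they
        # need" when a request exceeds the current size.
        if self.used_gb + gb > self.size_gb:
            self.extend(self.used_gb + gb + headroom_gb - self.size_gb)
        self.used_gb += gb

disk = VirtualDisk(100)
disk.request_space(120)   # exceeds current size, so the disk grows
print(disk.size_gb)       # 125: the 120 requested plus 5GB of headroom
```

The same mapping-table indirection also underlies the reclamation of unused space mentioned above: extents no longer referenced by any volume simply return to the shared pool.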


Block-level virtualization has matured over the past few years, and that maturation has allowed Anderson to take advantage of more-advanced features such as StoreAge’s mirroring and snapshot applications. But there is a line that has to be drawn in the virtual sand depending on the size of a given IT environment: it is not necessarily cost-effective to virtualize everything. “It costs money per terabyte to license virtualization software,” says Anderson.

But the benefits can outweigh the costs. “There were some growing pains in the beginning, but now my SAN administrators and I would be reluctant to live without [virtualization],” he says. Anderson attributes those growing pains to the relative immaturity of the technology at the time of adoption and advises others to choose a virtual architecture that best fits their business. Among the factors that played into PNNL’s decision to purchase StoreAge’s virtualization platform were its out-of-band management and its ability to interoperate with heterogeneous storage devices.

This article was originally published on April 01, 2007