Virtualization approach stirs debate
BY SONIA R. LELII
Hitachi Data Systems' (HDS) recent unveiling of its TagmaStore Universal Storage Platform (USP) disk arrays did more than just introduce the next generation of HDS's crossbar switch architecture. Judging from statements from competitors, Hitachi fueled the debate about whether virtualization, a key tool for managing tiered storage, should reside in the storage controller or in the network fabric.
HDS is betting that customers will be willing to dig deep into their budgets for a high-end subsystem that sits on the front end of multiple storage arrays, including platforms from competitors EMC (CLARiiON and DMX) and IBM ("Shark"), and turns the capacity from those systems into a centralized, virtual pool of storage. HDS executives say the embedded virtualization layer can manage up to 32 petabytes of storage.
"Virtualization is placed behind the controller so we are not adding a new, complex layer to the network," says Hu Yoshida, vice president and CTO at Hitachi. "We are putting [virtualization] behind the storage."
EMC issued a stinging response: "Most of the industry [is putting] virtualization in intelligent switches," says Chuck Hollis, vice president of storage platforms marketing at EMC. "Hitachi's idea is a single-vendor solution. They're saying, 'Use our array to manage all the storage in your environment.' "
HDS's announcement last month included hardware and software introductions that industry analysts say could improve Hitachi's competitive position if the company delivers on its promises. On the hardware side, HDS unveiled the third generation of its Universal Star Network architecture, which is based on a scalable crossbar switch architecture. Hitachi claims performance of up to 68GBps, two million I/Os per second, and 256 concurrent memory operations.
The company introduced three new disk array models: the entry-level TagmaStore USP100, midrange USP600, and high-end USP1100 (see "At A Glance"). Host connectivity options include Fibre Channel, FICON, ESCON, and NAS.
Hitachi positions the TagmaStore USP as a high-end storage controller that can aggregate, access, and manage up to 32PB of storage, spanning both its internal disks and external capacity on competitors' storage systems as well as HDS's own Thunder and Lightning arrays. Internal disk capacity ranges from 38TB to 332TB. The controller is priced from about $600,000.
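In rough terms, a controller-based virtualizer of this kind maintains a mapping from host-visible virtual volumes to extents carved out of internal and external arrays. The following Python sketch, with entirely hypothetical names and structures (this is not Hitachi code), illustrates the idea of pooling heterogeneous back-end capacity behind one controller:

```python
# Illustrative sketch only: how a controller-based virtualizer might map a
# host-visible volume onto capacity drawn from several back-end arrays.
# All class and attribute names are hypothetical.

class BackendArray:
    def __init__(self, vendor, capacity_tb):
        self.vendor = vendor
        self.capacity_tb = capacity_tb
        self.allocated_tb = 0

    def free_tb(self):
        return self.capacity_tb - self.allocated_tb

class VirtualPool:
    """Aggregates internal and external arrays into one pool of capacity."""
    def __init__(self, arrays):
        self.arrays = arrays
        self.volumes = {}  # volume name -> list of (array, tb) extents

    def total_free_tb(self):
        return sum(a.free_tb() for a in self.arrays)

    def create_volume(self, name, size_tb):
        # Spread the volume across arrays, drawing from whichever has space;
        # the host sees one volume and never learns where the extents live.
        if size_tb > self.total_free_tb():
            raise ValueError("pool exhausted")
        extents = []
        remaining = size_tb
        for array in self.arrays:
            take = min(remaining, array.free_tb())
            if take > 0:
                array.allocated_tb += take
                extents.append((array, take))
                remaining -= take
            if remaining == 0:
                break
        self.volumes[name] = extents
        return extents

pool = VirtualPool([
    BackendArray("HDS Lightning", 40),   # internal / HDS capacity
    BackendArray("EMC CLARiiON", 20),    # external, third-party capacity
    BackendArray("IBM Shark", 30),
])
extents = pool.create_volume("erp_data", 50)
```

The host asking for "erp_data" sees a single 50TB volume; it never learns that the extents actually live on two different vendors' arrays, which is the essence of the centralized pool described above.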
On the software side, the announcement included several offerings, centering primarily on virtualization, logical partitioning, and replication. The Hitachi Universal Volume Manager software can virtualize up to 32PB of internal and external capacity while enabling replication and migration across heterogeneous subsystems. Virtual Partition Manager software allocates internal and external physical resources, such as disk, cache, and ports, into independently managed Private Virtual Storage Machines so that IT managers can tune the performance and firmware for specific applications. And Universal Replicator software enables heterogeneous, asynchronous remote replication for disaster recovery.
Of all the pieces of the announcement, how HDS chose to package virtualization is spurring the most debate. Hitachi placed the virtualization intelligence in the storage controller because, according to company executives, the controller has the most immediate knowledge of where data is located and is least likely to impact network and application operations.
IBM and EMC, however, disagree with Hitachi's virtualization approach. EMC plans to ship a router next year with virtualization capabilities. And IBM's virtualization strategy is based on its TotalStorage SAN Volume Controller (SVC) and TotalStorage SAN File System. (For more information, see "IBM takes 'virtual' steps with SFS," InfoStor, July 2004, p. 1, and "SVC for MDS supports non-IBM arrays," September 2004, p. 12.) The SVC is available as a stand-alone network appliance or on a blade in Cisco's MDS line of SAN switches.
Richard Villars, vice president of storage systems at research firm International Data Corp., says the hardware part of IBM's virtualization solution is based on a "diskless head" controller. He says that one of the key differences between the IBM virtualization device and HDS's virtualization approach is scalability. "Hitachi's system is a lot more scalable than what IBM has developed today," says Villars.
EMC's Hollis says that HDS is taking the opposite stance with its decision to implement virtualization in the array controller. He contends that it makes more economic sense to put the intelligence on the network rather than in the array. "The intelligent switch vendors are competing from a totally different price-performance point," says Hollis. "You're dealing with switch economics versus array economics."
However, some analysts maintain that HDS is making the right bet. David Floyer, CTO at ITCentrix, a consulting firm based in Framingham, MA, says HDS is the first company to design a solution in which the storage controller manages all the external and internal volumes attached to it; the hosts see only a virtualized pool of storage.
"You can put the control on the server side or on the storage side," explains Floyer, "but anything that requires real-time management resources [such as volume management and backup] should be done by the storage controller. This is the first attempt to manage external volumes from a controller point of view."
In addition, Floyer points to HDS's logical partitioning tool as another important part of the company's strategy. The Virtual Partition Manager is an extension of a feature used in Hitachi's existing disk arrays. It logically allocates internal and external physical storage resources, including ports, cache, and disks, into independently managed Private Virtual Storage Machines. It can scale up to 32 of these virtual machines.
One of the key enhancements, according to HDS executives, is the built-in ability to virtualize both disks and cache. Previously, HDS partitioned only the disk drives, which meant one application could affect the performance of another. By partitioning both the cache and the disk drives, administrators can now assign individual quality-of-service (QoS) levels to each newly created partition.
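The effect of partitioning both cache and disk can be sketched in a few lines. In this hypothetical Python model (the class and parameter names are illustrative, not HDS's actual interfaces), each partition receives a fixed cache and disk budget at creation time, so a noisy application in one partition cannot consume another partition's resources:

```python
# Hypothetical sketch of carving cache and disk into independently managed
# partitions; names and numbers are illustrative, not HDS's API.

class Partition:
    def __init__(self, name, cache_gb, disk_tb, qos_priority):
        self.name = name
        self.cache_gb = cache_gb          # cache reserved for this partition
        self.disk_tb = disk_tb            # disk reserved for this partition
        self.qos_priority = qos_priority  # per-partition QoS level

class PartitionedController:
    MAX_PARTITIONS = 32  # the USP supports up to 32 virtual storage machines

    def __init__(self, cache_gb, disk_tb):
        self.free_cache_gb = cache_gb
        self.free_disk_tb = disk_tb
        self.partitions = {}

    def create_partition(self, name, cache_gb, disk_tb, qos_priority):
        if len(self.partitions) >= self.MAX_PARTITIONS:
            raise RuntimeError("partition limit reached")
        if cache_gb > self.free_cache_gb or disk_tb > self.free_disk_tb:
            raise ValueError("insufficient resources")
        # Deduct from the shared budget so no other partition can claim it.
        self.free_cache_gb -= cache_gb
        self.free_disk_tb -= disk_tb
        part = Partition(name, cache_gb, disk_tb, qos_priority)
        self.partitions[name] = part
        return part

ctrl = PartitionedController(cache_gb=128, disk_tb=300)
erp = ctrl.create_partition("ERP", cache_gb=64, disk_tb=150, qos_priority="high")
crm = ctrl.create_partition("CRM", cache_gb=32, disk_tb=100, qos_priority="normal")
```

Because the ERP partition's 64GB of cache is reserved at creation time, a read burst in the CRM partition cannot evict ERP's cached data, which is what makes per-partition QoS levels enforceable.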
IDC's Villars agrees that Hitachi's partitioning scheme is one of the more interesting parts of the recent announcement. The enhancement will give users the ability to set up an overall QoS policy within the system, while giving administrators the ability to fine-tune the performance of individual applications in each partitioned domain. For instance, administrators of an ERP application can set up different QoS standards than administrators of a CRM application.
Hitachi's Universal Replicator software, which is due in December, addresses issues that replication software introduced years ago, specifically the use of expensive cache and the inability to size the network for peak traffic times, according to Claus Mikkelsen, senior director of storage applications at Hitachi. Arrays were not originally designed to do replication, and functions such as asynchronous replication require a lot of cache resources.
"Putting replication on the array has been a retrofit," claims Mikkelsen. "All the resources that should be dedicated to the application get chewed up by the cache."
Instead of writing data to cache, the Universal Replicator writes data to disk-based journal files, which are updated continuously. The secondary storage site reads the data from the journal files instead of from cache, thus improving performance. Moreover, Hitachi's Yoshida says that the systems previously had to guess when peak traffic times on the network occurred; now the replication software is tuned to identify peak write rates.
"Writes take more effort and they are a real drag on the network. Before, we had to guess [the peak times]," says Yoshida. "Now, it's foolproof. So if you are backing up, you are backing up to disk instead of cache. And the journal files allow you to maintain the updated write sequence for fast point-in-time recovery."
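The journal-based scheme Yoshida and Mikkelsen describe can be sketched as follows. In this simplified Python model (all names are illustrative, not Hitachi's implementation), the primary appends each write to a sequenced, disk-based journal rather than holding it in cache, and the remote site drains the journal in sequence order at its own pace:

```python
# Simplified sketch of disk-journal-based asynchronous replication:
# the primary appends writes to a sequenced journal instead of holding
# them in cache; the secondary pulls and applies them in order.
from collections import deque

class JournalReplicator:
    def __init__(self):
        self.journal = deque()  # stands in for the disk-based journal files
        self.seq = 0

    def write(self, block, data):
        # Primary path: the write completes locally; replication is
        # decoupled, so application cache is not tied up by the copy.
        self.seq += 1
        self.journal.append((self.seq, block, data))

class RemoteSite:
    def __init__(self):
        self.volume = {}
        self.last_seq = 0

    def pull(self, replicator, batch=100):
        # The secondary reads from the journal at its own pace, so the
        # link need not be sized for peak write bursts.
        applied = 0
        while replicator.journal and applied < batch:
            seq, block, data = replicator.journal.popleft()
            assert seq == self.last_seq + 1  # write order is preserved
            self.volume[block] = data
            self.last_seq = seq
            applied += 1
        return applied

primary = JournalReplicator()
for i in range(5):
    primary.write(block=i, data=f"v{i}")
remote = RemoteSite()
remote.pull(primary)
```

The sequence numbers are what maintain the updated write sequence mentioned above: because the secondary applies journal entries strictly in order, any point in the stream is a consistent point-in-time image for recovery.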
Of course, Hitachi's success depends on whether the company can deliver on its promises. "They have a challenge ahead of them," says EMC's Hollis. "They've bitten off a lot here. Now it's execution time."
Hitachi has two major channel partners. Sun Microsystems will resell the TagmaStore USP through its solution providers as the StorEdge 9990, while Hewlett-Packard will sell an OEM version, the StorageWorks XP12000, through its channels.
TagmaStore is derived from the Greek word meaning "to organize."
At A Glance
The Universal Storage Platform (USP) has a maximum internal raw capacity of 332TB delivered through support for up to 1,152 hard disk drives available in the following capacities: 73GB, 146GB, or 300GB (available end of 2004). The USP is available in three new models, with a non-disruptive upgrade path.
TagmaStore USP100 (entry-level)
- Maximum internal raw capacity: 74TB (up to 256 drives)
- 17GBps of cached bandwidth (roughly equivalent to Lightning 9980V)
- Connectivity: Fibre Channel (up to 64 connections), FICON (32), ESCON (32), NAS (two blades with up to eight ports), up to 8,192 virtual ports, one back-end director

TagmaStore USP600 (midrange)
- Maximum internal raw capacity: 150TB (up to 512 drives)
- 34GBps of cached bandwidth
- Connectivity: Fibre Channel (up to 192 connections), FICON (48), ESCON (96), NAS (up to four blades with 16 ports), up to 24,576 virtual ports, two back-end directors

TagmaStore USP1100 (high-end)
- Maximum internal raw capacity: 332TB (up to 1,152 drives)
- 68GBps of cached bandwidth
- Connectivity: Fibre Channel (up to 192 connections), FICON (48), ESCON (96), NAS (up to four blades with 16 ports), up to 24,576 virtual ports, two to four back-end directors