The big vendors want to wipe out the competition without wiping out their assets.
By John Sloan
Remember the controversy back in the 1980s about the neutron bomb? This nasty nuke’s claim to infamy was that it killed the enemy while leaving much of the enemy’s infrastructure standing. For the top-tier storage vendors, storage virtualization is a neutron bomb. Of course, vendors don’t use nuclear weapons analogies. They use terms like “clearing the field of competition in the data center,” which isn’t a whole lot different from saying “clearing the battlefield of the enemy but leaving their assets unscathed.”
IBM, for example, is an early leader in the nascent networked storage virtualization field. The company already has a customer base of more than 1,700 for its SAN Volume Controller (SVC), a storage virtualization appliance. SVC helps storage managers get the most out of existing storage by combining storage devices from different vendors into one centrally managed storage pool. SVC also has the effect of putting IBM in the driver’s seat: The field is effectively cleared of competitors, but their assets are left behind to be managed by IBM’s proprietary product.
Virtualization is a hot topic in SAN circles right now, driven primarily by a push from big vendors such as IBM, EMC, and Hitachi Data Systems (HDS). We agree that virtualization is an important development, but it is important because it is integral to a larger movement toward utility-based infrastructure. In separating the value from the hype, IT decision-makers must evaluate how any solution moves the infrastructure toward utility management while being wary of how virtualization may translate into vendor lock-in.
Toward a utility infrastructure
Utility infrastructure is the ultimate goal, and virtualization is just one of the roads to this greater promise. In an ideal utility IT infrastructure, processing and storage are managed as resources similar to the way that water and electricity are managed for a business or home. The underlying infrastructure is not as important as simple commodity metrics: how much can be made available and at what cost per unit.
Utility IT infrastructure has the benefits of being highly responsive and making the most use of available hardware investments. Take, for example, the need to provide IT resources for a new business process. In a traditional distributed architecture, satisfying this need will likely require the purchase of servers and either internal disks or an attached array to house the applications and data that this new process will require.
In a utility infrastructure, the new application is provisioned with processing power, and storage space apportioned, from a pool of available resources. The application resides in a virtual machine and data is stored on a virtual storage volume. In this scenario, an enterprise will still buy servers and disks, but these will be commodity purchases made only to increase the size of the processing and storage pool as needed.
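The provisioning model described above can be sketched in a few lines of code. This is purely illustrative (the class and method names are invented, not any vendor's API): the point is that applications draw capacity from a shared pool, and hardware purchases only ever grow the pool.

```python
# Illustrative sketch of utility-style provisioning (invented names,
# not a real product API). Applications are allocated capacity from a
# shared pool; buying disks just grows the pool.

class StoragePool:
    def __init__(self):
        self.capacity_gb = 0
        self.allocations = {}  # application name -> GB allocated

    def add_commodity_disks(self, gb):
        """Hardware purchase: a commodity buy that enlarges the pool."""
        self.capacity_gb += gb

    def free_gb(self):
        return self.capacity_gb - sum(self.allocations.values())

    def provision(self, app, gb):
        """Apportion storage for a new business process from the pool."""
        if gb > self.free_gb():
            raise RuntimeError("pool exhausted; add more commodity disks")
        self.allocations[app] = self.allocations.get(app, 0) + gb
```

In this model, a new application never triggers a dedicated server-and-array purchase; it simply calls `provision()`, and capacity planning reduces to watching `free_gb()`.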
Commoditization and virtualization are key enablers of a utility infrastructure. Through commoditization, the hardware is standardized, undifferentiated, and lower cost. Virtualization is the means by which the processing and storage are abstracted and managed separately from the underlying hardware layer.
The many become one
Virtualization in a SAN involves abstracting the logical presentation of storage (a storage volume) from the physical asset (disks) where the data actually resides. The result is more-efficient use of storage assets. For example, data can be migrated from one device to another behind the scenes while the logical presentation of the storage remains constant. This no-downtime migration is a boon to maintenance and for migrating storage blocks from expensive to less-expensive arrays in information lifecycle management (ILM) schemes.
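The abstraction described above boils down to a mapping table: hosts address logical extents, and the virtualization layer translates those to physical locations, which it can change at will. The sketch below is a toy illustration (all names are invented, and real products work at far finer granularity with copy-on-write and consistency machinery), but it shows why migration is invisible to the host: only the map entry changes, never the logical address.

```python
# Toy sketch of block-level virtualization (invented names, not a real
# product API). A virtual volume maps logical extents to (device,
# physical extent) pairs; migrating data only updates the map.

class Device:
    """Stand-in for a physical disk array."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}
        self.next_free = 0

    def allocate(self):
        phys = self.next_free
        self.next_free += 1
        return phys

    def store(self, phys, data):
        self.blocks[phys] = data

    def load(self, phys):
        return self.blocks[phys]

    def free(self, phys):
        del self.blocks[phys]


class VirtualVolume:
    def __init__(self):
        # logical extent number -> (device, physical extent number)
        self.mapping = {}

    def write(self, logical, device, data):
        phys = device.allocate()
        device.store(phys, data)
        self.mapping[logical] = (device, phys)

    def read(self, logical):
        device, phys = self.mapping[logical]
        return device.load(phys)

    def migrate(self, logical, new_device):
        """Move an extent to another array, e.g. from an expensive tier
        to a cheaper one; the host's logical address never changes."""
        data = self.read(logical)
        phys = new_device.allocate()
        new_device.store(phys, data)
        old_device, old_phys = self.mapping[logical]
        self.mapping[logical] = (new_device, phys)  # remap, then reclaim
        old_device.free(old_phys)
```

An ILM-style move from a premium array to a SATA tier is then just `volume.migrate(extent, cheap_array)` behind the scenes, while reads and writes continue against the same logical volume.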
Storage virtualization is hardly new. Carving up the combined storage of a disk array into multiple LUNs is already a type of virtualization. This is homogeneous virtualization where the storage pool is managed within a single vendor’s product. The kind of virtualization that is getting more attention this year is heterogeneous virtualization, whereby storage devices from different vendors are managed in one virtual storage pool.
Big name storage vendors that are getting into heterogeneous virtualization include IBM with its SVC, EMC with the Invista platform, and HDS, which touts the virtualization capabilities of its TagmaStore array controller. Each vendor implements virtualization differently. SVC is a storage network appliance. Invista is software for intelligent SAN switches, and TagmaStore is a storage controller that can recognize and pool storage from non-HDS arrays.
These are not the only products for, or approaches to, heterogeneous virtualization. Various switch vendors, such as Maxxan, delivered heterogeneous virtualization capabilities long before the big vendors entered the market. But the industry giants are starting to move, and when the giants move the ground shakes and users take notice.
Fighting to stay on top
Utility infrastructure has a number of dire implications for hardware vendors. First, as the hardware layer becomes more commoditized, competitive advantage will come not from proprietary hardware but in how the processing and storage utilities are managed. Second, as utilization increases through resource pooling, enterprises will purchase processors and storage devices less often.
Big hardware vendors are fully cognizant of the threat utility infrastructure poses to their markets and their margins. Their response is twofold: Seize the high ground of the management layer for virtual infrastructure, and clear the field of the competition. Their interest is not in advancing a new paradigm for infrastructure management, but rather in surviving the paradigm shift.
This strategy is why EMC, for example, has been working like mad to become a software company even as it continues to lead the market in external disk sales. EMC is the market leader in x86 processor virtualization software with VMware, and it has put a stake in the ground for block-level heterogeneous storage virtualization with Invista. EMC has seen the future and is working to make sure that its products figure prominently in that future.
Avoid neutron radiation
Heterogeneous virtualization only makes sense if existing heterogeneous storage is a problem that needs to be solved. A larger enterprise, for example, may already have a lot of external storage in place. It may already have several SAN fabrics and be looking to improve utilization by consolidating those SANs into one big storage pool. Enterprises that have more-modest needs, or are building out their first consolidated storage network, will be better served by going with virtualization within a homogeneous array or storage cluster from a single vendor.
Compellent, for example, is doing interesting things with managing storage blocks within its expandable SAN storage array. In the iSCSI world, LeftHand Networks’ SAN/iQ creates clustered storage pools from multiple storage “bricks”: storage servers populated with Serial ATA (SATA) drives and standard Intel processors. Clustered storage that leverages industry-standard processors and Ethernet probably comes closer to the ideal of commoditized storage virtualization.
In terms of long-term infrastructure strategy, Info-Tech encourages IT organizations to move toward a consolidated utility infrastructure for both processing and storage. IT decision-makers will find that major vendors wholeheartedly agree with this proposition, as long as it is their utility infrastructure being purchased. In the near term, IT decision-makers should ignore the virtualization hype and look to what business problems a given storage solution will solve.
John Sloan is a senior research analyst at the Info-Tech Research Group (www.infotech.com).