October 4, 2010 — The concept of virtualization is nearly as old as computing itself. It has been applied, with varying degrees of success, to computer memory access, processing, storage and networking. At its most fundamental level, virtualization already exists within all of these elements.

However, virtualization is now becoming visible at an external level. Rather than being an internal tool used by the engineers designing a product, virtualization is now being used to transform the three core elements of information technology: compute, network and storage.

Success as a service provider, or as an IT department delivering cloud services, depends upon the flexibility and efficiency required to be cost effective.

Service providers have tried, and failed, many times to deliver services without virtualization and other core aspects of IT as a service (ITaaS). Witness the spectacular collapse of the xSP service providers just after the turn of the millennium. Their cost structures could not support their business models. These providers tried to deliver services without the critical layer of virtualization, and as a result their models were unsustainable.

Because virtualization breaks the relationship between applications and the IT systems they run on, it frees system administrators from having to provide specific hardware with static configurations. With an entire layer of virtualization in place, the computing, networking and storage elements are broken down into standard services.

The modern data center will run in a virtual environment, composed entirely of virtualized computing, networking and storage elements.

Virtual storage defined

There is a fundamental difference between a resource that uses virtualization internally and one that provides a set of virtual interfaces. Vendors often exploit this point of confusion, obscuring whether or not a product actually delivers virtualized resources. As an example, all modern operating systems use virtualization, but only hypervisors such as VMware, Hyper-V and Xen deliver virtualized computing. Similarly, nearly all enterprise storage systems utilize virtualization, but only a few products provide virtualized storage.

Storage virtualization does not imply “virtualized storage” unless it allows the use of any storage system over any network connectivity. In order to deliver the type of virtualization required for highly flexible cloud services and ITaaS, virtual storage must provide standard, virtual interfaces that support multiple storage vendors’ products.

Benefits of virtual storage

Some of the high-level benefits of using virtual storage (rather than storage with virtualization) include:

–Improved efficiency through greater storage utilization

–Standardized management of storage, providing decreased operational expenses

–Storage product interchangeability, providing lower capital expenses

How virtual storage works

Virtualization is an abstraction that provides a simple, consistent interface to a potentially complicated system. By providing a consistent interface, it frees both the engineers designing systems and the users of those systems from being tied to any one specific implementation.

Most commonly, virtualization is implemented through a mapping table that provides access to resources. The use of mapping tables is the reason why 64-bit, or even larger, addresses are required: keeping track of billions or trillions of resources demands a large address space. To overcome limitations in grouping and in the size or granularity of resource access, it is also common to use multiple levels of mapping, or indirection.
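
As a rough illustration of this kind of mapping, here is a minimal sketch in Python (the names MappingTable, EXTENT_BLOCKS and the array identifiers are invented for this example and are not taken from any product), showing how an extent-based map might translate a virtual block address into a backend array and a physical block:

    # Minimal sketch of extent-based address mapping for virtualized storage.
    # All names and sizes here are illustrative, not taken from any product.

    EXTENT_BLOCKS = 1024  # blocks per extent (the granularity of the map)

    class MappingTable:
        def __init__(self):
            # extent number -> (backend array id, physical start block)
            self.extent_map = {}

        def map_extent(self, extent, backend, phys_start):
            self.extent_map[extent] = (backend, phys_start)

        def translate(self, virtual_block):
            """Translate a virtual block address to (backend, physical block)."""
            extent, offset = divmod(virtual_block, EXTENT_BLOCKS)
            backend, phys_start = self.extent_map[extent]  # KeyError if unmapped
            return backend, phys_start + offset

    table = MappingTable()
    table.map_extent(0, backend="array-A", phys_start=50_000)
    table.map_extent(1, backend="array-B", phys_start=0)

    print(table.translate(10))    # ('array-A', 50010)
    print(table.translate(1030))  # ('array-B', 6)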

With the advent of thin provisioning, multiple point-in-time copies of volumes and multi-terabyte disk drives, many vendors have found it necessary to employ three levels of mapping.
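
To make the thin-provisioning idea concrete, here is a similarly hedged sketch (again with invented names) in which physical extents are drawn from a shared pool only when a virtual extent is first written, and reading an extent that was never written simply returns zeros:

    # Illustrative thin-provisioning sketch: physical space is allocated
    # only on first write.  Names and sizes are invented for this example.

    EXTENT_BYTES = 4096

    class ThinVolume:
        def __init__(self, pool):
            self.pool = pool   # shared list of free physical extent ids
            self.map = {}      # virtual extent -> physical extent id
            self.store = {}    # physical extent id -> bytes

        def write(self, virt_extent, data):
            if virt_extent not in self.map:        # allocate on first write
                self.map[virt_extent] = self.pool.pop()
            self.store[self.map[virt_extent]] = data[:EXTENT_BYTES]

        def read(self, virt_extent):
            phys = self.map.get(virt_extent)
            if phys is None:                       # never written: zeros
                return bytes(EXTENT_BYTES)
            return self.store[phys]

    pool = list(range(100))        # 100 free physical extents in the pool
    vol = ThinVolume(pool)
    vol.write(7, b"hello")
    print(vol.read(7)[:5])         # b'hello'
    print(len(pool))               # 99 -- only one extent consumed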

Implementation approaches:

Symmetric: This method is commonly known as “in-band.” With this approach, all I/O moves through the virtualization layer. The mapping table is also managed and maintained on the devices providing the in-band virtualization.

Asymmetric: This method is also commonly referred to as “out-of-band.” In this implementation, the data and the metadata take separate paths. The mapping table, or metadata describing where data actually resides, is loaded into each host accessing the storage, and the host then performs I/O directly against the backend storage. A sketch contrasting the in-band and out-of-band data paths appears after these descriptions.

Hybrid: A third approach is known as “split-path.” This method is still asymmetric, although the I/O mapping occurs somewhat “in-band.” The split-path method is typically used with a storage network switch, which contains the virtualization layer and its mapping table. Although I/Os flow directly in the data path, through the storage networking switch, the management and administration occur out-of-band; for this reason, the approach is known as “split-path.” The most common example of this approach was EMC’s Invista product. LSI’s SVM (also sold as HP’s SVSP) also uses this method.
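
The sketch below contrasts the symmetric and asymmetric data paths described above (the class names, arrays and mappings are invented for illustration and are not taken from any product); the split-path case would keep the in-band data flow through a switch while moving the management of the map out-of-band:

    # Sketch contrasting the symmetric (in-band) and asymmetric (out-of-band)
    # approaches.  Backends, class names and layouts are invented.

    arrays = {"array-A": {}, "array-B": {}}               # toy backend block stores
    shared_map = {0: ("array-A", 42), 1: ("array-B", 7)}  # virtual extent -> location

    class InBandVirtualizer:
        """In-band: owns the map, and every I/O is forwarded through it."""
        def __init__(self, backends, mapping):
            self.backends, self.map = backends, mapping

        def write(self, virt_extent, data):
            backend, phys = self.map[virt_extent]
            self.backends[backend][phys] = data     # data path goes through us

        def read(self, virt_extent):
            backend, phys = self.map[virt_extent]
            return self.backends[backend].get(phys)

    class OutOfBandHost:
        """Out-of-band: loads a copy of the map, then talks to arrays directly."""
        def __init__(self, backends, metadata_service):
            self.backends = backends
            self.map = dict(metadata_service)       # metadata path only

        def write(self, virt_extent, data):
            backend, phys = self.map[virt_extent]
            self.backends[backend][phys] = data     # direct host-to-array I/O

        def read(self, virt_extent):
            backend, phys = self.map[virt_extent]
            return self.backends[backend].get(phys)

    vz = InBandVirtualizer(arrays, shared_map)
    vz.write(0, b"written via the in-band appliance")

    host = OutOfBandHost(arrays, shared_map)
    host.write(1, b"written directly by the host")

    print(vz.read(0), host.read(1))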

Where virtual storage occurs

Host based: This was one of the first methods of providing virtual storage. This method delivers more than just virtualized storage, because it uses the hosts’ ability to connect to multiple storage systems from different vendors to provide a common way of managing and allocating resources.

The term most often used for this class of products is a “volume manager,” which manages volumes or LUNs on a host system. Several operating systems include basic volume managers, such as HP-UX, AIX, z/OS, Solaris, Linux and Windows. Third-party volume manager products are also available, such as Symantec’s Veritas Volume Manager.
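
As a minimal sketch of what a volume manager does (invented names only, not modeled on any of the products above), the example below concatenates LUNs from two different arrays into a single logical volume and maps each logical block onto the LUN that backs that region:

    # Toy host-based volume manager: concatenates LUNs from different
    # arrays into one logical volume.  Everything here is illustrative.

    class LUN:
        def __init__(self, name, size_blocks):
            self.name = name
            self.size_blocks = size_blocks
            self.blocks = {}

    class ConcatVolume:
        def __init__(self, luns):
            self.luns = luns                       # ordered list of LUNs

        def _locate(self, logical_block):
            """Map a logical block to (LUN, block within that LUN)."""
            for lun in self.luns:
                if logical_block < lun.size_blocks:
                    return lun, logical_block
                logical_block -= lun.size_blocks
            raise ValueError("block beyond end of volume")

        def write(self, logical_block, data):
            lun, block = self._locate(logical_block)
            lun.blocks[block] = data

        def read(self, logical_block):
            lun, block = self._locate(logical_block)
            return lun.blocks.get(block)

    vol = ConcatVolume([LUN("vendor-A-lun0", 1000), LUN("vendor-B-lun0", 2000)])
    vol.write(1500, b"spans onto the second LUN")
    print(vol.read(1500))          # stored at block 500 of vendor-B-lun0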

Network based: As implied by the name, this approach places the virtualization within the data path between the host and the storage system. With the advent of storage networks, the network-based approach to delivering virtualized storage has become popular. One issue that plagued early versions of these products was the lack of advanced software protection capabilities.

There have been several popular and successful products in this category, most notably IBM’s SAN Volume Controller (SVC) and NetApp’s V-Series, as well as products from vendors such as DataCore, FalconStor, LSI StoreAge and others. Another recent offering, though only for virtualized server environments, is EMC’s VPLEX[1].

Storage system: This approach is somewhat similar to network-based virtualization. Storage networking connections such as Fibre Channel and IP are used to connect third-party storage to the primary storage system that is providing the virtualization. This method has the advantage that existing data protection and storage management tools may be extended to support the external, virtualized storage. The Hitachi USP storage platform is the most complete and successful example of this model to date.

The new data center needed to deliver ITaaS and cloud computing requires virtualized components at its foundation. Without virtualized computing, networking and storage, administrators will be unable to meet the dynamic demands of their customers without over-provisioning, over-charging, or both.

The success of virtualized computing is now seen by nearly everyone as a transformational event. However, without virtualized networking and storage, data centers will continue to operate inefficiently. The next wave of transformation begins with the use of virtualized storage.

Russ Fellows is a senior analyst with the Evaluator Group research, education and consulting firm, which provides unbiased product analysis and comparisons. Detailed comparisons of virtual storage offerings are available in the Evaluator Group’s SAN Virtualization Comparison Matrix.
