Over the past decade the data center has been transformed by the emergence and mainstream adoption of virtualization. Today, the data center is a far different creature than any architect would have imagined prior to the year 2000.

Virtualization has transformed how IT deploys and manages workloads, and has given administrators tremendous power to manipulate those workloads in clever ways. As a consequence, the transformation has not merely impacted the data center floor and the inner workings hidden from the business; it has changed the very nature of computing for the business as well.

Test and development processes have been injected with new speed by the power of virtualization, lending agility to the business. Workload availability has increased; more workloads fit in a given amount of floor space; and easier deployment of workloads has created better separation of application components, improved configuration management, and reduced disruptive “incidents” in the IT and application infrastructure.

All of these changes have infused the business with a new ability to depend on IT, and to do so at lower cost and with less risk of disruption.

A look at virtualization alone looks rosy indeed, but the picture is incomplete because it does not stretch far enough. The reality is that there is much more to the infrastructure than the computing workloads virtualization has touched, and these other systems – storage, networks, data protection – have remained far outside its reach.

As an illustration, imagine today’s multi-tasking administrator, finally comfortable virtualizing even the most important apps, and ready to undertake a virtualization initiative for a new business application with multiple components.

Such an exercise involves more than simply firing up server hardware and installing hypervisors. Today, even a minimal set of separately purchased server, storage, network, and storage fabric equipment leaves the administrator facing enormous unknowns and considerable complexity.

How much performance can any one application expect across all of these systems? Is each of these layers configured correctly to achieve good performance? Is there enough bandwidth and I/O available to allow backup? How should certain VMs be segregated and isolated to work with a DMZ?

What is a suitable and cost-effective HA and failover approach for an important application? What about DR? How can multiple VMs be grouped together to enable data protection and cloning operations with consistent, usable data?

Is easy data reuse for testing best enabled by vCloud? Should my network topology include vSwitches and VLANs, distributed vSwitches, or VXLANs?

No doubt, the virtual infrastructure unleashes agility and power. Armed with a good virtual infrastructure, an administrator can protect and reuse applications and data with merely a few mouse clicks. But the virtual infrastructure has also unleashed equal, if not greater, complexity.

The Power and Complexity of Abstraction
Before virtualization, the data center was long dominated by physical systems that were each highly sophisticated and specialized. When virtualization entered the data center, it came as a solution to the hardware dependencies and consequent software complexity introduced by some of this highly sophisticated hardware – specifically, the server.

By abstracting hardware into a homogeneous layer that could pool physical systems and slice them up for higher utilization, virtualization simultaneously simplified and empowered the server administrator’s job. The server administrator could deliver all sorts of new capability to the business – from provisioning easy test and development systems with real data on-demand, to faster deployment, to higher utilization and lower IT costs.

But long after its introduction, the virtual infrastructure running on top of server hardware has yet to tackle other physical, specialized hardware systems in the data center such as networks or storage. The virtual infrastructure has instead duplicated the functionality of many of these systems in order to better support virtual machines, but has done little to extend the power of virtualization and abstraction to these physical systems.

Today, the virtual infrastructure is effectively a second data center within the data center, and this has impeded IT’s ability to tackle other challenges in the infrastructure. Physical, separate, scattered, and non-scalable resources cannot be efficiently pooled. Storage, networks, and other resources are often over-provisioned and under-utilized because they must be configured by hand and cannot adapt or share resources as flexibly as the virtual infrastructure can.

What is the consequence? Unique and powerful features like virtual networks and virtual-infrastructure-specific storage presentations come at a price: complexity. Efficiency has improved in the server infrastructure, and the business has gained new IT capabilities.

But complexity from managing duplicate layers of functionality has increased human overhead to the point that the business may be less agile, and IT efficiency may be worse off. It is in fact this complexity that hobbles many virtualization initiatives. Especially as the virtual infrastructure scales, complexity may put an end to cost savings, unless businesses can find a better solution.

Infrastructure complexity can make virtualization costly. Virtualization typically starts by consolidating servers and reducing equipment, making management easier and saving dollars spent on servers, storage, and networking. But as the infrastructure grows, managing the new layers of virtual infrastructure alongside the physical infrastructure can erase those time and effort savings, and frequently demands more effort than before.

The next wave of virtualization will be different, and it is starting now. Technology is entering the market that will integrate the functionality of many of these still physical, separate systems into a single, completely virtualized infrastructure.

The Next Wave – Hyperconvergence
We call this transformational wave of technology “hyperconvergence.” For the first time, an approach has emerged that combines all of the functionality of a data center in an appliance-like form factor, with units that can be connected to build an entire infrastructure in building-block fashion.

Hyperconvergence stands in stark contrast to the latest iterations of convergence. Convergence has too often packaged existing technologies simply to ease consumption and integration, allowing the consumer to buy fewer individual parts. But converged solutions have still left IT managing the same separate units of functionality.