Over the past decade the data center has been transformed by the emergence and mainstream adoption of virtualization. Today, the data center is a far different creature than any architect would have imagined prior to the year 2000.

Virtualization has transformed how IT deploys and manages workloads, and has given administrators tremendous power to manipulate those workloads in clever ways. As a consequence, the transformation has not merely impacted the data center floor and the inner workings hidden from the business; it has changed the very nature of computing for the business as well.

Test and development processes have been injected with new speed by the power of virtualization, lending agility to the business. Workload availability has increased; more workloads fit in a given amount of floor space; and easier deployment of workloads has created better separation of application components, improved configuration management, and reduced disruptive “incidents” in the IT and application infrastructure.

All of these changes have infused the business with a new ability to depend on IT, and to do so at lower cost and with less risk of disruption.

A look at virtualization alone looks rosy indeed, but the picture is incomplete because it does not stretch far enough. The reality is that there is much more to the infrastructure than the computing workloads touched by virtualization, and these other systems – storage, networks, data protection – have remained far outside virtualization’s reach.

As an illustration, just imagine today’s multi-tasking administrator, finally comfortable virtualizing even the most important applications, and ready to undertake a virtualization initiative for a new business application with multiple components.

Such an exercise today involves more than simply firing up server hardware and installing hypervisors. Today, a minimal set of separately purchased storage, server, network and storage fabric equipment leaves the administrator facing enormous unknowns and considerable complexity.

How much performance can any one application expect across all of these systems? Is each of these layers configured correctly to achieve good performance? Is there enough bandwidth and IO available to allow backup? How should some VMs be segregated and isolated to work with a DMZ?

What is a suitable and cost effective HA and failover approach for an important application? What about DR? How can multiple VMs be grouped together to enable data protection and cloning operations with consistent, usable data?

Is easy data reuse for testing best enabled by vCloud? Should my network topology include vSwitches and VLANs, distributed vSwitches, or VXLANs?
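To make the scale of just one of these decisions concrete, consider VM segregation on the network. The sketch below is a minimal, illustrative example only – it assumes the pyVmomi Python SDK and a hypothetical ESXi host and credentials – showing roughly what creating a dedicated vSwitch and a VLAN-tagged DMZ port group on a single host involves. Multiply that by every host, VLAN, storage path, and backup window, and the complexity becomes clear.

```python
# Minimal sketch (assuming pyVmomi; hostnames and credentials are hypothetical).
# Creates a standard vSwitch and a VLAN-tagged port group to segregate DMZ VMs.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()            # lab-only: skip cert validation
si = SmartConnect(host="esxi.example.com",            # hypothetical host
                  user="root", pwd="password",
                  sslContext=context)

content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]
host = datacenter.hostFolder.childEntity[0].host[0]   # first host in the first cluster
net_sys = host.configManager.networkSystem

# Create a standard vSwitch bound to a spare physical NIC.
vswitch_spec = vim.host.VirtualSwitch.Specification()
vswitch_spec.numPorts = 128
vswitch_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"])
net_sys.AddVirtualSwitch(vswitchName="vSwitch-DMZ", spec=vswitch_spec)

# Add a VLAN-tagged port group so DMZ VMs stay isolated at layer 2.
pg_spec = vim.host.PortGroup.Specification()
pg_spec.name = "DMZ-VLAN-100"
pg_spec.vlanId = 100
pg_spec.vswitchName = "vSwitch-DMZ"
pg_spec.policy = vim.host.NetworkPolicy(security=vim.host.NetworkPolicy.SecurityPolicy())
net_sys.AddPortGroup(portgrp=pg_spec)

Disconnect(si)
```

And that is only the network layer on one host; the storage, fabric, and data protection layers each carry their own equivalent of this configuration work.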

No doubt, the virtual infrastructure unleashes agility and power. Armed with a good virtual infrastructure, an administrator can protect and reuse applications and data with merely a few mouse clicks. But the virtual infrastructure has also unleashed as much complexity as before, if not more.

The Power And Complexity Of Abstraction

Before virtualization, the data center was long dominated by physical systems that were each highly sophisticated and specialized. When virtualization entered the data center, it came as a solution to the hardware dependencies and consequent software complexity introduced by some of this highly sophisticated hardware – specifically, the server.

By abstracting hardware into a homogeneous layer that could pool physical systems and slice them up for higher utilization, virtualization simultaneously simplified and empowered the server administrator’s job. The server administrator could deliver all sorts of new capability to the business – from provisioning easy test and development systems with real data on-demand, to faster deployment, to higher utilization and lower IT costs.

But long after its introduction, the virtual infrastructure running on top of server hardware has yet to tackle other physical, specialized hardware systems in the data center such as networks or storage. The virtual infrastructure has instead duplicated the functionality of many of these systems in order to better support virtual machines, but has done little to extend the power of virtualization and abstraction to these physical systems.

Today, the virtual infrastructure is effectively a second data center within the data center, and this has impeded IT’s ability to tackle other challenges in the infrastructure. Physical, separate, scattered, and non-scalable resources cannot be efficiently pooled. Storage, networks, and other resources are often over-provisioned and under-utilized because they must be managed by hand and are less able to adapt or flexibly share resources than the virtual infrastructure.

What is the consequence? Unique and powerful features like virtual networks and virtual-infrastructure-specific storage presentations come at a price: complexity. Efficiency has improved in the server infrastructure, and the business has new IT capabilities.

But complexity from managing duplicate layers of functionality has increased human overhead to the point that the business may be less agile, and IT efficiency may be worse off. It is in fact this complexity that hobbles many virtualization initiatives. Especially as the virtual infrastructure scales, complexity may put an end to cost savings, unless businesses can find a better solution.

Infrastructure complexity can make virtualization costly. Virtualization typically starts by consolidating servers and reducing equipment, making management easier and saving dollars spent on servers, storage, and networking. But as the infrastructure grows, managing new layers of virtual infrastructure alongside the physical infrastructure can put an end to those savings, and frequently demands more time and effort than before.

The next wave of virtualization will be different, and it is starting now. Technology is entering the market that will integrate the functionality of many of these still physical and separate systems into one single infrastructure, with complete virtualization.

The Next Wave – Hyperconvergence

We call this transformational wave of technology “hyperconvergence.” For the first time, an approach has emerged that combines all of the functionality of a data center in an appliance-like form factor that can be connected together to build an entire infrastructure in building block fashion.

Hyperconvergence stands in stark contrast to the latest iterations of convergence. Convergence has too often simply packaged existing technologies to ease consumption and integration, allowing the consumer to buy fewer individual parts while still leaving IT to manage the same separate units of functionality.

Hyperconvergence is a seamlessly integrated whole, built upon homogeneous building block appliances that deliver all infrastructure functionality – compute, storage, and networking. Moreover, it will glue all of these parts together so the solution is highly automated top to bottom.

The most significant departure in the Hyperconvergence approach is that it starts with storage, and aims to make the most difficult-to-manage resource in the data center highly efficient, performant, and scalable. From that foundation, Hyperconvergence clusters together a practically unlimited set of homogeneous storage+compute+network appliances. After initial deployment, the customer can easily scale any starting configuration by simply adding more homogeneous appliances.
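As a rough illustration of what that building-block model implies – using hypothetical appliance names and capacities, not any vendor’s actual interface – the sketch below shows a cluster whose pooled capacity grows linearly simply by appending identical appliances:

```python
# Illustrative sketch only (hypothetical names and capacities): a hyperconverged
# cluster scales by adding identical building blocks to a single resource pool.
from dataclasses import dataclass

@dataclass
class Appliance:
    """One building block with fixed, identical compute/storage/network resources."""
    cores: int = 16
    ram_gb: int = 256
    storage_tb: float = 10.0
    net_gbps: int = 20

class Cluster:
    def __init__(self):
        self.nodes: list[Appliance] = []

    def add_appliance(self, node: Appliance) -> None:
        # Scaling is just "add another identical node"; no per-layer integration work.
        self.nodes.append(node)

    @property
    def capacity(self) -> dict:
        return {
            "cores": sum(n.cores for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
            "net_gbps": sum(n.net_gbps for n in self.nodes),
        }

cluster = Cluster()
for _ in range(3):                      # start with three identical appliances
    cluster.add_appliance(Appliance())
print(cluster.capacity)                 # pooled totals grow with each block added
```

The point of the model is that capacity planning collapses into a single question – how many blocks? – rather than separate sizing exercises for servers, storage, and network.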

With internal storage virtualized across the cluster, any building block can access any stored data, while the cluster’s network makes any network port or amount of bandwidth instantly accessible. By way of management tools, virtual infrastructure administrators will be able to impose restrictions, pools, or barriers to organize and separate workloads and resources for multi-tenancy or security.

With the Hyperconvergence solutions now entering the market, the IT administrator will be able to focus entirely on the server or application, and trust that all of the integrated components transparently work together in the background. When more interaction is needed, the administrator will be able to see and centrally manage the entirety of all of those other systems, without leaving the Hyperconverged solution.

Then, when more storage, networking, or processing power is needed, a single building block addition will add to the total resource pool, without deployment or integration effort.

This will radically simplify provisioning, scaling, and failure avoidance, and redefine utilization patterns in the data center. Much overprovisioning will vanish, and utilization will match what is actually consumed. Moreover, it will transform the speed and ease of IT infrastructure adaptation, and make the IT driven business vastly more agile.

A Vendor Survey

Surveying the vendor marketplace for Hyperconvergence is straightforward. This marketplace is relatively new, and has only a few recently introduced pioneers: Nutanix, Scale Computing, and SimpliVity.

Common to all three products is a system architecture that starts with scale-out storage. On top of this storage foundation, the vendors layer various flavors of storage functionality and compute virtualization.

With an eye toward the SMB market, Scale Computing has transformed its highly affordable, scale-out, iSCSI/CIFS/NFS storage cluster into a Hyperconvergence solution by integrating a highly polished KVM hypervisor and a comprehensive management layer.

Scale Computing’s approach has simplified deployment and management to such a degree that we identified an 8X improvement over typical vSphere infrastructures in a recent hands-on Technology Validation.

Nutanix was the earliest of these vendors to introduce Hyperconvergence in their Nutanix Complete Cluster, targeted at mid-sized enterprise customers. Nutanix has received high visibility for unleashing a tremendous amount of power per unit of rack space. They employ dynamic auto-tiering of stored data onto NAND Flash SSD that cranks up the total IO available from each Nutanix node.
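The core idea behind auto-tiering is simple, even if the production implementations are not. The sketch below is a generic simplification of the technique – not Nutanix’s actual implementation – showing how a system might promote the most frequently read blocks to a small flash tier so hot IO is served from SSD:

```python
# Generic illustration of dynamic auto-tiering (a simplification, not Nutanix's
# actual implementation): keep the most frequently read blocks on a small flash tier.
from collections import Counter

FLASH_CAPACITY_BLOCKS = 4                  # tiny flash tier, for illustration only

def rebalance(access_counts: Counter) -> set:
    """Return the block IDs that should currently live on the flash tier."""
    return {blk for blk, _ in access_counts.most_common(FLASH_CAPACITY_BLOCKS)}

# Simulated workload: block "b1" is read far more often than the others.
reads = Counter()
for blk in ["b1"] * 50 + ["b2", "b3", "b4", "b5", "b6"] * 3:
    reads[blk] += 1

flash_tier = rebalance(reads)
print(sorted(flash_tier))                  # the hottest blocks now serve reads from SSD
```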

SimpliVity more recently entered the market with a similar IO accelerating architecture, alongside global capacity optimizing deduplication. SimpliVity customers can globally distribute SimpliVity OmniCubes across sites and even into the Amazon EC2 cloud, while managing all of the OmniCubes in a single global resource pool as a federated OmniCube cluster. SimpliVity also uses their deduplication technology to optimize WAN data movement across distributed OmniCube nodes. Both Nutanix and SimpliVity use an ESXi hypervisor and VMware’s vSphere for virtual infrastructure management.
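Deduplication’s effect on WAN replication also comes down to a simple principle: only ship data the remote site has never seen. The sketch below is a generic illustration of content-addressed deduplication – not SimpliVity’s actual data path – using fixed-size chunks and SHA-256 fingerprints:

```python
# Sketch of content-addressed deduplication (generic illustration, not SimpliVity's
# actual implementation): hash fixed-size chunks and transfer only new ones.
import hashlib

CHUNK_SIZE = 4096

def chunk_hashes(data: bytes):
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        yield hashlib.sha256(chunk).hexdigest(), chunk

remote_store: dict[str, bytes] = {}      # chunks already present at the remote site

def replicate(data: bytes) -> int:
    """Send only unique chunks; return the bytes actually moved over the WAN."""
    sent = 0
    for digest, chunk in chunk_hashes(data):
        if digest not in remote_store:
            remote_store[digest] = chunk  # "transfer" the new chunk
            sent += len(chunk)
    return sent

payload = b"A" * 8192 + b"B" * 4096       # two identical 4 KB chunks plus one unique chunk
print(replicate(payload))                 # first pass ships 8192 bytes, not 12288
print(replicate(payload))                 # second pass ships 0 bytes -- fully deduplicated
```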

Hyperconvergence Matures The Convergence Vision

Hyperconvergence itself is not really so radical – it is simply a better-realized iteration of the vision that every major vendor is pursuing. Those major vendors hope to extend their management approach to control and automate everything in the data center. They operate under a banner of “software defined” IT, or in other cases under the banner of simple convergence.

Hyperconvergence turns this model on its head. It starts by building its foundation on scalable storage-layer glue integrated with compute, which in turn makes the physically imposed boundaries and complexity disappear. Convergence has never before started with storage and aimed to tackle higher-level problems. That is what stands to make Hyperconvergence uniquely disruptive.

This handful of pioneers has launched Hyperconvergence off to a thunderous start. The promise is that it will fundamentally alter the complexity of the infrastructure. If our hands-on assessment of one of these vendors (Scale Computing) is any indication of the norm, Hyperconvergence will have a big impact. It will create fundamental alterations in the cost of compute, the agility of the business, and the daily responsibilities of the IT administrator.
