Evaluating Data Protection for Hyperconverged Infrastructure

Posted on February 2, 2016 by Guest Author


By Jim Whalen, Senior Analyst and Consultant, The Taneja Group

Hyperconvergence is a still-evolving trend, and the number of vendors in the space is making the evaluation of hyperconverged infrastructure complex. One criterion to consider in any infrastructure review is data protection.

Effective data protection (backup and restore, replication, and disaster recovery) is just as important in the hyperconverged infrastructure space as it is in the more traditional discrete-component IT space. Historically, hyperconverged infrastructure vendors have provided data protection to their customers in one of two ways: partner with third-party data protection software vendors, or layer some data protection capabilities onto the platform so customers can opt for a "one box" solution if desired. But hyperconverged infrastructure is unconventional, so if you're considering it, don't let data protection be an afterthought.

SimpliVity is taking a different philosophical approach from that of other hyperconvergence vendors. Whereas hyperconvergence has traditionally been defined as integrating compute, storage and networking, SimpliVity chose to integrate the entire IT stack, also pulling deduplication, compression, WAN optimization and data protection into its implementation of hyperconvergence. No one had done this before, and it introduces a third way of providing data protection to customers: build it in from the start as a key element of the architecture, making it as simple, efficient and effective as possible.

The earliest market entrants started with factory pre-configured rack programs, referred to as converged infrastructure, to make their equipment easier to purchase, install and manage. Over time, the degree of integration increased, and the industry began referring to VM-centric hyperconverged architectures, where compute, storage and networking are all tightly pulled together in one box below the hypervisor to provide an integrated, virtualized platform for running workloads. The big benefit of hyperconvergence is that it provides "infrastructure in a box" with very short time-to-value. The administrator buys an appliance, plugs it into the IT center, and spins up the desired VMs. Scale-out is transparent: if more compute or storage is needed, another appliance is simply added to the mix, providing more IOPS and storage capacity to run additional VMs. Almost everything is self-managed by the appliances, significantly reducing the amount of IT involvement required. In essence, hyperconverged systems allow users to operate up at the VM level instead of down at the IT component level.

Partner With Third-Party Data Protection Software Vendors

Pivot3 and Gridstore are examples of vendors that have taken the first approach to data protection and rely solely on third-party software packages for all data protection capabilities. There are advantages to this approach. It allows the platform vendor to focus on what is presumably its core competence: providing a reliable, highly functional converged platform. It gives the customer a choice of data protection software, which can be particularly valuable if that software is already in use elsewhere in the data center. It also lets customers leverage the entire third-party data protection industry, delivering comprehensive, advanced functionality and features more quickly than the platform vendor could on its own.

This is certainly a viable way of supplying data protection; there are a number of excellent, full-featured data protection packages on the market. However, it costs substantially more for the extra software, and it dilutes the ease-of-use message of hyperconverged platforms by forcing you to install and manage a critical piece of technology that's missing from an otherwise complete solution.

More subtly, it surrenders control of the experience while also complicating it. Now, you’re forced to work with both the management interface for the platform and the backup software. You have to learn how to configure and run the software. If something goes wrong, you have to determine if it’s the platform or the backup package. Which vendor do you call?

You may also have to deal with performance impacts to production workloads when the backup software is running. While this third-party approach to providing data protection is functional, it may not be ideal.

Layer Some Data Protection Capabilities onto the Platform

Nutanix and Scale Computing are examples of hyperconverged infrastructure vendors currently using a layered-on approach. Initially, Nutanix relied solely on third-party data protection vendors for all of its data protection capabilities, and today it continues to partner with and recommend outside data protection vendors. Recently, however, Nutanix added its own backup, replication and scheduling functionality to its platform. It does this by using the VM snapshot interface of the hypervisor it runs on. It can replicate these snapshots elsewhere in the cluster or to another site, create clones, and perform failover, providing native disaster recovery functionality. Additionally, it can use the public cloud as a backup destination. With these features, Nutanix now offers enough data protection capability on its platform that users no longer have to buy additional software.

Similar to Nutanix, Scale Computing can take VM snapshots, replicate them elsewhere in the local cluster or to another site, and spin them up as clones.



