At Taneja Group we are seeing a major trend within IT to leverage server and server-side resources to the maximum extent possible. Servers themselves have become commodities, and dense memory, server-side flash and even compute power continue to become more powerful and cost-friendly. Many datacenters already have a glut of CPU that will only grow with newer generations of faster, larger-cored chips, denser packaging and decreasing power requirements. Disparate solutions, from in-memory databases (e.g. SAP HANA) to VMware’s NSX, are taking advantage of this rich excess by separating out functionality that used to reside in external devices (i.e. SANs and switches) and moving it up onto the server.

Within storage we see two hot trends – hyperconvergence and software-defined storage – getting most of the attention lately. But when we peel back the hype, we find that both are really enabled by this vastly increasing server power: server resources like CPU, memory and flash are getting denser, cheaper and more powerful to the point where they can host sophisticated storage processing directly. Where traditional arrays built on fully centralized, fully shared hardware might struggle with advanced storage functions at scale, server-side storage tends to scale functionality naturally alongside co-hosted application workloads. The move towards “server-siding” everything is so widely discussed that the demise of traditional physical array architectures can seem inevitable.

Yet moving all of an enterprise storage array’s functionality into a server essentially means adopting and migrating the whole IT stack to fully hyperconverged solutions (e.g. SimpliVity, GridStore, Nutanix, Scale Computing and various reference architectures) – or risking having heavy storage workloads compete with and degrade production application performance. Hyperconverged appliances offer a great opportunity to simplify the whole stack and optimize TCO, although wholesale migration can bring challenges, including potential vendor lock-in and the need to align everything to available appliance SKUs.

Short of hyperconvergence, virtualized storage solutions hosted within application servers can be locally efficient and convenient, but they can hinder the optimal global sharing of persisted data, increase the total new infrastructure required and spawn islands of isolated capacity. And solutions with a more distributed virtual storage grid design can overwhelm networks that were never designed for massive amounts of IO-heavy east-west traffic flowing between servers.

Overall, both hyperconverged solutions and virtualized storage have a big role to play in the future IoT, hybrid cloud and increasingly distributed/mobile/ROBO world (e.g. Riverbed’s edge hyperconverged SteelFusion). Still, they will not meet the needs of everyone. Hyperconvergence is about replacing the entire infrastructure, and there are indeed situations where this is not warranted, at least for specific applications.

The question is, in those situations, is there a better storage alternative?

Distributing Infrastructure Functions Intelligently

Some storage vendors are now exploring a new, optimally balanced approach, perhaps following the example of network function virtualization (NFV). With NFV, compute-intensive network “functions” are modularized, removed from their previously tight embedding in hardware (e.g. switches) and hosted virtually. This lets key network functions like security sit close to applications, become software upgradeable, offer cloud-like service and scale naturally. As a bonus, network hardware can then be built more simply and cheaply.

In a similar fashion, new array designs are emerging that first smartly modularize storage functions and then intelligently host those modules in different layers of the infrastructure. These distributed array designs move only key “modules” of performance-enhancing storage functionality up into each server client while still maintaining data persistence in a central pool of capacity. In this way they leverage both scale-out commodity server resources and the shared access, optimized capacity and data protection of centralized storage. These new arrays achieve truly scalable performance at an effective price – all without having to re-envision or re-architect the array-centric data center.

As an example, turning on global inline deduplication can overwhelm the controller of a traditional array design. As with scale-out big data architectures, it now makes sense to farm out and “push” compute-intensive processing like deduplication up the stack into each client server. By deduplicating upstream, near the consuming application, everything IO-related downstream becomes more efficient – including network transmission, data persistence and protection tasks. Likewise, it makes the most sense to persist data in a centralized, shared pool of protected storage, which provides the easiest global access and shared data workflows, the most resiliency and the lowest TCO for a given capacity.
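To make the idea concrete, here is a minimal sketch of client-side inline deduplication. All names here are hypothetical illustrations, not any vendor’s API: the client chunks data on the application server, fingerprints each chunk, and transmits only chunks the central pool has not already stored – so identical data downstream costs nothing extra on the wire or on disk.

```python
import hashlib

class DedupClient:
    """Hypothetical sketch: inline deduplication pushed up into the
    application server, with persistence in a shared central pool."""

    CHUNK_SIZE = 4096  # fixed-size chunking, for simplicity

    def __init__(self):
        self.remote_index = set()  # stands in for the array's global chunk index
        self.bytes_sent = 0        # unique payload actually sent downstream

    def write(self, data: bytes) -> list:
        """Return the 'recipe' of chunk fingerprints representing `data`."""
        recipe = []
        for off in range(0, len(data), self.CHUNK_SIZE):
            chunk = data[off:off + self.CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in self.remote_index:
                # Only previously unseen chunks cross the network
                # to the central, protected capacity pool.
                self.remote_index.add(fp)
                self.bytes_sent += len(chunk)
            recipe.append(fp)
        return recipe

# Usage: writing three identical 4 KB chunks transmits only one of them.
client = DedupClient()
client.write(b"A" * 4096 * 3)
assert client.bytes_sent == 4096
```

Real designs typically use content-defined (variable-size) chunking and a distributed fingerprint index, but the placement principle is the same: the CPU-heavy hashing happens on the server, and only unique data reaches the shared pool.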
