Server-Side Resource Leverage

As an example, consider flash. Flash is a significant investment for IT shops these days, deployed to boost IO performance. But a big question is where that flash investment is best made: server cache, server storage, network cache, array cache or array tiers? There are arguments for each, but the key is to balance cost against the performance benefit. For maximum performance, one might deploy flash as cache as close to needy workloads as possible, while vendors of traditional, hybrid and all-flash arrays argue that flash at the shared pool level provides the most leverage for the investment.

We’ve noticed that server-side flash is available in many sizes, formats and options, and it is almost always cheaper (per GB) than costly array vendor SSDs. The trick is to leverage commodity server-side resources like flash (and increasingly dense RAM) intelligently. For example, there are many focused vendors (e.g. Infinio, Pernix, SanDisk, PrimaryIO) offering server-side flash and/or RAM caching for IO-hungry applications. These server-side solutions avoid the need to invest in costly performance-oriented resources in the underlying shared array storage. By hosting key storage functionality like IO acceleration on the server, IT can invest separately (according to specific needs, budgets or timing) in either more performance, by adding flash/RAM at the server, or more capacity, by adding cost-effective large disks to arrays.
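To make that division of investment concrete, here is a minimal sketch in Python of a write-through, server-side read cache fronting a shared array. It is purely illustrative: the names (SharedArray, ServerSideReadCache, the 4K block size) are hypothetical and do not reflect any vendor’s actual product or API.

# Minimal sketch of a server-side read cache fronting a shared array.
# All names are illustrative only; no vendor API is implied.
from collections import OrderedDict


class SharedArray:
    """Stand-in for the shared, durable array backing store."""

    def __init__(self):
        self.blocks = {}

    def read(self, lba):
        return self.blocks.get(lba, b"\x00" * 4096)

    def write(self, lba, data):
        self.blocks[lba] = data


class ServerSideReadCache:
    """LRU read cache held in local flash/RAM; writes pass through to the array."""

    def __init__(self, array, capacity_blocks=1024):
        self.array = array
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # lba -> data, ordered by recency

    def read(self, lba):
        if lba in self.cache:                 # cache hit: served from local flash/RAM
            self.cache.move_to_end(lba)
            return self.cache[lba]
        data = self.array.read(lba)           # miss: fetch from the shared array
        self._insert(lba, data)
        return data

    def write(self, lba, data):
        self.array.write(lba, data)           # write-through keeps the array authoritative
        self._insert(lba, data)               # keep the cache warm for later reads

    def _insert(self, lba, data):
        self.cache[lba] = data
        self.cache.move_to_end(lba)
        if len(self.cache) > self.capacity:   # evict the least recently used block
            self.cache.popitem(last=False)

In this write-through arrangement, read performance scales with however much server-side flash or RAM is given to the cache, while durability remains entirely with the shared array, which is exactly the separation of investment described above.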

However, there are other considerations to bear in mind. Because these solutions are not necessarily integrated with a persistent store, they don’t always allow end-to-end data services to be as rich as with a storage array. Deduplication, compression, snaps and clones, if they exist in both the server-side solution and an underlying array, don’t necessarily synchronize to share benefits. If dedupe is done on a cache, it usually has to be done again, separately, on the array. That’s inefficient, and often these features don’t exist consistently across the two domains. As always, there are pluses and minuses to each approach.
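Here is a short sketch of that inefficiency, again with hypothetical names: when the cache and the array each run their own content-addressed dedupe, the same block is hashed and indexed twice, and the fingerprints are never shared between the two domains.

# Sketch of unsynchronized dedupe duplicating work; names are illustrative only.
import hashlib


class DedupeIndex:
    """Content-addressed store: identical blocks are kept only once per domain."""

    def __init__(self):
        self.store = {}  # fingerprint -> data

    def put(self, data):
        fp = hashlib.sha256(data).hexdigest()  # hashing work done in this domain only
        self.store.setdefault(fp, data)
        return fp


cache_index = DedupeIndex()  # dedupe running in the server-side cache
array_index = DedupeIndex()  # dedupe running independently in the array

block = b"a 4K block of data!!" * 205   # roughly one 4K block
fp_cache = cache_index.put(block)       # hashed and indexed once at the server...
fp_array = array_index.put(block)       # ...and again, independently, at the array
assert fp_cache == fp_array             # identical fingerprints, yet never shared,
# so the hashing, metadata and space accounting are all duplicated.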

Network Efficiency Is (Still) Key

One of the key enablers of effectively distributing functionality is optimizing all the storage traffic across the network. Dedicated high-end SANs like FC (or InfiniBand) traditionally stitch enterprise servers together with shared storage, but they fundamentally add significant cost and complexity (and often reduce ultimate agility). iSCSI may be just fine for virtual clients accessing shared storage, but it falls down in these new intelligent designs where storage functionality is split between servers and centralized disks. There is still room for a more highly optimized “inter-array” network protocol. This is where new, innovative storage array vendors like Datrium provide real differentiation.

Between its server-side storage layer, which provides scalable performance using local flash and compute, and its cost-optimized shared capacity storage nodes (simplified two-controller, capacity-oriented array shelves), Datrium has implemented a distributed filesystem design with a customized network protocol. This optimized “internal storage” data network is designed to increase IO performance and avoid many of the IO-impacting issues of standard Ethernet-based protocols, while still taking advantage of commodity networking infrastructure (i.e. Ethernet).

By smartly splitting array functionality across servers and storage sub-systems, IT architects are free to deploy existing infrastructure or new kinds and formats of servers (e.g. blades) and server resources (e.g. PCIe flash vs. SSD flash) when and where they wish, without impacting the underlying data store. This also separates host-based performance provisioning from appliance-based data durability, so IT can dynamically manage hot spots in mixed-VM environments, a design element that distinguishes this approach from both hyperconverged systems and traditional arrays.
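The following is a conceptual sketch, in Python, of the split just described: a host-side layer that serves reads from local flash and only acknowledges writes once the shared capacity nodes report them durable. It is not Datrium’s filesystem or protocol; every name here (HostStorageLayer, CapacityNode, persist) is hypothetical and merely stands in for the general design.

# Conceptual sketch only: NOT any vendor's actual protocol. It illustrates the
# separation of host-side performance from shared, durable capacity.


class CapacityNode:
    """Simplified shared, durable capacity shelf."""

    def __init__(self):
        self.durable = {}

    def persist(self, key, data):
        self.durable[key] = data
        return True  # ack once the data is durable


class HostStorageLayer:
    """Host-side layer: local flash for read performance, capacity nodes for durability."""

    def __init__(self, capacity_nodes):
        self.local_flash = {}             # host-local copy of recently written/read data
        self.capacity_nodes = capacity_nodes

    def write(self, key, data):
        acks = [node.persist(key, data) for node in self.capacity_nodes]
        if all(acks):                     # durable on the shared pool before we ack the VM
            self.local_flash[key] = data  # keep a local copy so reads stay host-local
            return True
        return False

    def read(self, key):
        if key in self.local_flash:       # fast path: served from host flash
            return self.local_flash[key]
        for node in self.capacity_nodes:  # slow path: fetch over the data network
            if key in node.durable:
                data = node.durable[key]
                self.local_flash[key] = data
                return data
        return None

The point of the sketch is the division of labor: host-side flash can be added or removed per server to chase performance, while the capacity nodes are sized purely for durable, shared capacity.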

We think of this new style of storage architecture as Server Powered Storage (SPS), and we expect that a number of startups are building products on these principles. But to our knowledge, Datrium is leading the pack.

Doing The Right Things At The Right Place And Time

In summary, we think the traditional monolithic storage array is doomed. The line between compute servers and storage nodes is getting fuzzier every day, whether we are talking about the best infrastructure for big data, mission-critical (RDBMS-based) applications, virtual hosting or cloud building. Any technology development that enables hosting modular pieces of formerly monolithic functionality at the best places in the IO lifecycle and workflow path is worth evaluating.

With these new distributed-function storage systems, IT can leverage expensive protected storage as a shared pool, while taking full advantage of relatively inexpensive server-side assets to ramp up local performance.