Why Is Software-Based Storage Management Still Hardware-Dependent?

Posted on March 12, 2014 By Content Contributor

By Bill Stevenson, Executive Chairman, Sanbolic

In recent years, enterprises have dramatically improved the utilization and flexibility of their compute resources—approximately 68 percent of servers are now virtualized. But storage has not yet seen a similar large-scale architectural shift in the enterprise or service provider market: less than 5 percent of the $29 billion external storage market has moved to new architectures. Even so, storage will likely look very different five years from now.

Web-scale data centers have demonstrated the benefits of the “software-defined data center” using converged architectures built around industry-standard servers and storage components. The big three (Facebook, Amazon and Google) make little or no use of expensive proprietary storage arrays. The storage intelligence of these hyperscale deployments resides in sophisticated software running on the servers. The software developed by these players is not available commercially, but disruptive new storage vendors have been working to deliver the same economies to the enterprise.

Several new storage players like Nutanix and SimpliVity are going to market with appliance-based models. These solutions offer converged infrastructure—compute and storage both reside in commodity servers, enabled by custom software. Hybrid storage vendor Nimble highlights its file system as its core competitive advantage but also comes to market as an appliance. And at least a dozen other startups have developed flash-centric appliances.

So will new types of storage appliances or converged compute/storage appliances come to dominate next-gen storage architectures? Or will the pure-play “software-defined data center” extend to encompass the majority of networking and storage assets a decade from now?

Looking back a couple of decades, the type of storage array currently in widespread use emerged at a time when server processors were much less powerful, disk drives had much less capacity, solid state memory was much more expensive and systems management was far less automated. There was a clear technical and economic advantage to locating storage resources in a dedicated appliance that could be managed locally. Dedicated storage appliances remain the most common architecture for Tier 1/Tier 2 storage, but many of the assumptions that drove their adoption no longer hold. Today, server processors often have excess capacity available for storage workloads, the smallest server chassis can hold many terabytes of storage, and solid state memory has become inexpensive enough to use as persistent storage. Still, there is a lot of inertia around the appliance business model, even for new vendors in the storage space.

