What’s Missing in SDS?
So what’s missing?
Further, as we shift to a multi-cloud era, continued advancements in security and availability become key. SDS should continue to advance in encryption and in tighter integration with network virtualization solutions to deliver the necessary levels of security regardless of where the data resides, Caswell added.
“True software-defined storage will always result in a hyper-converged infrastructure (HCI), with an accompanying hypervisor for server virtualization — those two components (hypervisor and SDS) are the fundamental components to HCI,” said Caswell. “It’s hard to separate out just the storage component versus what is happening in the rest of the IT stack around compute, networking and management.”
Davis largely agrees. She said that SDS ultimately leads to an infrastructure with end-to-end integration for consolidation, efficiency and simplicity purposes — fewer devices to use and manage, and improved agility to respond to new demands.
“This also leads to hosting SDS instances in the public cloud to act as a tier of storage for hybrid IT use cases,” said Davis.
Blandini added that SDS can provide the missing capabilities to enable hybrid cloud data management. Ultimately, he said, infrastructure will consist of hardware networked together to enable hybrid cloud, where compute, networking and storage are indistinguishable as segments; the distinction is simply what software the hardware happens to be running at the time.
“The value-added features that had been provided by individual storage controllers in the data center historically will be available across multiple clouds as a service,” he said. “Storing data will be a given, how it is managed is where the segment will go.”
SDS is an inevitable evolution for storage across the board, from small to large data centers, much like server virtualization, says Andy Mills, CEO of Enmotus. But where is it best to virtualize? Virtualize too high, in the file-system and volume-management layers as many early SDS solutions did, and you lose the benefits of storage media's continued evolution toward memory-class performance. Virtualize too low, in the storage layers, and you can lose application- or file-level visibility into how much storage is used and when.
“That’s why SDS will remain evolutionary, continually adapting to changing applications and underlying storage media types for some while until we reach a truly intelligent, self-adapting, scalable environment that is truly software enabled and controlled,” said Mills.
Near term, he added, the key missing piece is the ability to fully automate the allocation of storage media and resources to compute. SDS provides a means to implement the same old storage in a software-defined way, but it does not address the need to place data intelligently on the right media. The next step, then, is to dynamically allocate flash (or other premium storage) based on real-time measured usage profiles, continually adapting to changing workloads without taxing the system.
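As a rough illustration of the usage-driven placement Mills describes, consider a minimal auto-tiering sketch in Python. This is a hypothetical example, not Enmotus's implementation: the `Extent`, `TieringEngine` names and the promotion/demotion thresholds are all illustrative assumptions. Each sampling window, the engine measures access counts, promotes the hottest extents to flash up to its capacity, and demotes cold ones to capacity media.

```python
from dataclasses import dataclass

# Illustrative thresholds (assumptions): accesses per sampling window
PROMOTE_THRESHOLD = 100
DEMOTE_THRESHOLD = 10

@dataclass
class Extent:
    extent_id: int
    tier: str = "hdd"       # "flash" or "hdd"
    access_count: int = 0   # accesses observed in the current window

class TieringEngine:
    """Hypothetical sketch of usage-based tiering: promote hot
    extents to flash, demote cold ones, based on measured access."""

    def __init__(self, flash_capacity: int):
        self.flash_capacity = flash_capacity  # extents flash can hold
        self.extents: dict[int, Extent] = {}

    def record_access(self, extent_id: int) -> None:
        ext = self.extents.setdefault(extent_id, Extent(extent_id))
        ext.access_count += 1

    def rebalance(self) -> None:
        """Run once per sampling window: move extents between tiers
        based on measured usage, then reset counters for the next window."""
        hot_first = sorted(self.extents.values(),
                           key=lambda e: e.access_count, reverse=True)
        flash_used = 0
        for ext in hot_first:
            if (ext.access_count >= PROMOTE_THRESHOLD
                    and flash_used < self.flash_capacity):
                ext.tier = "flash"
                flash_used += 1
            elif ext.access_count <= DEMOTE_THRESHOLD:
                ext.tier = "hdd"
            elif ext.tier == "flash":
                flash_used += 1  # warm extent stays put, occupies flash
            ext.access_count = 0  # start a fresh measurement window
```

A real system would also have to move the data itself, account for extent sizes rather than counts, and damp oscillation between tiers, which is where the "without taxing the system" requirement bites.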
“When that is achieved, we have our first truly software-defined storage architecture that is more than just a means of replacing expensive SANs with commodity storage and software,” concluded Mills.