Let’s take a closer look at the last type of SDS platform: the virtual controller. We consider the virtual controller approach a hybrid that retains the best features of both the control and data planes. It abstracts storage management into a control layer located next to the workloads. The virtual controllers run as multiple distributed instances that manage and optimize the I/O stream. Because these software controllers tune I/O at the server level, they minimize latency and accelerate storage performance. A central console manages all virtual controllers and automates priority queues to optimize application performance. The underlying physical layer becomes a managed pool of storage capacity matched to application I/O needs.
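To make the priority-queue idea concrete, here is a minimal, hypothetical sketch (not any vendor's actual implementation) of how a per-server virtual controller might order I/O requests by application priority before dispatching them to storage; the class and field names are our own illustration:

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class IORequest:
    priority: int               # lower number = higher priority
    seq: int                    # tie-breaker preserves arrival order
    app: str = field(compare=False)
    op: str = field(compare=False)

class VirtualController:
    """Toy per-server controller that queues I/O by application priority."""

    def __init__(self):
        self._queue = []
        self._counter = itertools.count()

    def submit(self, app, op, priority):
        # Enqueue a request; heapq keeps the lowest (priority, seq) pair first.
        heapq.heappush(self._queue, IORequest(priority, next(self._counter), app, op))

    def dispatch(self):
        # Pop the highest-priority request for the storage layer.
        return heapq.heappop(self._queue)

ctrl = VirtualController()
ctrl.submit("batch-analytics", "read", priority=5)
ctrl.submit("oltp-db", "write", priority=1)
ctrl.submit("backup", "write", priority=9)
first = ctrl.dispatch()
print(first.app)  # the latency-sensitive OLTP write is dispatched first
```

A real controller would of course weigh far more than a static priority number (queue depth, QoS targets, back-pressure from the storage pool), but the ordering mechanism is the same in spirit.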
A leading example of this architecture that emphasizes massive grid scale-out and performance is Gridstore. Its highly scalable grid delivers SDS performance and capacity for demanding, highly available computing environments.
Benefits of Virtual Controllers
We perceive three major benefits of the Virtual Controller SDS architecture: 1) overcoming the limitations of physical storage, 2) optimizing storage performance, and 3) providing a flexible response to dynamic application environments. Ultimately, these benefits come down to one overarching consideration: consistent operational and storage Quality of Service (QoS) in a dynamic infrastructure.
· Efficiency: Overcoming the limitations of physical storage. SDS helps to overcome the inherent limitations of physical storage by adapting storage performance and capacity on demand, and by enabling better mobility and scalability. SDS deploys across a broader assortment of storage resources while greatly simplifying storage administration. When an application server is retired, so is its virtual controller, and there is no need to reassign or re-provision the underlying storage resources.
· Utilization: Aligning application I/O with storage resources. Some SDS platforms add storage intelligence to the workloads, which optimizes I/O from the point where it enters the network. This intelligence drives higher performance and matches application needs to storage resources. SDS accomplishes this in different ways: in some cases, this might be through localized SSD and auto-tiering technologies, or it might be through storage control or QoS algorithms.
· Quality of Service: Improving QoS throughout the infrastructure. Flexibility, storage intelligence, mobility and scalability are all key factors in achieving QoS in growing data storage environments. SDS can go a long way towards solving these problems even in its first generation. This is particularly true in SDS platforms that optimize application I/O and match it to appropriate storage resources. This ability improves QoS throughout the application/networking/storage infrastructure.
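The auto-tiering technique mentioned above can be illustrated with a deliberately simple sketch. This is a toy policy of our own devising, not any platform's actual algorithm: blocks that are accessed often within a window get promoted from a slow tier to a fast one, which is one way an SDS layer can match application I/O to storage resources:

```python
from collections import Counter

# Hypothetical tier names and promotion threshold; real SDS platforms use
# far richer heat metrics (recency, I/O size, per-app QoS targets).
SSD_TIER, HDD_TIER = "ssd", "hdd"
PROMOTE_AFTER = 3  # accesses required before a block is promoted

class AutoTierer:
    """Toy auto-tiering policy: promote frequently accessed blocks to SSD."""

    def __init__(self):
        self.tier = {}          # block_id -> current tier
        self.heat = Counter()   # block_id -> access count

    def access(self, block_id):
        # Count the access, then promote the block if it has become hot.
        self.heat[block_id] += 1
        current = self.tier.get(block_id, HDD_TIER)
        if current == HDD_TIER and self.heat[block_id] >= PROMOTE_AFTER:
            current = SSD_TIER  # hot block moves to the fast tier
        self.tier[block_id] = current
        return current

tiers = AutoTierer()
for _ in range(3):
    placement = tiers.access("blk-42")
print(placement)              # "ssd" after the third access
tiers.access("blk-7")
print(tiers.tier["blk-7"])    # a cold block stays on "hdd"
```

Production implementations also demote cooling data and throttle migrations so tiering traffic does not itself degrade QoS, but the matching principle is the same.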
Taneja Group Opinion
Storage virtualization has always lagged behind advances in virtual servers and networking, largely because of its strong ties to physical controllers and storage media. Virtualization enabled IT to construct logical views of physical storage, but most data management and provisioning is still done at the physical storage system level.
But continuing development has made software-defined storage a reality where management, provisioning and workload optimization happen at a level above the storage. No single SDS product does it all yet, since we are still in an early stage of software-orchestrated storage management. But already, SDS technologies like orchestration layers, VSAs and virtual controllers are making it possible to detach data and I/O management from siloed physical systems.
We are closely watching SDS in all its incarnations. By severing storage interaction from the physical storage location, SDS has significant potential to improve QoS throughout the storage infrastructure. Ultimately, SDS offers storage environments new life and higher functionality than was previously possible, making it a very interesting market to watch as it innovates and evolves.