As predicted, software defined storage is the buzzword of the year, but what no one is talking about is: what file system is going to interface with all of this storage?
File systems are the hard part of the stack compared to the software defined storage part, in my opinion. It’s pretty easy to buy a generic motherboard with PCIe 3, install Linux, and attach some storage – and voila, you have a storage platform. But what’s the file system? Are the file system, the storage allocation, and all the tunables working together to make sure the applications perform well?
I have not seen much written in this area. And after spending multiple decades tuning file systems for storage and applications, I am wondering if this wunderkind called software defined storage is going to be a step backwards for environments with high performance requirements. Now you might think that I am just some HPC bigot, but there are many applications outside of HPC that require high performance, from video capture for police and security cameras, to large-scale data analytics, to something as simple as medical imaging in a large hospital.
The usual answer to all I/O performance problems is to throw hardware at the problem, which works for some problems some of the time and does not work for some problems any of the time. Every time you change the storage configuration you have to consider changing the file system and volume configuration.
Software defined storage sounds really simple and sounds like a really good idea, but as is often the case, the devil is in the details. How do file systems, volume managers, and storage system configurations all interact as software changes the storage configuration? Raise your hand if you do not care about performance. Of course you can define an all-flash environment, but that costs big bucks – and if you are misaligned, it wastes big bucks. I think the evolution of software defined storage needs to be watched to see how it’s going to solve this problem.
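To make the misalignment point concrete, here is a minimal sketch of the arithmetic behind stripe alignment. The RAID geometry and I/O sizes below are hypothetical examples, not tied to any particular product: the idea is simply that a write which does not cover a full stripe forces the array into a read-modify-write cycle, and that is exactly the kind of detail a file system tuned by hand accounts for and a generic software layer may not.

```python
# Illustrative sketch only: the RAID geometry below is a made-up example.

def stripe_width_bytes(chunk_kib: int, data_disks: int) -> int:
    """Full-stripe size in bytes: per-disk chunk size times data disks."""
    return chunk_kib * 1024 * data_disks

def is_full_stripe_write(io_bytes: int, stripe_bytes: int) -> bool:
    """A write avoids read-modify-write only if it covers whole stripes."""
    return io_bytes % stripe_bytes == 0

# Hypothetical RAID-6 set: 128 KiB chunks, 8 data disks (10 disks total).
stripe = stripe_width_bytes(128, 8)          # 1 MiB full stripe

print(stripe)                                 # 1048576
print(is_full_stripe_write(1 << 20, stripe))  # True: 1 MiB fills the stripe
print(is_full_stripe_write(512 * 1024, stripe))  # False: partial-stripe write
```

If the file system's allocation size or the application's I/O size drifts out of step with the stripe width after the software layer reshapes the storage, every one of those partial-stripe writes pays a parity read-modify-write penalty.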
Photo courtesy of Shutterstock.