But one reason why this is unlikely to happen in the immediate future is that these services need to be standardized, and that is a time-consuming task.
"If Intel or Seagate made devices that can take snapshots, it would be pretty useless to me as a software developer," says Karamanolis. "I don't want a single vendor API, I want to write software that works on all hardware. So it's going to take time before these features are supplied by all hardware vendors."
He adds that the choice of features that will be implemented in hardware will largely be driven by what software vendors need. From VMware's perspective, the company's Virtual SAN product is designed with flash caches for fast access, disks for mass storage and software to carry out snapshots and other services.
"One day these could move from the virtual abstraction layer into hardware, but that would require that hardware vendors work with us and others, find a common denominator, and implement those requirements," he says. "And that could take some years."
Once again, the benefit of this would be reduced complexity in the storage software. If the devices themselves offered snapshot services, then all the storage software would need to do is provide some degree of coordination when a higher-level object (such as a multi-disk volume) is being snapshotted. The CPU would then spend fewer cycles on storage-related tasks, so the entire software stack would run faster.
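To make that division of labor concrete, here is a minimal sketch, in Python with entirely hypothetical class and method names (no real drive API is implied), of what the remaining "coordination" might look like when software snapshots a multi-disk volume built from drives that can snapshot themselves:

```python
import uuid

class SmartDrive:
    """Stand-in for a drive whose firmware can take its own snapshots."""
    def __init__(self, name):
        self.name = name
        self.data = {}        # block number -> contents
        self.snapshots = {}   # snapshot id -> frozen copy of data

    def snapshot(self, snap_id):
        # In real hardware this would be near-instant and done in firmware;
        # the host merely records the snapshot ID.
        self.snapshots[snap_id] = dict(self.data)

class Volume:
    """A multi-disk volume: the software's only job is coordination."""
    def __init__(self, drives):
        self.drives = drives

    def snapshot(self):
        snap_id = str(uuid.uuid4())
        # Quiesce I/O, then ask every member drive to snapshot under the
        # same ID, so the per-drive snapshots form one consistent image.
        for drive in self.drives:
            drive.snapshot(snap_id)
        return snap_id

vol = Volume([SmartDrive("ssd0"), SmartDrive("ssd1")])
snap = vol.snapshot()
print(all(snap in d.snapshots for d in vol.drives))  # True
```

The point of the sketch is how little the software layer does: no copy-on-write bookkeeping, no block tracking, just issuing the same snapshot ID to each drive.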
Inevitably there will be drawbacks to this approach, and the most obvious one has to do with the increased complexity of the storage device firmware. More complexity means more opportunities for bugs and security vulnerabilities in the drives themselves, and the concomitant need to ensure that drive firmware is updated to avoid these problems.
"Software engineering is an art, so you have to suspect that early on there will be issues with this type of firmware," says Karamanolis.
On the other hand, you could also argue that by removing functions like snapshotting from the storage system software and moving them to the storage device, you end up with two relatively simple pieces of software instead of one monolithic, complex and difficult-to-manage application.
In the medium term we may well see more and more of the storage software stack migrate into smart solid state drives. That sounds almost like a reversion to big proprietary storage systems, and away from the current trend for storage systems made up of commodity disks, commodity servers, and clever storage software.
Almost, but not quite. That's because the world has moved on from the proprietary storage approach, Karamanolis believes. "We are beyond the point that customers are stuck with one hardware vendor providing the solution. That won't fly any more," he says. "Customers will want the same features irrespective of the hardware vendor, and a commoditized interface. I don't think we will see customers locked in to hardware vendors."
Although some vendors will object, as they would rather offer their own additional value with "multi-million line" proprietary software, Karamanolis expects there to be more combinations of smart drives and open source storage software to control them.
Ultimately the whole move toward smart storage devices has one simple root cause: at the moment the software stack has to perform all sorts of clever tricks for each storage operation, so a single I/O request is magnified into many, consuming memory, CPU cycles and bandwidth.
If storage services are moved out to the storage devices, much of that resource consumption disappears, and storage operations end up being faster, simpler, and cheaper.
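The amplification in question can be illustrated with a toy copy-on-write snapshot implemented in software (a hypothetical Python sketch, not any vendor's actual code): one logical write turns into a read of the old block plus two physical writes, because the software must preserve the pre-snapshot data itself.

```python
class CowStore:
    """Toy software-level copy-on-write store that counts physical ops."""
    def __init__(self):
        self.blocks = {0: b"old"}   # live data, block number -> contents
        self.snapshot_blocks = {}   # blocks preserved for the snapshot
        self.physical_ops = 0

    def write(self, block_no, data):
        # Software snapshot logic: before overwriting a block for the
        # first time, read it and copy it aside to preserve the snapshot.
        if block_no not in self.snapshot_blocks:
            old = self.blocks.get(block_no)        # 1 physical read
            self.physical_ops += 1
            self.snapshot_blocks[block_no] = old   # 1 physical write (copy)
            self.physical_ops += 1
        self.blocks[block_no] = data               # 1 physical write (new data)
        self.physical_ops += 1

store = CowStore()
store.write(0, b"new")
print(store.physical_ops)  # 3: one logical write became three physical ops
```

If the drive handled copy-on-write in firmware instead, the host would issue exactly one write and the magnification, along with the memory and CPU it consumes, would vanish from the software stack.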