Just to be clear, storage class memory is a type of storage which, like an SSD or HDD, is persistent (it doesn't lose data if power is lost), but which operates at around the speed of DRAM.
There are various types of storage class memory, including DIMMs populated entirely with flash and presented as block storage (known as NVDIMM-F), and hybrid designs that can be accessed as either memory or persistent storage (NVDIMM-P). But NVDIMM-N is something different and more complex.
Essentially, NVDIMM-N is a kind of Frankenstein memory module made up of DDR4 SDRAM, a similar quantity of flash storage, and a capacitor, battery or other power source. If power fails, that power source enables the contents of the DRAM to be written to the flash so the data persists; once power is restored, the contents of the flash storage can immediately be written back into the DRAM.
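The save/restore cycle can be sketched as a toy simulation. To be clear, this is purely illustrative: the struct, sizes and function names are invented for the sketch, not any real NVDIMM-N controller interface.

```c
/* Toy model of an NVDIMM-N module: volatile DRAM paired with an equal
 * amount of flash, plus a backup power source (implied) that drives a
 * DRAM-to-flash copy when mains power fails. Illustrative only. */
#include <string.h>

#define MODULE_BYTES 16

struct nvdimm_n {
    unsigned char dram[MODULE_BYTES];   /* working memory, lost on power cut */
    unsigned char flash[MODULE_BYTES];  /* persistent backup copy */
};

/* On power failure, the capacitor/battery keeps the module alive long
 * enough to copy the DRAM contents into flash. */
void on_power_fail(struct nvdimm_n *m) {
    memcpy(m->flash, m->dram, MODULE_BYTES);
}

/* The power cut itself wipes the volatile DRAM. */
void power_cut(struct nvdimm_n *m) {
    memset(m->dram, 0, MODULE_BYTES);
}

/* When power returns, the controller restores flash back into DRAM,
 * so software sees the same contents as before the failure. */
void on_power_restore(struct nvdimm_n *m) {
    memcpy(m->dram, m->flash, MODULE_BYTES);
}
```

The point of the model is the round trip: data survives the power cut only because the backup copy was taken before the DRAM was lost.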
Storage class memory built for speed
The benefit of this kind of storage class memory is that it sits directly on the memory bus close to the CPU, so it is fast. Very fast indeed. Much faster than a standard SSD, and faster than flash storage placed on the PCIe bus.
That makes it ideal for software governing financial and other transactions, where performance suffers badly because each operation must be written to persistent storage (such as an SSD cache or HDD) before proceeding, to mitigate the risk of data loss from a power failure. By using storage class memory, this performance bottleneck can be removed in one fell swoop.
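The bottleneck can be seen in a minimal write-ahead commit sketch, assuming POSIX file APIs. The log path and record format here are hypothetical; the essential point is that the transaction cannot be acknowledged until `fsync` confirms the record has reached stable media, and on an HDD or SSD that wait dominates latency.

```c
/* Minimal write-ahead commit: append a record and force it to stable
 * storage before acknowledging. On conventional disks the fsync() is
 * the expensive step that storage class memory eliminates. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Durably append one log record; returns 0 on success, -1 on error. */
int commit_record(const char *log_path, const void *rec, size_t len) {
    int fd = open(log_path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) return -1;
    if (write(fd, rec, len) != (ssize_t)len) { close(fd); return -1; }
    if (fsync(fd) != 0) { close(fd); return -1; }  /* wait for the media */
    return close(fd);
}
```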
Of course there's a financial penalty to using storage class memory like NVDIMM-N because it's expensive: for each megabyte of NVDIMM-N you have to pay for a megabyte of DRAM, another megabyte of flash storage, and then you have to add the cost of the power source on top of that. And since NVDIMM-N is not as popular as regular DRAM or flash storage it's not made in large quantities, which means that manufacturing costs are relatively high.
Easier to adopt storage class memory
One of the stumbling blocks to the adoption of storage class memory (aside from cost) is that, because it behaves like neither conventional memory nor conventional persistent storage, applications have generally had to be modified to take advantage of it. But Windows Server 2016 makes it much easier to adopt storage class memory, because the operating system can use it in one of two ways.
The first and simplest way to use NVDIMM-N storage class memory is in what Microsoft calls block mode. This treats the NVDIMM as if it were a disk device – albeit a very fast one – that is accessed through a file system.
Effectively, the operating system – Windows Server 2016 – looks for a storage class memory module and, if it finds one, automatically loads a storage class memory driver and presents the module as a disk. In this mode an administrator can format the storage class memory with whatever file system they like, and applications use the same APIs they always have to write sectors to it. The only real difference is that the app gets sub-10 microsecond random 4K access to data.
By way of comparison, Microsoft says that an NVMe SSD can deliver 55MBps of throughput on this kind of workload (or perhaps more, at the cost of higher latency), while an NVDIMM-N storage class memory module could deliver 700MBps with latency of just 0.01ms.
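A back-of-envelope calculation shows how that latency figure relates to throughput. The model below assumes a single outstanding 4K request at a time, which is a deliberate simplification – real devices overlap many requests, which is how the 700MBps figure is reached.

```c
/* Back-of-envelope: latency -> single-queue throughput, assuming one
 * outstanding request at a time. Simplified model, not a benchmark. */

/* Requests per second achievable at a given per-request latency (ms). */
double iops_at_latency_ms(double latency_ms) {
    return 1000.0 / latency_ms;
}

/* Resulting throughput in MBps for a given block size in KiB. */
double throughput_mbps(double latency_ms, double block_kib) {
    return iops_at_latency_ms(latency_ms) * block_kib / 1024.0;
}
```

At 0.01ms per request this works out to roughly 100,000 IOPS, or about 390MBps with 4KiB blocks from a single queue – so even this crude model shows why sub-10 microsecond latency changes what a storage workload can do.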
Byte addressable storage class memory
While using storage class memory in block mode means that software doesn't need to be altered in any way, the drawback is that storage operations still need to go through the entire software stack, with all the latency that involves. The alternative is to use storage class memory in a byte addressable way, providing applications with direct access to memory through memory mapped files. To do this, an administrator would configure the NVDIMM-N storage class memory as a direct access (DAX) volume.
Conventionally, when an application uses memory mapping (from an SSD, for example), it takes a file, puts it into its memory space, manipulates it, and writes it back to disk once it has finished. But with storage class memory configured as a DAX volume, the data is already on the memory bus, so there is no need to move it: the app modifies the data directly, at DDR memory speeds. That makes storage class memory addressed in byte mode (as a DAX volume) very much faster than when it is addressed in block mode.
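The memory-mapped pattern looks like this, sketched with POSIX `mmap` and an illustrative file path. On a conventional file system the mapped pages are a page-cache copy that must be written back (`msync`); on a DAX volume there is no page-cache copy to write back, which is where the speed comes from.

```c
/* Sketch of the memory-mapped access pattern an application would use
 * on a DAX volume. Path and message are illustrative. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a file, update it in place through the mapping, flush, unmap. */
int update_mapped(const char *path, const char *msg, size_t len) {
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) return -1;
    if (ftruncate(fd, (off_t)len) != 0) { close(fd); return -1; }
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { close(fd); return -1; }
    memcpy(p, msg, len);     /* modify the data directly in the mapping */
    msync(p, len, MS_SYNC);  /* write the page-cache copy back to disk  */
    munmap(p, len);
    return close(fd);
}
```

The application code is the same either way; what changes on a DAX volume is what happens underneath the mapping.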
The drawback is that software has to be modified to take advantage of storage class memory used in byte mode, and Scott Sinclair, a senior analyst at Enterprise Strategy Group, believes it's a major one. "Hardware that requires a rewrite of software tends to have slow adoption – if any. Innovations that work with applications as they are will be adopted much faster," he says.
Storage class memory adoption
An interesting question is how quickly storage class memory is likely to catch on. Jim Handy, solid state storage expert and semiconductor analyst at Objective Analysis, believes that the speed gain from using NVDIMM-N in block mode (compared with a fast SSD) will find niche applications, particularly storing journal or log files for databases. This can speed up transactions without the need to purchase large amounts of storage class memory – and indeed one of the use cases Microsoft suggests is SQL database log files.
But he sees the fact that NVDIMM-N is expensive as a major problem. "One thing that is key that is working against it is that it costs more than DRAM – which is expensive – but it isn't faster. To fit in to a storage hierarchy you need it to be more expensive but faster than something, or cheaper but slower. But I think this does pave the way for 3D XPoint in a DIMM."
(3D XPoint, when it is made available by Intel and Micron, will be faster than flash yet slower than DRAM. It will also be more expensive than flash, but cheaper than DRAM. That means it will fit into Handy's storage hierarchy more easily than NVDIMM-N.)
There is still a place for NVDIMM-N, however: where transaction speed is a key consideration, it is faster than flash (the obvious persistent storage alternative). In this and other niche applications the extra speed of storage class memory will be worth the extra cost.
Enterprise Strategy Group's Sinclair expects adoption to be slow, but believes it will follow the same trajectory as SSDs did a few years ago. "SSDs found immediate adoption when there was a financial gain from speed – where reduced latency translated into dollars. It will be the same with persistent memory. As the hardware cost comes down and storage layer software comprehends storage class memory better it will be more widely applicable to more and more workloads – but it could take several years."
Software tools for storage class memory
At the moment there is a dearth of applications that can take advantage of DAX volumes, so until they become available enterprises will have to modify their software themselves. The good news is that an open source project maintained on GitHub is working on a non-volatile memory library (NVML) – a toolset designed to make storage class memory easier to use as a DAX volume.
At the moment it is available for Linux, but work is underway to port it to Windows, according to Tobias Klima, the Microsoft program manager responsible for storage drivers. "It gives you the ability to create a persistent memory-aware log structure, or a persistent memory-aware heap. You need help with flushing because you don't feel like figuring out what the latest, best flushing instruction on a Broadwell CPU is? Great. Use NVML and it can help you with all of these things."
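The "persistent memory-aware log structure" Klima mentions can be illustrated with a toy sketch. The subtle part – which NVML packages up – is the ordering of flushes: record bytes must be durable before the committed-length header is updated, so a crash never exposes a torn record. Everything below is invented for illustration; in particular, `persist()` is an empty stand-in for a real cache-flush primitive, not NVML's API.

```c
/* Toy persistent-memory-aware log: records live in a (notionally
 * persistent) buffer, and a committed-length header is published only
 * after the record bytes have been flushed. Illustrative only. */
#include <stddef.h>
#include <string.h>

#define LOG_CAPACITY 256

struct pmem_log {
    size_t committed;                /* header: bytes known durable */
    unsigned char data[LOG_CAPACITY];
};

/* Stand-in for a real persist primitive: on actual hardware this would
 * issue the appropriate CPU cache-flush instruction for the platform. */
static void persist(const void *addr, size_t len) { (void)addr; (void)len; }

/* Append a record: flush the bytes first, then publish the new length. */
int log_append(struct pmem_log *log, const void *rec, size_t len) {
    if (log->committed + len > LOG_CAPACITY) return -1;
    memcpy(log->data + log->committed, rec, len);
    persist(log->data + log->committed, len);        /* data durable first */
    log->committed += len;
    persist(&log->committed, sizeof log->committed); /* then the header    */
    return 0;
}
```

Getting this ordering right on every CPU generation is exactly the kind of detail a library like NVML exists to take off the application developer's hands.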
Once the tools are in place to make it easy to modify software to take advantage of storage class memory, and once the price has fallen to a more reasonable level, the rise of storage class memory seems inevitable. "This is the wave of the future – the next evolution of high transaction media," concludes Sinclair.