Henry Newman's Storage Blog Archives for May 2014

Why Streaming I/O Still Matters in Data Storage

More and more applications that dominate our storage world require streaming I/O. The main driver for streaming I/O is streaming video capture. From home security to police to (soon) drones, this has been the fastest growth area of the storage market for a number of years. And it’s expected to grow even faster each year for the foreseeable future.

Most file systems with small allocations are designed for multi-user IOPS workloads, which is a totally different set of design requirements from streaming video. What is done today with standard file systems such as NTFS, EXT4 and others is to add hardware and reduce the number of video streams written per device to compensate for the file system's efficiency. But that does not really solve the problem, and it carries a high cost in added disk drives, communications channels and, of course, added power.

File system efficiency for streaming I/O is one of the things that is really never published, and yet it is critically important for many reasons. My definition of file system efficiency is: the streaming performance the file system delivers compared with the performance of the raw hardware (HBAs/NICs, switches, storage controllers and storage) running I/O directly to the raw devices, not taking into account the impact of cache (unless the real applications can and will take advantage of the cache). A sketch of that measurement follows.
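To make the definition concrete, here is a minimal Python sketch of the kind of measurement I mean. It is not a published benchmark; the paths, block size and total size are placeholder assumptions, O_DIRECT is Linux-specific, and pointing the second run at a raw device such as /dev/sdX destroys data and requires root. The point is simply that efficiency is the file system's streaming rate divided by the raw device's streaming rate.

    import mmap
    import os
    import time

    BLOCK_SIZE = 4 * 1024 * 1024      # 4 MiB sequential writes, typical for video capture
    TOTAL_BYTES = 1024 * 1024 * 1024  # 1 GiB per measurement

    def stream_write(path, flags=0):
        # O_DIRECT bypasses the page cache so the number reflects the device,
        # not RAM; it needs an aligned buffer, which an anonymous mmap provides.
        buf = mmap.mmap(-1, BLOCK_SIZE)
        buf.write(b"\xab" * BLOCK_SIZE)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | flags, 0o600)
        start = time.monotonic()
        written = 0
        while written < TOTAL_BYTES:
            os.write(fd, buf)
            written += BLOCK_SIZE
        os.fsync(fd)
        elapsed = time.monotonic() - start
        os.close(fd)
        return written / elapsed / 1e6   # MB/s

    if __name__ == "__main__":
        fs_rate = stream_write("/mnt/bigfs/stream.bin", os.O_DIRECT)  # through the file system
        raw_rate = stream_write("/dev/sdX", os.O_DIRECT)              # raw device (destructive!)
        print(f"file system {fs_rate:.0f} MB/s, raw {raw_rate:.0f} MB/s, "
              f"efficiency {fs_rate / raw_rate:.0%}")

In practice you would use a real I/O tool rather than a script, but the arithmetic is the same: the closer that last percentage is to 100 percent, the less hardware you have to over-buy.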

I wish vendors would publish this number to give people configuring systems an understanding of the performance they can expect. The scalability of the file system is a critical aspect of the requirements for streaming video capture, and it amazes me that neither storage vendors (block and NAS) nor file system vendors publish numbers on it. I wish someone had the money and time to test this, as it would be fascinating.

Labels: data storage, file systems, streaming, IOPS

posted by: Henry Newman

SanDisk 15nm Products: Will Flash Save the World?

SanDisk stated in a press release that it expects its “1Z technology to deliver NAND flash solutions with no sacrifice in memory performance or reliability.”  People welcome this, but my view is a bit different.  The performance is not going to improve, which means that write performance for flash once again stays flat, as it has for the last three years.

This of course is no surprise if you read my column this month, but it further proves my point that flash performance cannot improve much, and that it is not going to save the world.  Flash of course has its advantages over disk for IOPS, especially if the I/Os are aligned to the 4 KiB block. But for a number of applications I am dealing with, data alignment is at best iffy and at worst not possible without a huge recoding effort, which makes it impractical given the cost and time.
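To illustrate what I mean by alignment, here is a tiny Python sketch; the 4,096-byte page size and the example offsets are assumptions for illustration, not measurements from any particular device.

    FLASH_PAGE = 4096   # the 4 KiB block referred to above; real devices vary

    def is_aligned(offset, length, page=FLASH_PAGE):
        # An I/O only maps cleanly onto flash pages if both its starting
        # offset and its length are whole multiples of the page size.
        return offset % page == 0 and length % page == 0

    print(is_aligned(8192, 4096))   # True: exactly one page is programmed
    print(is_aligned(1500, 3072))   # False: partial pages are touched

When the second case dominates, the device ends up doing read-modify-write behind the application's back, the quoted IOPS advantage erodes, and that is exactly why recoding the application to align its records can be such a large effort.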

What SanDisk is doing is 100% correct for the majority of their market, which is removable cards and other flash devices, but it clearly will not help the SSD storage market except from a density perspective.

Of course, the other part of the statement is good news: the reliability has not changed. But my question is whether that means the engineered reliability or the per-cell reliability; there is a big difference between the two.  If the per-cell reliability is the same, then we can expect significantly increased density; on the other hand, if the engineered reliability is the point of reference, then the density improvement will not be as great, because more of the raw capacity must be spent on error correction to deliver the same engineered result.
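Here is a back-of-the-envelope sketch of why the distinction matters, with made-up numbers: the 30 percent raw shrink gain and the parity fractions below are purely illustrative assumptions, not SanDisk figures.

    RAW_SHRINK_GAIN = 1.30   # assume the 15nm shrink yields 30% more raw bits per die

    def usable_gain(old_parity, new_parity, raw_gain=RAW_SHRINK_GAIN):
        # Usable-density gain after setting aside a fraction of each page for ECC parity.
        return raw_gain * (1 - new_parity) / (1 - old_parity)

    # Per-cell reliability unchanged: parity overhead stays at, say, 10%.
    print(f"{usable_gain(0.10, 0.10):.2f}x usable density")   # 1.30x
    # Only engineered reliability unchanged: weaker cells need, say, 20% parity.
    print(f"{usable_gain(0.10, 0.20):.2f}x usable density")   # about 1.16x

Either way density goes up, but in the second case a chunk of the shrink is consumed by error correction rather than by capacity the customer can use.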

I am a firm believer that every storage technology has its place, from flash SSDs to enterprise disk to nearline disk, even to tape.  The market requirements will define the winners and losers, not what vendors tout or research firm quadrants state.

Labels: Flash, SSD, enterprise storage, Sandisk

posted by: Henry Newman