I do not know if you saw this press release from LSI, but I must say it is an impressive number. This performance is for SSDs, and as I have said in the past, this kind of price/performance is likely going to challenge traditional external RAID storage, especially for parts of the midrange market.
But what is on my mind is this: if SSD vendors are depending on PCIe 3 and beyond for performance, the limitation is no longer SSD performance but PCIe and OS performance. I am not sure how LSI got to 1.2 million IOPS, whether it was a legitimate real-world test or some SBT (slimy benchmark test; as a reformed benchmarker, I understand the desire and the methods for SBTs, as I did them myself). My question is: how can an operating system do 1.2 million IOPS efficiently and still do any other work? OS interrupts are costly in terms of time, and only so many can be serviced per second. Now, this is nothing against LSI. I am DARN impressed that their chip can handle the throughput, but what I would like to know is how this translates to real-world performance on Linux or Windows with real applications, like a database.
What I am not sure about is whether operating systems can scale to match these impressive hardware numbers. 1.2 million IOPS of, say, 4K random I/O equals roughly 4.6 GiB/sec, which is pretty high utilization. Let's say each OS interrupt takes 15,000 clocks, which I think is very low. That equals 18 billion clocks per second of interrupt time for the 1.2 million IOPS, or the equivalent of six 3 GHz cores doing nothing but servicing interrupts. Well, you see the picture. Something must change.
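The back-of-envelope math above can be checked in a few lines. This is just a sketch of the arithmetic; the 1.2 million IOPS, 4 KiB I/O size, 15,000 clocks per interrupt, and 3 GHz core speed are the assumed inputs, not measured values.

```python
# Sanity-check the interrupt-budget arithmetic (assumed inputs:
# 1.2M IOPS, 4 KiB I/Os, 15,000 clocks per interrupt, 3 GHz cores).
iops = 1_200_000
io_size = 4 * 1024                     # 4 KiB per random I/O, in bytes

throughput_gib = iops * io_size / 2**30
print(f"Throughput: {throughput_gib:.1f} GiB/sec")          # ~4.6 GiB/sec

clocks_per_interrupt = 15_000
interrupt_clocks = iops * clocks_per_interrupt              # clocks/sec
print(f"Interrupt time: {interrupt_clocks / 1e9:.0f} billion clocks/sec")

core_hz = 3_000_000_000                 # one 3 GHz core
cores_consumed = interrupt_clocks / core_hz
print(f"Cores consumed by interrupts alone: {cores_consumed:.0f}")  # 6
```

Even at the generous 15,000-clock estimate, the interrupt overhead alone swallows several entire cores before the application does any useful work, which is the heart of the scaling concern.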