Henry Newman's Storage Blog Archives for June 2014

PCIe 4.0 Delays

The PCIe 4.0 specification was scheduled to be available in late 2015, but as of the recent developers’ conference, it is now expected in early to mid 2016.

PCIe 3.0 followed a similar pattern: if you go back 18 months before that specification was supposed to be released, the PCI-SIG added almost another year of delay.  Back in September of 2011 I wrote about the need for the PCIe roadmap to get into gear and go faster.  Back then the PCI-SIG said that the PCIe 4.0 specification would be available in 2015 or 2016.

The issue is still that we are expecting only a total 8x per-lane bandwidth improvement over the initial PCIe 1.0 specification in 12 years (2004 to 2016 for product, if the specification arrives on time in 2016).
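
To put that 8x in context, here is a quick back-of-the-envelope sketch (Python, using the published per-lane line rates and encodings for each generation; the PCIe 4.0 entry assumes the specification ships as planned):

    # Per-lane PCIe bandwidth by generation: (line rate in GT/s, encoding efficiency).
    # PCIe 1.0 and 2.0 use 8b/10b encoding; 3.0 and 4.0 use 128b/130b.
    generations = {
        "PCIe 1.0 (2003)": (2.5, 8 / 10),
        "PCIe 2.0 (2007)": (5.0, 8 / 10),
        "PCIe 3.0 (2010)": (8.0, 128 / 130),
        "PCIe 4.0 (planned)": (16.0, 128 / 130),
    }

    base = None
    for name, (rate, eff) in generations.items():
        mb_per_s = rate * eff * 1000 / 8   # usable MB/s per lane after encoding
        base = base or mb_per_s
        print(f"{name}: {mb_per_s:.0f} MB/s per lane ({mb_per_s / base:.1f}x PCIe 1.0)")

That works out to roughly 250 MB/sec per lane for PCIe 1.0 and just under 2 GB/sec per lane for PCIe 4.0, which is the 8x in question.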

Let’s make some reasonable assumptions based on the past.  The PCIe 4.0 specification will be out maybe 6 months late, and products will be available 4 months after that.  This is similar to the lateness and availability of PCIe 2.0 and 3.0.

So that means we could see PCIe 4.0 products in late 3Q16 or early 4Q16.  That will be in time for 24 Gbit/sec SAS, but PCIe 4.0 will be very late for EDR InfiniBand and host-side 100 Gbit Ethernet.  It’s easy, of course, for me to say just get it done, but these are hard problems.

You have to integrate the PCIe channel so that there is enough connectivity to memory to supply the bandwidth. Today that number is about 40 GB/sec of aggregate PCIe bandwidth, and with PCIe 4.0 it will likely go to 80 GB/sec.  Heck, it was only a few years ago that the memory bandwidth of a whole Xeon was less than 80 GB/sec, but as I have always said, the issue is a balanced system. CPU performance and memory bandwidth have both gone up more than 8x since 2004, while I/O bandwidth in and out of the system has not.  80 GB/sec per bus is not going to be enough.
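
As a rough sanity check on those aggregate numbers, here is a sketch assuming 40 PCIe lanes per socket, which is typical for a Xeon of this era (the lane count is my assumption, not a spec number):

    # Aggregate PCIe bandwidth per socket, assuming 40 lanes (typical for a
    # Xeon of this era; an assumption, not a spec number).
    lanes = 40
    pcie3_gb_per_lane = 8.0 * (128 / 130) / 8    # ~0.98 GB/s per PCIe 3.0 lane
    pcie4_gb_per_lane = 16.0 * (128 / 130) / 8   # ~1.97 GB/s per PCIe 4.0 lane
    print(f"PCIe 3.0: {lanes * pcie3_gb_per_lane:.0f} GB/s aggregate")   # ~39 GB/s
    print(f"PCIe 4.0: {lanes * pcie4_gb_per_lane:.0f} GB/s aggregate")   # ~79 GB/s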

Labels: data storage, PCIe

posted by: Henry Newman

IEEE Mass Storage Conference Highlights

I attended this year's IEEE Mass Storage Conference, and as always it was a great learning experience. I have been attending for over 15 years, and the conference's focus has changed significantly.

Fifteen years ago – and even five years ago – the conference was dominated by high-performance computing people, as they were the ones pushing storage to the limit. Today the conference has presentations from eBay, Facebook and others, both infrastructure users and providers. The reason is that these environments, not the HPC community, are now driving storage.

Some of the themes at the conference: big data workloads using flash at eBay and elsewhere; the challenges of storage with OpenStack; and details and usage examples for Seagate’s Kinetic drives. It was very interesting to hear from the Kinetic presentation how the drive can take random I/O and make it sequential: instead of the 150 random IOPS and 0.59 MB/sec of a standard drive, random write performance is around 50 MB/sec, as the data is written sequentially on the disk. This is one of the many reasons you want the disk drive doing the allocation rather than a file system on a server that does not understand the drive's topology.
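
Here is a sketch of where those numbers come from; the 4 KB request size is my inference, since it is what makes 150 IOPS work out to 0.59 MB/sec:

    # Random small writes on a standard drive vs. the same writes re-sequenced
    # by the drive, as described for Kinetic above.
    iops = 150                          # typical random IOPS for a 7200 RPM disk
    request_bytes = 4096                # assumed 4 KiB request size
    random_mb_per_s = iops * request_bytes / 2**20
    sequential_mb_per_s = 50            # figure quoted in the presentation
    print(f"Random:       {random_mb_per_s:.2f} MB/s")                    # ~0.59 MB/s
    print(f"Re-sequenced: {sequential_mb_per_s} MB/s")
    print(f"Speedup:      {sequential_mb_per_s / random_mb_per_s:.0f}x")  # ~84-85x

Dividing 50 by 0.59 is where the roughly 84x figure in the next paragraph comes from.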

Thinking about that random write performance, it is an 84x improvement over a standard disk drive, which is pretty amazing. The Kinetic drive picks up the baton from the ANSI T10 Object Storage Device (OSD) standard, which was supposed to provide a SCSI methodology for having disk drives manage allocation. That technology was lost in the recession, as far as I am concerned, but Kinetic provides a put/get interface with a key-value store to manage the drive. Yes, random reads will be about the same as on a standard disk drive today, but 84x on writes is something to write about.
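
The real Kinetic interface, as I understand it, is a key-value protocol spoken over Ethernet directly to the drive. The toy sketch below only illustrates the put/get model the paragraph describes; every name in it is hypothetical, and it is not the actual Kinetic API:

    # Toy put/get key-value interface in the spirit of a Kinetic-style drive.
    # Hypothetical names; not the actual Kinetic API.
    class KeyValueDrive:
        def __init__(self):
            self._store = {}   # stands in for drive-managed on-disk layout

        def put(self, key: bytes, value: bytes) -> None:
            # The drive, not a host file system, decides where the bytes land,
            # so it can lay incoming values down sequentially in arrival order.
            self._store[key] = value

        def get(self, key: bytes) -> bytes:
            return self._store[key]

        def delete(self, key: bytes) -> None:
            del self._store[key]

    drive = KeyValueDrive()
    drive.put(b"object-0001", b"some payload")
    print(drive.get(b"object-0001"))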

Labels: IEEE, enterprise storage, IOPS

posted by: Henry Newman

8 Gbit/sec SATA: Really?

I read this article over a year ago and was told that I would get 8 Gbit/sec SATA in 2013.  It is now 2014 and we still have no products.  The SAS train has left the station, leaving SATA in the dust: SAS is at 12 Gbit/sec with plans to go to 24 Gbit/sec.  The SAS people hit their 12 Gbit/sec roadmap from a few years ago pretty much on target, but as you can see, the SATA roadmap is falling way behind.
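
For scale, here is what those link rates mean in usable bandwidth; SATA 6 Gbit/sec and SAS 12 Gbit/sec both use 8b/10b encoding, and I am assuming the proposed 8 Gbit/sec SATA would have kept it:

    # Usable bandwidth per link after 8b/10b encoding (80% efficiency).
    links = {
        "SATA 6 Gbit/s": 6.0,
        "SATA 8 Gbit/s (proposed)": 8.0,   # assumed to keep 8b/10b
        "SAS 12 Gbit/s": 12.0,
    }
    for name, gbit in links.items():
        print(f"{name}: {gbit * 0.8 * 1000 / 8:.0f} MB/s usable")

Even if 8 Gbit/sec SATA had shipped on time, SAS at 12 Gbit/sec would still be 50 percent faster per link, and 24 Gbit/sec SAS will widen the gap further.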

What does this mean for SATA-connected devices, specifically consumer disks, but also nearline disks with both SATA and SAS connectivity?  My guess is that we are moving back to two classes of connectivity.  For those of you who were around about a decade ago, we had two classes: SATA and Fibre Channel.  These merged in the late 2000s into SATA and SAS, which operated at the same speed.

Consumer drives have always had a SATA or IDE connection and never SAS connectivity.  So what happens to disk drives over the next few years?  My guess is that SATA connectivity on drives does not move up to 8 Gbit/sec, if and when the standard gets codified.  The reason is pretty simple: most if not all storage vendors are moving to SAS connectivity. Why should the drive vendors spend money to build something that no one will use?

My good friend Jeff Layton will be addressing the issue of SAS and SATA reliability soon on Enterprise Storage Forum. Look for it. On the consumer side, SATA will continue to dominate, and you might ask why. The reason is that Intel builds SATA support into its CPU platforms' chipsets, so it is easy to connect without adding external controllers on the motherboard.  That is the way I see it going.

Labels: enterprise storage, SAS, SATA

posted by: Henry Newman