Henry Newman's Storage Blog Archives for May 2014

What Do 6TB Drives Mean to Data Storage?

Well, now both Seagate and HGST/WD have released 6TB drives. What does that mean for storage suppliers and their customers?

We all know that RAID rebuild time with RAID-6 and 8+2 can take days, depending on a number of factors. Add 50% to the density, and it is almost a certainty that if application I/O is going on, a rebuild is going to take days. Seagate publishes drive specifications showing 216 MB/sec (205 MiB/sec), which Seagate states is the maximum sustained transfer rate.

Usually the average transfer rate is in the manual, but the manual is not available online yet. So my guess, based on historical data, is that the average is around 175 MB/sec. Let's first assume the best case, that is, maximum performance. At 216 MB/sec, reading or writing a full 6TB drive takes 27,778 seconds (about 7.7 hours); using my guessed average, about 9.5 hours.
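
For anyone who wants to check the arithmetic, here is a minimal Python sketch of the calculation above. The 216 MB/sec figure is Seagate's published maximum; the 175 MB/sec average is my guess, so treat that second result as an estimate.

```python
# Back-of-envelope rebuild time for a single 6TB drive.
# 216 MB/sec is Seagate's published maximum sustained rate;
# 175 MB/sec is my guessed average, not a published spec.

DRIVE_BYTES = 6e12  # 6 TB, decimal, as drive vendors count capacity


def rebuild_hours(rate_mb_per_sec: float) -> float:
    """Hours to read or write one full drive at a sustained rate."""
    seconds = DRIVE_BYTES / (rate_mb_per_sec * 1e6)
    return seconds / 3600


print(f"best case (216 MB/s): {rebuild_hours(216):.1f} hours")  # ~7.7
print(f"avg guess (175 MB/s): {rebuild_hours(175):.1f} hours")  # ~9.5
```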

This of course assumes no one is doing anything else but rebuilding: no user I/O, no system I/O, no disk scrubbing, nothing else. My guess, based on today's 4TB rebuilds, is that under a reasonable load we are talking about 2.5 days best case, and under heavy load maybe as much as 4 days.

This of course is why I do not think RAID methods such as standard RAID-6 are going to work at large scale for drives of this size. The specification for hard error rate did not change, yet we got another 50% increase in density. Consumers of enterprise storage are going to need to band together and demand that vendors use or develop new methods that address device failure while allowing the system to operate at reasonable performance and high reliability. This will require all of us to make these demands.
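
To put a rough number on that worry, here is a back-of-envelope sketch of the worst case for an 8+2 RAID-6 set: two drives already failed, so the eight survivors must be read in full with no parity protection left, and any hard read error means lost data. It assumes the 1-in-10^15-bit hard error rate quoted for enterprise drives later in this archive and a simple Poisson model, so treat it as an illustration, not a vendor's reliability math.

```python
import math

# Worst case for an 8+2 RAID-6 set: two drives already failed, so the
# eight surviving 6TB drives must be read in full with no parity left.
DRIVE_BITS = 6e12 * 8           # bits per 6TB drive
BITS_READ = 8 * DRIVE_BITS      # eight surviving drives, ~48 TB
HARD_ERROR_RATE = 1e-15         # one unrecoverable error per 10^15 bits

expected_errors = BITS_READ * HARD_ERROR_RATE
# Simple Poisson model: chance of at least one unrecoverable error
p_data_loss = 1 - math.exp(-expected_errors)

print(f"expected hard errors during rebuild: {expected_errors:.2f}")  # ~0.38
print(f"chance of at least one (data loss):  {p_data_loss:.0%}")      # ~32%
```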

Labels: data storage, hard drive, IOPS, HDD

posted by: Henry Newman

400 Gb/s Ethernet: What Does It Mean for Storage?

The IEEE has begun work on a 400 Gbit/sec (50 GB/sec) standard for Ethernet. Once again the Ethernet interconnect bar has been raised higher than any other networking connection, a factor of 4x over what we have today.

Another way to think about this is that it is 10 GB/sec over what we have today for PCIe bus bandwidth, with the PCIe 3.0 bus on Intel chips at 40 lanes of roughly 1 GB/sec per lane. 400 Gb/sec Ethernet is targeted initially at switch interconnects, which should be no surprise, because that is where 10, 40 and 100 Gbit/sec were targeted to start.

With 400 there is a new stake in the ground, which is even faster than what the FCIA has set for Fibre Channel performance.  Inter-switch connectivity at 50 GB/sec is going to allow, for example, about 128 high-performance SSDs (400 MB/sec write performance) to replicate to other devices.  I still think, no matter what anyone else says, that in the future we'll see local storage connectivity with something like SAS, and then groups of storage connected via Ethernet.
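
A quick sanity check on those numbers, in Python. The ~1 GB/sec per PCIe 3.0 lane is an approximation of usable bandwidth, and the SSD count comes out near 125, so the 128 above is a round-number estimate.

```python
# Rough bandwidth comparison using the figures above (decimal units,
# 1 GB = 1e9 bytes; per-lane PCIe bandwidth is approximate).

ethernet_GBps = 400 / 8    # 400 Gb/sec link = 50 GB/sec
pcie3_GBps = 40 * 1.0      # 40 lanes x ~1 GB/sec per lane (Intel)
ssd_write_GBps = 0.4       # 400 MB/sec per high-performance SSD

print(f"400 GbE link     : {ethernet_GBps:.0f} GB/sec")
print(f"PCIe 3.0 x40     : {pcie3_GBps:.0f} GB/sec")
print(f"SSDs to saturate : {ethernet_GBps / ssd_write_GBps:.0f}")  # ~125
```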

The reason is simple economics.  Ethernet development has more vendors, more chips are made, and the market, from your TV to routers in the cloud, is far larger; it is not likely that balance of volume will change much. This is true even though a standard for gigabit Wi-Fi came out very recently. Gigabit Wi-Fi is most likely not going to be fast enough for most high-speed applications, either at home or at the office, as network connectivity gets faster.

Right now I am stuck at 50 Mb/sec download speed with my cable company. That's the fastest home service I can buy for now.  I think over the next few years this is going to change, and it already has in many major markets. This is going to drive Ethernet volumes and performance, and though I have not seen them on the market, I suspect that Wi-Fi routers with 10 GbE ports are on the horizon, maybe even in 2014.  I do not think that Ethernet dominance is going to change anytime soon.


Labels: storage networking, Ethernet

posted by: Henry Newman

Helium Drives for Data Storage

Amazon is now selling the HGST helium-filled drives, and though Storage Newsletter made a completely unfair comparison, pitting the cost of Seagate desktop drives against the enterprise-quality helium drive, the cost question is still worth examining.

I thought it best to compare the cost of the HGST helium drive, at $739, with the cost of the HGST enterprise UltraStar 4TB drive, at $270.  This is the correct comparison, as both drives have a hard error rate of 1 in 10^15 bits. So the cost per GB for the helium drive is $739/6000, or $0.1232 per GB, and the cost for an UltraStar is $0.0675 per GB, using pricing from Amazon.

Assuming a RAID configuration, even adding in the cost of the tray slot, power and cooling, and the rest of the infrastructure, this cost difference makes no sense for a 33% reduction in slots. At a cost of ~1.83x, there needs to be some serious advantage to these drives other than moving from 4 TB to 6 TB. As I mentioned in an earlier update, there is not a great deal of information on the HGST site about the drive's performance, and there still is not.
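
Here is the same comparison worked out in a short Python sketch, using the Amazon prices quoted above; the slot figure assumes you care only about raw capacity, not performance or reliability differences.

```python
# Cost-per-GB comparison from the Amazon prices above (decimal GB,
# as drive vendors count them).

helium_price, helium_gb = 739.0, 6000        # HGST 6TB helium drive
ultrastar_price, ultrastar_gb = 270.0, 4000  # HGST UltraStar 4TB

helium_per_gb = helium_price / helium_gb            # ~$0.1232/GB
ultrastar_per_gb = ultrastar_price / ultrastar_gb   # ~$0.0675/GB

print(f"helium    : ${helium_per_gb:.4f}/GB")
print(f"ultrastar : ${ultrastar_per_gb:.4f}/GB")
print(f"premium   : {helium_per_gb / ultrastar_per_gb:.2f}x")  # ~1.82x; ~1.83x rounded above
# Same raw capacity in fewer drives: 4TB/6TB -> one third fewer slots
print(f"slots saved: {1 - ultrastar_gb / helium_gb:.0%}")      # ~33%
```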

So what happens to helium drives? What I have seen is that big storage suppliers such as IBM, HP, Dell, EMC, HDS, etc., want, and in the past have required, two suppliers for a drive generation. This has been the case for at least a decade and a half. Seagate and Toshiba are the other two suppliers to these big storage buyers, and neither has announced a plan for helium drives.

So one of two things is going to happen: either HGST/WD is going to change the way the big buyers do business, or the big buyers are not going to jump on the helium bandwagon.  I am sure the big suppliers have the details on the drive that should be public, but even so, I am betting that the reason you can buy these drives on Amazon, rather than from your RAID vendor of choice, is that they are not currently being qualified in a product. I have never seen an enterprise drive available to consumers before it was qualified and made available by the big guys. You can, of course, draw your own conclusions.


Labels: data storage, hard drives, helium

posted by: Henry Newman