Henry Newman's Storage Blog Archives for January 2013

Is 2013 the Year that SSDs Change The Storage Market?

As we enter 2013, we all know that the traditional disk drive vendors have entered the SSD market in full force. These are the vendors that regularly ship hundreds of PB of storage yearly to the major external storage vendors (EMC, NetApp, IBM, HDS, etc.). Of course, I am talking about enterprise SSDs, which support the SAS interface and T10 PI/DIF and have a non-recoverable error rate of no more than 1 in 10E16 bits read.

The storage vendors spend enormous time and money qualifying disk drives. Disk drive qualification often takes months to confirm performance, reliability and error handling. So if you were a vendor and you made a controller that, say, does 500,000 IOPS, do you need a single SSD that can do 100,000+ IOPS? Likely not, and likely not even if the controller does 1,000,000 IOPS.
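To make that arithmetic concrete, here is a minimal sketch; the IOPS figures are illustrative assumptions for the sake of the example, not measured numbers for any particular product.

```python
# Illustrative back-of-the-envelope check: how many SSDs saturate a controller?
# The IOPS figures are assumptions for the sake of the example, not measured numbers.

def drives_to_saturate(controller_iops: int, drive_iops: int) -> float:
    """Number of drives whose combined IOPS equal the controller's limit."""
    return controller_iops / drive_iops

for controller in (500_000, 1_000_000):
    n = drives_to_saturate(controller, 100_000)
    print(f"{controller:,} IOPS controller is saturated by {n:.0f} drives at 100,000 IOPS each")
```

Even at 1,000,000 IOPS, only ten such drives fill the controller, which is why per-drive IOPS well beyond that point buys little at the system level.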

So there are two points here:

1) Storage vendors do not want to spend time and money doing qualifications on unknown drives. I think that they would rather buy drives from the two big players (Seagate and WD, and maybe the third vendor, Toshiba) than qualify another vendor’s SSD. There needs to be a very good cost reason to use an SSD from an SSD vendor that you are not already buying disk drives from.

2) Controller vendors do not need 100K IOPS SSDs. The performance of these drives overruns the performance of the controller.

As I have said for a very long time, there is going to be consolidation in the SSD market space. Over the last year we have seen just the beginning of that consolidation.

It’s a given that most controllers can be saturated by even a small number of SSDs from the traditional disk drive vendors, which have notably less performance than the first crop of enterprise SSDs (you know the vendors). Maybe the traditional disk drive vendors know this and are producing SSDs to the specifications of the controller vendors. Either way, the traditional disk drive vendors have a significant advantage and will continue to have that advantage.

Labels: SSD, Storage

posted by: Henry Newman

The Human Face of Big Data

A friend of mine over at InsideHPC suggested I read the book The Human Face of Big Data. (The link goes to a 10-minute video based on the book – definitely worth your time.)

It’s interesting how author Rick Smolan discusses big data analysis and collection. We in the computer science field think about algorithms and what algorithm will need to be applied to what type of data to get the answer we are looking for. Smolan takes it up a few levels. He looks at what the impact of getting the answers will be, and how the data is going to be collected from the billions of people on the planet connected via cellular networks.

One of the most interesting points is the amount of data that is going to be collected. I am sure that this will please the storage companies, but it leaves me with the sinking feeling that available storage is not going to keep up with the storage requirements for the analysis that is going to be needed.

As I’ve said before, we need to save data because we do not know everything about that data, and quite possibly in the future we will be able to extract new information from data that has already been collected. This is true whether that data is genetic data, climate data or something like seismic traces used to find oil.

All of these are excellent examples of archived data from which people have found new information – information that, had the data not been archived, would likely have been lost or would have been costly to duplicate.

The tradeoff will be: what is the cost of storing the data as compared to recollecting it? I have ordered the book.
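One rough way to frame that tradeoff is a simple cost comparison. The sketch below is only an illustration; the dataset size and all dollar figures are placeholder assumptions, not real costs, and some data (old climate or seismic observations, for instance) cannot be recollected at any price.

```python
# Rough framing of the archive-vs-recollect tradeoff.
# All sizes and dollar figures below are placeholder assumptions, not real costs.

def archive_cost(tb: float, cost_per_tb_year: float, years: float) -> float:
    """Cumulative cost of keeping `tb` terabytes archived for `years` years."""
    return tb * cost_per_tb_year * years

def recollect_cost(tb: float, cost_per_tb_collected: float) -> float:
    """One-time cost of recollecting the same data from scratch."""
    return tb * cost_per_tb_collected

dataset_tb = 500  # assumed dataset size in TB
keep = archive_cost(dataset_tb, cost_per_tb_year=30.0, years=10)
redo = recollect_cost(dataset_tb, cost_per_tb_collected=1_000.0)
print(f"Archive for 10 years: ${keep:,.0f}   Recollect later: ${redo:,.0f}")
```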

Labels: big data

posted by: Henry Newman