Henry Newman's Storage Blog Archives for December 2013

Data Storage: What about I/O?

For any of you following the Supercomputing show here in Denver, we have heard the latest announcements of the Top 500 supercomputers in the world. We also get daily reports from the Student Cluster Competition, but sadly neither of these competitions addresses real-world problems. Why? Because data movement to storage for input, results and checkpoints is not considered.

The Supercomputing show is one of the preeminent events for great talent in all disciplines of computer science and engineering, and this year in Denver marks the 25th anniversary of the show. Given its importance, I believe it is time for the computer industry to take a hard look at computational problems, whether they be scientific research, engineering, big data analysis, or business processing, and then to realize that benchmarks, competitions, performance tests and the like that do not consider moving data in and out of the system, in a way that is consistent with the workflow of the applications being measured, emulated or simulated, really do not provide a total picture.

You can have the fastest system in the world – and China does – but without the ability to read and write data to storage, what good is it? At a bare minimum, large applications on large systems are going to have to checkpoint their work, given the high probability that some component is going to fail and the job will therefore fail.
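As a rough illustration of why this is unavoidable, here is a minimal back-of-the-envelope sketch. The node count, per-node MTBF and checkpoint write time below are my own assumptions, not figures for any specific machine, and the checkpoint interval comes from Young's well-known approximation:

```python
# Back-of-the-envelope estimate of why large jobs must checkpoint.
# All numbers below are illustrative assumptions, not vendor specs.
import math

NODE_MTBF_HOURS = 50_000       # assumed mean time between failures per node
NUM_NODES = 10_000             # assumed size of a large system
CHECKPOINT_WRITE_HOURS = 0.25  # assumed time to write one checkpoint to storage

# With independent failures, system MTBF shrinks roughly linearly
# with the number of components.
system_mtbf = NODE_MTBF_HOURS / NUM_NODES   # about 5 hours here

# Young's approximation for a near-optimal checkpoint interval:
# interval ~ sqrt(2 * checkpoint_cost * system_MTBF)
interval = math.sqrt(2 * CHECKPOINT_WRITE_HOURS * system_mtbf)

print(f"System MTBF: {system_mtbf:.1f} hours")
print(f"Suggested checkpoint interval: {interval:.1f} hours")
```

With these assumed numbers, a 10,000-node machine sees a failure roughly every five hours and should be writing a checkpoint about every hour and a half – which is exactly why checkpoint I/O bandwidth belongs in any honest benchmark.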

Today, many environments consider the storage design and complexity to be far more difficult to manage and design than the computational environment. File systems, networks and storage systems are complex, and though many have worked to make them simpler, they still often require significant effort. If we keep forgetting I/O and leaving it out of benchmarks, we get what we deserve.

Labels: data storage, I/O, supercomputer

posted by: Henry Newman

Data Storage Issues: Cloud Computing Fails Again

Editor’s note: As more and more companies look to the cloud for data storage, big questions remain about reliability and uptime. Storage pundit Henry Newman wonders if the cloud is truly ready for enterprise use.  

Today I woke up to find that Microsoft’s Office 365 has had some issues for users worldwide, but not for me, as I'm not using 365. It seems like it’s a case of “another day, another high-reliability cloud down.”

Clearly, vendors either have very poor test procedures and plans or they do not have an adequate test environment. This is not a surprise to me, given the size of the installations, the complexity of the configurations and the cost of maintaining a test environment.

This might be a reason not to go to the cloud. I have not seen testing procedures and infrastructure listed as a reason not to deploy to the cloud, but the more I think about it, the more it makes sense to me that they should be a factor. So what do Microsoft, Google, Amazon and the other providers have in the way of systems, configurations and workload generation that will give me a warm and fuzzy feeling about the cloud? What will they do in the future (because we all know that in the past all of them have gone down) to provide a robust test environment that will thoroughly test each provider's cloud implementation?

Maybe the vendors should work this into a marketing campaign. They could have a global cloud testing “arms race.” Amazon, for instance, could say something like “We have 4,000 servers, high-speed networks and 5 PB of storage dedicated to testing.” Then – still hypothetically – the next week Google puts out an ad that says “We have 5,000 servers, multiple high-speed networks and 10 PB of storage dedicated to testing.” You get the picture.

It is now clear to me that the testing environment, procedures and resources should be a significant part of the evaluation of any cloud provider, but I do not see testing as one of the advertised cloud features. All the hype is about availability and applications. But you do not get high availability without testing, and clearly all of the vendors have issues. Maybe it is time vendors rethink things.


Labels: data storage, cloud computing

posted by: Henry Newman

Helium Drives: Big Questions

Well, they’ve finally been announced. Yet while helium-filled drives sound great, is the world ready for them? The hype is not backed up by many facts, and the facts that exist are very concerning to me – and should be to you.

Let’s take a look at the details from the HGST web page.

First, the Ultrastar 4 TB drive that we all know:

[Image: Ultrastar 4 TB drive specifications]

Next, the new helium-filled drive:

[Image: helium drive specifications]

Note the missing information on performance, hard error rate, seek time, etc. If you are not willing to clearly compare the drives so that people can estimate performance, reliability and the time it is going to take to rebuild a RAID set, I would suggest the drive is not ready for the enterprise.
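To show why those missing numbers matter, here is a minimal sketch of the arithmetic I would want to run. The 6 TB capacity comes from the announcement; the sustained transfer rate, the hard error rate and the RAID-set width are my own assumptions, since the published spec does not provide them:

```python
# Rough illustration of rebuild time and hard-error exposure for a big drive.
# Capacity is from the announcement; everything else below is an assumption.

DRIVE_TB = 6                        # announced capacity
ASSUMED_MB_PER_SEC = 150            # assumed sustained transfer rate
ASSUMED_BITS_PER_ERROR = 1e15       # assumed hard error rate: 1 error per 10^15 bits read
SURVIVING_DRIVES = 5                # assumed width of a traditional RAID set minus the failed drive

drive_bytes = DRIVE_TB * 10**12
drive_bits = drive_bytes * 8

# Best-case rebuild time: stream the whole replacement drive at full speed.
rebuild_hours = drive_bytes / (ASSUMED_MB_PER_SEC * 10**6) / 3600

# Expected unrecoverable read errors while re-reading the surviving drives.
expected_errors = SURVIVING_DRIVES * drive_bits / ASSUMED_BITS_PER_ERROR

print(f"Best-case rebuild time: {rebuild_hours:.1f} hours")
print(f"Expected hard errors during rebuild: {expected_errors:.2f}")
```

Even under these generous assumptions the best-case rebuild runs more than 11 hours, and you read enough bits along the way that hard errors are a real possibility – which is precisely the information the spec sheet should spell out.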

At 6 TB in size, declustered RAID is going to be required, and without the performance data it is a paper tiger as far as I am concerned.

Labels: data storage, enterprise storage, helium

posted by: Henry Newman