For any of you following the Supercomputing show here in Denver, we have heard the latest announcements of the Top 500 supercomputers in the world. We also get daily reports from the Student Cluster Competition, but sadly neither of these competitions addresses real-world problems. Why? Because data movement to storage for input, results, and checkpoints is not considered.
The Supercomputing show is one of the preeminent events for great talent in all disciplines of computer science and engineering, and this year in Denver marks the 25th anniversary of the show. Given its importance, I believe it is time for the computer industry to take a hard look at its computational problems, whether they be scientific research, engineering, big data analysis, or business processing. Benchmarks, competitions, performance tests and the like that do not consider moving data in and out of the system, in a way that is consistent with the workflow of the applications being measured, emulated or simulated, simply do not provide a total picture.
You can have the fastest system in the world – and China does – but without the ability to read and write data to storage, what good is it? At a bare minimum, large applications on large systems are going to have to checkpoint their work, given the high probability that some component is going to fail and the job will fail with it.
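To make the checkpointing point concrete, here is a minimal sketch of what an application-level checkpoint/restore cycle looks like. The names (`checkpoint`, `restore`, `job.ckpt`) are illustrative, not from any particular HPC framework, and real codes checkpoint far larger state through parallel file systems; the key ideas shown are the durable write (`fsync`) and the atomic rename, so a crash mid-checkpoint never destroys the previous good copy.

```python
import os
import pickle
import tempfile

def checkpoint(state, path):
    """Durably save application state so a restart can resume.

    Write to a temporary file first, force it to storage, then rename
    over the old checkpoint: a crash at any point leaves either the old
    or the new checkpoint intact, never a half-written file.
    """
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
        f.flush()
        os.fsync(f.fileno())  # data on storage, not just in the page cache
    os.replace(tmp, path)     # atomic rename on POSIX file systems

def restore(path, default=None):
    """Load the last checkpoint, or return `default` on a first run."""
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return default

# Simulated long-running job: after a failure, it resumes from the
# last completed step instead of starting over.
state = restore("job.ckpt", default={"step": 0, "result": 0})
for step in range(state["step"], 10):
    state = {"step": step + 1, "result": state["result"] + step}
    checkpoint(state, "job.ckpt")
```

Every one of those checkpoint writes is I/O that the compute-only benchmarks ignore, yet on a large enough machine the job cannot finish without them.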
Today, many environments consider the storage side far more difficult to design and manage than the computational environment. File systems, networks and storage systems are complex, and though many have worked to make them simpler, they still often require significant effort. If we keep forgetting I/O and leaving it out of benchmarks, we get what we deserve.
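The benchmarking gap is easy to demonstrate in miniature. The sketch below, with made-up helper names (`compute`, `compute_and_store`), times the same toy kernel twice: once compute-only, the way Top 500-style measurements score a machine, and once including a durable write of the result, the way a real workflow must run.

```python
import os
import time

def timed(fn):
    """Return (elapsed_seconds, result) for a call - a crude benchmark."""
    t0 = time.perf_counter()
    result = fn()
    return time.perf_counter() - t0, result

def compute():
    # Stand-in for a numeric kernel: a sum of squares.
    return sum(i * i for i in range(1_000_000))

def compute_and_store(path="result.dat"):
    # Same kernel, but the measurement now includes writing the
    # result durably to storage, as a real application must.
    value = compute()
    with open(path, "w") as f:
        f.write(str(value))
        f.flush()
        os.fsync(f.fileno())  # wait for the data to actually reach storage
    return value

flops_only, _ = timed(compute)
with_io, _ = timed(compute_and_store)
print(f"compute only: {flops_only:.4f}s  with I/O: {with_io:.4f}s")
```

Scale the write up from a few bytes to the terabytes of input, output and checkpoint data a production job moves, and the gap between the two numbers is exactly what today's competitions fail to capture.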
Labels: data storage, supercomputer, I/O
posted by: Henry Newman