Data Storage Issues: Big Data Benchmarking

The Center for Large-scale Data Systems Research has built a big data benchmarking community and held a series of big data workshops. I recently received an announcement that the group is looking at moving the big data benchmarking effort to SPEC, specifically the SPEC Research Working Group on Big Data Benchmarking.

Having an industry standard benchmark is always a good thing, provided the benchmarks are not gamed. Sadly, many industry benchmarks are. For example, SPC-1 results now report anywhere from hundreds of thousands to over 1 million IOPS. But how many IOPS do real users with real applications actually generate and need? Do a significant number of real buyers, with real applications, use all-flash arrays for their storage needs?

I think for the most part the likely answer to these questions is no, yet many organizations seem to buy hardware based on SPEC benchmarks rather than understand their workloads and buy hardware based on their requirements and, of course, their budget. Do not get me wrong: I am in favor of benchmarks, and most, if not all, of the SPEC benchmarks are well suited to measure what they are designed to measure. My issue is that the hardware configurations are not designed around real-world representations of real problems, nor does anyone (that might be an exaggeration, but just about anyone) go out and buy the SPEC benchmark configuration that sits at the top of the heap.
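To make "understand your workload" concrete, here is a minimal back-of-the-envelope sketch in Python. Every workload figure in it is a hypothetical placeholder, not a measurement; the point is the arithmetic a buyer might do with their own application's numbers before ever looking at a benchmark leaderboard.

    # Back-of-the-envelope IOPS sizing sketch.
    # All workload figures below are hypothetical placeholders --
    # substitute measurements from your own application.

    peak_transactions_per_sec = 2_000   # measured at the application layer
    ios_per_transaction = 8             # reads + writes each transaction issues
    cache_hit_ratio = 0.70              # fraction of I/Os absorbed by cache
    growth_headroom = 1.5               # 50% headroom for growth and bursts

    # I/Os that actually reach the storage after the cache.
    disk_iops = peak_transactions_per_sec * ios_per_transaction * (1 - cache_hit_ratio)

    required_iops = disk_iops * growth_headroom
    print(f"Estimated storage IOPS requirement: {required_iops:,.0f}")

Even with the generous placeholder assumptions above, the estimate comes out to a few thousand IOPS, orders of magnitude below the million-IOPS benchmark headlines.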

In today's benchmark world you have to run SPEC benchmarks and publish the results; that is the environment we live in. Given how SPEC benchmarks are used, I am not sure that makes good sense. On the other hand, what other agreed-upon benchmarks do we have that measure the aspects of hardware and software we need measured? Clearly, SPEC storage benchmarks are a double-edged sword: well designed and well conceived, yet often run on hardware that is not configured for the real world.



by Henry Newman
InfoStor Blogger

Henry Newman is CEO and CTO of Instrumental Inc. and has worked in HPC and large storage environments for 29 years. The outspoken Mr. Newman initially went to school to become a diplomat, but was firmly told during his first year that he might be better suited for a career that didn't require diplomatic skills. Diplomacy's loss was HPC's gain.
