Hadoop, Spark and other big data analysis tools all have one thing in common: they need some form of big data storage to hold the vast quantities of data that they crunch through. The good news is that big data storage options are proliferating.

Nine years ago there was actually very little choice when it came to big data storage for a Hadoop implementation: you’d set up a bunch of commodity servers to process the data, and the big data storage was provided by locally attached disks. In fact the whole point of Hadoop was that it allocated processing tasks to the processors closest to the data involved. Adding storage was as simple as adding new nodes to the compute cluster, with the added benefit that new processing power was added at the same time.

Big Data Storage Without DAS

But quite apart from the fact that using DAS for big data storage means a complete lack of enterprise storage capabilities (such as compliance and regulatory controls, access and audit controls, and even security), a “traditional” Hadoop setup is also undesirable for reasons of flexibility, says Mike Matchett, a senior analyst at Taneja Group.

“What happens if you want to change your big data processing from MapReduce to Spark? Or you want to use different Hadoop distributions, or share the data? Or you write your data to NFS and don’t want to make copies (to store it in Hadoop Distributed File System – HDFS)? Or you have more complicated workflows with data that doesn’t live in one place?” he asks.

Externalized HDFS Big Data Storage

The obvious solution to this big data storage problem is to externalize HDFS as an API or protocol, Matchett says. What that means is that big data processing clusters are made up of compute-only nodes, while the big data storage is provided by existing (or new) enterprise SANs or NAS devices. These offer the enterprise storage capabilities mentioned above, but more importantly, the data stored on them can be accessed using NFS or HDFS.
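To make that concrete, here is a minimal PySpark sketch of the idea: the analysis code stays the same whether the data sits behind a storage array speaking the HDFS protocol or on an NFS mount presented to every node, so nothing has to be copied into a local Hadoop cluster first. The hostnames and paths are hypothetical.

```python
# Minimal PySpark sketch: the compute cluster stays the same; only the
# storage URI changes. Hostnames and paths below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("external-storage-demo").getOrCreate()

# Read from an external array that exposes the HDFS protocol
# (e.g. a NAS head speaking HDFS) -- no local HDFS copy required.
df_hdfs = spark.read.parquet("hdfs://storage-array.example.com:8020/analytics/events")

# Read the very same data over an NFS mount visible to every node.
df_nfs = spark.read.parquet("file:///mnt/nfs/analytics/events")

# The analysis code is identical either way.
df_hdfs.groupBy("event_type").count().show()
```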

That means the data can continue to be used by enterprise applications, while big data analytics systems such as Hadoop can also use it without it having to be copied to a Hadoop cluster first. In fact, multiple analytics programs can access the data at the same time. Products from the likes of DDN and EMC Isilon may not be cheap, but in many cases a TCO analysis will show that they make more sense than “traditional” big data storage if the data can be shared instead of copied, and if more people in an organization can use the data.

Software Defined Big Data Storage

SANs tend to be expensive, but an alternative type of big data storage is the creation of virtual storage pools using software defined storage (SDS) solutions such as EMC’s ViPR software. Other software, such as DriveScale, allows you to adjust the ratio of compute to big data storage in each cluster node on the fly, in software.

In fact you could argue that HDFS is already a type of SDS, but one that needs to be improved upon to get away from the commodity-server-node architecture it encourages. “I think we will begin to see more SDS options that are more flexible than HDFS for big data storage,” Matchett says.

Hyperconverged Big Data Storage

He adds that SDS leads on to the idea of hyperconverged infrastructure for big data analytics – an approach that is becoming increasingly popular. “DDN offered this with its hScaler appliance, but it was ahead of its time and has now been sunsetted. But now people are more interested.” Cisco, HP, IBM and a number of startups now offer hyperconverged infrastructure for big data that includes big data storage as well as compute and networking.

Another way to take advantage of hyperconverged infrastructure is to operate big data analytics in a virtualized environment, spinning up a cluster for each user or each application on the same physical hardware and implementing QoS to ensure that performance is acceptable. By virtualizing your Hadoop infrastructure it’s possible to create separate virtual machines for compute nodes and big data storage nodes. Then you can spin up more compute nodes when you need them, while keeping the big data storage nodes running, sharing data with different compute clusters.

A benefit of virtualizing Hadoop is that when network traffic between nodes is local (i.e. going from one VM to another within the same physical host), latency is minimal because the traffic goes through the hypervisor’s virtual switch rather than out to a physical one. That means virtualized Hadoop clusters can actually outperform physical ones.

Big Data Storage Containers

Virtualization may soon be overtaken by containerization, and there are a couple of reasons for this. One is that containers are lighter weight, use fewer resources and are quicker to spin up. The other is that while VMs have state, containers are immutable. That means they are easier to create, kill and then restart – as long as the big data storage is not included in the container. “People who use containers today use Flocker or something similar to redirect local storage,” explains Matchett.
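As a rough illustration of that pattern, the sketch below uses the Docker SDK for Python to run a stateless worker container whose data lives on an externally managed named volume. The “flocker” driver name and the image are assumptions for illustration only, not a description of any particular product’s setup.

```python
# Minimal sketch using the Docker SDK for Python (docker-py): the container
# stays disposable while its data lives on an externally managed volume.
# The "flocker" driver name and the image are illustrative assumptions.
import docker

client = docker.from_env()

# Named volume backed by an external volume driver (Flocker or similar),
# so the data outlives any single container.
vol = client.volumes.create(name="analytics-data", driver="flocker")

# Run a worker container with the volume mounted; the container holds no state.
worker = client.containers.run(
    "my-analytics-worker:latest",  # hypothetical image
    volumes={"analytics-data": {"bind": "/data", "mode": "rw"}},
    detach=True,
)

# Kill and recreate the container at will -- the volume, and the data on it,
# remain available to the next container that mounts it.
worker.remove(force=True)
```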

An alternative approach would be to run big data analytics entirely on cloud infrastructure, and many companies provide this. A problem, though, is moving all your data to big data storage in the cloud. One way to avoid this is to keep your compute and big data storage containers on premises, but use the cloud for container-enabled cluster setup and management. A new startup called Galactic Exchange recently unveiled a system that promises to do just that.

Big Data Flash

One of the problems of using a SAN rather than DAS for big data storage is that the data is further from the processor nodes, which increases latency and therefore slows jobs down. One solution for reducing this latency could be to use all-flash arrays for big data storage.

Back in 2015, IDC coined a term for this: Big Data Flash. “Big Data Flash solutions consistently deliver sub-millisecond latencies, scale to hundreds of petabytes, exhibit enterprise class reliability, availability, and serviceability, and bring the secondary economic benefits of flash deployment at scale to big data applications,” said Eric Burgener, a research director with IDC’s Storage Practice.

For super-fast real-time analytics many companies use Spark, an in-memory big data analytics solution. The issue here is that memory is expensive, which limits the practical size of Spark environments. There’s also the issue that DRAM is volatile, so data has to be loaded into memory before it can be processed. Matchett believes there is always a middle ground where in-memory analytics is too expensive, but disk-based analytics is too slow. That would be the area where flash-based big data storage could be interesting.
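A small PySpark sketch of that trade-off: pure in-memory persistence is fastest but capped by expensive DRAM, while a MEMORY_AND_DISK storage level lets the working set spill to local disk (or a flash tier) at the cost of slower access. The dataset path is hypothetical.

```python
# Minimal PySpark sketch of the memory-versus-disk trade-off described above.
# The dataset path is hypothetical.
from pyspark.sql import SparkSession
from pyspark import StorageLevel

spark = SparkSession.builder.appName("storage-level-demo").getOrCreate()
events = spark.read.parquet("hdfs:///analytics/events")

# Pure in-memory caching: fastest, but limited by (expensive) cluster DRAM.
events.persist(StorageLevel.MEMORY_ONLY)
events.groupBy("event_type").count().show()
events.unpersist()

# Spill to local disk -- or a flash tier -- when memory runs out: slower per
# access, but the working set can grow well beyond what DRAM alone allows.
events.persist(StorageLevel.MEMORY_AND_DISK)
events.groupBy("event_type").count().show()
```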

3D XPoint Made For Big Data Storage

But where it gets really interesting is when new storage media – specifically Intel and Micron’s 3D XPoint storage – becomes available. (3D XPoint is slated for release in the next few months.) This new storage medium is, in theory, a thousand times faster than a conventional flash-based SSD, and much cheaper than DRAM. And don’t forget it’s non-volatile too.

That means as a big data storage medium it could be perfect for “Spark-lite” implementations. In other words it could be used as a cheaper, slightly slower alternative to DRAM for huge in-memory (or strictly in-3D XPoint) analytics. Or it could be used to reduce the cost of smaller Spark analytics setups that don’t require the full speed of a true DRAM-based system. And because it’s non-volatile, there’s no issue with loading vast amounts of data into a Spark system’s DRAM before starting.

Another way that 3D XPoint may revolutionize big data storage is by using it in an NVMe over Fabrics setup. NVMe over Fabrics offers latency similar to that of DAS, so a vast central array of 3D XPoint would appear and behave like shared DAS to any processor that needs it. Rather than moving data from Hadoop node to Hadoop node for processing, it would be available almost instantly from shared DAS.

Ultimately the right big data storage solution is the one that suits your organization’s needs most closely. But the good news is that with DAS, NAS, SAN, SDS, virtualization, containers, flash, 3D XPoint, NVMe and the cloud there has never been a wider range of options to choose from.
