NAS Data Storage Buying Guide

Posted on May 26, 2016 By Drew Robb


Organizations have historically preferred Network Attached Storage (NAS) where data manageability was a higher priority than raw performance. Data manageability was particularly important for those with large files or large numbers of files. In recent years, the reach of NAS storage architectures has expanded because of a variety of technologies such as flash. With flash, NAS is able to deliver performance that previously was only available from block-based SAN systems.

So let’s take a look at where we are today.

NetApp

All-flash arrays such as the NetApp AFF use log-structured architectures to match the low-latency performance of flash. Similarly, copy-on-write and redirect-on-write snapshot architectures that were originally developed for NAS systems are now standard in flash arrays to minimize the performance overhead and capacity impact of data protection. NetApp has also added clustering support to its ONTAP operating system to simplify manageability of files at scale.

“Flash makes it possible for customers to consolidate traditionally separate SAN and NAS systems into a single architecture that delivers performance and manageability for mixed workloads,” said Lee Caswell, vice president of product and solutions marketing, NetApp. “We now support ONTAP running natively in AWS as well as on white-box hardware so that customers can leverage the same manageability regardless of what the underlying hardware looks like.”

Dell Fluid File System

Earlier this year, Dell launched version 5 of the Fluid File System (FluidFSv5), a high-performance scale-out NAS designed to address the challenges of managing growth in the number and size of user files. It supports file-based workflows in specialized areas such as video surveillance, media and entertainment, and scientific research. Enhancements include increased scalability under a global namespace, data governance with Dell Change Auditor, and management via PowerShell and a REST API.

“Dell FluidFSv5 is available in a new cost-effective dense solution providing 4PB of capacity for $0.17/GB,” said Travis Vigil, executive director of product management, Dell Storage.

DataCore

DataCore offers unified NAS/SAN with an emphasis on high availability in compact configurations targeted at Windows environments. It recently introduced a version suited for long-retention file shares, cold data and archives with cloud economics. The software scales up to multiple petabytes by adding capacity and scales out by adding I/O processing nodes to accompany larger capacities. Deduplication and compression are built in. It can also be configured as a scale-out file server, all managed from the DataCore SANsymphony software-defined storage platform.

“This cheap and deep bulk storage solution appeals to organizations that prefer the security of in-house control rather than relying on public cloud services,” said Augie Gonzalez, director of product marketing at DataCore Software.

Nexsan

Many NAS tools have morphed into unified appliances. A good example is the Nexsan NST family of unified hybrid storage. It starts around 10 TB and can scale to 5,000 TB. It supports CIFS/SMB, NFS, FTP, Fibre Channel and iSCSI. It also includes a two-tier solid state cache called FASTier, which uses DRAM for L0 writes and reads, and flash memory modules for L1 reads.

Panzura

Panzura Cloud Controllers are certified for Microsoft Azure as a means of enabling an end-to-end global file system (using Panzura controllers on-premises and Panzura controllers running inside Azure). This opens the door to Azure storage being used for all tiers of file storage across distributed locations, as well as an in-cloud NAS that is integrated with the rest of the global file system. Data is secured at rest, as well as in transit between the controllers and the cloud, with FIPS 140-2 certified encryption and key management.

“When the data is only in the corporate datacenter, the performance of applications in the cloud suffers; when the data is in the cloud, the performance of applications in the corporate datacenter suffers,” said Barry Phillips, chief marketing officer of Panzura. “Splitting data between the two causes inefficiencies due to data integrity and versioning issues.”

OpenStack Manila

OpenStack Manila has racked up quite a list of active contributors from the data storage world, including Mirantis, NetApp, Huawei, HP, EMC, SUSE, Hitachi, Red Hat, Dell and Intel.

The Manila shared file system service for OpenStack provides coordinated access to shared or distributed file systems. While it is aimed primarily at file sharing for OpenStack Compute instances, it is accessible as an independent capability across multiple vendor systems and file systems. This expands the reach of OpenStack, which was previously focused on block and object storage.
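For a sense of the workflow, here is a rough sketch of creating and exporting a share with the standard Manila CLI. The share name, size and client subnet are illustrative, and a given cloud may also require a specific share type or share network:

```shell
# Create a 1 GiB NFS share (name and size are illustrative;
# some deployments also require --share-type or --share-network).
manila create NFS 1 --name demo-share

# Allow NFS access from an example client subnet.
manila access-allow demo-share ip 192.168.0.0/24

# Look up the export location, which clients use to mount the share.
manila show demo-share
```

From there, clients mount the reported export location like any other NFS export.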

Amazon EFS

Amazon Elastic File System (Amazon EFS), the file storage service for Amazon Elastic Compute Cloud (Amazon EC2) instances, is said to be easy to use. It provides a simple interface for creating and configuring file systems, and it grows and shrinks capacity automatically. It supports the Network File System version 4 (NFSv4) protocol, and multiple Amazon EC2 instances can access an Amazon EFS file system simultaneously. In that way, it acts as a common data source for workloads and applications running on more than one EC2 instance.
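Because EFS speaks standard NFSv4, attaching it to an EC2 instance is an ordinary NFS mount. The file system ID and region below are placeholders, and the mount options follow the general pattern AWS recommends for NFSv4.1:

```shell
# Create a mount point and attach the EFS file system over NFSv4.1.
# fs-12345678 and us-east-1 are placeholders for your own values.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 \
    -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```

Repeating the same mount on other instances in the VPC gives them a shared view of the file system, which is what makes it useful as a common data source.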

Qumulo

Qumulo Core is said to be data-aware scale-out NAS. The goal is to manage and store enormous numbers of files with built-in real-time analytics directly within the file system itself. Its latest QC-Series hybrid storage appliances use 10 TB Ultrastar He10 hard drives from HGST. They have capacities ranging from 96 TB to 1 PB, and pricing begins at $50,000.

“Successfully using drives this dense requires a fundamental rethinking of the way that scale-out file storage uses disk drives,” said Jeff Cobb, vice president of product management at Qumulo. “With our flash-first hybrid design and sequential rebuild technology, Qumulo Core provides built-in performance acceleration while still delivering on less than one hour rebuild times regardless of file size, even with 6TB, 8TB, and now the highest density 10TB drives.”

Primary Data

This one is a little different. Primary Data does not offer a NAS product or any other storage system. Its data virtualization platform, called DataSphere, is software that moves data across different storage types and protocols, including NAS, according to IT-defined policies. It moves data intelligently and automatically with little disruption. The benefits are said to be more granular visibility into data requirements, workload patterns and resource consumption. In addition, it offers simplified management of a unified storage pool.

“To move data across file, block and object systems, we place all data under a single global namespace through data virtualization,” said Kaycee Lai, senior vice president of product management and sales at Primary Data. “It can create scale-out NAS using x86 servers with local DAS storage, or a single cluster of storage with existing storage.”

Nexenta

NexentaStor delivers unified file (NFS and SMB) and block (FC and iSCSI) storage services, runs on industry standard hardware, scales from tens of terabytes to petabyte configurations, and includes data management functionality. This open source-driven software-defined storage platform is said to reduce storage costs for cloud and enterprise workloads. It uses ZFS and scales up to the PB range. Use cases include VMware cloud backend storage, OpenStack and CloudStack backend storage, generic NAS file services and near-line archive and large-scale backup repositories.

Red Hat Gluster Storage

Red Hat acquired Gluster a few years ago. Red Hat Gluster Storage can be employed to help enterprises build distributed NAS services on bare metal, in virtualized and containerized environments, or in the public cloud. Storage clusters can synchronize across long distances and provide standard SMB and NFS interfaces to large numbers of simultaneous clients. This approach works well for rich media, archival, analytics and generic file-sharing workloads.
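As a rough illustration of how such a cluster is assembled, here is a sketch using the standard Gluster CLI. The server names and brick paths are hypothetical, and a production volume would typically use more bricks and tuned options:

```shell
# Join a second server to the trusted pool (run on server1).
gluster peer probe server2

# Create a two-way replicated volume from one brick per server
# (hostnames and brick paths are placeholders).
gluster volume create gv0 replica 2 \
    server1:/data/brick1/gv0 server2:/data/brick1/gv0
gluster volume start gv0

# Clients mount the volume with the native FUSE client,
# or over the NFS/SMB interfaces mentioned above.
mount -t glusterfs server1:/gv0 /mnt/gluster
```

Replication across bricks is what provides the durability; growing the cluster is a matter of probing more peers and adding bricks to the volume.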

“Red Hat offers software-defined storage solutions for next-generation workloads that pool the resources of standard servers and storage media, making tens, hundreds, or thousands of hard drives behave like one,” said Ross Turk, director of product marketing, Red Hat Storage. “They are distributed systems designed for durability at large scale, they can grow and shrink on demand, and they can work on a large collection of standard hardware from major vendors.”

More NAS Choices

The above selection barely scratches the surface. There are many other NAS tools out there that could easily be included in this guide. EMC VMAX, EMC Isilon, Panasas, HDS, DDN, Ceph (particularly with the new Jewel release), Seagate ClusterStor, IBM and Oracle are just a few of the noteworthy ones.


