SanDisk and IBM this week announced a partnership that lays the groundwork for new software-defined, all-flash storage solutions aimed at cloud providers and enterprises seeking an economical way to deploy flash and speed up their big data applications.
Dubbed InfiniFlash for IBM Spectrum Scale Solution, the offering combines SanDisk’s InfiniFlash all-flash storage arrays and IBM’s Spectrum Scale filesystem software. According to SanDisk, its storage hardware starts at less than $1 per gigabyte, even before deduplication and data compression are taken into consideration.
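To put that price point in perspective, the back-of-the-envelope sketch below works through what sub-$1-per-gigabyte raw pricing implies at a few array sizes. The 2:1 data-reduction ratio is purely a hypothetical assumption for illustration, not a figure from SanDisk or IBM.

```python
# Back-of-the-envelope cost illustration (not vendor pricing):
# SanDisk cites under $1 per raw gigabyte; the 2:1 reduction ratio
# below is a hypothetical assumption, not a published figure.

RAW_PRICE_PER_GB = 1.00   # upper bound quoted by SanDisk, in USD
ASSUMED_REDUCTION = 2.0   # hypothetical dedup + compression ratio

def effective_cost_per_gb(raw_price_per_gb: float, reduction_ratio: float) -> float:
    """Cost per gigabyte of logical (post-reduction) capacity."""
    return raw_price_per_gb / reduction_ratio

for raw_tb in (512, 1024, 2048):
    raw_gb = raw_tb * 1024
    print(f"{raw_tb} TB raw: <= ${raw_gb * RAW_PRICE_PER_GB:,.0f} total, "
          f"about ${effective_cost_per_gb(RAW_PRICE_PER_GB, ASSUMED_REDUCTION):.2f}/GB "
          f"effective at {ASSUMED_REDUCTION}:1 reduction")
```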
Big Blue is contributing its Spectrum Scale distributed filesystem, which provides file, object and integrated analytics services. Based on IBM’s General Parallel File System (GPFS), the technology is tailored to the demands of compute clusters, public clouds and big data analytics.
Together, the companies hope that enterprises will think of flash first while planning their storage infrastructures.
“By combining these solutions, we bring the best of flash, unified storage and software-defined storage together. This solution is a high performance, POSIX-compliant [Portable Operating System Interface], multi-protocol (NFS, CIFS, HDFS and object) storage system that can cater to a broad range of workload requirements and leverage compute from an ecosystem of industry leading partners,” wrote Shailesh Manjrekar, director of Product and Solutions Management for SanDisk, in a blog post.
“Furthermore, it enables private, hybrid and public cloud customers to deliver Infrastructure as a Service (IAAS), by starting small and easily scaling to multiple [petabytes]. Right out of the box, you’ll find best in class [dollar]/IOPS/TB, with tremendous footprint savings,” continued Manjrekar.
InfiniFlash for IBM Spectrum Scale Solution is a versatile all-flash storage foundation, according to Eric Herzog, vice president of product marketing and management for IBM Storage. “These solutions will be designed to break new ground and make all-flash storage exceedingly cost-effective for a wide range of use cases, from high-performance databases to virtualized environments to big data oceans to extra-dense active archive repositories and more,” he said in a statement.
Customers can work their way up from a 3U, 512-terabyte (TB) array to a multi-petabyte system by adding more 3U storage appliances. “The building block SKU provided 16 GB/s of read throughput and 7 GB/s of write throughput in a replica configuration with 2xNSD Servers and 2x192TB InfiniFlash units,” reported Manjrekar.
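As a rough sketch of how that building block scales, the snippet below estimates how many of the quoted blocks (2x NSD servers plus 2x 192 TB InfiniFlash units) would be needed to reach a given raw capacity, and what the aggregate throughput might look like. Linear scaling of capacity and throughput across blocks is an assumption made here for illustration, not a vendor-published result.

```python
import math

# Building-block figures quoted by Manjrekar: 2x 192 TB InfiniFlash units,
# 16 GB/s read and 7 GB/s write per block. Linear scaling across blocks
# is an illustrative assumption only.

BLOCK_RAW_TB = 2 * 192      # raw capacity per building block
BLOCK_READ_GBPS = 16        # quoted read throughput per block
BLOCK_WRITE_GBPS = 7        # quoted write throughput per block

def blocks_for_capacity(target_pb: float) -> int:
    """Building blocks needed to reach a target raw capacity in petabytes."""
    return math.ceil(target_pb * 1024 / BLOCK_RAW_TB)

for target_pb in (1, 4, 10):
    n = blocks_for_capacity(target_pb)
    print(f"{target_pb} PB raw -> {n} blocks, "
          f"~{n * BLOCK_READ_GBPS} GB/s read / {n * BLOCK_WRITE_GBPS} GB/s write "
          f"(assuming linear scaling)")
```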
Moreover, the architecture provides flexible deployment options, he asserted. “InfiniFlash disaggregated deployment allows independent scaling of compute and storage, along with choice of server and networking vendor. Starter (small), performance-optimized and capacity-optimized Rackscale reference architecture consumption models (bundles) meet customers’ varying workload requirements.”