In a previous article, we covered ten data storage applications. But there were just too many good ones to cram everything into one brief top ten listing. So here are another ten. And even with this additional list, we are still doing little more than scratching the surface of the data storage application world.
DataCore Hyper-converged Virtual SAN loads on a cluster of application servers and makes their internal storage shareable as if it were coming from a high-end networked SAN array. Powered by DataCore Parallel I/O technology, the software synchronously mirrors critical data between servers to ensure uninterrupted data access and prevent data loss.
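Synchronous mirroring means a write is not acknowledged until every replica holds it, so a single server failure cannot lose committed data. Here is a toy sketch of that general pattern in Python (class and method names are invented for illustration; this is not DataCore's implementation):

```python
class MirroredVolume:
    """Toy synchronous mirror: a write is acknowledged only after
    every replica has stored the block."""

    def __init__(self, replicas):
        self.replicas = replicas  # list of dicts standing in for server-local disks

    def write(self, block_id, data):
        # Write to every replica before acknowledging. If any replica
        # failed here, the write would not be considered committed.
        for replica in self.replicas:
            replica[block_id] = data
        return True  # ack only after all replicas confirmed

    def read(self, block_id):
        # Any replica can serve the read; all are guaranteed identical.
        return self.replicas[0][block_id]

vol = MirroredVolume([{}, {}])
vol.write("blk-7", b"payload")
assert all(r["blk-7"] == b"payload" for r in vol.replicas)
```

The trade-off this models is latency for safety: every write pays the cost of the slowest replica, which is why synchronous mirroring is usually deployed between nearby servers.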
“Since 2013, there’s been a growing backlash against proprietary storage hardware, particularly from larger IT organizations and cloud service providers,” said Augie Gonzalez, director of product marketing, DataCore Software. “Storage architects have recognized that faster, more flexible and more economical software alternatives can be relied on to meet their reliability and performance objectives.”
SwiftStack, built on OpenStack Swift, bills itself as being like Amazon, Google and Rackspace but inside your firewall. OpenStack Swift is an object storage system provided under the Apache 2 open source license. It can be used to store files, videos, analytics data, web pages, backups, images, virtual machine snapshots and other unstructured data. It is said to be highly scalable and to provide twelve nines of durability. It powers storage clouds for the likes of Comcast, Time Warner, Globo and Wikipedia, and it is used in the public clouds of Rackspace and IBM SoftLayer.
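Swift's data model is deliberately simple: an account holds flat containers, and containers hold named objects addressed over HTTP. A minimal in-memory sketch of that container/object model (illustrative only, not the Swift codebase or its API):

```python
class ObjectStore:
    """Toy model of Swift-style object storage: flat containers
    holding named blobs, addressed by (container, object) name."""

    def __init__(self):
        self.containers = {}

    def put_container(self, name):
        # Containers are a flat namespace -- no nested directories.
        self.containers.setdefault(name, {})

    def put_object(self, container, name, data):
        self.containers[container][name] = bytes(data)

    def get_object(self, container, name):
        return self.containers[container][name]

store = ObjectStore()
store.put_container("backups")
store.put_object("backups", "db-2016-12-01.tar", b"backup bytes")
assert store.get_object("backups", "db-2016-12-01.tar") == b"backup bytes"
```

In real Swift these operations map to HTTP verbs against URLs of the form `/v1/{account}/{container}/{object}`, which is what makes the system easy to front with ordinary web infrastructure.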
SwiftStack also provides the load balancing that scale-out storage solutions often need. Delivered as a software feature, this integrated load balancing gives admins more control over maximizing the throughput performance of a cluster.
“For storage teams that do not have control over load balancing hardware in the network, SwiftStack provides integrated load balancing as a software feature to give admins control over maximizing the throughput performance of a cluster,” said Mario Blandini, chief evangelist, SwiftStack.
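In its simplest form, load balancing a scale-out cluster means spreading incoming requests evenly across proxy nodes. A minimal round-robin sketch (the node names are invented, and SwiftStack's actual balancer is considerably more sophisticated than this):

```python
import itertools

class RoundRobinBalancer:
    """Distribute requests evenly across a set of proxy nodes."""

    def __init__(self, nodes):
        # cycle() loops over the node list forever.
        self._cycle = itertools.cycle(nodes)

    def next_node(self):
        # Each call hands back the next node in rotation.
        return next(self._cycle)

lb = RoundRobinBalancer(["proxy-1", "proxy-2", "proxy-3"])
picks = [lb.next_node() for _ in range(6)]
assert picks == ["proxy-1", "proxy-2", "proxy-3"] * 2
```

Production balancers add health checks and weighting on top of this, but the core goal is the same: no single proxy becomes the throughput bottleneck.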
Amazon Elastic Block Store (EBS) offers persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Amazon EBS volumes are automatically replicated. Users can scale usage up or down rapidly and pay only for what they use. Amazon also offers S3 object storage, as well as Amazon Elastic File System (EFS), Amazon Glacier and Amazon Snowball.
“For smaller deployments and with deployments with a lot of variability, public cloud services such as Amazon EBS might be an option,” said Ashok Rajagopalan, head of product management, Datera.
Datera’s Elastic Data Fabric is described as elastic block storage for on-premises deployment, combining the flexibility of cloud infrastructure (like AWS EBS) with the high performance of traditional storage arrays. It is sold as standalone software or pre-installed on x86 servers. Clusters can mix media types (all-flash, hybrid and/or all-disk nodes in a single cluster), which allows critical or archival data to be moved from one medium to another and performance to be optimized as needed.
“Datera Elastic Data Fabric runs any application securely, on any orchestration stack, at any scale and is policy-based,” said Rajagopalan. “You can deploy it at your own pace with flexibility for all clouds (private, hybrid, public) on-premise and it’s easy to deploy. It’s one universal infrastructure that runs it all.”
Greg Schulz, an analyst at StorageIO Group, called attention to Storage Replica (SR) for Windows Server 2016. SR brings new disaster recovery and preparedness features to Windows Server 2016, in effect delivering zero data loss along with synchronous data protection across racks, buildings and cities. In the event of a disaster, replication between locations ensures that data already resides elsewhere, eliminating the chance of data loss. SR can also switch workloads to safe locations prior to possible outages or disasters. This can be done as a stretch cluster, in a cluster-to-cluster arrangement or server-to-server. SR now supports thin provisioning as well.
EMC ScaleIO creates a server-based SAN from local application server storage; i.e., it converts direct-attached storage into shared block storage. This aids in managing bandwidth consumption and eliminating resource hogging by individual applications, while adding snapshot capabilities, thin provisioning and more. It is essentially a convergence play, unifying storage and compute resources into a single-layer architecture. The platform is hardware agnostic and is designed to scale massively (from three to thousands of nodes) while delivering extreme flash-based performance.
“Throughput and IOPS scale in direct proportion to the number of servers and local storage devices added to the system, improving cost/performance rates with growth,” said Aviv Kaufmann, lab analyst at Enterprise Strategy Group. “Performance optimization is automatic; whenever rebuilds and rebalances are needed, they occur in the background with minimal or no impact to applications and users. The ScaleIO system autonomously manages performance hot spots and data layout.”
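The even spread of data that makes throughput scale with node count is typically achieved by hashing chunks across all devices, so that adding a node relocates only a fraction of the data during a rebalance. A generic consistent-hashing sketch of that idea (not ScaleIO's proprietary data layout algorithm):

```python
import hashlib
from bisect import bisect

class HashRing:
    """Generic consistent-hash ring: each node owns many points on the
    ring, and a chunk maps to the first node point at or after its hash."""

    def __init__(self, nodes, vnodes=100):
        self.ring = sorted(
            (self._hash(f"{n}:{i}"), n) for n in nodes for i in range(vnodes)
        )
        self._keys = [k for k, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, chunk_id):
        # Walk clockwise to the next node point, wrapping around the ring.
        i = bisect(self._keys, self._hash(chunk_id)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
bigger = HashRing(["node-a", "node-b", "node-c", "node-d"])
# Adding a fourth node relocates roughly a quarter of the chunks,
# not all of them -- the essence of a cheap background rebalance.
moved = sum(ring.node_for(f"chunk-{i}") != bigger.node_for(f"chunk-{i}")
            for i in range(1000))
assert 0 < moved < 500
```

This is why rebalances can run in the background with minimal impact: most chunks stay exactly where they were.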
The Microsoft Azure universe includes Blob Storage, Queue Storage, File Storage and Data Lake Storage. Various tiers of storage are available at different performance and price levels, ranging all the way from high-performance hot data to less frequently accessed data and archives. Cool Blob Storage, for example, is Azure’s low-cost option and a good place to dump object data such as backups and compliance or archival data.
Condusiv’s Diskeeper contains features designed to boost storage performance and efficiency. One feature improves storage workload performance and reduces latency by dynamically caching hot reads in idle DRAM. This form of intelligent caching is particularly valuable as a means of raising SSD write speed and extending SSD lifespan. In addition, Diskeeper provides fragmentation prevention so file access is faster.
“Diskeeper uses idle DRAM to serve frequently requested reads without causing memory contention or resource starvation,” said Brian Morin, product marketing manager at Condusiv. “Its fragmentation prevention engine ensures contiguous writes from Windows. This eliminates the small writes that normally take place as they steal throughput.”
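Serving hot reads from idle RAM amounts to keeping a bounded cache in front of the disk and evicting the least-recently-used entries when memory is needed elsewhere. A toy LRU read cache along those lines (illustrative only; Diskeeper's engine sizes its cache dynamically to avoid memory contention):

```python
from collections import OrderedDict

class ReadCache:
    """Bounded LRU read cache in front of a slow backing store."""

    def __init__(self, backing, capacity):
        self.backing = backing        # stands in for the disk
        self.capacity = capacity      # stands in for available idle RAM
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)     # mark as most recently used
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]           # slow path: go to disk
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value

disk = {f"block-{i}": i for i in range(100)}
cache = ReadCache(disk, capacity=2)
cache.read("block-1")
cache.read("block-2")
cache.read("block-1")   # second access is served from the cache
assert cache.hits == 1 and cache.misses == 2
```

The payoff is that repeated reads of hot blocks never touch the storage device at all, freeing its bandwidth for writes.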
Caching and micro-tiering tools such as Enmotus FuzeDrive automate the movement of virtual and physical data pages between flash solid state drives (SSDs) and slower media. FuzeDrive’s data migration engine keeps statistics on virtual pages to determine when pages should be moved to a different tier: infrequently used pages get moved off the fastest tier, replaced by more frequently requested content.
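The promotion and demotion logic described above can be sketched as keeping a per-page access counter and periodically placing the hottest pages on the fast tier. A simplified model (names are invented; FuzeDrive's actual statistics and policies are more elaborate):

```python
from collections import Counter

class MicroTier:
    """Toy two-tier store: the N hottest pages live on the fast tier."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.stats = Counter()   # per-page access counts
        self.fast_tier = set()

    def access(self, page):
        self.stats[page] += 1

    def rebalance(self):
        # Promote the most frequently accessed pages; everything
        # else is implicitly demoted to the slow tier.
        hottest = [p for p, _ in self.stats.most_common(self.fast_capacity)]
        self.fast_tier = set(hottest)

tiers = MicroTier(fast_capacity=2)
for page, count in [("a", 10), ("b", 1), ("c", 7)]:
    for _ in range(count):
        tiers.access(page)
tiers.rebalance()
assert tiers.fast_tier == {"a", "c"}   # the two hottest pages are promoted
```

Unlike a pure cache, tiering moves the data itself, so the fast tier's full capacity counts toward usable storage rather than duplicating what is on the slow tier.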
Red Hat OpenStack Platform 10 enables Ceph Storage 2 out of the box for block storage. The Red Hat Storage Console 2 is included for better usability and improved storage lifecycle management. Automated Ceph deployment for object storage is also included as part of a larger platform for cloud management and monitoring, and 64 TB of Ceph Storage capacity is provided.
“In just a few short years, OpenStack has become a foundation for mission-critical private cloud deployments,” said Radhesh Balakrishnan, general manager, OpenStack, Red Hat.
Photo courtesy of Shutterstock.