Storage startup Gluster has become the latest vendor to develop an open-source storage platform that scales to hundreds of petabytes in a single volume across commodity storage nodes.

The company recently announced the general availability of its Gluster Storage Platform, which combines the open-source GlusterFS file system with an operating system layer and a management interface to aggregate disk and memory resources into a single pool of capacity under a global namespace.

The Gluster Storage Platform can replicate data for high availability and performs real-time error detection and correction within files, both during live operation and during recovery from hardware failures. It also provides integrated management of volumes, data resources and servers with centralized logging and reporting.

Gluster’s senior director of marketing, Jack O’Brien, says the Gluster Storage Platform is different from other scale-out clustered storage platforms because it does not rely on a centralized index to perform data management and movement tasks.

“GlusterFS is the only commercial file system that doesn’t use a centralized metadata server. Other vendors use it because it’s easier to manage, but it creates a bottleneck,” O’Brien claims. “We eliminate that problem by using an algorithm that allows any node in the cluster to locate and place data. It is a big reason why we get linear scalability.”
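To illustrate the idea behind that claim, the sketch below shows a simplified hash-on-path placement scheme: because every client and node can compute a file's location from the path alone, no lookup against a centralized index is required. The node names, hashing choices and function names here are illustrative assumptions, not GlusterFS internals.

```python
# Illustrative sketch only: hash-based placement with no central metadata
# server. Node list, hash function and layout are hypothetical.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]  # assumed cluster members

def locate(path: str) -> str:
    """Any participant derives a file's home node from its path alone,
    so no centralized metadata lookup (and no bottleneck) is involved."""
    digest = hashlib.sha1(path.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(NODES)
    return NODES[bucket]

# Every node computes the same answer independently:
print(locate("/vm-images/web01.img"))
print(locate("/home/alice/report.doc"))
```

A production system would use a more elaborate scheme (for example, hash ranges that can be rebalanced when nodes are added), but the principle is the same: placement is computed, not looked up.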

O’Brien also touts the Gluster Storage Platform as a virtual storage pool for virtual server deployments. He claims the software ensures uninterrupted operation of virtual machines (VMs).

Replicated VMs can continue operating in the event of hardware failure, and recovery is performed in the background without requiring a restart or blocking I/O to the live VM, according to O’Brien. The Gluster Storage Platform uses checksum-based healing, which detects and corrects errors in the affected portions of a VM image rather than resynchronizing the entire image.
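The sketch below shows how checksum-based healing can work in principle: per-block checksums of a stale replica are compared against a healthy one, and only mismatched blocks are copied. The block size and function names are assumptions made for illustration, not Gluster's actual implementation.

```python
# Hypothetical sketch of checksum-based healing for a replicated image:
# only blocks whose checksums differ are repaired, so a multi-gigabyte
# VM image is not resynchronized wholesale.
import hashlib

BLOCK_SIZE = 128 * 1024  # assumed healing granularity

def block_checksums(data: bytes) -> list[str]:
    """Checksum each fixed-size block of the image."""
    return [hashlib.md5(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def heal(stale: bytearray, healthy: bytes) -> int:
    """Repair 'stale' in place from 'healthy'; return the number of
    blocks that actually had to be copied."""
    repaired = 0
    for i, (a, b) in enumerate(zip(block_checksums(bytes(stale)),
                                   block_checksums(healthy))):
        if a != b:
            start = i * BLOCK_SIZE
            stale[start:start + BLOCK_SIZE] = healthy[start:start + BLOCK_SIZE]
            repaired += 1
    return repaired
```

Because only the divergent blocks are touched, the healthy replica can keep serving I/O while the repair runs in the background.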

Gluster Storage Platform is now available with subscriptions starting at $1,500 per storage node per year.
