Ibrix debuts parallel file system

By Heidi Biggar

At Linux World last month, four-year-old start-up Ibrix demonstrated how its FusionFS parallel file system, the central component of the company's recently announced Fusion Software Suite, overcomes scalability issues with Linux clusters.

The FusionFS file system allows users to build a single, segmented file system that can scale to 16PB under a single name space and deliver up to 1TBps of aggregate throughput, according to company officials. This compares to basic Linux clusters (without sophisticated file systems), which face a 2TB file-system limit (16TB is in the works) and can suffer performance degradation as computational nodes are added (see figure).


Ibrix gets around the 2TB cluster limitation by breaking the file system into multiple segments, which together form a single file system with a single global name space. The file system scales linearly as nodes (servers) are added, claim company officials.
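Ibrix's internals are proprietary, but the general idea of a segmented file system can be sketched in a few lines. In this toy illustration (not Ibrix's actual implementation), each path hashes deterministically to one of N segments, so clients see a single namespace while data and metadata load spread across segment servers, and capacity grows by adding segments:

```python
# Toy sketch of a segmented namespace -- purely illustrative,
# not Ibrix's actual design. Each path hashes to one segment;
# clients address one logical tree regardless of segment count.
import hashlib


class SegmentedNamespace:
    def __init__(self, num_segments):
        self.num_segments = num_segments
        # Each "segment" stands in for a server owning part of the tree.
        self.segments = [dict() for _ in range(num_segments)]

    def _segment_for(self, path):
        # A deterministic hash gives every node the same view of
        # which segment owns a given path -- no central lookup.
        digest = hashlib.md5(path.encode()).digest()
        return int.from_bytes(digest[:4], "big") % self.num_segments

    def write(self, path, data):
        self.segments[self._segment_for(path)][path] = data

    def read(self, path):
        return self.segments[self._segment_for(path)][path]


ns = SegmentedNamespace(num_segments=4)
ns.write("/data/seismic/run1.dat", b"trace data")
assert ns.read("/data/seismic/run1.dat") == b"trace data"
```

Because placement is computed rather than looked up, adding segments adds both capacity and I/O paths, which is the property behind the linear-scaling claim.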

“Economics are driving a shift from monolithic architectures to commodity [Linux-based] architectures,” says Sudhir Srinivasan, chief technology officer at Ibrix. “The goal is to deliver the same aggregate I/O throughput across thousands of nodes in a cluster [that you can get from a handful of nodes] in a monolithic architecture.”

Ibrix’s FusionFS is one solution, although other options are available from vendors such as Hewlett-Packard (StorageWorks Scalable File Share), Isilon (OneFS), and Panasas (Active Scale File System), among others. Microsoft also offers its Distributed File System (DFS), but only for Windows environments, while ADIC (StorNext), IBM (TotalStorage SAN FS), and SGI (CXFS) offer shared file systems.

“The primary difference between Ibrix’s file system and many of the others is scale,” says Steve Duplessie, founder and senior analyst with the Enterprise Strategy Group (ESG) consulting firm. “Ibrix has a true distributed lock manager, which means [the file system] doesn’t run out of gas at 16 nodes like some other file systems.”
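The "lock manager" Duplessie refers to arbitrates which node may touch which byte range of a shared file; a distributed lock manager does this across the whole cluster rather than on one server. Ibrix's protocol is not public, but the single-node analog is standard POSIX byte-range locking, sketched here with Python's `fcntl` module (Unix only):

```python
# Single-node analog of byte-range locking via POSIX fcntl.
# A distributed lock manager coordinates ranges like these across
# cluster nodes; this sketch shows only the local mechanism.
import fcntl
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.dat")
with open(path, "wb") as f:
    f.write(b"\0" * 4096)  # pre-size the shared file

with open(path, "r+b") as f:
    # Exclusively lock bytes 0..1023; concurrent writers can still
    # lock and modify other, non-overlapping ranges of the file.
    fcntl.lockf(f, fcntl.LOCK_EX, 1024, 0, os.SEEK_SET)
    f.seek(0)
    f.write(b"node-1 data")
    fcntl.lockf(f, fcntl.LOCK_UN, 1024, 0, os.SEEK_SET)
```

Because locks cover ranges rather than whole files, many nodes can write to different parts of one large file concurrently, which is what keeps a cluster from "running out of gas" as node counts grow.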

Additionally, Duplessie says that Ibrix’s file system is built for both high-throughput and I/O-intensive applications.

“Some of the other file systems are designed for throughput versus I/Os, which makes them well-suited for applications such as data streaming or seismic data processing where there are a smaller number of very large files,” says Duplessie. “Ibrix, on the other hand, designed the file system to be more transactional in nature, which has more general-purpose appeal.”

While Ibrix is initially targeting high-performance computing (HPC) applications such as seismic processing and computational fluid dynamics simulation, its architecture is not limited to these types of environments. The file system can be used in cluster, grid, and mainstream commercial environments.

In addition to the parallel file system, software components include various high-availability features (FusionHA) and a Web-based management system (FusionManager). (See “at a glance,” left). The software suite is available through Ibrix’s channel partners, including Dell.



FusionFS
  • Scalable parallel file system with support for multi-node byte-range locking.
  • Provides linear scalability of a single file system, directory, or file.
  • Supports online file system expansion; an integrated Logical Volume Manager allows users to deploy and configure a storage pool using any combination of SAN-attached storage or DAS.
  • Accessible from clients using NFS and CIFS protocols or Ibrix’s FusionClient driver. (FusionClient is supported on a variety of Linux kernels.)


FusionHA
  • Provides component-level and active-active server-to-server fail-over, plus dynamic load balancing, which allows administrators to re-provision servers and storage within a single file system.
  • Ability to monitor a variety of hardware and software elements, detect failures, and automatically transfer control to another server without loss of service.


FusionManager
  • GUI or CLI interface for administering and monitoring Fusion clusters and file systems.

This article was originally published on March 01, 2005