At LinuxWorld last month, four-year-old start-up Ibrix demonstrated how its FusionFS parallel file system, the central component of the company's recently announced Fusion Software Suite, overcomes scalability issues with Linux clusters.

The FusionFS file system allows users to build a single, segmented file system that can scale to 16PB under a single name space and deliver up to 1TBps of aggregate throughput, according to company officials. In contrast, basic Linux clusters (without sophisticated file systems) are limited to 2TB file systems (a 16TB limit is in the works) and can suffer performance degradation as computational nodes are added (see figure).

Ibrix gets around the 2TB cluster limitation by breaking the file system into multiple segments, which together form a single file system with a single global name space. The file system scales linearly as nodes (servers) are added, claim company officials.
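To make the segmented approach concrete, the following sketch (in Python, purely for illustration) shows how a file system might present one global name space while spreading data across segments owned by different servers. The node names and the simple hash-based placement policy are assumptions for the example, not a description of Ibrix's actual allocation logic:

# Hypothetical sketch (not Ibrix code): one global name space, with data
# spread across segments that are each owned by a different server. Segment
# selection here uses a simple hash of the path; a real placement policy
# would be considerably more sophisticated.
import hashlib

SEGMENT_SERVERS = ["node01", "node02", "node03", "node04"]  # assumed cluster nodes

def segment_for(path: str) -> str:
    """Map a path in the single global name space to the segment that owns it."""
    digest = hashlib.md5(path.encode()).hexdigest()
    return SEGMENT_SERVERS[int(digest, 16) % len(SEGMENT_SERVERS)]

# Clients see one name space; capacity and aggregate bandwidth grow as
# servers (and therefore segments) are added.
for path in ("/data/seismic/run42.dat", "/home/alice/results.csv"):
    print(path, "->", segment_for(path))

Because each segment is served by its own node, adding servers adds both capacity and aggregate bandwidth, which is the scaling behavior Ibrix claims for FusionFS.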

“Economics are driving a shift from monolithic architectures to commodity [Linux-based] architectures,” says Sudhir Srinivasan, chief technology officer at Ibrix. “The goal is to deliver the same aggregate I/O throughput across thousands of nodes in a cluster [that you can get from a handful of nodes] in a monolithic architecture.”

Ibrix’s FusionFS is one solution, although alternatives are available from vendors such as Hewlett-Packard (StorageWorks Scalable File Share), Isilon (OneFS), and Panasas (ActiveScale File System). Microsoft also offers its Distributed File System (DFS), although only for Windows environments, and shared file systems are available from ADIC (StorNext), IBM (TotalStorage SAN FS), and SGI (CXFS).

“The primary difference between Ibrix’s file system and many of the others is scale,” says Steve Duplessie, founder and senior analyst with the Enterprise Strategy Group (ESG) consulting firm. “Ibrix has a true distributed lock manager, which means [the file system] doesn’t run out of gas at 16 nodes like some other file systems.”
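For readers unfamiliar with the term, the toy sketch below illustrates the general idea behind a distributed lock manager: lock mastership is spread across the cluster's nodes rather than funneled through a single lock server, so lock traffic grows with the cluster instead of bottlenecking on one machine. This single-process example uses assumed node names and a simple hashing policy; it does not describe Ibrix's implementation:

# Toy, single-process illustration of the idea behind a distributed lock
# manager (DLM): lock mastership is spread across cluster nodes instead of
# being funneled through one lock server. This is NOT Ibrix's implementation;
# the node names and hashing policy are assumptions for the example.
from threading import Lock

class ToyDistributedLockManager:
    def __init__(self, nodes):
        self.nodes = nodes
        # Each node "masters" the locks for the resources that hash to it.
        self.lock_tables = {node: {} for node in nodes}

    def master_for(self, resource: str) -> str:
        return self.nodes[hash(resource) % len(self.nodes)]

    def acquire(self, resource: str) -> Lock:
        master = self.master_for(resource)
        lock = self.lock_tables[master].setdefault(resource, Lock())
        lock.acquire()
        return lock

dlm = ToyDistributedLockManager(["node01", "node02", "node03"])
lk = dlm.acquire("/data/shared/file.dat")  # the request goes to the resource's master node
lk.release()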

Additionally, Duplessie says that Ibrix’s file system is built for both high-throughput and I/O-intensive applications.

“Some of the other file systems are designed for throughput versus I/Os, which makes them well-suited for applications such as data streaming or seismic data processing where there are a smaller number of very large files,” says Duplessie. “Ibrix, on the other hand, designed the file system to be more transactional in nature, which has more general-purpose appeal.”
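The distinction Duplessie draws can be illustrated with two access patterns (hypothetical sizes and counts, not benchmark figures): streaming workloads read a few very large files sequentially, where raw bandwidth dominates, while transactional workloads issue many small operations at scattered offsets, where I/O operations per second dominate:

# Hypothetical numbers, not benchmark data: two access patterns that stress
# a file system very differently.
import os, random

def streaming_read(path, chunk=8 * 1024 * 1024):
    """Sequential reads in large chunks: a few big files, bandwidth-bound."""
    with open(path, "rb") as f:
        while f.read(chunk):
            pass

def transactional_reads(path, count=10_000, size=4096):
    """Many small reads at random offsets: IOPS-bound rather than bandwidth-bound."""
    file_size = os.path.getsize(path)
    with open(path, "rb") as f:
        for _ in range(count):
            f.seek(random.randrange(max(1, file_size - size)))
            f.read(size)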

While Ibrix is initially targeting high-performance computing (HPC) applications such as seismic processing and computational fluid dynamics (CFD) simulation, its architecture is not limited to these types of environments. The file system can be used in cluster, grid, and mainstream commercial environments.

In addition to the parallel file system, software components include various high-availability features (FusionHA) and a Web-based management system (FusionManager). The software suite is available through Ibrix’s channel partners, including Dell.