Special effects and shared file systems

Posted on September 01, 2004


The Orphanage visual effects studio leverages a shared file system and a SAN to coordinate artists and save time and money.

By Michele Hope

The Orphanage, a San Francisco-based special-effects studio, is no stranger to explosions. In fact, the company's nearly 200 artists—whose credits include the creation of 2D and 3D special effects for such feature films as HellBoy, Day After Tomorrow, Sky Captain, Spy Kids 3D, and Charlie's Angels II—tend to immerse themselves in the art of how to make explosions look more devastatingly real on the screen.

The group's impressive arsenal of credits has led to an explosion of sorts within the company as well, with staff growing from 20 employees about a year and a half ago to almost 200 today. And according to Nicholas McDowell, director of IT, The Orphanage is just at the start of an exponential growth curve.

With ultimate plans for a staff of 500 to 600 employees, The Orphanage gives new meaning to the phrase "explosive growth." It also highlights how important it's been for McDowell and his four-person IT team to get the most out of the company's current IT systems and storage—all while developing an underlying architecture that is easy to scale at a moment's notice.


In a few years, The Orphanage's storage system has expanded from a single server with 2TB of disk storage to a nine-array SAN holding 18TB.

The Orphanage's storage needs have also grown exponentially since McDowell joined the company a few years back. Take the recent HellBoy special-effects project, which at its peak involved almost 10TB of storage capacity and more than 100 of The Orphanage's artists. "At one point during HellBoy we were generating 500GB of new data a day," says McDowell. "We did some things that were pretty difficult."

When McDowell joined The Orphanage, the studio had one Apple Xserve server, relegated to the mailroom, and only 2TB of locally attached storage. Today, the studio has nearly 20TB of storage in its core production system: a SAN with nine SGI TP9100 disk arrays and InfiniteStorage Shared Filesystem CXFS cluster software. (Each array has a capacity of 2TB, for a total of 18TB.)

To clients on the network, the CXFS cluster presents itself as a massive, scalable NAS device: eight server nodes at the front of the SAN act as NAS heads, exporting the SAN's shared file system to the rest of the network over CIFS (via open-source Samba) and NFS, so files can be saved or accessed from anywhere on the network.
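
That NAS-head arrangement amounts to exporting the same CXFS mount point over both protocols. The snippet below is a minimal sketch of what the export configuration on one of the Linux heads might look like; the share name, mount point, and subnet are hypothetical illustrations, not details of The Orphanage's actual setup.

    # /etc/samba/smb.conf -- illustrative CIFS share stanza (hypothetical names)
    [production]
        # CXFS volume mounted locally on this NAS head
        path = /mnt/cxfs/production
        read only = no
        browseable = yes

    # /etc/exports -- the same CXFS mount exported over NFS (hypothetical subnet)
    /mnt/cxfs/production 192.168.10.0/24(rw,sync)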

Three of the eight servers are SGI Origin systems (models 300 and 350) running the IRIX operating system. The other five are Linux servers from a variety of manufacturers (see "The Orphanage's SAN at a glance," below, for configuration details).

The Orphanage's artists save and retrieve data from the shared file system via the three IRIX servers, while the company's renderfarm communicates with the file system via the Linux servers. Artists use Adobe After Effects software along with custom plug-ins to develop special-effects sequences, and then send their work to the renderfarm, which consists of 200 Windows-based dual-processor Intel systems from BOXX Technologies.

"When the artists are finished manipulating the files, they submit them to the renderfarm. The farm then cranks it out," says McDowell. "It's the workhorse for the artists' work. To render out a shot—which requires up to 300 frames—efficiently and quickly, you need a massive amount of rendering power." So, instead of trying to do it locally and going home for the day while it renders, the artists send files to the renderfarm."

According to McDowell, this process also demonstrates the power of the CXFS shared file system. "An artist can be reading from the same file as the renderfarm works on it," he says.
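
The sketch below, again hypothetical rather than a description of The Orphanage's tools, shows the kind of workflow that concurrent access makes easy: an artist-side script that watches a shot's output directory on the shared file system and picks up frames as soon as the renderfarm writes them, rather than waiting for the whole shot to finish.

    import glob
    import os
    import time

    RENDER_DIR = "/mnt/cxfs/production/shots/shot_042/renders"   # hypothetical path

    def watch_frames(render_dir, poll_seconds=5):
        """Yield newly rendered frames as they appear on the shared file system."""
        seen = set()
        while True:
            for frame in sorted(glob.glob(os.path.join(render_dir, "*.exr"))):
                if frame not in seen:
                    seen.add(frame)
                    yield frame
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        for frame in watch_frames(RENDER_DIR):
            # A real pipeline would confirm the farm has finished writing the
            # frame (for example, via a sidecar "done" marker) before loading
            # it into a viewer; here we simply report that it is readable.
            print("new frame available:", frame)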

McDowell, who was already familiar with other clustering technologies prior to implementing CXFS, admits to a steep learning curve when it came to exploring the potential of the shared file system. "It's very powerful, but complex," he says. Nevertheless, he says the shared file system has played a pivotal role in The Orphanage's ability to scale its architecture, in terms of both server and storage capacity, without incurring downtime.

"It was very hard to expand before because it involved downtime, where you had to put another card in the server, attach more disks, etc.," McDowell explains. "Now we can expand the cluster without incurring any downtime."

McDowell takes advantage of what he calls vertical and horizontal scalability within the CXFS cluster. "Vertical scalability is adding more things: servers, disks, etc. Horizontal scalability is increasing the hardware within any of those devices. We have a four-processor Origin 350 that can scale to 65 processors. And we could multiply that by 32 nodes. The shared file system has unbelievable scalability," he says.

Future plans

How does McDowell see the system growing over the next year or two to accommodate the company's growth in staff, projects, and storage capacity? He already has plans in the works to increase the number of NAS heads from the current eight nodes to 15. On the storage side, he is looking at more TP9100 arrays, as well as the possibility of a TP9300 or 9500 array, to accommodate the addition of The Orphanage's editorial group (which uses Macintosh platforms) to the CXFS shared file system.


For rendering high-resolution images like this scene from Day After Tomorrow, artists at The Orphanage send their work to the studio's 200-workstation renderfarm. The shared file system allows them to read from a file as the renderfarm works on it.

"We still send gigs and gigs of data a day across the Gigabit Ethernet network. With storage now centralized, it helps to cut down on the amount of data transfers," McDowell explains. "There are still a lot of data transfers going on now because the editorial group is not tied into the CXFS file system." The editorial group is responsible for tasks such as processing all the tapes received in the studio, getting the plates in line, getting frame counts right, and getting the dailies together so that work done that day is available the next morning for review. The team currently operates on its own separate network, which is not yet fully integrated into The Orphanage's production systems.

Once that happens, McDowell anticipates the volume of file transfers will go down considerably, which will translate into less storage capacity required (and, therefore, money saved). "SGI recently released an OS X client for CXFS. Instead of the editorial group having its own storage, they can be [integrated into our central SAN storage]. That would save us a lot of money," says McDowell.

Michele Hope is a freelance writer and owner of TheStorageWriter.com. She can be reached at mhope@thestoragewriter.com.

The Orphanage's SAN at a glance

  • Servers (NAS heads):
    • 2 SGI Origin 350 (IRIX)
    • 1 SGI Origin 300 (IRIX)
    • 2 HP/Compaq ML350 (Linux)
    • 1 Open Source Systems AMD 64 Quad (Linux)
    • 1 SGI Altix 350 (Linux)
    • 1 Boxx 3D workstation (Linux)
  • Disk storage:
    • 9 SGI InfiniteStorage TP9100 disk arrays, each with 2TB
  • Storage software:
    • CXFS shared file system

