Storage propels the creative process

Posted on September 01, 2005


With SANs and NAS, studios are addressing the unique requirements of digital content creation.

By Michele Hope

In today’s digital studios, storage-centric IT networks have become critical components that can either help or hinder the efforts of artists and creators to produce finished work for high-profile film, TV, DVD, and video projects.

With high-capacity storage more affordable than ever before (6TB of networked storage can be purchased for approximately $14,000), studios are rapidly moving toward a more-streamlined, all-digital workflow that relies on centrally accessible, shared disk storage systems to perform all facets of work in progress, including content creation, rendering, editing, color correction, and review. However, the digital approach is not for everyone. Some studios still output to digital videotape, re-ingesting digital video footage back to disk for further editing.

“Studios really want to get out of the world of videotape,” says Tom Shearer, president and CEO of Los Angeles-based Talon Data Systems, a systems integrator that serves the broadcast and entertainment industries. “Everybody is pushing hard to come up with a workflow that lets them stay on disk throughout the production cycle.” Interest is growing in centralized, networked storage technologies such as NAS, SANs, or a combination of the two architectures, and shared file systems layered on top make it easier for multiple users to work on the same files simultaneously.

Understanding the options

Deciding on the right storage technology for production tasks can be a complex process. According to Shearer, studios and postproduction houses today can choose from a wide variety of NAS and SAN solutions, as well as shared file systems from vendors such as ADIC, Isilon, Network Appliance, Panasas, Pillar Data Systems, SGI, and others.

Studios must also choose from a wide range of disk drive technologies that include both high-performance Fibre Channel drives and lower-cost, higher-capacity Serial ATA (SATA) drives.

Shearer says file-based NAS systems can be the better choice when you are working on many small, 1MB frames at a time. This includes short-form work with visual effects, compositing, or frame-by-frame rendering. In contrast, block-based SANs work well when you need to quickly move large blocks of non-sequential, uncompressed data, or when you need to perform real-time writing to, or playback from, disk.

To meet the various storage requirements, it’s now common for studios to have a combination of technologies, such as SAN and NAS as well as Fibre Channel and SATA disk drives. “Often, studios should also have some type of shared file system,” says Shearer. (For more information, see “Shared file systems enhance postproduction,” p. 34.)

Sharing content

Sometimes, the right storage technology for studios comes down to how well it integrates with existing processes. Success is measured by how well artists can focus on what they do best: creating and editing content, as opposed to waiting for files to open, frames to render, or lengthy data transfers to complete.

The goal of such a successful marriage of technology and process is what Reel FX Creative Studios’ executive vice president Dale Carman calls “working creative at the speed of thought.” Reel FX, based in Dallas, works on direct-to-video DVDs, feature films, and TV shows and commercials, such as JCPenney Back to School. To accommodate exponential growth and expansion of its services, Reel FX upgraded its storage from an initial SGI InfiniteStorage NAS 2000 system to what Carman estimates is now about 24TB of storage capacity on a SAN running SGI’s CXFS shared file system.


Reel FX Creative Studios uses an SGI-based SAN and a CXFS shared file system to facilitate their work on commercials such as JCPenney Back to School. In this multiple-element shot for DDB-Chicago, the character’s body, dressed in an articulated fat suit that Reel FX designed, was created by shooting a person against greenscreen.

Reel FX’s SAN includes eight SGI InfiniteStorage TP9300 disk arrays and four TP9300S arrays. The TP9300 uses 2Gbps Fibre Channel disk drives; with the TP9300S, users can mix high-performance Fibre Channel drives with lower-cost, higher-capacity SATA drives. The InfiniteStorage TP9300 has four 2Gbps Fibre Channel host interfaces and as many as 112 drives, for a total capacity of up to 16TB. Operating system support includes SGI’s IRIX, as well as Windows 2000, Linux, Solaris, and NetWare.

Carman notes that one of Reel FX’s primary goals was to centralize its storage resources and provide seamless, simultaneous access to the same data by multiple users. “What it came down to was finding something with enough horsepower,” he says. “We have 150 people accessing the data, plus 400 processors on a render farm accessing the data. The typical way to do that is to segment it out with different servers and storage for different users, but then you run into all sorts of management problems.” The SGI-based SAN and CXFS shared file system solved Reel FX’s performance, content sharing, and storage management issues, and SGI’s guaranteed rate I/O, or GRIO, feature allows the studio to dedicate I/O to specific tasks such as rendering.

Managing pipelines, workflows

Unlike Reel FX, India-based Pentamedia Graphics Ltd. chose to segment its network storage based on the needs of each of its four production groups: 3D modeling and animation, 3D rendering, special effects, and digital editing and mixing. Pentamedia has produced feature films, visual effects, and animation features such as The Legend of Buddha, Ali Baba, Son of Alladin, and Sinbad: Beyond the Veil of Mists.

To serve the storage needs of each group, Pentamedia assigned each of four 5.6TB Nexsan ATABoy2 storage systems to its own subnetwork (one per production group), using either Ethernet (100Mbps or Gigabit) or Fibre Channel connections. According to Riyaz Sheik, general manager of Pentamedia’s animation and production unit, this type of arrangement has allowed his teams to avoid many of the resource-contention and throughput issues experienced by some other studios.


Pentamedia Graphics Ltd. used four Nexsan ATABoy2 storage systems to help it create the Son of Alladin.

“To make the pipeline work better [and to avoid previous bottleneck problems], we had to break production groups and networks into a lot of subnetworks,” Sheik explains.

Sheik chose the Nexsan storage subsystems, which are based on ATA disk drives, for a combination of factors, including pricing, support, and reliability, the latter of which has been tested under extreme conditions. “These products can work in any conditions, from freezing temperatures to hot temperatures and air conditioning failures,” says Sheik. Pentamedia plans to add 12TB to 20TB to its existing 22TB+ of Nexsan storage. (Nexsan subsequently shipped disk arrays based on SATA drives; see August 2005, p. 46, for details.)

The need for speed

Montreal-based Digital Dimension knows what it’s like to almost top out its storage. The 3D animation, motion graphics, and visual effects studio recently had to juggle data storage for two projects simultaneously: Zathura, a full-length feature film, and Magnificent Desolation, a 3D stereoscopic IMAX feature. Digital Dimension has also been recently involved in other high-profile films, including Monster-In-Law and Mr. and Mrs. Smith.

According to Joe Boswell, a lead systems administrator for the studio, work for Zathura alone has required almost 7TB of storage space to accommodate about 200 shots, many of them miniatures. The storage requirement adds up rapidly, since each shot consists of 100 frames, with 30 layers per frame, at a standard 2K resolution of 12MB per frame, he says. For the 3D IMAX film, the studio had to work with two separate plates, shot from two cameras slightly offset from each other, where each 6K frame takes up about 100MB of storage multiplied by two. The studio stores its content on approximately 16TB of disk capacity provided by an Isilon IQ 1920 clustered storage system, which uses 160GB SATA disk drives.
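Those per-frame figures account for the totals Boswell cites. The following is a rough back-of-the-envelope check, sketched in Python and using only the approximate shot, frame, and layer counts quoted above (not exact production numbers):

```python
# Rough capacity estimates based on the approximate figures quoted above.

MB = 1                       # work in megabytes
TB = 1_000_000 * MB          # 1TB ~= 1,000,000MB (decimal, as storage vendors count)

# Zathura: ~200 shots x ~100 frames x ~30 layers, 2K frames at ~12MB each
zathura_mb = 200 * 100 * 30 * 12
print(f"Zathura: ~{zathura_mb / TB:.1f}TB")   # ~7.2TB, in line with the "almost 7TB" cited

# Magnificent Desolation (IMAX 3D): two offset plates, ~100MB per 6K frame
imax_stereo_frame_mb = 100 * 2                # left-eye plate + right-eye plate
print(f"IMAX 3D: ~{imax_stereo_frame_mb}MB per stereo frame pair")
```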


Digital Dimension relied on Isilon’s IQ 1920 clustered storage system and OneFS shared file system to help it develop this mountain-climbing scene for the movie, Mr. and Mrs. Smith. This 2D composite of actress Kerry Washington was derived from two separate plates: one of the actress against the blue screen, and another with the first unit background plate of the mountain face. Intricate rotoscoping work was also performed to show the wind against Washington’s bandana.

Anticipating the peak usage required for the two projects, Digital Dimension moved to the Isilon storage system and Isilon’s OneFS shared file system earlier this year. So far, the studio has been pleased with the system’s speed, as well as the low cost and reliability of the SATA drives compared to more-expensive Fibre Channel components.

The studio’s 2D rendering is the biggest consumer of bandwidth. “Our 2D render nodes work on shots the artists have set up and sent to render. The render nodes are going pretty much all day and all night pulling frames from, and writing frames to, the Isilon system all the time,” Boswell explains, noting that Isilon’s clustered design provides automatic node-balancing for clients across each of the system’s eight 2TB nodes. “We can have eight nodes all pushing about 95MBps, with an aggregate of more than 700MBps. I’ve tested it up to 400MBps, where I was actually overrunning our switch trunks, which was pretty phenomenal.” Isilon’s storage servers use high-speed InfiniBand interconnects.
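Boswell’s aggregate figure follows directly from the per-node numbers. Here is a quick sanity check, sketched in Python using the approximate per-node throughput quoted above:

```python
# Sanity check on the aggregate throughput Boswell describes (approximate figures).

nodes = 8                # eight 2TB Isilon nodes in the cluster
per_node_mbps = 95       # ~95MBps per node, per the quote

aggregate = nodes * per_node_mbps
print(f"Aggregate: ~{aggregate}MBps")   # 760MBps, consistent with "more than 700MBps"
```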

Storage performance has improved over the NAS array the studio previously used. “It used to get so bogged down that people couldn’t browse directories,” says Boswell. “There would be days where we’d have to send people home or ask artists to delete stuff. Before we installed the Isilon systems, storage was always the bottleneck.”

VFX at 140,000 I/Os per second

Meteor Studios knows what it’s like to have to send people home, or split artists into two shifts, to better manage the resource-contention issues that arise when a storage system is close to capacity and working overtime to process thousands of read/write requests per second.

Meteor performed complex visual effects work on one of the longest sequences in the Fantastic Four film. This process involved more than 100 artists working on 240 shots, each depicting just 3 to 4 seconds of the Brooklyn Bridge sequence of the film. Knowing it needed to upgrade its storage system in anticipation of this type of project, the studio decided to explore its options.

Whatever storage system the studio chose had to be able to handle very high I/O rates while allowing for rapid expansion in capacity, according to Jami Levesque, Meteor’s director of technology. The studio considered storage systems from vendors such as BlueArc, Isilon, Maximum Throughput, SGI, and Terrascale before opting for BlueArc’s Titan Storage System.


For complex, multi-layered visual effects sequences like this Brooklyn Bridge scene in the superhero movie, the Fantastic Four, Meteor Studios relied on a 10TB BlueArc Titan storage array to fuel the joint work of more than 100 artists and a render farm consisting of about 130 dual-CPU servers.

Levesque particularly likes the Titan storage system for its modular design, which allowed the studio to grow quickly, adding bandwidth and capacity as needed at a relatively low cost. The Titan also integrated well with the studio’s existing storage arrays from LSI.

Performance was another factor. In one job at Meteor, the Titan storage server clocked 140,000 I/Os per second, which was well above the studio’s typical peak throughput rate of 45,000 to 50,000 I/Os per second. The studio’s Titan system currently includes more than 7TB of capacity on Fibre Channel disk drives and almost 3TB on SATA disk drives.

Video editing on a SAN

The Maine Public Broadcasting Network (MPBN) has learned a thing or two about storage in its efforts to transform itself into a videotape-free operation. The non-profit network has produced a number of TV shows, including the award-winning Quest series. Unfortunately, the process often required a video editor to spend up to 10 hours a week archiving video footage out to tape, or waiting to re-ingest a tape at another station before continuing. Editors at the station’s Bangor and Lewiston locations often used “sneakernet” to physically shuttle tapes between sites to share work.

According to MPBN systems integrator Kevin Pazera, the station’s use of Avid editing stations with non-shareable, direct-attached storage (DAS) was one key cause for inefficiencies at the network. As a result, the economically minded nonprofit station is moving away from high-cost, proprietary systems with DAS to a more “open” SAN configuration.


Video editors and producers at the Maine Public Broadcasting Network used two Compellent SANs at two different locations to store raw and working footage used to create local shows like the award-winning science and nature series, Quest.

MPBN plans to phase in Apple Mac G5s running Final Cut Pro at both of its facilities. For back-end storage, MPBN will be using 30TB of storage capacity on two Fibre Channel SANs from Compellent (one in Bangor and one in Lewiston). MPBN plans to replicate data asynchronously between the two sites.

According to Pazera, the Compellent SAN solution will make a huge difference for video editors, not to mention the station’s other business units whose storage needs will also be served by the SAN. MPBN is using Tiger Technology’s MetaSAN to handle resource contention issues and let each editing workstation bypass the server, connecting directly to the 2Gbps Fibre Channel SAN.

Now, MPBN editors can keep the entire raw footage for each story on disk and work on it from any editing workstation. They can also retire the sneakernet by accessing files directly on the SANs.

“This will be great for our editors because we want them to be editing all the time and not moving data back and forth,” says Pazera. And in the end, that’s the ultimate sign a studio’s storage is doing its job.

Michele Hope is a freelance writer. She can be reached at mhope@thestoragewriter.com.

