Storage technologies meet studio needs

Posted on September 01, 2006


Studios can benefit from relatively new technologies such as SATA and SAS disk drives, as well as network interfaces such as 4Gbps Fibre Channel and InfiniBand.

By Mark Brownstein

Today, all aspects of filmmaking and distribution are changing radically. Much of current film production uses film at only two stages: shooting the original footage and creating the final reels of film.

There are many other changes. Original footage can be captured with digital cameras and then processed, edited, and delivered digitally. In the case of computer animation, the process can be entirely digital: The frames are generated digitally, and editing, assembly of the completed “film,” and delivery to theaters, homes, in-flight movie systems, and so on can all be done without creating a single frame of film. Delivery to movie theaters today can be on disk; in the future, content may be transmitted by satellite or over a broadband connection and stored on tiny multi-terabyte disk drives in digital projectors.

The process

Before exploring specific storage requirements for digital content studios, let’s look at the process of creating digital films.

For movies still using film (and most of them do), dailies are scanned in at 2k (2048 x 1556 pixels) or 4k (4096 x 3112 pixels) resolution, frame by frame. These raw digital frame images become the basic elements of the film being created.

In the past, film editors would go through a film’s footage, frame by frame, and create an edit decision list (EDL). Each frame has a unique frame number, and the EDL is used to create the final, edited master. With digitized frames instead of actual film, the editing process is much the same, but assembling the film merely involves linking the digital frame files in the order the EDL specifies.
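To make that assembly step concrete, here is a minimal sketch of an EDL-driven conform, assuming a simplified EDL structure and a hypothetical frame-file naming scheme (real EDL formats carry considerably more information):

```python
# Minimal sketch of EDL-driven assembly ("conform"). The EDL entries
# and frame-naming scheme are hypothetical, for illustration only.
edl = [
    {"shot": "sc001_tk03", "first": 1041, "last": 1136},
    {"shot": "sc002_tk01", "first": 2210, "last": 2333},
]

def conform(edl):
    """Yield, in order, the digitized frame files that make up the cut."""
    for event in edl:
        for frame in range(event["first"], event["last"] + 1):
            # Each scanned frame is an individual image file, keyed by
            # its unique frame number.
            yield f"{event['shot']}/frame_{frame:07d}.dpx"

master = list(conform(edl))
print(f"{len(master)} frames in the assembled master")
```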

The process goes much further than this, however. Since the frames are digitized, they can be modified as directed by the film’s director, director of photography, or another person involved in preparing the final version of the film. Images can be cleaned up, colors corrected, effects added, and numerous other processes applied to create the final product.


Animation takes a slightly different approach. In this case, each frame is rendered by one or more computers in a render farm, which can comprise thousands of CPUs and often runs 24x7. Unlike scanned film, which must be modified after it is digitized, individual animated frames can simply be re-rendered to reflect any changes. As with digitized film, animation frames can be created at 4k or even higher resolution to produce a master that can be used for everything from IMAX to HDTV, for distribution to a wide spectrum of playback devices.

One second of 4k images (24 frames) can require more than 1.2GB of storage capacity. And for special effects or image overlays, the working files can be considerably larger.
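The arithmetic behind these figures is easy to check. A back-of-the-envelope calculation, assuming uncompressed 10-bit RGB frames packed into 4 bytes per pixel (a common layout in scan formats such as DPX):

```python
# Rough data rates for uncompressed scanned frames, assuming 4 bytes
# per pixel (10-bit RGB as commonly packed). Exact sizes vary by format.
def rates(width, height, fps=24, bytes_per_pixel=4):
    frame = width * height * bytes_per_pixel
    return frame, frame * fps

for name, w, h in [("2k", 2048, 1556), ("4k", 4096, 3112)]:
    frame, per_second = rates(w, h)
    print(f"{name}: ~{frame / 1e6:.0f}MB per frame, "
          f"~{per_second / 1e9:.2f}GB per second at 24fps")
# 2k: ~13MB per frame, ~0.31GB per second
# 4k: ~51MB per frame, ~1.22GB per second
```

The same numbers explain the per-stream bandwidth figures cited later in this article: roughly 300MBps for a real-time 2k stream and about 1.2GBps for 4k.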

“Creating a movie involves far more digitized frames than are used in the actual released film,” says Bob Eicholz, vice president of corporate development at EFILM, a digital intermediate (DI) studio in Hollywood. “At 4k resolution, a two-hour movie during its lifecycle might require more than 40 terabytes.”

Eicholz claims a completed film created at 4k resolution typically requires 9TB to 10TB, whereas a film done at 2k requires only 2.5TB to 3TB. However, he says, “2k is quickly being abandoned in favor of 4k.” The result is huge storage capacity requirements at studios of all sizes.

For its storage requirements, EFILM relies in part on equipment from SGI and file system management software from DataFrameworks. For example, EFILM recently implemented an SGI InfiniteStorage RM660 storage system to add three real-time data streams to a SAN running SGI’s CXFS shared file system software. EFILM also deployed 61TB of disk capacity in an SGI InfiniteStorage TP9700 disk array that’s based on Serial ATA (SATA) disk drives. At the heart of the SAN is a Brocade SilkWorm 48000 director-class Fibre Channel switch.

“More and more movies are digitized earlier in the process,” says Olivier Brun, solution architect, broadband media, at Hewlett-Packard, which sells equipment to studios. “We’re even seeing movies for which shots are directly taken in digital format.” Superman Returns, for instance, was digital from end-to-end and shot with digital cameras rather than film cameras. (For more information on the storage setup behind Superman Returns, see “VFX require high speed and capacity,” p. 38.)

Storage challenges

The new digital studio has faced many technology challenges. For pure processing, dual-core processors will significantly speed the rendering process for animation and special effects. Graphics processing units (GPUs) like those used in high-end video cards for computer gamers are being used to generate special effects much faster than previously possible. What this boils down to is faster creation of an increasing number of large files. One of the basic issues, then, is where to store these files.

This depends on how the file is to be used. For files coming from a scanner, for example, there’s little need for extremely fast storage devices on the write path. However, there may be a significant need to access these files rapidly, so that they can be reviewed, processed, or viewed in real-time. This often requires a “tiered” storage hierarchy that includes relatively low-cost, low-speed storage for frames that are being held, but not worked on, and a transfer to more-expensive, high-speed storage systems for work that is more performance-intensive. It may also involve moving the required frames onto direct-attached storage for fastest access to the frames.
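As a rough illustration of the policy such a hierarchy implements, the sketch below promotes a shot’s frames to the fast tier when active work begins and demotes them when the shot is parked again; the mount points and functions are hypothetical, not any particular vendor’s data-management product:

```python
import shutil
from pathlib import Path

# Hypothetical two-tier layout: low-cost storage for parked shots,
# a high-speed array for shots under active work.
SLOW_TIER = Path("/mnt/nearline")  # inexpensive, lower-speed storage
FAST_TIER = Path("/mnt/online")    # expensive, high-speed storage

def promote(shot: str) -> None:
    """Copy a shot's frames to the fast tier before interactive work."""
    src, dst = SLOW_TIER / shot, FAST_TIER / shot
    dst.mkdir(parents=True, exist_ok=True)
    for frame in sorted(src.glob("*.dpx")):
        shutil.copy2(frame, dst / frame.name)

def demote(shot: str) -> None:
    """Remove the fast-tier copy once work on the shot is finished."""
    shutil.rmtree(FAST_TIER / shot, ignore_errors=True)
```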

Because rendering and scanning are relatively slow processes (a rendered frame can take from several minutes to more than an hour to produce), the need for a fast pipe to move the files to storage is relatively low. However, reads from the disks and transfers of the frames to editors and others working on the film are considerably more demanding. Real-time viewing at 24 frames per second may require bandwidth of 1.2GBps. If more than one person wants to view or work on the files, storage can become a bottleneck that slows the overall creative process.

Storage of the rendered frames does not have to be on expensive storage devices. In fact, arrays from multiple manufacturers are often employed, and NAS, a SAN, or both are commonly used at this phase of production.

“We use both SAN and NAS, but on the production side we use more NAS,” says Derek Chan, head of digital operations at DreamWorks Animation SKG. “We like to create a hierarchy of storage and use caching to serve up NFS read/write operations. And we try to maintain a global namespace so we can easily hop around and get to various data very quickly.”

Chan’s group uses caching methods to load-balance the read/write operations. “We do mostly reads,” says Chan. “In the animation pipeline, there’s a lot of reading and calculating, but a limited number of writes.”
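A read-heavy mix like that rewards caching close to the readers. As a toy illustration (not DreamWorks’ actual mechanism), a read-through cache in front of slow, NFS-served frame storage might look like this:

```python
from functools import lru_cache

# Toy read-through cache in front of slow (e.g., NFS-served) storage:
# repeated reads of the same frame are served from local memory, while
# the occasional write bypasses and invalidates the cache.
@lru_cache(maxsize=256)
def read_frame(path: str) -> bytes:
    with open(path, "rb") as f:
        return f.read()

def write_frame(path: str, data: bytes) -> None:
    with open(path, "wb") as f:
        f.write(data)
    read_frame.cache_clear()  # crude invalidation; real caches are finer-grained
```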

Providing fast access to files is another story. Pacific Title and Art Studio, in Hollywood, CA, upgraded its storage infrastructure to 4Gbps Fibre Channel in July, according to Andy Tran, Pacific Title and Art Studio’s CTO. “We added a 4Gbps Brocade 48000 Fibre Channel switch and an S2A 8500 storage device from DataDirect Networks [DDN],” says Tran. “Our goal was to maintain several 2k streams playing at the same time, and DataDirect guaranteed the required performance for that.

“At this point, we can maintain six or seven 2k read/write streams, moving data around real-time from one volume to the next volume on the SAN,” Tran reports. “We’re probably doing about 1.6GBps, because each 2k stream requires 300MBps and the DataDirect storage system supports multiple applications simultaneously.”

The use of 2k resolution for image manipulation is common in the digital content creation (DCC) industry because of the performance and cost issues associated with 4k resolution. The changes made to the 2k frames are later applied to the 4k frames.

Pacific Title and Art Studio is using DDN’s S2A 8500 storage system primarily for DI and real-time playback, and uses disk arrays from LSI Logic for secondary storage.

According to Bob Woolery, vice president of marketing at DataDirect Networks, the S2A 8500 storage servers can deliver up to 3GBps. The DataDirect storage arrays support Fibre Channel and/or Serial ATA (SATA) disk drives.

A variety of storage vendors now support 4Gbps Fibre Channel host connections. For example, iQstor’s iQ2880 disk array provides four 4Gbps Fibre Channel ports on the front-end and four 4Gbps Fibre Channel loops on the back-end. The iQ2880 allows users to mix Fibre Channel and SATA drives in the same system. With 500GB SATA drives, total capacity is about 120TB. (SATA drives are also available in 750GB capacities, whereas Fibre Channel drives typically max out at 300GB.)

Introduced earlier this year, SGI’s InfiniteStorage 4000 series of disk systems is also based on 4Gbps Fibre Channel host connections (up to eight ports) and drives, and provides up to 800MBps of read/write throughput. Users can mix Fibre Channel and SATA disk drives for a total capacity of up to 56TB. The model 4000 can be upgraded to the InfiniteStorage 4500, and 4Gbps Fibre Channel drive enclosures can be added to SGI’s TP series of disk arrays.

With many productions running more than 20TB in size, a studio that works on dozens of projects concurrently can require petabytes of storage. Even allowing for the fact that finished projects are typically moved off the primary storage systems, the sheer volume of storage required would have been mind-boggling even a few years ago.

In addition, the digital assets used during the production process must, in many cases, be accessible from a variety of operating systems. A boutique graphics shop may be creating special effects with Maya on a Mac; another may be running Maya on a PC, while the main studio and the production houses work with one or more flavors of Windows, Linux, or other versions of Unix. The files have to be easily accessible to everyone who needs to work on them.

As a result, storage vendors that focus on the graphics market typically support a wide variety of operating systems. For example, SGI’s CXFS shared file system supports virtually all operating systems, including most types of Linux and Unix, Windows XP and NT, and the Mac OS.

CXFS is a 64-bit file system that allows shared access to files. The journaling file system makes it appear to users as if all storage is local and available. File recovery after crashes can be done in seconds, and the system can be scaled to millions of terabytes.

New interconnects

Today, 4Gbps Fibre Channel is the primary interface for production storage at most studios. Most Fibre Channel disk array vendors support 4Gbps front-end connections, and 4Gbps host bus adapters (HBAs) are available from vendors such as Atto Technology, Emulex, LSI Logic, and QLogic.

Atto, which specializes in the entertainment market, has been shipping 4Gbps Fibre Channel HBAs for about a year. Sherri Robinson Lloyd, Atto’s director of markets, reports that, particularly in the digital content creation market, there is a rapid shift to 4Gbps. Over the last three months, for example, about 83% of Atto’s HBA sales were 4Gbps, versus only 17% for 2Gbps.

“The digital content creation market has been enabled by 4Gbps, because it gives studios the bandwidth to run high-definition video and audio,” says Robinson Lloyd. “Most studios have moved to HD and are moving to 4k.”

Although the front-end may be Fibre Channel, the disk drives can be Fibre Channel, SATA, SATA-II, SCSI, or the newer Serial Attached SCSI (SAS). SAS is the successor to the parallel SCSI interface and an alternative to Fibre Channel disk drives. One of the advantages of SAS is that it shares connectors with SATA. A studio can start with a SAS system with SATA drives, add SAS drives, and/or replace the SATA drives with SAS drives when the need arises.

“SATA is medium performance, but low cost. The advantage of a SAS backplane is the ability to run high-performance SAS drives and/or high-capacity SATA drives in the same environment,” says Tim Piper, director of marketing at Xyratex.

Although SATA disk drives don’t have the performance/reliability of Fibre Channel drives, they’re now available in capacities up to 750GB and are inexpensive relative to Fibre Channel or SAS drives.

“SAS will generally replace SCSI and erode Fibre Channel’s market share,” says Michael Ehman, chief executive officer of Cutting Edge, which sells large-scale network storage products. Recently, Cutting Edge introduced storage systems that use the InfiniBand interconnect. InfiniBand is an emerging technology that may find acceptance in high-speed server-storage environments.

“A big advantage of InfiniBand is price,” says Laurent Guittard, product manager for infrastructure at Autodesk, which is using InfiniBand internally. “The price per port is very advantageous compared to 10Gbps Ethernet. The throughput is also very high compared to 10GbE. And the latency of InfiniBand is much lower than Ethernet. That’s why we’re using InfiniBand.”

Storage systems vendors such as DataDirect Networks, Isilon Systems, and SGI offer InfiniBand connections. In the case of Isilon, InfiniBand can be used to cluster the company’s storage nodes.

InfiniBand is increasingly becoming a viable choice for DCC studios. “InfiniBand allows us to network 16 CPUs in a single room with very high bandwidth,” says EFILM’s Eicholz. “With networked InfiniBand systems, we can do complicated color manipulations, hit play, and it plays. InfiniBand allows artists to be more creative, to do more ‘what ifs,’ and not worry about waiting for the computer to do the work.”

Pacific Title and Art Studio is also using InfiniBand for rendering. “We’re running a render node and server with Lustre [file system] software,” says Pacific Title and Art Studio’s Tran. “At 2.5GBps, it’s a lot faster than 4Gbps Fibre Channel. I don’t think we’ll saturate the bandwidth of InfiniBand any time soon. And it’s a lot less expensive: A 24-port InfiniBand switch was about $5,000. A 4Gbps Fibre Channel HBA can cost $2,000, vs. $300 for InfiniBand.”

Some storage vendors provide a variety of interface choices, whether it’s Fibre Channel, SAS, or SATA for disk drives, or Ethernet, Fibre Channel, or InfiniBand for external connections.

“We have InfiniBand on our InfiniteStorage 4500 line,” says Louise Ledeen, a segment manager at SGI. “We support 10GbE, too, but on storage devices we primarily offer 2Gbps and 4Gbps Fibre Channel or InfiniBand. The whole idea is to offer users a choice.”

Mark Brownstein is a writer in the Los Angeles area who specializes in storage and technology.


Sidecars expand Power Macs

Although Apple is becoming one of the leading storage vendors (the company recently cracked the list of the top-10 vendors of external disk arrays), many Mac users need storage expansion options that aren’t available from Apple. Recognizing this need, Applied Micro Circuits Corp. (AMCC) this month began shipments of the 3ware Sidecar, a high-speed external disk subsystem that can store up to 2TB of content.

Stephen Burich, owner of Shadowtree Studios and Maya Productions, in San Jose, CA, uses the Sidecar disk array, attached to a Power Mac G5 system, for primary storage of audio and video files, as well as for backing up those files. Burich’s studio provides full audio and music video production, with an emphasis on hip-hop artists.


Stephen Burich, owner of Shadowtree Studios and Maya Productions, uses a 2TB Sidecar disk array from AMCC, connected to a Power Mac G5, for both primary and secondary storage of audio and music video files.

Before installing the Sidecar storage subsystem, Burich used the Power Mac’s internal disk drives in conjunction with a variety of external, stand-alone disk drives. “In that setup I had to back up to either DVD or a separate PC with a tape drive in it,” Burich explains. “I had drives all over the place. It was a mess.”

Compounding the backup problem was the fact that some of Burich’s video files exceeded 25GB. “That makes it very difficult to back up to tape,” he says.

As a primary storage device, Burich uses the Sidecar array for both recording and editing and has the device set up in a RAID-5 configuration. (Sidecar also supports RAID 0, 1, and 10 configurations.) Compatible with the Mac OS X operating system and PCI Express host bus, the Sidecar storage array includes four 500GB, 3Gbps Serial ATA (SATA) II disk drives and a 4x multi-lane connector cable. It costs approximately $1,299. Burich cites the primary benefits as high capacity (2TB) in a single device, as well as high performance. “I’ve recorded up to 16 24-bit tracks, and I haven’t exceeded what it can do,” says Burich. “And for backup, it’s a dream.”
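One footnote on that configuration: in RAID 5, one drive’s worth of capacity holds parity, so the 2TB figure is the raw total; usable space is somewhat lower. A quick check:

```python
# RAID-5 capacity: one drive's worth of space is consumed by parity.
drives, size_gb = 4, 500
print(f"raw: {drives * size_gb}GB, usable: {(drives - 1) * size_gb}GB")
# raw: 2000GB, usable: 1500GB
```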


AMCC’s Sidecar RAID array provides up to 2TB of capacity expansion for Apple Power Mac systems.

AMCC (which acquired storage controller manufacturer 3ware last year) claims performance of more than 200MBps with RAID-5 read operations and more than 150MBps with RAID-5 write operations on the Sidecar arrays, thanks to a 4-port SATA-II RAID controller (which resides in the Power Mac) and AMCC’s non-blocking switched fabric architecture (which the company refers to as “StorSwitch”).

The 3Gbps (approximately 300MBps) performance of SATA II exceeds that of the FireWire (800Mbps) and Hi-Speed USB (480Mbps) interfaces. In addition, a hardware-based RAID controller frees up the Power Mac’s CPU, whereas software-based RAID approaches tie up the host CPU and memory.
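The comparison is easy to verify, keeping in mind that these interface speeds are quoted in bits per second:

```python
# Convert quoted link rates (megabits per second) to MBps upper bounds.
for name, mbps in [("SATA II", 3000), ("FireWire 800", 800), ("Hi-Speed USB", 480)]:
    print(f"{name}: up to ~{mbps / 8:.0f}MBps before encoding/protocol overhead")
# SATA II's 8b/10b encoding reduces its ~375MBps line rate to the
# approximately 300MBps of usable bandwidth cited above.
```

-Dave Simpson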


Duboi slashes wait time with NAS

Duran Duboi is one of France’s premier providers of visual effects for feature films and one of its largest postproduction companies. With 60 application computers and 82 rendering computers, Duboi provides postproduction technology that enables rich visual and special effects.

Duboi has more than 300 feature films to its credit, including the international hit Amelie.

To support a growing workload and the rapid pace of development in postproduction technology, Duboi needed to optimize its existing environment. As an initial investment, the company decided to upgrade its postproduction architecture, including its graphic workstations, rendering computers, and storage systems. The facility needed a storage system that would dramatically reduce wait time, while simplifying the digital effects infrastructure.

After testing a variety of storage systems, Duboi chose Exanet’s ExaStore NAS systems to meet the facility’s performance, capacity, scalability, and file-sharing requirements.


Duboi used two high-speed NAS configurations for some of the work on Amelie.

Duboi’s ExaStore configuration includes a two-node 12TB system and a four-node 17TB system. Both systems use Intel-based servers from SuperMicro and disks from Newnet, a French storage system manufacturer and solution provider. All components are connected via Gigabit Ethernet.

The two-node NAS system is used for film digitization and color correction, and for customer screenings. The high performance and continuous data availability of the storage systems enable smooth, real-time screenings for the film’s producers, as well as for demonstrating the quality of Duboi’s technology to potential customers.

Once a film is digitized, animators add the postproduction effects and send the film to a farm of blade-based rendering computers, which send it to the four-node ExaStore disk system. The traffic on the four-node system is huge, as the animators go back and forth to follow the progress of the effects.

The ExaStore NAS systems provide an aggregate throughput of 750MBps (compared to 150MBps with the facility’s previous storage system). This performance enabled Duboi’s animation teams to work on more and larger projects concurrently. It also let them complete more iterations within their production deadlines, by decreasing the amount of time that animators spent waiting to access and load images from storage, add effects, and send the original images to the rendering computers. In fact, waiting time was cut in half.

Compatible with a variety of operating systems, the ExaStore NAS arrays support a cross-platform environment with multiple concurrent activities. File-sharing capabilities enable all the animators working on the same film (the last Asterix movie alone employed 150 animators) to go back and forth to check their work.


RAID arrays for digital cinema

Arts Alliance Media (AAM) is a London-based digital content management and delivery company in the film industry, with software and services aimed at businesses and consumers. Arts Alliance Digital Cinema (AADC) is a wholly owned subsidiary of AAM, providing digital cinema software, logistics, integration, service, and support.

AADC was awarded a contract from the UK Film Council (UKFC) last year to build and operate the Digital Screen Network (DSN). The DSN is a key part of the UKFC’s strategy for broadening the range of films available to audiences throughout the UK, as well as improving access to specialized films.

Ray Quattromini, managing director of Fortuna Power Systems (a UK-based storage integrator and reseller in Basingstoke, Hampshire), was tasked with developing technical solutions to meet the requirements of the emerging digital cinema industry. “This ambitious project will consist of a network of up to 250 screens, located in approximately 200 cinemas,” he says. “Each cinema will have state-of-the-art digital cinema projection and presentation equipment. It will be the first full-scale digital cinema network in the world.”

Fortuna Power Systems provided advice on products and technical help to optimize the setup of the digital cinema systems, configuring the systems to AAM specifications. AAM recently began rolling out the systems into the field.

For the storage part of the equation, Fortuna specified EonStor A08U-G2421 RAID arrays from Infortrend, in part because of the arrays’ ability to recover from disk failures, which translates into reduced service costs. “We’ll be able to use the remote administration functions to get an early warning of imminent disk failures before they cause a lost show at a cinema,” says Quattromini. The disk arrays are based on high-capacity Serial ATA (SATA) disk drives.

Each cinema screen will have a RAID subsystem, configured with 750GB to 1.5TB of storage capacity, which equates to up to 24 hours of digital cinema content. “We needed [high-capacity] disk systems with dual-host SCSI interfaces and no more than a 2U form factor because all the equipment had to fit within a small rack-mounting area in the digital cinema projector,” Quattromini explains.
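Those figures also imply a modest average bitrate, which serves as a sanity check; assuming the full 1.5TB holds the stated 24 hours of content:

```python
# Average bitrate implied by 24 hours of content in 1.5TB of storage.
capacity_bits = 1.5e12 * 8
seconds = 24 * 3600
print(f"~{capacity_bits / seconds / 1e6:.0f}Mbps average")  # ~139Mbps
```

That works out to roughly 140Mbps on average, a plausible rate for compressed digital cinema content of the period.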

The storage arrays are connected to QuVIS Cinema Players by dual-host VHDCI SCSI interfaces. Each disk array is configured as a RAID-5 volume split into two partitions, one mapped to each SCSI port. Each system includes a hot spare drive in addition to the RAID-5 volume. The arrays are used to store the digital film content at each site.

In evaluating RAID storage systems from various manufacturers, the most important factors for Fortuna were sustained bandwidth, reliability, physical size, and serviceability. Infortrend’s A08U-G2421 was also selected for its SATA-II technology, according to Quattromini.

The single-controller A08U-G2421 storage subsystem includes two Ultra320 SCSI host interfaces and 8 or 12 SATA-II (3Gbps) disk drive interfaces. Other features of the RAID array include tagged command queuing (up to 256 commands), variable stripe size per logical drive, automatic bad-sector re-assignment, and dedicated bandwidth to each drive.

