Advanced file systems solve some of the problems associated with digital intermediate (DI) environments and postproduction workflow inefficiencies.

One of the three major steps in film production is changing. The three traditional steps are image capture and ingest; the intermediate (accepting shot material and producing finished “film” deliverables); and mastering for distribution, projection, and transmission.

Traditionally, a film lab has performed the intermediate step: cutting and splicing negatives, adding optical effects, and printing distribution copies. However, recent advances in technology now allow this step to be performed digitally, hence the term “digital intermediate,” or DI. As an alternative to the film lab, DI is not only cleaner, quicker, and more flexible, but also more practical and convenient.

DI: The basis for workflow efficiency
The output from DI is expected to match, or exceed, the quality of a film intermediary. DI work is performed at high-definition (HD), 2K, and 4K resolutions. From a business perspective, image size costs money. An uncompressed 10-bit log RGB frame requires about 8MB of data at HD resolution and about 12MB at 2K. A 4K frame requires about 48MB, quadrupling storage and networking bandwidth requirements.
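For illustration, the short Python sketch below reproduces these per-frame figures from pixel counts, assuming common scan resolutions (1920×1080 for HD, 2048×1556 for 2K, and 4096×3112 for 4K) and three 10-bit channels per pixel with no header or padding overhead; the exact numbers depend on the file format used.

    # Back-of-the-envelope frame sizes for uncompressed 10-bit log RGB,
    # ignoring file headers and word padding. The resolutions are the
    # common scan sizes assumed here for illustration.
    BITS_PER_PIXEL = 3 * 10  # R, G, and B at 10 bits each

    for name, (width, height) in [("HD", (1920, 1080)),
                                  ("2K", (2048, 1556)),
                                  ("4K", (4096, 3112))]:
        megabytes = width * height * BITS_PER_PIXEL / 8 / 1e6
        print(f"{name}: ~{megabytes:.0f}MB per frame")
    # Prints roughly 8MB, 12MB, and 48MB -- the figures cited above.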

The main task of a DI infrastructure is to move digital film images between the various pieces of equipment in a DI facility. Because high-resolution image files predominate, film sequences require extremely large amounts of data, from 200MB to 1.2GB for every second (24 film frames). A DI facility is typically forced to use several types of data networking technology, applied to different areas, to achieve an efficient workflow and avoid bottlenecks. To maintain this performance level, applications and storage systems, in addition to sophisticated networking technology, must continuously deliver data at the required rate while absorbing the demands placed on the network by other users. Therefore, choosing the correct infrastructure hardware and software components and using networking technology advantageously are imperative.
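The range quoted above follows directly from the per-frame sizes and the 24-frame-per-second playback rate, and every additional concurrent user adds the same amount again. The sketch below works through that arithmetic (the four-stream figure is purely illustrative):

    # Sustained bandwidth needed for real-time playback at 24 frames/sec,
    # using the per-frame sizes cited above; the concurrent-user count is
    # illustrative only.
    FPS = 24
    FRAME_MB = {"HD": 8, "2K": 12, "4K": 48}
    USERS = 4

    for name, frame_mb in FRAME_MB.items():
        per_stream = frame_mb * FPS  # MB/s for one real-time stream
        print(f"{name}: {per_stream}MB/s per stream, "
              f"{per_stream * USERS / 1000:.2f}GB/s for {USERS} streams")
    # One HD stream needs ~192MB/s and one 4K stream ~1,152MB/s -- the
    # 200MB-to-1.2GB-per-second range cited above.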

SANs with dedicated Fibre Channel networking are the primary method for providing high-performance shared storage in DI environments. SANs provide applications with direct access to files and faster access to large files. A shared file system is a critical component of a DI SAN infrastructure. Shared file systems are cross-platform software packages that allow clients and applications on different operating systems (e.g., Mac OS, Windows, and Unix) to access and share the same storage.

Shared file systems also provide a single, centralized point of control for managing DI files and databases, which can help lower total costs by simplifying administration. Shared file systems typically allow administrators to manage volumes, content replication, and point-in-time copies from the network. This capability provides a single point of control and management across multiple storage subsystems.

Shared file systems can accommodate both SAN and Gigabit Ethernet-based NAS clients side-by-side to offer a wide scope for sharing and transferring content. Although NAS does not perform as well as SAN, it is easier to scale and manage and is often used for lower-resolution projects.

Shared file systems require metadata servers that can support the real-time demands of media applications. In large postproduction facilities with many concurrent users, each application can generate thousands of requests for video and audio files. In DI applications, requests can number as many as 24 file requests per second per user. Metadata servers, and the networks that support shared file systems, must be able to sustain these access demands. Out-of-band metadata networks can provide a significant advantage over in-band designs, in which metadata shares the same network link as the media content, because metadata and content do not compete for the same bandwidth.
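A rough sense of the scale involved comes from multiplying the per-user request rate by the number of concurrent users, as in the sketch below (the user counts are illustrative):

    # Metadata load implied by frame-per-file playback: opening 24 frame
    # files per second means roughly 24 metadata requests per second per
    # user. The user counts below are illustrative.
    REQUESTS_PER_SEC_PER_USER = 24

    for users in (10, 50, 100):
        total = users * REQUESTS_PER_SEC_PER_USER
        print(f"{users} concurrent users -> ~{total} metadata requests/sec")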

In a hardware-based RAID storage system, as the number of concurrent users increases, the stripe group must grow to meet the total bandwidth demand without dropping frames. High-resolution files require significant increases in bandwidth for each additional user, forcing RAID expansion. As stripe groups grow, it becomes increasingly difficult to maintain data synchronization, calculate parity, drive all of the ports, and preserve data integrity.
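The sizing pressure can be seen with a simple calculation, sketched below under the assumption of roughly 60MB/s of sustained throughput per spindle (an illustrative figure; actual rates depend on the drives and controllers):

    import math

    # Rough stripe-group sizing: drives needed to sustain a given number
    # of real-time streams. The 60MB/s sustained rate per spindle is an
    # assumed figure for illustration, not a measured one.
    DRIVE_MBPS = 60
    STREAM_MBPS = {"2K": 12 * 24, "4K": 48 * 24}  # MB/s per stream

    for name, mbps in STREAM_MBPS.items():
        for users in (1, 4, 8):
            drives = math.ceil(users * mbps / DRIVE_MBPS)
            print(f"{name}, {users} user(s): at least {drives} drives")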

When concurrent high-resolution content users must rely on large file-based RAID arrays and large network switches, performance is difficult to maintain, and infrastructure problems arise. Spindle contention becomes an issue when multiple users request the same content within a stripe group; available bandwidth is reduced, variable latencies are created, and the file system cannot deliver frame content accurately. If a RAID storage system becomes more than 50% full, content data fragments over time, storage performance drops, and users lose bandwidth.

These infrastructure issues must be resolved before users can take full advantage of shared file systems in a high-resolution digital environment.

The following three shared file systems are the most widely used in postproduction facilities today:

  • ADIC’s StorNext
  • Avid’s Unity
  • SGI’s CXFS

(Editor’s note: This article is adapted from a larger report that includes a comparative review of these three file systems. To read the full report, Shared File Systems: Foundation for Digital Post-Production Infrastructure, visit www.margallacomm.com.)

Case studies from two leading postproduction houses illustrate the benefits of shared file systems for DI applications (see “EFilm: CXFS” and “Rainmaker: StorNext” below).

Future directions in DI storage networking
Shared file systems, for the most part, have matured to address the collaboration requirements of DI environments. Using shared file systems enables multiple users to access DI content without requiring time-consuming file transfers. In addition, shared file systems allow both NAS- and SAN-based DI users to collaborate for a mix of cost-effective and high-performance content access. Finally, with shared file systems, multiple client applications can access the same set of files concurrently without data corruption.

While shared file systems do a good job of enabling the sharing of DI content, a number of infrastructure challenges still stand in the way of high-performance, reliable delivery of DI data. These challenges will be the focus of the next generation of DI storage networking infrastructure.

The fundamental problem with existing storage architectures now deployed in DI environments is that the storage and delivery of digital video and film images are tightly coupled.

To deliver 1.2GBps, every segment of the data path, from the storage through the data link, to the end workstation adapter, and finally to the application’s receiving buffers, must meet the necessary quality-of-delivery requirement at that same 1.2GBps throughput.

Obviously, the weakest link in the data path determines overall system performance. In most cases, the storage system is the weakest link. One reason is that storage systems today are based on conventional disk drives, whose I/O performance is closely tied to the rotational speed of the disk platter. Despite rapid increases in disk drive capacity and reductions in cost, overall disk drive I/O performance has not improved at the same rate as capacity and density.
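The weakest-link point reduces to a simple minimum over the segments of the data path, as in the sketch below; the per-segment throughputs shown are illustrative, not measured, with the storage array as the limiting factor, as is typical:

    # The deliverable end-to-end rate is bounded by the slowest segment in
    # the data path. The per-segment throughputs below are illustrative,
    # not measured; here, as is typical, storage is the limiting factor.
    path_gbps = {
        "storage array": 0.8,
        "SAN fabric": 4.0,
        "workstation adapter": 2.0,
        "application buffers": 1.6,
    }
    bottleneck = min(path_gbps, key=path_gbps.get)
    print(f"Deliverable rate: {path_gbps[bottleneck]}GBps, "
          f"limited by the {bottleneck}")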

In addition, disk drive-based storage systems often suffer severe performance degradation when multiple read/write requests are applied to data blocks concurrently, resulting in rapid thrashing of the drive’s read/write actuators. Performance is reduced by as much as 90% when large numbers of concurrent accesses hit the storage systems.

Media switches
Decoupling data storage from the I/O removes the storage subsystem as the primary system for delivering data to client workstations. Instead, the media switch routes data from the storage system to end users. With a large amount of buffering, data movement is no longer performed directly from storage to the users; the media switch performs data retrieval from the storage system independently of the user data request pattern. For example, a pre-fetch of large blocks of media data can be performed before the user actually requests the data.

By incorporating large amounts of dynamic cache, the media switch is free to request data from the storage system so that data retrieval is optimized for sustained disk reads. For example, when two data streams are requested from the same disk storage system, both at 1Gbps, the media switch can retrieve 5GB of one stream at a 2Gbps sustained rate before retrieving the other stream at the same rate. Without a media switch, the storage system must provide 1Gbps each to two users with tightly interleaved delivery patterns, which substantially degrades storage system performance. (Exavio’s ExaMax 9000 I/O Accelerator is an example of a media switch.)

A significant benefit of a media switch is its dynamic caching capability: when multiple users, or the same user, repeatedly request access to the same block of content data, the data can be delivered from the dynamic cache buffer in the media switch without repeated retrievals from the storage system.
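As a rough illustration of the prefetch-and-cache behavior described above, the Python sketch below models a switch that reads media from storage in large sequential runs and answers per-frame client requests from a dynamic cache. The class, its parameters, and its frame-addressed interface are hypothetical and are not drawn from any vendor’s product:

    from collections import OrderedDict

    class MediaSwitchCache:
        """Hypothetical sketch of a media switch's prefetch-and-cache path."""

        def __init__(self, storage_read, prefetch_frames=120, capacity=4096):
            self.storage_read = storage_read        # callable: (stream, index) -> bytes
            self.prefetch_frames = prefetch_frames  # frames fetched per storage pass
            self.capacity = capacity                # frames held in the dynamic cache
            self.cache = OrderedDict()              # (stream, index) -> frame data

        def read_frame(self, stream, index):
            key = (stream, index)
            if key not in self.cache:
                # One long, sustained read from storage instead of many
                # small interleaved reads driven by the client pattern.
                for i in range(index, index + self.prefetch_frames):
                    self.cache[(stream, i)] = self.storage_read(stream, i)
                    if len(self.cache) > self.capacity:
                        self.cache.popitem(last=False)  # evict the oldest frame
            return self.cache[key]

    # Usage: the first request triggers a 120-frame prefetch; repeated
    # requests for the same or nearby frames are served from the cache.
    switch = MediaSwitchCache(lambda stream, i: f"{stream}:{i}".encode())
    first = switch.read_frame("reel1", 0)
    again = switch.read_frame("reel1", 0)  # no storage access this time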

Saqib Jang is founder and principal at Margalla Communications, which provides strategic and technical marketing consulting services to the storage/server networking markets.

EFilm: CXFS
Located in Hollywood, EFilm LLC is a cutting-edge digital film laboratory. EFilm uses an SGI CXFS-based environment for digital intermediate work, which includes high-resolution scanning, color correction, laser film recording, and video mastering, to produce high-resolution digital distribution masters for film output, digital cinema releases, and home video and DVD.

EFilm has been breaking new ground in the DI arena since it created the world’s first 100%, full-2K digitally mastered feature-length film in 2001: Paramount Pictures’ We Were Soldiers, starring Mel Gibson. EFilm’s most recent digital mastering breakthrough was its work on Spider-Man 2, the world’s first feature film to be digitally mastered at 4K resolution.

EFilm’s SGI CXFS environment is spread across six color-timing rooms and serves approximately 100 clients. The environment includes both a Fibre Channel SAN and Gigabit Ethernet LAN. EFilm has more than 200TB of storage spread over multiple SGI TP9400 Fibre Channel and TP9500 Serial ATA (SATA) storage arrays.

In addition to content on the SANs, EFilm has 20TB to 30TB of local storage distributed across five color-timing rooms. Cinematographers view projected, digital 1K copies of movie images and work with colorists in these rooms to correct each film sequence digitally. Images typically have 2K/4K resolution.

Rounding out the configuration are four SGI 3800 servers with 16 processors each and approximately 5TB of directly attached Fibre Channel storage in each color-timing room. When a film is being scanned into EFilm’s systems, the studio uses SGI’s CXFS shared file system software to transfer 1K copies of each frame from the SAN to local storage in one of the color-timing rooms. Final reviews are done at 2K resolution before the final film out.

EFilm uses its CXFS SAN for both 1K and 2K playback in its color-timing rooms. However, because of other loads placed on the SAN, EFilm chose to implement both locally attached storage and SAN storage for reliable real-time 1K and 2K playback. This ensures 100% reliable playback speeds, a must for any DI environment.

Over the next two years, EFilm anticipates adding many color-timing bays, with each bay able to support 2K/4K-resolution editing work. This expansion will place even more stringent demands on SAN performance and storage capacity. EFilm would like to transition to an infrastructure that allows editors and colorists in each color-timing bay to access SAN-based 2K/4K content directly, working with SGI’s guaranteed-bandwidth product (GRIO). This infrastructure will provide even greater DI workflow efficiency by eliminating the need to copy content from SAN-based storage to local storage.

Rainmaker: StorNext
Rainmaker is a world-class postproduction and visual effects company serving an international client base with its laboratory, telecine, digital postproduction, HDTV, visual effects, and new media services. Based in Vancouver, Rainmaker employs more than 150 operators, editors, colorists, and coordinators for digital video postproduction projects.

Rainmaker has provided visual effects for thousands of commercial, episodic, telefilm, and feature film projects and has received dozens of accolades, including Emmy nominations in 1998, 1999, 2000, and 2001, in addition to a 2002 Leo Award for Best Visual Effects in a Dramatic Series.

Rainmaker’s ADIC StorNext environment is spread across 29 Windows 2000 systems and six SGI Origin servers connected via a Fibre Channel SAN to more than 4TB of media storage capacity.

Four of the Windows 2000 servers and one SGI Origin200 server have Alacritech Gigabit Ethernet TCP/IP offload engine (TOE) adapters that act as “SAN routers.” This allows more than 100 non-Fibre Channel-equipped workstations and rendering nodes to easily access SAN-based DI content.

Rainmaker’s team of 3D and 2D artists works with various file formats and resolutions, including HD, 2K, and 4K, depending on whether the artists are creating special effects and animation for motion pictures, television, or HDTV.

With 35 artists working simultaneously, large amounts of graphic images are constantly being pushed and pulled to and from the Fibre Channel SAN, and ADIC’s StorNext shared file system plays a critical role in enabling transparent file sharing among Rainmaker’s artists.

Depending on media resolution and streaming-performance requirements, content sharing may also require administrative processes as well as file transfers from the SAN to direct-attached storage (DAS). Specifically, due to SAN bandwidth constraints, informal policies are used to limit the number of concurrent users accessing 2K or 4K content, or high-resolution content may be transferred from the SAN to local storage during off-hours for artists to work on.
