Creative solutions for creative challenges

Posted on March 01, 2007


By Michele Hope

Studios are turning to new storage technologies, products, and configurations to meet stringent quality, budget, and deadline requirements. Among those technologies are shared file systems, high-speed (4Gbps) SANs and RAID arrays, clustered storage architectures, high-speed NAS servers, and even new tape technologies for improved backup and archival of creative content, as shown in the following case studies from some of the leading entertainment studios . . .

NBC: Not your daddy’s telecine

Entertainment industry veterans can relate to traditional media production environments that hum with busy telecines. Such workflows often involve frequently transporting tape from workstation to workstation, spooling back and forth to locate specific pre-recorded segments or frames, and waiting, sometimes at length, while the necessary tape-based ingest and output procedures take place along the way.

In 2005, an opportunity arose to change that scenario when NBC Universal’s Digital Services group moved into its new facility. According to Ron Silveira, vice president of Universal Digital Services, pre-planning for the move presented the group with a new way to achieve more work in less time by moving from a tape-based to a server-based workflow.

In this case, “server-based” means four telecines servicing 10 edit/color-correction suites, and the use of DVS Digital Video Systems’ Clipster, a digital intermediate system with digital disk recording (DDR) functionality that helps integrate tape into the digital, server-centric environment. On the back end, much of the group’s workflow surrounding color correction, feature mastering, TV re-mastering, and TV episodic post-production revolves around a real-time SGI CXFS shared file system SAN infrastructure with dual SGI Origin 350 metadata servers and an SGI InfiniteStorage RM6700 storage system with Fibre Channel disks. The storage now hosts more than 100TB of data (and growing), as the Digital Services group continues to address business growth of 25% to 30% per year since 2004.


Scott Garrow, colorist for NBC Universal’s Digital Services group, saves his work on an SGI InfiniteStorage RM6700 storage system with Fibre Channel disks and more than 100TB of capacity.

“We upgraded our original SGI TP9300 CXFS system to accommodate more bandwidth and even more users on the different volumes,” says Silveira. “The storage is essentially the center of the universe for our workflow. Our color correction, editorial work, duplication, and down-conversion process all revolve around our central server.”

What have Silveira and Harvey Landy, the group’s director of technical operations, noticed since the move? One key was the server-storage system’s ability to support real-time playback and editing of multiple streams at once. That alone has led to an upswing in productivity for the group’s 150 operators, who can now perform more tasks at once off the same storage volume.

The difference, explains Silveira, is substantial. “In a tape environment, no more than one person can work on a tape at one time. In a server environment, multiple rooms can work on a process at the same time, shortening your cycle time and making things work faster.”

For example, when it comes to dirt and speck removal on film, or adding titles, operators using a telecine have to work more linearly, which tends to take longer, Silveira explains. The first operator would have to spool back and forth, remove the dirt, and then hand off the film to the next operator in another room for title work. While the first task alone might have taken 15 minutes, both tasks can now usually be performed in 15 minutes or less using the SGI system.

Landy says high performance, bandwidth, and throughput are critical to the group’s success with its current SAN-based storage infrastructure. He speaks highly of the way SGI’s shared file system, metadata controllers, and hardware platform handle one factor critical to performance: minimizing the disk fragmentation that typically causes slower seek times and overall slowdowns.

Silveira envisions a future that will someday be fully tape-less. Until then, however, he’s happy with the changes wrought thus far. “We put together a workflow around ‘not your traditional telecine.’ We are now moving and migrating our work directly onto the server.”

SAN replaces sneakernet at Orange TV

Like many non-profit broadcast stations with tight budgets, Orange TV’s 10 program editors had grown accustomed to making do with the current tools and resources at hand. For the Orlando-based Orange County Government broadcast station, that meant making their Windows-based desktop computers work double-duty as edit workstations.

On the software side of the equation, this involved use of Canopus DVStorm editing software to help editors produce and edit about 30 shows per month airing on both Orange TV and its sister station, Vision TV. As part of the production process, editors had to work diligently to add the required intros, credits, bumps, bricks, and other short animation sequences often required to get Vision TV’s pre-recorded arts and education programming ready for TV viewing.


Using Facilis’ TerraBlock 24D Fibre Channel SAN, Orange TV says it can do more-elaborate transitions on TV shows such as World of Dance.

The real trouble in this equation, however, began to crop up on the storage side. Because the edit process relied on portable Apple FireWire disk drives, editors usually had to carry a drive from one desktop to another when it came time to perform a function such as audio edits. In the early days, some projects needed so much storage that they had to be spanned across multiple FireWire drives.

According to operations manager Michael Seif, edits for an average 30-minute, magazine-based show shot with a single camera might need more than 30GB of storage capacity. Editing longer, multi-camera programming, such as a two-hour opera, could easily consume 150GB of storage space.
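
Those figures are easy to sanity-check. The sketch below is a rough estimator, not Orange TV’s actual planning math: the roughly 13GB-per-hour DV25 data rate is standard, but the shooting ratios, camera counts, and overhead factor are illustrative assumptions.

```python
# Rough capacity estimator for a DV25 edit project. The ~13GB/hour figure is
# the standard DV25 data rate; the shooting ratio, camera count, and overhead
# factor are illustrative assumptions, not Orange TV's actual numbers.

DV25_GB_PER_HOUR = 13.0  # ~3.6MB/s of DV25 video plus audio

def project_capacity_gb(runtime_hours, cameras=1, shooting_ratio=4.0, overhead=1.15):
    """Estimate on-disk capacity: raw footage from every camera at the given
    shooting ratio, plus ~15% for renders, graphics, and project files."""
    footage_hours = runtime_hours * shooting_ratio * cameras
    return footage_hours * DV25_GB_PER_HOUR * overhead

# A 30-minute single-camera magazine show at an assumed 4:1 shooting ratio: ~30GB
print(round(project_capacity_gb(0.5)))
# A two-hour opera covered by five cameras rolling continuously: roughly 150GB
print(round(project_capacity_gb(2.0, cameras=5, shooting_ratio=1.0)))
```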

Besides adding extra hassle to the process and inhibiting editor collaboration, the station’s reliance on “sneakernet” and its FireWire drives had begun to cause serious issues that could no longer be ignored.

“Sometimes the drive would just spontaneously die after a few months of use. Or someone would drop one, which almost always seemed to happen when they were almost done with a show they’d worked on for months,” says Seif. Other storage issues consuming the editors’ creative time included extended waits for Adobe Premiere Pro’s NLE system to remap drive letter designations from recently swapped FireWire drives.

In preparation for an upcoming move to a new facility, Seif and Orange TV’s general manager began to talk about upgrading the edit environment to networked storage. They hoped such a change could not only resolve the station’s worst storage issues, but also improve the edit team’s file sharing and collaboration.

They also wanted to better position the station in its ongoing transition to full-digital, high-definition (HD) programming. “Even though most stations emphasize making the transition to HD on the broadcast side, you also have to be able to edit and produce in HD with enough storage that can handle the throughput,” says Seif.

After receiving input from local integrator Profile East about the different options available, the station decided to directly connect each of its new Apple Macintosh Final Cut Pro edit workstations to a TerraBlock 24D Fibre Channel SAN from Facilis Technology.

Equipped with 12TB of storage, the TerraBlock system seemed an affordable option that could handle the network bandwidth requirements of editing 30 shows per month, while still giving the editors real-time, render-free access to their work.

Although the team’s DV25 video format doesn’t use much bandwidth, throughput becomes more of an issue with 10 editors accessing the same storage simultaneously, says Seif. Accustomed to the throughput capabilities of their former Canopus systems, Seif expected TerraBlock to support the same level of throughput: four to five simultaneous streams needing real-time access. He and the station’s GM also wanted the system to have enough bandwidth to handle the future growth they envisioned, such as the addition of more workstations or video needing to be edited in different resolutions (HD, uncompressed, etc.).

Now, the TerraBlock system resides at the hub of the station’s new edit facility. The edit workstations are direct-connected to the TerraBlock via fast, 4Gbps Fibre Channel connections. A 500GB Ethernet Server, also provided by Facilis, fronts the SAN and allows other Windows-based workstations in the station’s production studio to access the TerraBlock storage system via their current Ethernet connections.

“We wanted to connect the whole building via a network and share files between our Windows and Mac systems,” Seif explains. “We needed our editing systems to talk to our production studio’s digital playback system [a Grass Valley Kayak DDR and playback system] without having to pick up a thumb drive or tape and move it across.”

The subsequent change in workflows has been dramatic. “We can now edit a roll-in for a TV show, edit a bump or brick, and send it over to the TerraBlock. Then, in the studio, you can pull those into the DDR, through the Grass Valley system, without having to pull a tape and nobody ever has to get out of his/her seat,” says Seif.

While the move to a SAN hasn’t necessarily cut down on the time it takes most editors to perform edits, Seif is quick to note the new storage environment has undoubtedly enhanced the creative process: “Whereas before, it would have taken two hours just to do the bare minimum of edits, it now takes half the time. We can now do more animations and more-elaborate transitions, because they render at such a fast pace.”

In the end, Seif maintains the move came down to dollars and cents. “We were losing a lot of money with people editing the same projects twice with FireWire drives. To pay somebody to do the same project twice costs more in man-hours than buying new [storage] equipment. I’d rather buy it once and buy it right than continue replacing broken drives or have editors take weeks to re-create all the elements they lost.”

Eagle Eye Post: Storage by the numbers

At Burbank-based Eagle Eye Post, president Chuck Spatariu is used to balancing both the creative and business side of his firm’s postproduction work. While he takes pride in the firm’s commitment to exceed client expectations, Spatariu also knows he needs to always be on the lookout for new ways to maintain Eagle Eye’s competitive edge while still allowing the firm to make a decent profit.

An unexpected opportunity arose to do both when Eagle Eye selected the storage architecture to support its efforts. An avowed Avid shop, Eagle Eye had already invested in a variety of turnkey Avid edit workstations to meet growing client demand. But when it came to the back-end storage used to support them, Spatariu chose to depart from Avid’s recommended storage models.

According to Spatariu, this is where business economics played a large role in his choice of storage subsystems from Archion, which specializes in networked storage systems for the professional video market. “To put on a production, everything nowadays is about price, so we were looking for an edge. I could get roughly the same amount of storage that a comparable Avid solution would provide at a significantly lower price per gigabyte.”

On the technical side, he notes that the Archion storage systems seemed to offer more usable storage space than Avid for the price, as well as what he viewed as a better (RAID-5) disk-protection scheme.

Spatariu recalls a recent instance where his choice of Archion’s disk arrays seemed to have paid off for both Eagle Eye and the production team behind the feature film Bobby, which re-enacts much of the 1968 assassination of Robert F. Kennedy.


Archion’s RAID arrays facilitated Eagle Eye Post’s work on Bobby, which re-enacts much of the 1968 assassination of Robert F. Kennedy.

Eagle Eye was called in to help solve a few challenges the film crew had begun to experience early in the filming process. The crew was equipped with an Avid Adrenaline system with its own internal storage, but it quickly became clear that the amount of raw stock footage, music, and other elements required to make the film authentic would surpass the system’s available local storage.

Another workflow problem had also begun to disrupt the production team’s concentration: the noise of the local disk drives spinning and the cooling fans cycling on and off.

Eagle Eye proposed creating an Archion shared storage environment. This way, Spatariu reasoned, Eagle Eye’s 4TB Archion Alliance system could be housed in a separate room from where the production team did its day-to-day work. He compares this storage arrangement with what he would have had to offer the Bobby film crew if he’d been using Avid MEDIArray LP shared storage instead. (Archion’s Alliance has eight 4Gbps Fibre Channel ports, expandable to 16, and supports HD.)

Citing as much as a $10,000 price difference for 8TB of storage, Spatariu reasoned that not only would Avid have meant a higher cost of entry, but he also would have needed bigger Avid systems to get the same amount of usable storage he gets with Archion’s RAID arrays.

“If we’d put Avid LP storage on Bobby, I would have needed to put an 8TB unit onto the system,” he says, noting that the Avid LP’s use of mirroring tended to consume close to half of the system’s total storage capacity. “With Bobby we used a 4TB Alliance. If I had a 4TB Avid LP system, I would have only had 2TB of usable storage.”
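
The usable-capacity math behind that comparison is simple. Here is a minimal sketch; the 4TB raw figure comes from the article, while the eight-drive RAID-5 group size is an illustrative assumption, not Archion’s published configuration.

```python
# Usable-capacity comparison: mirroring vs. RAID-5. The 4TB raw figure is from
# the article; the drive count per RAID-5 group is an illustrative assumption.

def usable_mirrored(raw_tb):
    """Mirroring (RAID 1) keeps two copies of everything: half the raw capacity."""
    return raw_tb / 2

def usable_raid5(raw_tb, drives_per_group):
    """Single-parity RAID 5 gives up one drive's worth of capacity per group."""
    return raw_tb * (drives_per_group - 1) / drives_per_group

raw = 4.0  # TB
print(usable_mirrored(raw))   # 2.0TB usable -- matches Spatariu's Avid LP estimate
print(usable_raid5(raw, 8))   # 3.5TB usable with an assumed 8-drive RAID-5 group
```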

Spatariu continues: “To a certain degree, the storage is like the tires on the vehicle you are buying. The vehicle, in this case, is the Avid editing system. And, while Avid’s are the best, they cost a lot for those kinds of tires. Even if your eyes light up at the mention of Pirelli, your budget may still lead you to Uniroyal, a low-cost, sturdy tire that will still get you where you need to go. If I can save money by putting these kinds of wheels on my car, I’ll do it.”

Fast storage means more billable hours

Another production house familiar with what it takes to be competitive is Crossroads Films. With six subsidiaries and a multinational presence, Crossroads is accustomed to performing production work that spans from TV commercials and music videos to feature films.

According to A.J. Javan, head of information systems at Crossroads, part of the ability to compete comes from ensuring the company’s talented team of professionals spends as much time as possible on their creative work. That means Javan’s team must pay close attention to any technological component that might be hindering the workflow.

After looking more closely at the company’s postproduction business, Javan believed there might be an opportunity to gain back some unproductive time editors had been spending getting the edit suites ready for different client projects.

Each edit suite’s workstation had been attached to a company server but still relied mostly on local storage. When it came time to move on to the next project, Javan estimated it used to take editors somewhere between half an hour and an hour just to prep the room. This might involve copying the right data set onto the workstation’s local storage. Then, when the job was done, editors had to sit in the suite and finish it off by outputting the project to a tape deck.

With projects in the suites invoiced at $800 per billable hour, Javan realized how many billable hours were being lost each day to the inefficient prep process.

That’s when he decided to move forward with plans to network each edit suite workstation to an Isilon IQ 200 clustered storage system with 6TB of storage. The difference, he says, has been significant. Whereas before, project prep work or finishing used to take up to an hour, it now takes just five minutes to copy the whole project out of the Isilon system. Finishing can also be offloaded to an assistant in another room, instead of wasting the editors’ time outputting to a tape deck.

Javan and Crossroads’ CFO have since noticed about a 30% jump in the number of bookings, with a significant increase in net profit.

Says Javan, “We love the fact that we can move projects in no time. The net profit we gained paid for the [Isilon storage system] in the first two months.” Javan now plans to add more Isilon nodes and apply the system to other groups in the company. With a starting price of less than $40,000, he can see the Isilon storage server paying for itself about 10 times over in the next five years.
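
A back-of-the-envelope version of Javan’s math looks like this. Only the $800 hourly rate, the hour-to-five-minutes prep change, and the roughly $40,000 price come from the article; the number of daily project changeovers and working days are assumptions for illustration.

```python
# Recovered billable time from faster suite prep. The $800/hour rate, the
# one-hour vs. five-minute prep times, and the ~$40,000 system price are from
# the article; changeovers per day and working days per month are assumptions.

RATE_PER_HOUR = 800.0
OLD_PREP_MIN, NEW_PREP_MIN = 60, 5
SYSTEM_PRICE = 40_000.0

recovered_per_changeover = (OLD_PREP_MIN - NEW_PREP_MIN) / 60 * RATE_PER_HOUR
print(round(recovered_per_changeover))            # ~$733 of billable time per changeover

changeovers_per_day, working_days = 2, 20         # assumptions
monthly_recovery = recovered_per_changeover * changeovers_per_day * working_days
print(round(monthly_recovery))                    # ~$29,333 per month
print(round(SYSTEM_PRICE / monthly_recovery, 1))  # ~1.4 months to cover the purchase
```

Under those assumptions, the payback lands in the same ballpark as the two-month figure Javan cites.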

Intense deadlines require intense storage

Insane deadlines and last-minute changes tend to be common to modern film-making. But, the situation faced recently by Walt Disney Studios’ Feature Animation division brought the pressure to a whole new level.

Well into production on the upcoming animated feature Meet the Robinsons, the production team faced a sudden expansion in project scope and significantly more work than anticipated.

After Disney acquired Pixar Animation Studios last year, leadership at Disney’s new subsidiary was called into the ongoing Disney project to advise on creative direction. This resulted in planned re-designs for scenes that had already been well underway. Adding to the pressure was the fact that the additional work had to be done to meet the original release date scheduled for the film.

According to Jon Geibel, manager of systems for Walt Disney Feature Animation, this meant something of a mad scramble to install new back-end storage that they felt could handle what they anticipated would be twice the number of artists and twice the number of shots supported by their current system.

On the storage side, Geibel knew that meant getting twice the performance from the existing Panasas storage and clustered file system, a feat he didn’t deem possible given its configuration. While the prior architecture’s performance had been sufficient up to that point, vice president of technology Jack Brooks admits “it was getting done, but was kind of on the hairy edge as far as the systems’ ability to support our performance requirements.”

They decided to change the underlying storage infrastructure and began to explore how to streamline artists’ workflow through the use of different price/performance levels (or tiers) of storage to address the different types of data and processing requirements. What led Disney Animation to ultimately decide on a parallel file system from Ibrix for its highest-performing tier of storage was Pixar’s own positive experience with Ibrix in the making of its recent feature, Cars.

Brooks says the deciding factor for which data would reside on which storage tier was how often the render farm needed to access it. Brooks and Geibel ended up implementing three tiers of storage: an Ibrix Fusion parallel file system as the highest-performance, Tier 1 storage; Network Appliance FAS6070 clusters for Tier 2; and NetApp FAS6030 clusters for Tier 3. Geibel estimates that 200TB to 300TB of data now resides across all three tiers.

On Tier 1, Brooks knew the Ibrix system would be the hardest hit by jobs from the render farm. “The Ibrix system holds the data that is shared among all the shots. It’s getting hit all the time by the render farm, no matter what shots you’re working on,” he says. The mid-tier NetApp systems store data actively being worked on; they are still hit by the render farm, but tend to get hit on more of a per-shot basis. “It’s not likely you’ll have the entire render farm working on one of those,” says Brooks. Data needed mostly for reference, with no need to be accessed by the render farm, resides on the Tier 3 NetApp systems.
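
One way to picture the rule Brooks describes is as a simple classification based on how broadly the render farm touches a data set. The function below is purely illustrative; the names and logic are hypothetical, not Disney’s actual data-placement engine.

```python
# Illustrative tier-assignment rule based on Brooks' description: data shared
# across all shots goes to the fastest tier, per-shot working data to the
# middle tier, and reference-only data to the third tier. The enum names are
# hypothetical labels, not Disney's actual policy engine.

from enum import Enum

class Tier(Enum):
    TIER1_IBRIX = 1     # parallel file system, hammered by the whole render farm
    TIER2_FAS6070 = 2   # active per-shot data, hit on a per-shot basis
    TIER3_FAS6030 = 3   # reference data, rarely touched by the render farm

def assign_tier(shared_across_shots: bool, render_farm_reads: bool) -> Tier:
    if shared_across_shots and render_farm_reads:
        return Tier.TIER1_IBRIX
    if render_farm_reads:
        return Tier.TIER2_FAS6070
    return Tier.TIER3_FAS6030

print(assign_tier(True, True))    # shared textures -> Tier.TIER1_IBRIX
print(assign_tier(False, True))   # a single shot in progress -> Tier.TIER2_FAS6070
print(assign_tier(False, False))  # archival reference -> Tier.TIER3_FAS6030
```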


Walt Disney Studios used an Ibrix Fusion parallel file system to render characters for scenes in Meet the Robinsons. The file system acts as an NFS server, running on 14 Linux-based Dell servers, connected to a back-end SAN.

When Brooks and Geibel were debating whether to use the Ibrix configuration, much of the decision came down to how well they believed the system could handle the often “pathological” spikes in I/O demand from the render farm. To render a character for a given scene in Meet the Robinsons, the render farm often needed to process as many as 30,000 texture files alone. Then there were the other complex elements in the scene that needed rendering as well. Resolving the I/O “hotspots” that used to develop when a character was “getting crushed by the render farm,” according to Brooks, was a top priority if he and his team hoped to meet the increased workload coming their way.

Since the installation, they’ve noticed renders happening five times faster than before. They’ve also seen processor utilization jump from 70% to more than 90%, with an obvious reduction in hotspotting. Acting basically “as an NFS server,” according to Brooks, the Ibrix file system runs on 14 Linux-based Dell servers with four processors and 32GB of RAM per head. This side of the architecture acts like a NAS head fronting a back-end SAN that consists of two EMC CX380 disk arrays.

According to Geibel, one key that made the Ibrix approach work was the way Ibrix’s Fusion file system uses the memory cache on each NFS head to minimize the need to use disk spindles for much of the up-front processing required. “If we have 2,000 frames that need to be rendered, the first one kicks off and reads textures into the NFS heads in the cache,” Geibel explains. “Then, for the next 1,999 frames, almost all data comes straight out of cache, instead of going to spindles on disk. It’s extremely fast. The reason it works is because there’s so much cache. Because things are coming out of cache, we have a 90% hit rate.”
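
Geibel’s description boils down to a simple caching model: the first frame warms the NFS heads’ cache, and the remaining frames read mostly from memory. The sketch below uses the 2,000-frame example and roughly 90% hit rate from the article; the per-read latency figures are illustrative assumptions, not measurements of the Ibrix/Dell configuration.

```python
# Effective read latency with a large memory cache in front of disk. The
# 2,000-frame figure and ~90% hit rate are from the article; the per-read
# latencies are illustrative assumptions.

def effective_latency_ms(hit_rate, cache_ms=0.1, disk_ms=8.0):
    return hit_rate * cache_ms + (1 - hit_rate) * disk_ms

print(effective_latency_ms(0.0))    # 8.0ms  -- every texture read goes to spindles
print(effective_latency_ms(0.9))    # 0.89ms -- ~9x faster average read at a 90% hit rate

# Disk traffic also collapses: if frame 1 pulls textures into cache and the
# next 1,999 frames are served from memory, only ~1/2000 of texture reads
# ever touch the disks.
print(1 / 2000)                     # 0.0005
```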

Storage math behind CGI smoke and mirrors

High-end digital effects studio soho vfx is also no stranger to scrambling to meet very tight deadlines set by its clients. That’s the situation the studio faced last summer when the firm was asked to produce more than 50 CGI and effects shots for the upcoming Christmas-time extended DVD release of the Chronicles of Narnia.

The work would ultimately include soho vfx artists creating an elaborate army of about 11,000 CG characters for the climactic battle scene at the end of the movie. While the firm was excited about the creative aspect of the work, they also knew it was going to be tight pulling it all off in the three months or so allotted to the project. According to Berj Bannayan, one of the firm’s co-owners and a developer of some of the firm’s critical in-house animation software, the studio also knew it couldn’t afford to slip the deadline. “There’s so many things that need to be done ahead of time for a DVD release, including the packaging and the marketing. No one is moving the release of the movie. You can’t slip a deadline,” says Bannayan.

Bannayan and the soho vfx team of animators, compositors, and editors had to resort, instead, to some fairly fancy footwork to meet the deadline. Bannayan also had to perform some detailed calculations to ensure they’d have the storage they needed to fuel the back-end rendering and processing required to keep everyone working.

He set about mapping out the firm’s anticipated storage needs from two critical angles. The first was the amount of total storage capacity they’d need to house all of the interim files (including texture maps and myriad RIB files) during the production process. The second calculation, which Bannayan acknowledges was a lot tougher, was estimating the bandwidth the storage system would need to handle.


soho vfx used BlueArc’s Titan 2200 storage system to create an elaborate army in the Chronicles of Narnia. The disk system is the main storage server on soho vfx’s Gigabit Ethernet network.

Bannayan explains the bandwidth issue. “One to ten machines can be handled by pretty much any kind of storage system. It’s not that each machine throws around a lot of data from moment to moment; it’s when you have 60 workstations and one render machine all throwing stuff at the storage device.”
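
Bannayan’s point is really about multiplication. Here is a minimal sketch; only the 60-workstation count comes from the article, and the per-client data rates and render-pass figure are assumptions chosen to show how quickly modest per-seat traffic adds up.

```python
# Aggregate demand on shared storage. The 60-workstation count is from the
# article; the per-client rates and the render-pass figure are assumptions.

def aggregate_mb_per_sec(clients, avg_mb_per_sec):
    return clients * avg_mb_per_sec

desktops = aggregate_mb_per_sec(60, 5)   # 60 artists averaging ~5MB/s each
print(desktops)                          # 300MB/s -- roughly three saturated GbE links

renders = 200                            # an assumed render pass streaming frames
print(desktops + renders)                # 500MB/s converging on one storage device
```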

When renders went through the firm’s hodgepodge of homegrown servers and assorted RAID storage systems, the renders themselves didn’t slow down much. But they could slow down everyone else, because the renders took up most of the available bandwidth. The result might be waiting longer for images to load on the desktop. “If I’m sitting at my desk trying to get work done, that’s really frustrating,” says Bannayan. “We wanted our challenges to be creative, rather than technical.”

Having learned earlier about BlueArc’s Titan storage system from the studio’s Toronto-based hardware reseller, Helios/Oceana, Bannayan decided the Narnia project was the right time to bring in the storage big guns and stop handling storage in the piecemeal fashion the studio had relied on for earlier projects. “I learned the hard way that it’s always better to over-expand when it comes to storage. The worst feeling is when you don’t have enough juice to keep things moving,” says Bannayan.

With that in mind, soho purchased a high-end Titan 2200 storage server with an initial 8TB of Fibre Channel storage. As the Narnia DVD project began to heat up, soho followed up with another 8TB of capacity, for a total of 16TB. The Titan disk system now serves as the main storage server on the firm’s Gigabit Ethernet network. After the Titan was installed, the studio brought the entire render farm online. Then, in Bannayan’s words, “We threw as much bandwidth as you could throw at it, and it didn’t even hiccup.” While he noticed that renders seemed to go faster on the Titan, the real benefit came from being able to render more frames concurrently, with no noticeable strain on the network or storage device. That was a far cry from the previous storage setup, where the studio had to throttle back the number of frames being processed at once.

Industrial Color: Backup takes a front seat

From the back-end storage side, many large studios and postproduction environments tend to look more and more like their enterprise counterparts in other industries. While they support applications and workflows unique to the type of work they do, their underlying storage needs are often the same.

Studios want to know how well the storage will support the most-critical applications in the organization, including how well it can support any extended asset management database or middleware layer whose primary purpose is to track and manage the various pieces of a project. They also need their storage infrastructure to adequately protect everyone’s work via a reliable backup-and-recovery process.


Quantum's StorNext file system software is a key part of Industrial Color's Web-based GLOBALedit application, which allows content access and sharing across heterogeneous operating systems.

Such was the case for Industrial Color, one of the world’s leading digital capture and imaging companies involved in the production of advertising and billboard campaigns for clients such as Saks, Tommy Hilfiger, and Nike.

On the one hand, Industrial Color’s storage had to support its clients’ ongoing reliance on an extensive Web-based “editorial on-demand” application called GLOBALedit. This application allows anyone, from anywhere around the world, to view and add real-time edit comments to recent campaign shots. Explains Chris Mainor, Industrial Color’s director of technology: “Instead of FedExing the shots to a photographer or director, or putting them on an ftp site so that they could approve them, GLOBALedit gives the photographer or director a chance to see each shot online and either approve it or tell us what they want us to do with it.” When edits are made in the system, Industrial Color’s local editors get automatic, real-time notification of the suggested change that was just logged in the system.

With about 250TB of data in the system, Mainor is the first to admit the back-end storage needed to support GLOBALedit is “a beast.” There are two main pieces to the system: the GLOBALedit database and the images. Industrial Color uses 9TB Network Appliance FAS3050 clusters for the database side, while an 18TB Isilon IQ 6000 GE (a three-node cluster) stores the low-resolution images for the system. But, there’s also a mixture of Dell/Windows servers with internal disk storage, a 14TB Apple Xsan, Apple Xserve RAID systems storing about 35TB, and another 24TB Isilon IQ 6000 IC (a four-node cluster).

As a shop with an abundance of both Apple and Microsoft Windows workstations, Industrial Color decided to use Quantum’s StorNext file system software to allow the two operating systems to work better together when it came to accessing data in GLOBALedit. According to Mainor, “In the past, when GLOBALedit might try to grab files off the Xsan, it would be hit or miss. Sometimes it would work, sometimes it wouldn’t. The user might get errors that the file couldn’t be found, or permissions weren’t granted. With StorNext, it creates that perfect bridge between Windows and Mac, so GLOBALedit can go grab the files it needs and get out with no problem.”

Backing up a system this huge was another challenge, which Mainor and his team resolved by implementing a new Quantum PX720 automated tape library, a big improvement over their earlier backup process. Back when the studio had only about 75TB of data to handle, the team had been forced to use a couple of stand-alone Quantum LTO-3 tape drives for backups.

“You can imagine the backup process for that much data on two stand-alone drives,” says Mainor. “When we looked at manpower, it cost too much to have two people sitting around plucking and chucking tapes, when you could have a robot do that for pennies on the dollar.”

The Quantum library provides five Fibre Channel-connected tape drives that can run five backups at the same time. “Right now we have 320 tapes that amount to about 120TB of data,” according to Mainor.

Mainor also has big plans to feature Quantum’s tape library more prominently in an upcoming archive on-demand system that Industrial Color plans to call GLOBALvault, which will be made available from within the GLOBALedit interface. Using a combination of BakBone Software’s NetVault backup software and the Quantum tape library, Mainor envisions clients being able to request, view, and edit images online that have been shot in prior campaigns, even from a few years back. Mainor says, “The NetVault backup software and GLOBALedit system will be able to talk to each other. When GLOBALedit says, ‘I need this image,’ NetVault keeps track of where the images are. The Quantum library will then just click and retrieve the archive tape it needs.”

Michele Hope is a freelance writer. She can be contacted at mhope@thestoragewriter.com.


Rhythm & Hues upgrades digital archive

By Dave Simpson

Solving storage problems in digital content creation studios doesn’t always entail upgrading to costly or complex technologies such as 10Gbps Ethernet or 4Gbps Fibre Channel SANs. Sometimes the solution lies in something as simple as upgrading a tape-based digital archiving system and data management software.

That was the case at Rhythm & Hues Studios, a computer-generated imagery (CGI) studio in Los Angeles. For its digital archiving, the studio had been using an antiquated (10-year-old) tape library from ADIC based on IBM 3590 tape technology, along with hierarchical storage management (HSM) software. For content storage and retrieval, the setup was painfully slow, delivering only about 20MBps of aggregate throughput, according to Mark Brown, Rhythm & Hues’ vice president of technology. In addition, the HSM software was cumbersome, required manual intervention, and was prone to data loss.

To solve those problems, Rhythm & Hues late last year upgraded to a Sun/StorageTek SL500 tape library based on LTO-3 tape drives and cartridges. According to Brown, aggregate throughput on the tape library is now about 200MBps (four tape drives at approximately 50MBps per drive), a 10x increase in file-transfer rate. (The studio relies primarily on NAS storage devices and Sun Fire servers running the Solaris operating system, connected via Gigabit Ethernet.)

Rhythm & Hues stores all of its content in the digital archiving system, which holds more than 20 years’ worth of CGI content, from early work on films such as Babe and Titanic to more recent work such as the creation of the Fortress of Solitude for Superman Returns, animation for Garfield: A Tale of Two Kitties, and images from Night at the Museum.


Rhythm & Hues stores all of its content, including images from Night at the Museum, in a digital archiving system from Sun Microsystems.

The entertainment and advertising studio also replaced its old HSM software with Sun’s SAM-FS software, a file system that provides a disk-type view of content stored on tapes. The software is directly mapped to the studio’s workflow, and no modifications to the existing workflow were required. SAM-FS automatically handles all data migration, according to administrator-defined parameters, between the tape library and a tightly integrated disk array (based on Serial ATA, or SATA, disk drives) that functions as a high-speed cache. The tiered storage architecture includes about 1TB of data on the SATA disk array and more than 100TB of content on the tape library.

Other features of the SAM-FS software include data classification, centralized metadata management, policy-based data placement and migration, and backup and recovery. The software suite also includes a digital content archive that provides a content repository (or digital vault) based on Sun’s Digital Asset Management Reference Architecture (DAM RA), which enables digital workflow.

According to Rhythm & Hues’ Brown, the new digital archiving hardware and software eliminated bottlenecks and other performance-related issues, as well as data-loss issues, and significantly improved archive management. In addition, the studio realized a space savings of more than 15x, claims Brown.

All production studios need the utmost in performance and capacity from their storage devices, but most shops also need ironclad data protection. That was the case at Thirteen/WNET New York as the number of shows it produces increased rapidly, along with its storage requirements. The public TV station produces a series of shows, including American Masters, Great Performances, and Nature, that are seen by millions of viewers in New York, New Jersey, and Connecticut and, via the Public Broadcasting Service (PBS), across the country.

When Jeff Dockendorff, Thirteen/WNET’s associate director of engineering, needed to upgrade an Avid Nitris system used for HD online editing, which was attached to a SCSI-based storage array configured as RAID 0/JBOD (with no redundancy), he focused on data protection (to eliminate data loss in the event of a failed disk drive), as well as high performance and capacity.

“High-definition uncompressed non-linear editing requires large amounts of storage,” says Dockendorff. “Each second takes about 160MB at 10-bit uncompressed. That expands to about 10GB per minute and 600GB per hour.” Thirteen/WNET’s existing 24-drive storage array could only record about four hours of material. In addition, the station keeps clients’ materials for about two weeks, and they could only store two clients’ materials at a time on the existing disk system.
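
Dockendorff’s figures follow directly from the 160MB/s rate, and the same arithmetic shows why the 24-drive array topped out at about four hours of material. A quick check:

```python
# Sanity-checking the uncompressed 10-bit HD numbers quoted by Dockendorff.
MB_PER_SECOND = 160                      # per the article

gb_per_minute = MB_PER_SECOND * 60 / 1000
gb_per_hour = gb_per_minute * 60
print(gb_per_minute)                     # 9.6   -> "about 10GB per minute"
print(gb_per_hour)                       # 576.0 -> "about 600GB per hour"

# Four hours of material, the ceiling of the existing 24-drive array:
print(round(4 * gb_per_hour / 1000, 2))  # ~2.3TB of capacity consumed
```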

The company needed the ability to expand storage capacity on the fly and to easily disconnect one client’s storage and connect another’s over a Fibre Channel connection. It also required high-speed access to shows being edited, and it wanted to keep its existing storage devices.

The studio integrated Atto Technology’s FastStream SC (Storage Controller) 5300 into its existing Avid storage environment. The FastStream device provides data protection via a variety of RAID levels, including RAID 0, 1, 5, 10, and 50 (to protect data in the event of a drive failure), as well as what Atto calls “Digital Video RAID.” DVRAID provides parity redundancy, is optimized for digital video environments, and supports editing of multiple streams of uncompressed SD and HD video and 2K film. Thirteen/WNET primarily uses a RAID-5 configuration for data protection and has more than 50TB of video content at its facility.

The FastStream controller includes dual 4Gbps Fibre Channel host connections and dual Ultra320 (320MBps) SCSI drive connections. Other features include storage provisioning, expandable RAID volumes, mirroring, and on-the-fly capacity expansion.

Atto claims that the FastStream can support dual streams of 10-bit uncompressed HD video with alpha titles and 11 streams of SD video. An Audio Latency Management feature provides parity-based RAID protection while managing latency to support multiple tracks of audio editing. For intense production environments, video professionals can remotely edit up to 192 tracks of audio. -DS

