Toronto-based DKP Studios is no stranger to the challenges of moving large data sets over its network as part of the digital rendering and compositing process. In 1985, DKP became one of the first production houses in North America to go to full digital production, according to Terry Dale, DKP’s vice president of production.

The company’s increasing high-definition work for TV, video, 3D animation, and special effects projects, including Imax 3D film work and effects for the 2004 MTV Movie Awards, recently required DKP to triple its storage capacity just to keep up with the growing data sets. “The big projects are the feature films and Imax projects that chew through huge amounts of data,” says Dale. “A production can easily take up 30TB of data very quickly.”

In the early days, DKP’s efforts to process such huge volumes of data (typically requiring 24GBps of throughput) often resulted in downed servers, artists left waiting for massive data pulls to complete, and dropped frames when composited work was transmitted over the network. “Serving the data to and from users, or to and from the render farms, without bottlenecks was a real challenge,” says Dale.

About a year ago, Dale and his team set out to find a storage solution that could eliminate these storage and transmission problems. They settled on two Titan SiliconServers from BlueArc, assigning them to two of DKP’s most demanding areas of production: the rendering and compositing portions of the digital pipeline. “These things just motor,” says Dale, referring to the speed at which the Titan storage systems now allow DKP to serve and write data. “Even with our render farm running full blast, [the Titans] still accept the data with no problem.”

One 20TB Titan SiliconServer disk array now stores the output of DKP’s 400 to 500 dedicated CPU render nodes. That data is then pulled from the first Titan for compositing, and the finished composites are written to a second Titan array, which is approaching 10TB of capacity.
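As a rough sketch of that two-array flow (the mount points, file layout, and function names below are hypothetical illustrations, not details from DKP’s actual pipeline), render nodes write frames to the first array, and compositing workstations read those frames and write finished composites to the second:

```python
from pathlib import Path

# Hypothetical mount points for the two Titan arrays; DKP's actual paths
# and directory layout are not described in the article.
RENDER_ARRAY = Path("/mnt/titan1/renders")        # ~20TB array: raw renders
COMPOSITE_ARRAY = Path("/mnt/titan2/composites")  # ~10TB array: finished composites


def store_render(shot: str, frame: int, data: bytes) -> Path:
    """A render node writes a finished frame to the first array."""
    out = RENDER_ARRAY / shot / f"frame_{frame:04d}.exr"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_bytes(data)
    return out


def composite_frame(shot: str, frame: int) -> Path:
    """A compositing workstation pulls a rendered frame from the first
    array and writes the composited result to the second array."""
    src = RENDER_ARRAY / shot / f"frame_{frame:04d}.exr"
    dst = COMPOSITE_ARRAY / shot / f"frame_{frame:04d}.exr"
    dst.parent.mkdir(parents=True, exist_ok=True)
    composited = src.read_bytes()  # placeholder for the real compositing step
    dst.write_bytes(composited)
    return dst
```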

According to Dale, the Titan storage systems’ ability to change data flow rates on the fly has been critical; he refers to this as the ability to “throttle I/O” to different departments as their need for rapid data transfers increases. The I/O pipes on the back of the Titan storage systems can be aggregated (or trunked) as needed, essentially creating one larger pipe.

“Within minutes, we can change the aggregation of the trunks in order to give departments [e.g., rendering, compositing] the bandwidth they need to get access to the data,” Dale explains.
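To make the trunking idea concrete, the sketch below models each physical port as a fixed-bandwidth link and shows how reassigning ports between department-facing trunks changes the size of the aggregate pipe each department sees. It is a simplified illustration, not BlueArc’s management interface, and the per-port speed is an assumption:

```python
from dataclasses import dataclass, field

PORT_GBPS = 1.0  # assumed per-port bandwidth; the article does not state port speeds


@dataclass
class Trunk:
    """A group of aggregated ports presented to a department as one larger pipe."""
    name: str
    ports: set = field(default_factory=set)

    @property
    def bandwidth_gbps(self) -> float:
        return len(self.ports) * PORT_GBPS


def move_ports(src: Trunk, dst: Trunk, count: int) -> None:
    """Reassign ports from one trunk to another, e.g. to give the render
    farm a wider pipe while it is writing out a heavy batch of frames."""
    for _ in range(min(count, len(src.ports))):
        dst.ports.add(src.ports.pop())


# Example: start with four ports serving compositing and two serving rendering,
# then shift two ports to rendering while the farm runs full blast.
compositing = Trunk("compositing", {1, 2, 3, 4})
rendering = Trunk("rendering", {5, 6})
move_ports(compositing, rendering, 2)
print(rendering.bandwidth_gbps, compositing.bandwidth_gbps)  # 4.0 2.0
```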

All of this translates into faster iteration and shorter time to completion. Load times for large files, for example, have dropped from 10 or 20 minutes down to 2 or 3 minutes, according to Dale.