ABSTRACT

High-end media projects require geographically diverse talent as well as powerful hardware and software.

Fluctuating workloads, unpredictable budgets, and globalized production teams have prompted many companies to adopt hybrid workflows where parts of the production pipeline, such as long-term storage and processing, are moved to the cloud.

Many cloud providers have on-demand, pay-as-you-go services that map to hybrid production models and variable workloads, but this solves only part of the problem.

Many media companies still buy expensive hardware to handle parts of the pipeline on-premises. This includes synchronizing large media files, using high-performance storage, and providing employee workstations.

This paper addresses these barriers to fully embracing the cloud by presenting an end-to-end media production pipeline built entirely in the cloud.

Learn how to combine the latest advances in application streaming, high-performance storage, clustered media processing, and high-speed file delivery to form a complete pipeline in the cloud that lowers costs and meets the needs of today’s professional media projects.

INTRODUCTION

Many media companies are moving some of their video and visual effects workflows to the cloud, but leaving some components—such as primary data storage and editing workstations—on-premises.

Recent developments, including accelerated file transfers; high-quality application streaming for GPU-accelerated software; high-performance, shared storage in the cloud; automation of cloud-based, distributed media file processing jobs; and flexible licensing are removing most barriers to adopting end-to-end media solutions in the cloud.

This paper explores techniques for building cloud-based, high-quality media workflows that are designed to lower costs, reduce operational complexity, and increase productivity.

TRANSFERRING DIGITAL MEDIA TO AND FROM THE CLOUD

File sizes continue to grow every year as new technologies and consumer trends emerge. These files often carry higher bitrates and resolutions, such as 4K and, more recently, 8K (also known as FUHD).

A five-minute HDTV video can result in a file size of roughly 2.5 GB when packaged as MXF OP1a with MPEG-2 and PCM encoding.
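As a rough check on that figure, the sketch below estimates the size from assumed stream rates: a 50 Mbit/s MPEG-2 video essence and eight channels of 24-bit/48 kHz PCM audio. The exact rates vary by encoding profile, and the MXF wrapper adds further overhead, which accounts for the gap between this estimate and the quoted figure.

    # Back-of-envelope estimate for a five-minute HDTV clip.
    # Assumed rates: 50 Mbit/s MPEG-2 video, 8 channels of 24-bit/48 kHz PCM.
    video_bps = 50e6
    audio_bps = 8 * 24 * 48_000          # ~9.2 Mbit/s of PCM audio
    duration_s = 5 * 60

    total_bytes = (video_bps + audio_bps) * duration_s / 8
    print(f"~{total_bytes / 1e9:.1f} GB before MXF wrapper overhead")  # ~2.2 GB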

This size is modest compared to files of similar lengths that have scenes with complex visual effects.

Large file sizes and the associated time and cost of file transfer present significant challenges for organizations that want to adopt cloud-based media workflows, particularly if their editing workstations, primary storage, and other components are located on-premises.

For example, a visual effects artist may use compositing software on a local workstation, but then want to perform the final render on a remote cluster of multiple virtual machines.

Traditionally, FTP would be used to transfer the required assets from the artist’s workstation to the remote render farm; however, given the large file sizes associated with today’s media projects, significant network bandwidth is required to transfer the assets quickly enough to meet production deadlines.

FTP also performs poorly when transferring files over long distances because of the overhead in how TCP handles window sizing and packet retransmission on high-latency links. Security is also a concern because FTP transmits passwords in clear text.

A number of alternatives to FTP have emerged in recent years in response to the need for higher throughput and for more reliable, secure data transfers.

They employ a mix of techniques to improve performance and security: encryption, parallel data transfer streams, data compression, disabling congestion control, de-duplication, and the use of the UDP protocol.

Multi-Part, Parallelized Uploads

Amazon Simple Storage Service (Amazon S3) is an example of a cloud-based data storage service that facilitates large file transfers with a multi-part upload API (1).

Clients and SDKs that support this API automatically open a number of parallel HTTPS connections to upload a single file as a set of unordered parts. Retransmission of failed parts is handled transparently.

This results in higher data transfer throughput, quick recovery from any network issues, and the ability to begin an upload before you know the final file size.
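A minimal sketch of driving this API with the AWS SDK for Python (boto3) follows; the bucket name, object key, and part size are illustrative placeholders. In practice, boto3’s managed transfer layer (for example, upload_file with a TransferConfig) performs the same parallel, multipart behavior automatically.

    from concurrent.futures import ThreadPoolExecutor

    import boto3

    s3 = boto3.client("s3")

    BUCKET = "example-media-bucket"    # hypothetical bucket name
    KEY = "renders/shot_042.mxf"       # hypothetical object key
    PART_SIZE = 100 * 1024 * 1024      # 100 MB parts (S3 minimum is 5 MB)

    def upload_part(upload_id, part_number, data):
        # Parts upload independently over separate HTTPS connections;
        # a failed part can simply be retried without restarting the file.
        resp = s3.upload_part(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                              PartNumber=part_number, Body=data)
        return {"PartNumber": part_number, "ETag": resp["ETag"]}

    def multipart_upload(path):
        upload_id = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)["UploadId"]
        futures = []
        with open(path, "rb") as src, ThreadPoolExecutor(max_workers=8) as pool:
            part_number = 1
            # For brevity, parts are buffered in memory before upload.
            while data := src.read(PART_SIZE):
                futures.append(pool.submit(upload_part, upload_id, part_number, data))
                part_number += 1
            parts = [f.result() for f in futures]
        # Parts may complete out of order; S3 reassembles them by PartNumber.
        s3.complete_multipart_upload(
            Bucket=BUCKET, Key=KEY, UploadId=upload_id,
            MultipartUpload={"Parts": sorted(parts, key=lambda p: p["PartNumber"])})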

TCP Alternatives

Because HTTPS is a TCP-based protocol, it can suffer from some of the same performance issues as FTP when used over long distances or poor network connections.

Parallel transfers, properly tuned TCP window settings, removal of congestion control, and other techniques can mitigate this; however, these modifications are beyond the control or comfort level of many users.
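To illustrate the window-tuning point, the sketch below requests larger per-socket TCP buffers from Python. The 16 MB figure is an assumed value sized from the bandwidth-delay product, and the operating system’s own limits must also permit it.

    import socket

    # A gigabit path with 100 ms round-trip time needs roughly
    # 1e9 bits/s * 0.1 s / 8 = ~12.5 MB in flight to stay full,
    # so we request 16 MB buffers (an assumed, illustrative figure).
    BUFFER_SIZE = 16 * 1024 * 1024

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUFFER_SIZE)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUFFER_SIZE)

    # The OS may silently cap these requests; on Linux, sysctl limits such as
    # net.core.wmem_max and net.core.rmem_max must be raised to match.
    print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))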

As a result, protocols built on UDP have grown popular in the media industry for transferring large digital files.

There are many solutions available, but for those users who want to move assets to Amazon S3, there are only a handful of commercially supported offerings that integrate with its APIs: Aspera (2), Signiant (3), File Catalyst (4), and Data Expedition (5).
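Although each vendor’s protocol is proprietary, a common idea is rate-based pacing over UDP in place of TCP’s window-based congestion control. The toy sender below illustrates only that pacing idea; the target rate, receiver address, and file name are hypothetical, and real products layer reliability, retransmission, flow control, and encryption on top.

    import socket
    import time

    TARGET_MBPS = 200                  # assumed target send rate
    CHUNK = 8192                       # payload bytes per datagram
    DEST = ("203.0.113.10", 9000)      # hypothetical receiver address

    # Seconds between datagrams to hold the target rate.
    interval = CHUNK * 8 / (TARGET_MBPS * 1e6)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    with open("asset.mxf", "rb") as src:          # hypothetical source file
        seq = 0
        while chunk := src.read(CHUNK):
            # A sequence number lets the receiver detect lost datagrams and
            # request retransmission (the receiver side is omitted here).
            sock.sendto(seq.to_bytes(8, "big") + chunk, DEST)
            seq += 1
            time.sleep(interval)       # rate-based pacing, independent of RTT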
