Camera to Cloud (C2C) could represent the biggest change in video production since the shifts from film to tape and from tape to digital filmmaking.

It’s a workflow that enables filmmakers to export original footage into a post environment as soon as it is recorded. Rushes can be reviewed, edited or otherwise manipulated, and fed back to the set within minutes - saving time and therefore money, and enhancing creative decision making.

Exporting footage from studios or locations could revolutionise the creative process

C2C completes the move from a physical workflow to one that’s completely digital. It’s been technologically possible for several years but has remained dependent on the speed of internet connections and hampered by general reluctance to change.

Internet access still constrains adoption, but the need to cater for Covid-19 safety protocols has shaken the industry’s inertia. In recent months, remote workflows have become a staple of editing, VFX and colour grading, and review and approval, with craft talent located away from fixed premises and free to work from anywhere.

Now the industry can go a step further and open up collaborative connected workspaces live from location.

“Camera to Cloud breaks down the barriers of time and distance,” says Michael Cioni, Global SVP of Innovation at Frame.io. “What was once a linear process of shooting and waiting for footage to be processed is now a parallel process. With Camera to Cloud, your creative team can work together collaboratively without waiting to exchange any physical media.”

Chuck Parker, CEO, Sohonet, says: “The concept of a cloud-based platform which can enable even an iPhone to operate as an editing suite has gained even greater urgency with Covid-19. Camera to Cloud can keep remote teams truly connected and in sync whether they’re shooting on a second unit down the road or sitting in an edit suite thousands of miles away.”

What’s Camera to Cloud good for?

Several benefits are claimed for Camera to Cloud. Perhaps the most important is the ability to get shots into the hands of editorial without waiting for the dailies process. In practice this means a proxy such as DNxHD 36 or DNxHD 115 - something the editor can slot straight into their workflow.

“Dailies have been the quickest route to on-set creative decision making for decades but it is called dailies for a reason,” says Parker. “Footage is processed often overnight and returned to set – sometimes you’re waiting 24 hours. That’s no longer efficient for the pressures of modern production, especially when there is an alternative on tap. The cost of reshoots and additional travel to and from location can easily be 15-20% on the bottom line. Anything that helps reduce the cost of pick-up shots by enabling instant creative decision making is of tremendous value.”

During production of 2020 action feature Unhinged starring Russell Crowe, director Derrick Borte and DP Brendan Galvin used a C2C solution to cut the shoot time allocated to a major car chase sequence in half.

“Normally, the crew would have to go back to the video village to get notes then reset the scene,” notes Hugh Calveley, Co-founder, Moxion. “Using Moxion they were able to reference the footage of their run as they were resetting, along with notes, and cut the schedule in half.”

He continues: “A key part of keeping the art of filmmaking fluid and creative is having the ability to receive immediate feedback, to make a decision, refine judgements and to keep going. The value will show on the screen with better pictures and better stories.”

Similar workflows can be applied to any key department head. Art directors or executive producers, for example, unable to get onto set because of limits on the number of people permitted, can review progress remotely. C2C further enables them to work at a location of their choosing, or on the go, maximising their own time while reducing travel costs.

“Off-set creatives can be in direct contact with those on-site, providing live feedback that negates the need to go back and forth later down the line,” says Parker. “So your teams can feel connected to life on-set with an over-the-shoulder collaboration experience.”

What local bandwidth connection speeds are necessary?

Frame.io says Camera to Cloud workflows break down barriers

According to Cioni, if you can make a phone call from where you’re shooting, you can probably shoot C2C. He explains that Frame.io’s system lets you throttle the quality of files up or down based on available network bandwidth.

“At 2Mbps, 1080/24p, one hour of content is about 1GB of total footage. Since crews typically shoot between two and four hours of footage a day (or 2-4 gigabytes of C2C proxies), they can easily upload all the media spread across the shoot day,” he says.

“Ideally, having anything more than 10Mbps upload will result in offsite collaborators having access to clips within a minute or two of the take. When higher bandwidth is available, takes are available within seconds. Even a one-hour interview with 10-20Mbps of upload bandwidth can be fully transmitted in less than six minutes, so the post-production and transcription processes can begin while the crew is still wrapping.”
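
Those figures are easy to sanity-check. The following is a minimal back-of-envelope sketch (in Python, using the round numbers quoted above plus assumed upload speeds) that converts a proxy bitrate and shoot duration into file size and upload time:

```python
# Back-of-envelope maths for C2C proxy uploads. Bitrates, durations and
# upload speeds are illustrative round figures; real encoders and networks
# add container overhead, retries and variable throughput.

def proxy_size_gb(bitrate_mbps: float, hours: float) -> float:
    """Size of proxy footage in gigabytes for a given bitrate and duration."""
    megabits = bitrate_mbps * hours * 3600    # total megabits recorded
    return megabits / 8 / 1000                # megabits -> megabytes -> gigabytes

def upload_minutes(size_gb: float, upload_mbps: float) -> float:
    """Time to push that footage at a given sustained upload speed."""
    return size_gb * 8000 / upload_mbps / 60  # GB -> megabits -> seconds -> minutes

print(f"{proxy_size_gb(2, 1):.2f} GB per hour of 2Mbps proxy")                          # ~0.90 GB
print(f"{upload_minutes(proxy_size_gb(2, 1), 20):.0f} min to upload at 20Mbps")         # ~6 min
print(f"{upload_minutes(proxy_size_gb(2, 4), 10):.0f} min for a 4-hour day at 10Mbps")  # ~48 min
```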

For higher quality files like H.265 4K, the same principles apply. 10-20Mbps is going to be enough for C2C to work “extremely efficiently – enough to eliminate any sense of delay”, Cioni says.

Sohonet is on the same page, recommending a minimum of 1Gbps both up and down to accommodate small and medium-sized productions, with larger tentpole shows often deploying 10Gbps and beyond for their off-set workflows.

While there is a lot of buzz around the camera itself pushing directly to the cloud, it’s not an efficient model. According to Parker, this is more likely to occur in small budget productions, B units off-lot, or production of commercials where there are only one or two cameras.

“Most meaningful productions for episodic and features have multiple cameras and are typically using WiFi spectrum to push the data from the camera to the video village where the footage is cached,” he says. “The director, producer, DoP, DIT, etc, all want to have a look at the take(s) before agreeing to move forward with the next scene or shot.

“Additionally, large shoots employ video assist software to provide for better shot management relative to metadata, multiple camera feeds, scene in/out, etc. Pushing all of the video through a common shot manager is the most common practice for medium and large productions.”

RAW workflows - not so fast

This is all fine for the vast majority of workflows, which use proxy video, but anyone wanting to push camera RAW (Original Camera Files, or OCF) faces an uphill task today.

Moving OCF reliably requires more like 1,000Mbps. OCFs are not only the largest data payload but also the least time-sensitive. Today, OCFs do not come directly from the cameras; instead they are pushed to a local staging environment (on-set or near-set storage forming part of the video village).
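
To see why the jump from proxies to OCF is so steep, here is a rough comparison using the same arithmetic; the 150MB/s OCF data rate is purely an assumption for illustration, since real camera data rates vary widely with format, resolution and frame rate:

```python
# Illustrative proxy vs OCF payload comparison. The OCF data rate below is
# an assumed figure, not a measurement of any particular camera.

PROXY_MBPS = 2          # H.264 proxy bitrate cited earlier in the article
OCF_MB_PER_SEC = 150    # assumed original camera file data rate
HOURS_SHOT = 3          # assumed footage recorded in a shoot day

proxy_gb = PROXY_MBPS / 8 * HOURS_SHOT * 3600 / 1000    # ~2.7 GB
ocf_gb = OCF_MB_PER_SEC * HOURS_SHOT * 3600 / 1000      # ~1,620 GB

for label, gb, link_mbps in (("proxy at 20Mbps", proxy_gb, 20),
                             ("OCF at 1Gbps   ", ocf_gb, 1000)):
    hours_to_upload = gb * 8000 / link_mbps / 3600
    print(f"{label}: {gb:,.0f} GB, ~{hours_to_upload:.1f} h to upload")
```

Even on a dedicated 1Gbps line, a day of RAW takes hours to move, which is why OCF stays on local storage while the proxies go up first.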

“When it comes to transmission, the ability to get OCF directly to the cloud is limited today, but we are starting to see it happen,” Cioni says. “Companies like Sohonet are working with studios to install ultra-high-speed network connections that enable DITs to upload OCF right from set. Currently, those transmissions can’t be done wirelessly because wireless networks still lack the appropriate bandwidth.”

But there is one more thing to solve before the wireless networks catch up: today’s cameras are not designed to get OCF to the cloud without the files first being offloaded. Cioni predicts that cameras will be developed to allow direct access to OCFs so they can be uploaded from the camera itself.

“The first step is that camera manufacturers will have to create that technology, and as they make headway, the telecom solutions will continue to increase bandwidth (hardline or wireless) to allow for connections that move OCF right to the cloud,” he says.

Both Parker and Cioni agree that we’re 5-7 years away from average bandwidth utilised on set being suitable for RAW transfers to the cloud, with shooting OCF to the cloud becoming the norm by 2031.

The impact of 5G

Camera to Cloud is not predicated on the rollout of 5G. “Today, LTE and general WiFi hotspots allow ample bandwidth to move files compressed as H.264 up to the cloud,” Cioni says.

5G will, however, help boost adoption of C2C by decreasing bandwidth dead zones and increasing internet access points. Satellite internet will also further widen the network reach to reduce dead zones in more remote locations.

“5G will be great for reality TV shows where the camera(s) are following actors in a major city,” Parker says. “This happens today with 4G/LTE and 5G will make this experience much better for filmmakers on the move in major metropolitan areas.”

“But 5G only has a range of 1000-2000ft and the millimetre wave technology is disrupted quite easily - even the metal walls of a sound stage create a problem,” he says.

For 90% of productions, Parker thinks Wi-Fi 6 is the more likely development, offering all of the throughput promises of 5G without a telco in the middle of the business model.

The increased bandwidth of 5G does, however, increase the amount of data, and therefore the quality, that C2C workflows can carry. With 5G, 4K 10-bit files become possible (and eventually OCF), meaning recipients on the same network can receive higher quality files.

Frame.io has an all-you-can-use business model with cloud egress fees included.

C2C vendors

Camera to Cloud workflows are not the domain of any one vendor but of a suite of interlocking technologies. In March this year, Frame.io launched Frame.io C2C, which certifies a number of products to work with its central asset management software. Other vendors market similar technology: Sohonet, for example, offers ClearView Flex for streaming live, encrypted video from camera, via HDMI or SDI, to up to 30 viewers with sub-100ms latency.

Sohonet presents this as a C2C tool alongside Immediates from Auckland-based developer Moxion, a non-real-time solution for viewing HDR and Dolby Vision footage off set.

Cloud storage developer LucidLink is also promoting its ability to transfer OCF – not proxies – direct to the cloud where the media is available to users with a LucidLink client installed on their machine.

The number of technology partners providing product to plug into these workflows is growing rapidly. It includes wireless camera encoders/transmitters from Teradek, video assist tools like QTake, portable mixer-recorders from Sound Devices and grading/edit software from Blackmagic Design, Adobe and Colorfront.

Who’s handling the network on set? 

Even when the internet and network are provided by the location – whether that’s a stage, office, or practical location – someone on the crew needs to make sure that the on-set devices stay online.

Equipment like modems, routers, meshpoints and antennas must be set up and maintained. Security needs to be established and monitored. Client devices like computers, streaming boxes and cameras need to be connected and provisioned.

Managing that network is a big job. All of these responsibilities would normally fall under the role of an IT department – but there’s no dedicated IT department on set.

Frame.io technology specialist Robert Loughlin rules out the two obvious choices, the DIT in the camera department and the VTR operator in the video department, as being too busy to take on more responsibilities.

“That’s why it might be time to think about having a dedicated production network manager on the set,” he says. “They could be involved with the production from the scouting stage to advise on what the possibilities, limitations and requirements are for a given shooting location and can then put together the right package of tools to ensure the production has what it needs to reliably work on the internet.”

Then, once production starts, they can maintain and monitor the network, making sure the right devices get connected and stay connected.

Teradek supports Camera to Cloud workflows

New on-set gear

Regardless of whether that on-set network manager falls into an existing role or becomes a new one, they’ll need the right tools to get the job done.

“Production needs its own versions of modems and routers that fit the unique and specific needs of the set,” says Loughlin.

Heat management, battery power (based on standards like Gold Mount and V-Lock), mountability, portability, durability and reliability are all important factors.

“This also opens up new opportunities for production gear manufacturers to grow and develop a new segment of the market and is something rental houses could explore,” he says.

Who is using C2C already?

The first notable production to use Frame.io C2C was Catchlight Studios’ Songbird, the first union-crewed film to go into production during Covid, in July 2020. Other use cases include red carpet coverage of the 63rd Grammy Awards, transmitted from Los Angeles to London, and documentary filmmakers using C2C to derive transcripts quickly from set so the editing process can begin immediately.

Sohonet claims up to 20 projects have used ClearView Flex on set, with overall adoption of Camera to Cloud workflows slow but gaining speed.

Gaps in the workflow

Nonetheless, there are gaps in the workflow. Perhaps the trickiest is the ability for a production to access media assets regardless of which cloud they are stored in. This would fulfil MovieLabs’ principle of creative applications moving to the data and not the other way around. Right now, though, different facility and technology vendors each have a preferred cloud partner.

“It is very likely that productions will utilise multiple cloud service providers,” Parker says. “The VFX team might use Google, the editorial use Azure and the dailies platform might live in AWS. Each of the major CSPs charges a similar egress fee.

“The reality is that until CSPs soften their approach commercially (by not charging major players), this will continue to slow cloud adoption because of both the absolute cost and the unpredictable (budget blowing) nature of the egress fees.”

Cioni believes the CSP market will have to shift from “exclusivity to malleability”. He explains: “The current problem is that the user wants to leverage their own storage deals (cloud provider A) at the same time as leveraging a cloud processing service (cloud provider B).

“Getting A storage on B processing means both parties have to integrate, and both parties will want to earn on the exchange. This is similar to how ATMs work: there’s no fee when you withdraw money from your bank’s ATM, but if you go to a competitor’s ATM and withdraw money from your bank there’s a fee on the exchange.”

The hope is that as more enterprises use the cloud, that will push cloud companies towards an experience that is essentially storage agnostic: processing can happen with one service while the media is stored wherever is most convenient for the customer. What remains to be seen is what that kind of flexibility will cost.

Cloud costs and data movement

Sohonet, Frame.io and others have an all-you-can-use business model with cloud egress fees included, which may ease the cost problem. For example, streaming from set with ClearView incurs no per-minute or per-GB fee, whereas other on-set providers charge $0.10 per GB streamed, according to Sohonet. The Frame.io model is a flat rate: a set monthly cost per terabyte with no added ingress/egress fees, regardless of use.
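
As a hedged illustration of how per-GB streaming fees can mount up over a shoot, the sketch below uses the $0.10/GB figure cited by Sohonet; the stream bitrate, daily hours, viewer count and shoot length are all assumptions:

```python
# Rough comparison of per-GB streaming charges against a flat-rate model.
# Only the $0.10/GB fee comes from the article; everything else is assumed.

PER_GB_FEE = 0.10       # $ per GB streamed (competitor pricing cited above)
STREAM_MBPS = 8         # assumed review-stream bitrate
HOURS_PER_DAY = 10      # assumed streaming hours per shoot day
VIEWERS = 10            # assumed simultaneous remote viewers
SHOOT_DAYS = 40         # assumed length of the shoot

gb_per_viewer_day = STREAM_MBPS * HOURS_PER_DAY * 3600 / 8 / 1000   # ~36 GB
total_gb = gb_per_viewer_day * VIEWERS * SHOOT_DAYS
print(f"{total_gb:,.0f} GB streamed -> ${total_gb * PER_GB_FEE:,.0f} in per-GB fees")
# Under an all-you-can-use model with egress included, that line item is zero.
```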

 

An additional hazard when moving content between cloud providers is the potential for data loss, delays and even security breaches. It’s an issue that MovieLabs, which represents the big Hollywood studios, is keen to see solved.

“We expect workflow to continue to span multiple cloud infrastructure and we encourage that choice — but the crucial point is that sharing work across various clouds should be seamless so it acts like one big cloud,” says Mark Turner, Program Director, Production Technology, MovieLabs.

As per MovieLabs’ 2030 Vision, this means building a Camera to Cloud strategy that moves the video modification processes (editing, VFX, colour, sound) to the cloud where the video data sits. Then the only egress happens when content is “streamed out” for a live review and approval session – which is 1/100th to 1/1000th of what is being stored in the cloud at that point.
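
As a rough worked example of that ratio, assuming a modest review-streaming load against a typical volume of stored media (all of the figures below are assumptions):

```python
# Sketch of the MovieLabs-style egress ratio: work on the media where it is
# stored and stream out only review sessions. All values are assumed.

STORED_TB = 50          # assumed OCF, proxies and renders held in the cloud
REVIEW_MBPS = 8         # assumed review-stream bitrate
REVIEW_HOURS = 100      # assumed total review/approval hours on the project

egress_gb = REVIEW_MBPS * REVIEW_HOURS * 3600 / 8 / 1000   # ~360 GB
ratio = STORED_TB * 1000 / egress_gb
print(f"~{egress_gb:,.0f} GB egressed vs {STORED_TB} TB stored (roughly 1/{ratio:.0f})")
```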

“This is a big opportunity for cloud companies and I expect there will be a major breakthrough over the next four years, given that cloud storage prices have been relatively flat for the last three years,” Cioni says. “It’s likely that this will change by 2025, when costs per terabyte will begin to decrease.”