In a previous article (Object-based audio – immersive experiences and personalisation) we explored the possibilities that working with object-based audio affords in the areas of immersive experiences and personalisation.

Before these wonderful experiences can reach audiences, they must be created. And for that to happen, creators need approaches, workflows, tools, and containers. As with any bleeding-edge technology, user-friendly tools are limited, but that is starting to change.


Dolby Atmos: supported by the Dolby Atmos Content Creation Solutions

Dolby Atmos Production Suite tools

Dolby Atmos authoring is supported by Dolby’s own Atmos Content Creation Solutions, and industry-standard platforms such as Pro Tools, Logic Pro, DaVinci Resolve, Ableton Live and Nuendo have included Atmos authoring in their latest versions.

Fraunhofer IIS has released an MPEG-H Authoring Suite, and Salsa Sound demonstrated its MixAir AI-driven audio mixing solution at the IBC2022 Show in September.

Dolby Atmos: Production Approach

Once these new tools get into the hands of production, the hard work begins – creating a compelling experience for the audience.

As discussed in Part One, capturing sound correctly is essential. There is a wide variety of approaches, from the Hamasaki System to mid-side arrays, and as Charlie Morrow of Morrowsound emphasised, “with a lot of the capture methods, whether it’s using DPA microphones in your ears or whether you’re using a special soundfield microphone, each one of those devices is like a musical instrument; it creates a certain amount of data that can be unpacked in different ways.”

Audio consultant and producer Tony Churnside emphasised the importance of planning how you want to unpack that data when creating an immersive mix for music. “Produced pop music is purely written for the stereo space,” he said. “It’s an artificial environment; you couldn’t stand in a practice room or a venue and recreate that space. It’s made up in the mind of the recording engineer and the producer.

“And if you’re going to try to create something that is fully immersive, you need to start from that producer’s head and not think about things in terms of bass drum, guitar, keyboard, overheads – you’ve got to start thinking in terms of a whole space. It’s easy for orchestras because you’re just going to make it sound like you’re in an amazing concert hall.”

Rupert Brun, a technical consultant who works with Fraunhofer IIS among others, pointed out that an immersive experience can be created through simple enhancements to existing approaches. “We have found that even when you’re producing a broadcast with sound that you don’t have control over, like at a live concert or in a football stadium, you can create immersive sound with just four more microphones to capture the height,” he said. “We’ve done this at the European Athletics Championships and at the Royal Albert Hall.”

Watch more Changing state of video and audio technologies

Dolby Atmos: Existing workflow integration

Brun discussed the systems on the market that can be integrated with existing workflows. “If you’re using MPEG-H, there are currently two different hardware devices that can be used for authoring,” he said. “They have big friendly broadcast-type buttons on them. There are also plugins available for several workstations, and bespoke authoring tools available free of charge. Salsa Sound’s MPEG-H authoring plugin is drag and drop. They’ve got under the bonnet of the MPEG-H standard and have removed all of the complexity from the user interface. The software was written with the idea that the user should never ever see an error message, because the user interface should guide you to do things in the right way and only do the things you’re allowed to do.”
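
The “no error messages” philosophy Brun describes is, in software terms, validation by construction: the interface only offers choices that are legal, so there is nothing left to reject. As a minimal, hypothetical Python sketch of the same idea (not Salsa Sound’s plugin or any real MPEG-H tooling), the tool might constrain inputs to enumerated, pre-validated options so an invalid configuration simply cannot be expressed:

```python
from enum import Enum

# Hypothetical sketch of "the UI should never show an error message":
# rather than validating free-form input and complaining afterwards,
# only legal choices are exposed, so invalid states cannot be built.
# Illustrative only -- not Salsa Sound's plugin or the MPEG-H toolchain.

class Loudness(Enum):
    EBU_R128 = -23.0   # integrated loudness target (LUFS)
    ATSC_A85 = -24.0

class BedLayout(Enum):
    STEREO = "2.0"
    SURROUND_5_1 = "5.1"
    IMMERSIVE_5_1_4 = "5.1+4H"

def author_bed(layout: BedLayout, loudness: Loudness) -> dict:
    """Build an authoring configuration from pre-validated choices only."""
    return {"layout": layout.value, "target_lufs": loudness.value}

print(author_bed(BedLayout.IMMERSIVE_5_1_4, Loudness.EBU_R128))
# {'layout': '5.1+4H', 'target_lufs': -23.0}
```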

Dolby Atmos: Delivery

Creating a good mix for audiences of today is a significant challenge thanks to the wide variety of systems that your material will play on. However, in the current landscape of streaming, with its huge demand for what was once called “archive,” creators also must plan their mixes for audiences of the future.

Read more Immersive audio: consumer tech catches up with production potential

Churnside described it as a process of refining a mix down to its most important elements. “The beginning is massive and there’s loads of channels,” he said. “As you move through the production chain and limit the scope of experiences that you’re designing for, it gets slimmer and slimmer until your final container format doesn’t have enough data to recreate the original multichannel representation, but enough to allow you to create different experiences, whether they’re interactive or immersive. It’s potentially future-proofing some of the content you’re creating for when people get better listening environments.”


Dolby Atmos Streaming Audio

Source: Dolby

Morrow spoke about his experiences working at the Record Plant in New York, and a trick they would use to test how well a mix translated to different playback environments. “They would use a radio transmitter that could transmit for a block or two and they drove around in traffic in an open convertible,” he said. “If it sounded good in traffic in an open convertible in New York City, they figured, that’s the one we’re going to put on the lathe, and we’ll cut that tune. So, what I learned in my early years in this business was the concept of translatability.”

Dolby Atmos: Flexibility and creative controls

Brun addressed the concerns many creators have about adding interactive elements to their work. “A lot of producers are worried about letting the audience mess with their finely crafted sound balance,” he said. “It’s important to understand that with MPEG-H, the audience can only change things you’ve decided they can change. There are software tools that allow you to quite simply say, these three things are language tracks – the user can switch between them, but you can’t have more than one at a time.”
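
To make that concrete, here is a minimal, hypothetical Python sketch of the switch-group idea: exactly one language object can be active, and the listener can only choose from what the producer authored. It is purely illustrative and not the MPEG-H authoring API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "switch group" idea described above: the
# producer decides which objects the listener may toggle, and the rule
# (exactly one language at a time) lives in the authored metadata.
# Illustrative only, not the MPEG-H authoring API.

@dataclass
class AudioObject:
    name: str
    kind: str                  # e.g. "dialogue", "music", "effects"
    language: str | None = None

@dataclass
class SwitchGroup:
    """A set of objects of which exactly one may be active at a time."""
    name: str
    members: list[AudioObject]
    default_language: str

    def select(self, language: str) -> AudioObject:
        for obj in self.members:
            if obj.language == language:
                return obj
        raise ValueError(f"'{language}' was not authored into this mix")

commentary = SwitchGroup(
    name="Commentary",
    members=[
        AudioObject("Commentary EN", "dialogue", "en"),
        AudioObject("Commentary DE", "dialogue", "de"),
        AudioObject("Commentary ES", "dialogue", "es"),
    ],
    default_language="en",
)

print(commentary.select("de").name)   # Commentary DE
```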

He also discussed the budgetary benefits this approach brings. “This doesn’t necessarily increase cost and complexity for the content creator,” he said. “Although it undoubtedly takes a little bit more thought and effort to work with objects, it does make reversioning a lot easier. If you’ve authored something where the language track has been kept separate all the way through, and you want to produce a version with a different language, then it becomes comparatively trivial to do that, because you just add another language to the MPEG-H that you’ve already created. You can easily and efficiently add a different language if you wish to, or make changes to the dialogue for editorial reasons without having to remix everything else, because you’ve kept it separate.”
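
Under the same hypothetical sketch, the reversioning benefit Brun describes becomes one extra object added to the authored switch group, while the music and effects stems are never touched:

```python
# Continuing the illustrative sketch above: adding a French version later
# means authoring one extra dialogue object, not remixing the programme.
commentary.members.append(AudioObject("Commentary FR", "dialogue", "fr"))
print([obj.language for obj in commentary.members])   # ['en', 'de', 'es', 'fr']
```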

Finally, Brun emphasised the use of open standards as a way to future-proof productions. “Of course, if you author in MPEG-H, then there’s no guarantee that in 50 years’ time, you’ll be able to open that,” he said. “But the chances are good because it is an open standard. Also, if you have archived in an ADM format, then it’s extremely likely you’ll be able to open it because that’s a completely open standard.

“It might be very tempting to create it all in ADM, but what might be a better approach would be to author and create the content using MPEG-H and the tools that come with it because then you’ve got a set of off the shelf tools that will work together and then convert it to ADM for archive so that you’ve got your future-proof format. If you’ve created it using MPEG-H, you will be able to store it as ADM without loss. The opposite isn’t necessarily true.”
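
As a very rough illustration of that archive step, the sketch below writes an object-based programme out as open, self-describing XML. The element names are loosely borrowed from the ADM specification (ITU-R BS.2076), but this is a simplified, hypothetical fragment, not a valid ADM document or a real conversion tool:

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical sketch of exporting object metadata as open XML
# for archive. Element names loosely follow the ADM (ITU-R BS.2076); this
# is NOT a valid or complete ADM document -- real workflows use dedicated
# MPEG-H-to-ADM conversion tooling.
objects = [
    {"id": "AO_1001", "name": "Commentary EN"},
    {"id": "AO_1002", "name": "Music bed"},
    {"id": "AO_1003", "name": "Crowd effects"},
]

root = ET.Element("audioFormatExtended")
ET.SubElement(root, "audioProgramme", audioProgrammeName="Example programme")
for obj in objects:
    ET.SubElement(root, "audioObject",
                  audioObjectID=obj["id"], audioObjectName=obj["name"])

ET.ElementTree(root).write("archive_metadata.xml",
                           encoding="utf-8", xml_declaration=True)
print(ET.tostring(root, encoding="unicode"))
```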

Watch more Dolby Atmos: The evolution of audio from mono to Dolby Atmos