GenAI is increasingly being used to create new types of media content, personalise content for users, and improve the efficiency of the media production process. However, there are many challenges to overcome, on both the technical and ethical sides, writes Mark Mayne.
While GenAI is the flavour of the month, potentially offering considerable improvements in the speed and ease of creative workflows, there are also many emerging challenges around protecting IP, as well as wider ethical questions, as an expert panel from Microsoft and Paramount unpicked in some detail at IBC2023.
Watch the session ‘GenAI as a growth engine for content innovation’ now, or read on for some of the highlights…
Anthony Guarino, EVP Global Production and Studio Technology, Paramount, was keen to place context around the conversation at an early stage. “Generative AI isn’t a standalone, certainly the way we look at it. It’s a capability to assist humans, to help augment the creative process. It’s all about putting those AI tools in the hands of our creatives, our operators, our artists, and allowing them to work with the tools, get familiar, figure out what they like, what they don’t like. Then our role is to support them, make sure that they have those capabilities available, and that we have an infrastructure and good partnerships around that to really enable that ecosystem.”
GenAI: Powering Creativity
Andy Beach, CTO Media & Entertainment, Microsoft Industry Solutions, contributed a view from the workflow trenches, drilling down into the key emerging applications for Generative AI in broadcast and beyond: “[It’s] where you can tag together multiple pieces of AI technology to create a pipeline that does something specific - like using natural language processing to get a transcript of a live announcer talking about something, then, as you’re feeding that into a system, bringing up highlights from an archive and dropping them into a vision mixer in real time. That was all work that took lots of effort and [required a lot of] pre-planning and staging, and now you could make it something that happens automatically, off the cuff.”
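The chained pipeline Beach describes - live transcript in, matching archive clips out - can be illustrated with a minimal, entirely hypothetical sketch. The archive, tags and keyword matching below are invented stand-ins for real speech-to-text and media asset management systems, not any actual broadcast API:

```python
# Hypothetical sketch: transcript text -> keyword extraction -> archive
# lookup -> candidate clips for the vision mixer. Toy data throughout.

# A toy "archive": clip IDs tagged with the topics they cover.
ARCHIVE = {
    "clip_017": {"penalty", "goal"},
    "clip_042": {"interview", "manager"},
    "clip_103": {"goal", "celebration"},
}

def extract_keywords(transcript: str) -> set[str]:
    """Stand-in for an NLP step: naive word matching against known tags."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    known_tags = set().union(*ARCHIVE.values())
    return words & known_tags

def find_highlights(transcript: str) -> list[str]:
    """Return archive clips whose tags overlap the live transcript."""
    keywords = extract_keywords(transcript)
    return sorted(clip for clip, tags in ARCHIVE.items() if tags & keywords)

print(find_highlights("And that's a stunning goal! What a celebration."))
# prints ['clip_017', 'clip_103']
```

In a production system each stage would be a separate service (speech recognition, entity extraction, archive search), but the shape - stages chained so one model’s output drives the next lookup - is the point Beach is making.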
Beach continued to elaborate on the potential benefits of GenAI: “As part of production, whether it’s live or a movie or something else, you’re creating tons of data. Often that data has lived in silos, or it’s been unstructured in a way that you haven’t been able to tap into it. What generative AI is allowing us to do is pull insights and connect those silos together, which is enhancing both the internal workflows and the things that you can offer up to the customer. For example, if you could suddenly tie the metadata from the archive to the usage, the behaviours and what customers like, you can get much more granular about what you offer them. All of that existed before, but it was too much effort to try and organise the data to get it done. And now AI is helping us solve that problem…”
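Connecting those silos is, at its simplest, a join between archive metadata and viewing behaviour. The sketch below is purely illustrative - all titles, field names and the matching rule are invented for the example, not drawn from any real catalogue or recommendation system:

```python
# Hypothetical sketch of joining two silos: archive metadata and a
# customer's viewing history, to make a more granular offer.

archive_metadata = {
    "title_1": {"genre": "drama", "era": "1990s"},
    "title_2": {"genre": "sci-fi", "era": "2020s"},
    "title_3": {"genre": "drama", "era": "2020s"},
    "title_4": {"genre": "drama", "era": "1980s"},
}

viewing_history = ["title_1", "title_3"]  # what one customer watched

def recommend(history: list[str], metadata: dict) -> list[str]:
    """Suggest unwatched titles sharing a genre with the viewing history."""
    liked_genres = {metadata[t]["genre"] for t in history if t in metadata}
    return sorted(
        t for t, meta in metadata.items()
        if t not in history and meta["genre"] in liked_genres
    )

print(recommend(viewing_history, archive_metadata))
# prints ['title_4']
```

The join itself is trivial once the data is organised; Beach’s argument is that organising it - tagging the archive consistently in the first place - is the work GenAI now makes affordable.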
Guarino added some additional context from a Paramount perspective: “We look at generative AI in two camps. You have your creative assist tools. It’s not necessarily about efficiency. It’s about exploring what’s possible creatively, and sort of speeding up some of that iterative process. On the other hand, you have productivity enhancement tools, and this uses a lot of the traditional AI that’s been around for years. It’s already embedded in things like colour correction systems and editorial systems, but I think we’re going to be seeing a lot more of that now.
“We’ve found that when you apply a generative AI capability to a repetitive manual task, you’ll see significant savings, for example, in an editorial process, when we’re trying to insert a character into a scene. [Inserting a] digital character into a scene would normally take five or six hours of editorial - we can now do that in about 10 minutes. That’s the scale of those productivity enhancements we’re seeing.”
GenAI: The Question of IP Risk
However, as with any informed conversation around AI, the panel was keen to make clear that there are risks around the use of GenAI that need to be mitigated, both today and tomorrow.
“It’s a real challenge. You know, we’re in the content creation business. Almost everybody here is in some way, directly or indirectly, connected with that. Our product is our intellectual property, and intellectual property needs to be safeguarded. There are tons of opportunities for it to be pirated.”
A central challenge for users of the many public AI models is that data inputs may be stored and fed back into the training dataset to improve future results. This can prove problematic if sensitive IP is involved.
Beach was especially aware of the issue: “Introducing our intellectual property into public models is somewhat problematic, because then that becomes part of the training set of those public models. While it may be one of a trillion data points in some cases, in other cases it may represent a significant mass of what is eventually generated and synthesised into the final output. That’s a problem for intellectual property owners.
“We’re trying to take a proactive view of making sure we understand that [scenario] fully - slowing our employees down from just going out, acquiring public tools on their own and uploading our content into them. This involves establishing guidelines and policies within the company, and what we’re looking at is how to establish private models, or some hybrid form of private model, so that at least the inputs that go into training them are kept private, and then maybe they’re supplemented with public data after that to create the final output.
“It’s the guidelines, the policies, control over your data models, and sort of privatising that to the extent you can. But I think we’re all eager to learn what the limitations of a privately trained model are. Our common understanding is that you need something like 100 million data points to train a model effectively. Well, how many [private] libraries have 100 million data points? So that’s something we’re all going to find out over the next few months…”
GenAI: Ethical Debate Just Beginning
It is certain that Beach is right: the ethical debate around GenAI and IP is only just beginning, as a current US court case demonstrates. Google has been hit with a class action lawsuit claiming that the company’s scraping of data to train generative artificial-intelligence systems violates millions of people’s privacy and property rights. Google told the court that the lawsuit could potentially “take a sledgehammer not just to Google’s services but to the very idea of generative AI.”
“Using publicly available information to learn is not stealing,” Google told the court. “Nor is it an invasion of privacy, conversion, negligence, unfair competition, or copyright infringement.”
The case - and the wider debate - continues…