A raft of new laws is being drafted in a bid to rein in the increasingly rapid rollout of AI. But what do media technology developers think should be the priorities for legislation, and is it realistic to hope that some of the bleaker prognoses can still be avoided, writes David Davies.

After what may yet prove to be a perilously slow response by governments around the world, the wheels of action have begun to turn on AI legislation. Spearheaded by the EU with its proposed AI Act, but with laws in many individual countries also in prospect, it appears that legislators have finally woken up to the scale – and the enormous potential dangers – of the AI revolution.

Hopes for an agreement on new legislation by the end of this year

But as government departments attempt to wrangle the many implications of AI into workable regulations, one question has not been widely voiced to date: what do technology developers think should be the priorities for legislation? And, by extension, what are their own hopes – and fears – for the future of AI?

With AI already beginning to affect multiple facets of media production and distribution, these questions are as valid in M&E as in any other industry. Moreover, it is an industry in which the malign possibilities of AI – especially in facilitating the spread of disinformation and deepfakes – are particularly acute.

Voicing a sentiment shared by many in the industry, MediaKind principal technologist Tony Jones remarked: “From a media production POV, content verification is the single most important issue in the industry. Disinformation is already widespread, as deep-fake videos become a complex challenge to tackle. Validating that a piece of content is genuine is perhaps the greatest area that regulation can help with, as it can underpin trust in the output of media organisations that are obliged (via regulation) to verify their content.”
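
To make the verification idea concrete, here is a minimal sketch (in Python, and not drawn from any vendor’s product) of the mechanism that underpins such schemes: the publisher signs a hash of the content at publish time, and anyone holding the verification key can later confirm the file has not been altered. The key and workflow shown are illustrative assumptions; a production system would use public-key signatures and a provenance standard such as C2PA rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical shared secret held by the publishing organisation.
SIGNING_KEY = b"example-newsroom-signing-key"

def sign_content(content: bytes) -> str:
    """Return a keyed signature over the SHA-256 digest of the content."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Check that the content still matches the signature issued at publish time."""
    return hmac.compare_digest(sign_content(content), signature)

if __name__ == "__main__":
    original = b"frame data of a verified news clip"
    sig = sign_content(original)
    print(verify_content(original, sig))                # True: untouched
    print(verify_content(b"tampered frame data", sig))  # False: altered
```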

Legislative outlook

Although legislation is now being developed in many countries, it is the proposed EU AI Act – described as “the world’s first comprehensive AI law” – that has generated the lion’s share of headlines so far. The act is currently out for consultation by EU member states, and it is hoped that agreement on the new legislation will be reached by the end of this year.

In a way that no other proposed legislation has done to date, the EU AI Act foregrounds an ethical and sustainable approach to AI regardless of sector or industry. The Parliament’s priority, it states, is “to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. All systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”

The Act proposes to achieve this vision for AI by dividing systems into four main categories. Those deemed an ‘unacceptable risk’ – including real-time and remote biometric identification systems, such as facial recognition, or those which involve the ‘cognitive behavioural manipulation of people or specific vulnerable groups’ – will (with a few possible exceptions) be banned outright.

Read more: Panel Discussion: The opportunities and limitations for AI

The second tier, ‘high risk’, will involve an assessment of such systems before they can be made available and, in certain areas of activity, they will have to be registered in an EU database. The cited areas likely to be most relevant to M&E environments are the management and operation of critical infrastructure, education and vocational training, and systems for employment, worker management and access to self-employment.

There will also be a tier for ‘limited risk’ systems – which will be subject to ‘minimal transparency requirements’ that allow users to make informed decisions – but it is arguably the measures regarding Generative AI that will be of greatest immediate importance to media customers. Here, the proposed legislation will require disclosure that content was generated by AI, and require models to be designed so that they cannot generate illegal content.
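
In practice, the disclosure requirement could be as simple as attaching a machine-readable label to every generated asset. The Python sketch below is purely illustrative – the field names are assumptions, since the act does not prescribe a schema – and shows one way a media workflow might record that a clip was AI-generated.

```python
import json
from datetime import datetime, timezone

def write_ai_disclosure(asset_path: str, model_name: str) -> str:
    """Write a hypothetical JSON sidecar recording that an asset was AI-generated."""
    record = {
        "asset": asset_path,
        "ai_generated": True,          # the disclosure itself
        "generator": model_name,       # which model produced the content
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar_path = asset_path + ".disclosure.json"
    with open(sidecar_path, "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)
    return sidecar_path

# Example: label an upscaled clip produced by a (hypothetical) generative model.
# write_ai_disclosure("match_highlights_4k.mp4", "example-upscaler-v2")
```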

Outside of the EU, it is surely the US where the most activity around AI regulation is currently taking place. At the end of October, President Biden issued a far-reaching Executive Order that – among other measures – “establishes new standards for AI safety and security” while also promoting “innovation and competition”. Meanwhile, at state level, 10 states have included AI regulations within broader consumer privacy laws this year, and more are expected to do so. One new regulation in New York City, governing the way that AI is employed to make hiring and employment decisions, could set a significant precedent for other states to follow.

Blessing and curse?

IBC365 asked a diverse selection of vendors and service providers for their opinions on AI regulation, and while there was a consensus view that this powerful new technology has to be effectively governed, there was also concern about the potential impact of poorly drafted legislation.

Rick Allen is CEO of end-to-end video platform developer ViewLift. “The promise of AI and the risks from ill-considered regulation are two sides of the same coin,” he said. “AI is a blessing when it speeds a task and/or more comprehensively provides answers. In the streaming world, personalisation of content leads to longer viewing times and a more satisfied consumer.

“AI can be very useful in delivering more accurate personalisation, but can run afoul of privacy laws intended to protect against exploitation of data without disclosure to or value for the consumer. It will take further legislation and judicial review to determine the boundaries for these actions.”

MediaKind, noted Jones, has already begun to incorporate AI across its product lines, including using it to improve video encoders “by enabling decision-making that would otherwise be very compute-intensive” and using Generative AI to create content detail during image up-conversion. Alongside his aforementioned concerns about content verification, Jones highlighted the need for measures to protect creatives and intellectual property.

“One area that stands out as a potential concern is the role and rights of creative talent, as well as the need to ensure that the creative intellectual property of humans is protected,” he said. “In addition to this, there is the potential for AI to effectively ‘bake in’ prejudices in decisions because of the potential for dependency on the content of the training sets – it merely reflects what it has been trained on.”

Tony Jones, MediaKind

But despite the undoubted pitfalls over authenticity and ownership, Jones shares the excitement of many about Generative AI’s capacity to create new content that might have otherwise been prohibitively expensive, including more comprehensive captioning, translation services, and region-specific signing for the hearing-impaired. “There are many potential opportunities where Gen-AI can make a difference, and I think we’ll continue to be surprised with the new capabilities it’ll be able to help us tackle in the future,” said Jones.

Reviewing the roadmap

A similar note of enthusiasm tempered by caution was struck by Carol Bettencourt, Vice-President of Marketing at live production solutions provider Chyron, who highlighted the company’s use of AI to increase workflow efficiency. One notable example is the setting up of its Virtual Placement technology, which allows virtual elements, including adverts, to appear to be part of live event video content such as sports.

“Through AI, the calibration necessary for Virtual Placement takes only moments, instead of hours,” said Bettencourt, who said that Chyron’s focus is on “developing AI that helps technicians, operators and designers do more so that they can share the stories that matter. […] At Chyron, we are looking at every aspect of our product roadmap, especially the most time-consuming or repetitive tasks, and asking, ‘Is this something that AI could or should be doing?’”

Carol Bettencourt, Chyron

So it wasn’t surprising when she asserted that “operational efficiencies created by AI should not be impeded any more than mechanical efficiencies that replace manual tasks in any industry.” But like virtually everyone, Bettencourt also has anxieties about AI’s less benign possibilities:

“Using AI to create fake or deep-fake content as propaganda should definitely be regulated. In-between is a more grey area – the training of AI to create something in the style of a particular person’s work. When is this plagiarism? Because good journalism and good live content are powered by creativity, it is important for governments to mindfully consider how to protect the work of creatives in a world with AI, just as they would have otherwise.”

There is also an awareness of AI’s potential to further exacerbate the dangers of an already deeply unstable planet. “There is plenty of political turmoil in the world right now, much of it fuelled by driving emotions through distorted messages,” said Bettencourt. “AI certainly has the power to contribute to this, especially in broadcast and media. [Therefore] responsible use of AI is just as important as responsible use of anything else that could be dangerous.”

Visibility & control concerns

Of those interviewed for this article, it is arguably Jason Perr whose concerns about the current proliferation of AI – and the prospects for effective regulation – are most explicitly outlined. And Perr certainly has a comprehensive grounding in AI, having built an AI-focused company, Workflow Intelligence Nexus (WIN), that was acquired in October 2023 by DataCore Software and now promises to expand its reach in industries including M&E.

Abhi Dey, Perifery

“Acquiring WIN extends DataCore’s Perifery business unit, which specialises in managing data across the core to the cloud and edge for high-growth markets – including media and entertainment and healthcare,” explained Abhijit Dey, GM and Chief Operating Officer of DataCore Software division Perifery.

For Perr, who is now CTO of M&E Solutions at Perifery, there are several key areas that need to be tackled with considerable urgency. “From a technical perspective, a lot of the companies building these AI systems don’t have a tremendous amount of visibility or complete control over what the AI necessarily does in some of these circumstances,” he said. “The nature of AI in general is that it does things for itself in some ways; you just feed it a ton of data and it learns from that data, and then decides what it’s going to do with it. So it’s an interesting inflection point we’re at because there’s a tremendous amount of capability and features that everybody wants and that are going to be beneficial for a lot of different industries around the world – but the cost on a lot of occasions can be the need to provide access to data that many people didn’t previously want to provide access to.”

Jason Perr, Perifery

The conversation turns to the current lawsuit by 17 leading authors – including John Grisham, Jodi Picoult and George R.R. Martin – against OpenAI, alleging copyright infringement in the training of its ChatGPT system. Perr evidently thinks that this is an area in which well-honed legislation should be able to make a difference: “Being able to put in regulations so that any data online can’t just be grabbed and considered as available to use in your own model is good. [There needs to be a focus on] how models are trained and what someone has to do to validate and prove that the training data that went into a model was vetted and followed a proper chain of ownership.”
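
A hedged sketch of the kind of vetting Perr describes might look like the following: before an item is admitted to a training set, the pipeline checks that it carries a documented owner and an approved licence. The field names and licence values are hypothetical, since no such schema is currently mandated.

```python
from typing import Iterable

# Licences this hypothetical pipeline treats as safe for model training.
APPROVED_LICENCES = {"CC0", "CC-BY-4.0", "licensed-by-contract"}

def vet_training_items(items: Iterable) -> list:
    """Admit only items with a documented owner and an approved licence,
    so the training set retains a verifiable chain of ownership."""
    return [
        item for item in items
        if item.get("owner") and item.get("licence") in APPROVED_LICENCES
    ]

corpus = [
    {"id": "a1", "owner": "Example Press", "licence": "licensed-by-contract"},
    {"id": "a2", "owner": None, "licence": "unknown"},  # scraped, unvetted
]
print([item["id"] for item in vet_training_items(corpus)])  # ['a1']
```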

Voicing concerns that are at the heart of ongoing strikes in the US film and television production industries, Perr also said that “people need to have protection for their likeness and their image so that [if someone uses it in an unauthorised way] there would be serious consequences to help prevent bad actors from doing things.” And it’s by no means an issue that’s limited to the production of fiction: “In the political arena, we had a lot of insanity happen [during the last US] election with people creating fake ads and things.”

With potentially seismic elections on the cards next year in the US, Germany and elsewhere, the imperative to put effective controls in place to stop AI exacerbating the spread of disinformation – itself already given a terrifying new lease of life in the social media age – could hardly be more apparent. It is to be hoped that legislators are taking note of the priorities expressed by creatives and those in other key industries, because the implications of ineffective laws are the stuff of the bleakest dystopias.

Read more: How can you use AI responsibly?