In the past few weeks, the news has been filled with one dramatic story after another about AI, writes John Maxwell Hobbs. However, in a significant shift, rather than being about the technology itself, these stories have been about the people creating AI and those affected by it.

The story dominating the headlines recently has been the tale of OpenAI, the creator of ChatGPT. Over the course of just one weekend, Sam Altman, the CEO and co-founder, was fired by the board of directors; the president of the company quit; the board contemplated rehiring the CEO and quitting themselves; the former CEO and president were hired by Microsoft to head up a new AI-focused division; and OpenAI had three different CEOs in as many days.

Alex Connock, University of Oxford

The drama around all this activity has served to obscure two equally significant stories. The first, directly affecting the broadcast world, was the announcement of a tentative agreement between the actors’ union, SAG-AFTRA, and the producers’ organisation, AMPTP, in the US, bringing an end to the actors’ months-long strike.

A key point in the negotiations was the regulation of the use of AI in the creation of computer-generated characters and in the replication of background actors.

Only days before the drama at OpenAI, Ed Newton-Rex, the VP of Audio at generative AI developer Stability AI, resigned from his position on ethical grounds. He disagreed with the company’s stance on the use of copyrighted works as training data for its AI systems.

All of this comes on the heels of a statement issued by VC firm Andreessen Horowitz on the 30th of October in response to a Notice of Inquiry on Artificial Intelligence and Copyright conducted by the US Copyright Office. In it, the firm claimed that requiring AI companies to pay for the use of works covered by copyright would jeopardise US economic competitiveness and national security.

AI and Ethics: Fair Use and Compensation

Fair use is an approach to copyright law in the United States that allows for limited use of copyrighted material without the need to acquire permission from the copyright holder, and in many cases, without the requirement to pay for the use of the material. This approach was designed to support use of copyrighted material in the public interest. Examples of fair use can include teaching, parody, criticism, and news reporting. Digital reproduction has made it much easier to reproduce, modify, and distribute almost perfect copies of copyrighted material, leading to more and more claims of fair use, including the bulk duplication of material to be used to train AI systems. Similar laws exist in other countries under the name “fair dealing” but are much more restrictive than the US law.

An objection to Stability AI’s views on fair use is what led Newton-Rex to resign his position with the company. The company’s response to an invitation for public comments on AI and copyright issued by the US Copyright Office stated, “We believe that AI development is an acceptable, transformative, and socially beneficial use of existing content that is protected by fair use.”

Newton-Rex objected to this position on ethical grounds, stating, “training generative AI models in this way is, to me, wrong. Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works. I don’t see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright.”


Dr Alex Connock, a Senior Fellow at Saïd Business School, University of Oxford, recently wrote a textbook on media and AI. Dr Connock believes the issues around training data are very complex and not limited to fair use. “While one of the defences of training data is fair use, it’s not the only one,” he said.

“For instance, the OpenAI defence in a class action suit brought against them by authors in the Northern District of California was actually that the copyright claims were a misunderstanding of the nature of large language models (LLMs), and they had an intriguing defence: ‘According to the complaint, every single ChatGPT output - from a simple response to a question (e.g. Who is President of the United States) to a paragraph describing the plot, themes and significance of Homer’s The Iliad - is necessarily an infringing derivative work of Plaintiff’s books. Worse still, each of those outputs would simultaneously be an infringing derivative of each of the other millions of works contained in the training corpus.’ That’s a crafty argument. It says to the accusers: ‘you don’t know how our models work. Because they train on everything, they are a direct infringement of nothing.’ What that tells me is that it is by no means obvious that authors and other litigants will win the copyright claims that they are currently bringing against the large language models.”

A key element of Newton-Rex’s Stable Audio product was that it had been trained on licensed music and the rights holders shared in the revenues. Connock believes that this approach has significant potential. “I think this is a really exciting area,” he said. “We’ve seen the work that Universal has done for the YouTube project that lets users perform in the voices of major artists, using a kind of DRM system in the same spirit as the ones that turned the Wild West of digital music into the regulated infrastructure that it is today.”

However, Connock believes there remains a lot of work to be done to address the complexity of this issue. “There are serious challenges,” he said.

“Because the creative antecedents of a voice, a song or a style may be highly complex and nuanced. Additionally, to put in the hands of oligopolistic record companies the right to attribute creative heritage to tracks could prevent new entrants in the market.”

AI and Ethics: Actors and AI

The regulation of the use of AI to create digital replicas of featured and background actors, both for the production they are working on and for entirely new productions, was a key point in SAG-AFTRA’s recent contract negotiations. Rules have now been put in place that require producers to obtain explicit consent before creating such replicas, and that require the original performers to be compensated as if they were physically present for the scenes.

Connock’s view is that the wrong risks were being targeted. “Their basic premise was the studios would be planning to take the image of a person filmed in one context and reapply it to a new project,” he said.

“The idea appeared to be that someone would do a day’s work on a TV commercial for a tyres business in Sacramento, sign some sort of dodgy all-in release form, then find that their image had been synthesised into a Disney movie with them as a background artist. I think this is ludicrous for two reasons. First, TV and film lawyers scrupulously demand cast-iron paperwork for every digital asset and sound effect, and the idea that they would accept a second-hand image release for an actor is not credible. But second, more significantly, the studios, if they were to use synthetic actors, would not use the direct likeness of a single actor. They would be much more likely to synthesise new artists from online datasets of many thousands of faces, as is the way with current synthetic face generation. Pinning that output back to a single actor as their facial ancestor is bound to be challenging, not least because the studio would be incentivised to make it so.”

“The risk for Hollywood from AI is not so much that one recalcitrant holdout studio gets disintermediated by an AI player within the studio system,” emphasised Connock. “But that other production systems around the world - Europe, South Korea, India, China - show more alacrity in the shift to synthetic actors and execute the transition ahead of the US, thereby achieving a substantial cost advantage.”

The Statute of Anne, passed in the UK in 1710 and widely regarded as the world’s first copyright act, was introduced to address the impact of the technical change brought about by the wide availability of printing presses, and enshrined the rights of a work’s creator in law. AI may be leading us to another sea change in creators’ rights.

“In entertainment copyright discussions you have an effective replication of the polarised paradigms of politics itself,” said Connock.

“On the one hand you have the liberal regulators, like Barry Diller, who believe that copyright law should be tightened up to encompass and restrict generative AI applications in the name of maintaining the viability of content creation careers. In the middle we have the likes of Grimes offering to license their voice to users in return for a slice of the rights. On the other extreme you have libertarians who argue that anything goes. Where AI regulation lands between these two opposites will probably be the defining point of content creation itself in the rest of this decade.”
