While discussions around the ethics of AI are raging in boardrooms across the globe, there are still many misconceptions about the technology. John Maxwell Hobbs takes on the task of separating fact from fiction…

It’s almost impossible to avoid news about Artificial Intelligence (AI) right now, even in the world of broadcast media. Regulation of the use of AI is a major sticking point in negotiations between film studios and the Writers Guild of America, and online forums for the production community are full of posts by people convinced that everyone from writers to camera operators will soon be replaced by computers. The prevailing mood is that of a script from the latest season of Black Mirror (which was definitely not written by AI).


AI in media and broadcast: the myths

To add fuel to the fire, a group of technology leaders have signed a 22-word statement that says, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

However, there are sceptics like the novelist Cory Doctorow, who feel that such warnings are less about a clear and present danger and more about a “self-serving commercial boast.”

Wherever the truth may lie on this continuum, it’s clear that AI technologies, particularly generative systems like Midjourney and ChatGPT are making a significant impact on the world of broadcasting. Because of this, it’s important to separate the myth of AI in broadcast from the reality.

What exactly is Artificial Intelligence?

The word ‘intelligence’ was first applied to computers in a 1950 article by the English mathematician Alan Turing; however, the first use of the term ‘Artificial Intelligence’ is attributed to John McCarthy of MIT, who used the term at a workshop at Dartmouth College in 1956. His colleague at MIT, Marvin Minsky, defined AI as “the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organisation and critical reasoning.”



An important element of AI is Machine Learning (ML), which is a subfield of AI that focuses on the development of algorithms and statistical models that enable computers to learn from and make predictions or decisions based on data. ML algorithms are designed to automatically identify patterns and extract insights from large datasets, without being explicitly programmed. Through iterative training on data, machine learning models can improve their performance over time and make accurate predictions or decisions in various domains.
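The idea that a model improves through iterative training, rather than being explicitly programmed with a rule, can be shown with a deliberately tiny sketch. This is a toy illustration only (the data and the rule y = 3x are invented for the example), not how production ML systems are built:

```python
# Toy illustration of machine learning: the model "learns" the slope of a
# line from example data by iteratively reducing its prediction error,
# rather than having the rule y = 3x programmed in explicitly.

data = [(x, 3 * x) for x in range(1, 6)]  # training examples following y = 3x

weight = 0.0           # the model's single learned parameter
learning_rate = 0.01

for epoch in range(200):                     # iterative training
    for x, y in data:
        prediction = weight * x
        error = prediction - y
        weight -= learning_rate * error * x  # gradient-descent update

print(round(weight, 2))                      # converges close to 3.0
```

Nothing in the loop encodes the answer; the weight converges towards 3 purely because each pass over the data nudges it to reduce the prediction error, which is the pattern-from-data behaviour the paragraph above describes.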

Read more Generative AI in Broadcasting: A rising tide

Uses of AI in media

Although it’s only now receiving wider public attention, a number of AI tools are already in common use in media production. Most of these make use of what is referred to as ‘narrow AI’ and ML, techniques designed to excel at very specific tasks.

  • Text editing: This is so commonplace that it’s no longer even thought of as AI. Word processors use AI and ML to implement grammar checking and text auto-completion, so most scripts written in the past ten years have been produced with some assistance from AI.
  • Satellite navigation: This has become as commonplace as the word processing example above. Production teams no longer need to rely on paper maps and dead reckoning to find their way to location shoots.
  • Dialog/script synching: This is probably the most widely used AI tool in video editing today. Several editing systems use AI-based speech recognition to match recorded dialogue to the text of a script, making it possible to locate video clips via text-based searches.
  • Facial recognition: The latest Indiana Jones film features an 80-year-old Harrison Ford playing a thirtysomething version of himself via a CGI de-aging process accomplished using AI-based facial and object recognition.
  • Recommendation engines: The ultimate goal of any production is to get the programme in front of an audience. With the massive amount of content available today via traditional channels and streaming, that task is getting more and more difficult. Streaming services use AI to analyse viewing patterns to recommend new programmes to their subscribers.
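The dialog/script-synching idea above can be sketched in a few lines: given imperfect speech-recognition output for a clip, find the closest line in the script so the clip can be surfaced by a text search. The script lines and ASR text here are invented, and real editing systems use far more sophisticated alignment; this only shows the matching principle using fuzzy string similarity:

```python
# Hedged sketch of dialogue-to-script matching. SequenceMatcher scores
# character-level similarity, so a script line can be found even when the
# speech-recognition transcript contains small errors.
from difflib import SequenceMatcher

script = [
    "We need to get the transmitter back online before the broadcast.",
    "The storm knocked out the uplink an hour ago.",
    "Tell the studio we are switching to the backup feed.",
]

recognised = "tell the studio were switching to the back up feed"  # noisy ASR output

def best_match(spoken: str, lines: list[str]) -> str:
    # Return the script line most similar to the recognised speech.
    return max(lines, key=lambda line: SequenceMatcher(
        None, spoken.lower(), line.lower()).ratio())

print(best_match(recognised, script))
```

Despite the transcription errors (“were” for “we are”, “back up” for “backup”), the similarity score still points at the correct script line, which is what makes text-based searching of footage practical.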

Myth 1: AI is sentient

Sentience refers to the capacity for consciousness, self-awareness, and subjective experience, which are qualities attributed to living beings, particularly humans.

Reality 1: No, it isn’t

Let’s let ChatGPT answer this one. When asked if AI is sentient, the system answered,

“No, as an AI language model, I am not sentient. I do not possess consciousness, self-awareness, or subjective experiences. I operate based on algorithms and statistical patterns to generate text and provide responses to user inputs. While I can simulate human-like conversations and provide information, it is important to understand that I am an artificial intelligence program created by OpenAI, and my responses are generated based on patterns and examples in the data I have been trained on.”

While AI can exhibit impressive capabilities in areas like natural language processing, image recognition, and problem-solving, these abilities are based on mathematical models and computational processes rather than conscious experience. AI systems lack the inherent qualities of human sentience, such as consciousness, emotions, intuition, and the ability to reflect on their own existence.

Myth 2: AI systems exhibit general intelligence

Unlike narrow AI, which is designed to excel at specific tasks, Artificial General Intelligence (AGI) aims to replicate the broad cognitive abilities and flexibility of human intelligence. An AGI system would be capable of reasoning, problem-solving, understanding natural language, learning from experience, and adapting to new situations and tasks. It would possess the ability to transfer knowledge and skills from one domain to another, exhibit common sense, and engage in creative and abstract thinking.

Reality 2: AI is designed to perform specific tasks

Artificial General Intelligence (AGI) does not currently exist, and many researchers believe it never will. Significant advances have been made in narrow AI, which excels at specific tasks such as image recognition or natural language processing, but developing AGI requires answering fundamental questions and solving technical challenges around cognition, reasoning, common sense, and the ability to learn and generalise from limited data. The breakthroughs needed to enable AGI have remained ‘just around the corner’ since the 1950s.

Myth 3: AI will take over the world

One of the most pervasive myths about AI is that it will inevitably lead to a dystopian future where machines become superintelligent and surpass human capabilities. This notion is often fuelled by science fiction movies and literature, but lately, some of the biggest names behind AI have started to promote this concept as well.

Reality 3: AI as a tool, not a replacement

AI systems are designed to perform specific tasks and lack the self-awareness and consciousness required for world domination. AI operates within the bounds of its programming and cannot autonomously acquire motivations or intentions. AIs are only active when executing a task initiated by a human being; otherwise they are dormant, engaging in no self-directed activity.

Myth 4: AI will cause widespread job loss

Another common fear associated with AI is that it will lead to mass unemployment as machines take over jobs traditionally performed by humans.

Reality 4: AI-powered job transformation

AI will transform jobs, rather than replace them entirely. AI can automate routine and mundane tasks, freeing up human workers to focus on more meaningful and creative aspects of their work. This transformation of jobs has been a recurring theme throughout history, as technological advancements have consistently shaped the job market.

As with other technologies, AI is simply a tool designed to augment human capabilities, not replace them. It excels at automating repetitive tasks, processing vast amounts of data, and recognising patterns that humans might miss (a fact that the BBFC is exploiting to help classify content). AI systems are developed to complement human skills, freeing up time for individuals to focus on higher-level tasks that require critical thinking, problem-solving, and empathy.

Myth 5: AI is biased

Concerns about biased AI algorithms and unethical applications have garnered significant attention. It is true that AI systems can inherit biases from the data they are trained on, reflecting societal prejudices and perpetuating discrimination.

Reality 5: Addressing bias in AI

It is important to recognise that bias in AI is a human problem rather than an inherent flaw in the technology itself. Biases can emerge from the human-generated data used to train AI algorithms. Historical imbalances and societal prejudices can be reflected in the data, leading to biased outcomes. Recognising this, data collection methods need to be carefully designed to ensure representativeness and inclusivity. Diverse datasets that encompass different demographics, cultures, and perspectives can help minimise bias.
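One concrete form the representativeness check described above can take is simply counting how often each group appears in a training set. This is a hedged sketch with invented labels and counts, not a complete fairness audit; it only shows the kind of imbalance that leads a model to perform worse for under-represented groups:

```python
# Hedged sketch: a minimal representativeness check on training data.
# The group labels and counts are hypothetical, invented for illustration.
from collections import Counter

# Hypothetical demographic metadata for a face-recognition training set.
samples = ["group_a"] * 900 + ["group_b"] * 100

counts = Counter(samples)
total = sum(counts.values())

for group, n in counts.items():
    share = n / total
    flag = "  <-- under-represented" if share < 0.2 else ""
    print(f"{group}: {share:.0%}{flag}")
```

A check like this would be one small early step in the data-collection design the paragraph above calls for; in practice teams combine it with per-group performance testing and broader ethical review.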

Additionally, diversity within AI development teams is essential. By bringing together individuals from various backgrounds and experiences, biases can be identified and mitigated effectively. Different perspectives can challenge assumptions and foster a more comprehensive understanding of potential bias.

Ethical considerations are also paramount in AI development and deployment. Establishing ethical frameworks and guidelines can guide responsible AI practices. These frameworks should address issues such as privacy, consent, accountability, and transparency. It is crucial to ensure that AI systems are developed with a focus on human well-being and adhere to ethical standards.

Reality: AI is here to stay

Whether referred to as ‘AI’, ‘ML’, ‘expert systems’, or simply ‘algorithms’, AI systems of one kind or another have been in active use for several decades. John McCarthy described a phenomenon he called ‘the AI effect’, in which once an AI task makes it into mainstream use, it is no longer viewed as AI, but just ‘computation’. As people recover from the shock of the new generated by the latest technologies, they incorporate them into their everyday lives, and use them to find new ways to be creative.

To learn how these technologies will impact the future of broadcast, media and entertainment, get up close at IBC2023.