These days it feels as if it’s not possible to read the news or open social media without being inundated with articles and posts about Artificial Intelligence, but just how far has AI in broadcast come, and how useful is it, asks John Maxwell Hobbs.

Although AI has been part of our everyday lives for years, it began to gain mainstream attention at the beginning of 2021, when images created by OpenAI’s text-to-image programme DALL-E began to appear in social media feeds. That attention gained real momentum in November 2022, when OpenAI launched the prototype service ChatGPT.


Images of robots making TV created with the generative platform Stable Diffusion

These systems all have one thing in common – they are what is referred to as “generative AI.” When asked to explain that specific type of artificial intelligence, ChatGPT provided this answer: “Generative AI is a type of artificial intelligence that is used to create new data or content that has not been seen or generated before. Unlike other forms of AI that are trained to recognize patterns in existing data or to perform specific tasks, generative AI is designed to create original content or data on its own.”

Current hype around generative AI aside, artificial intelligence has been at work in broadcasting for a long time. It powers the recommendations offered to users of Netflix, Spotify, iPlayer, and other streaming platforms. It is behind the autofill suggestions that pop up in Word and Google Docs when writing a script or news bulletin; it enables automatic speech-to-text transcription for logging recordings; and it makes it possible to ask Siri or Alexa to play your favourite radio station.
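To make one of those examples concrete, the sketch below uses the open-source Whisper library to transcribe a recording for logging. The model size and file name are illustrative assumptions, not details from any particular broadcaster’s workflow.

```python
# A minimal sketch of speech-to-text logging with OpenAI's open-source
# Whisper library (pip install openai-whisper). The model size and the
# file name are assumptions for illustration only.
import whisper

model = whisper.load_model("base")           # small, CPU-friendly model
result = model.transcribe("interview.wav")   # hypothetical recording

# Each segment carries start/end times - handy for logging rushes.
for seg in result["segments"]:
    print(f"{seg['start']:7.2f}s  {seg['text'].strip()}")
```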

AI is now increasingly widely used in the management and indexing of audio and video files. Axle Ai, for example, uses it to automatically tag faces, objects, and logos, and to transcribe speech, making it easier to search large catalogues of media assets.
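What that tagging involves can be sketched generically: the fragment below samples roughly one frame per second from a clip and records the timestamps where faces appear, using OpenCV’s stock face detector. This is an illustration in the spirit of such tools, not Axle Ai’s actual pipeline; the file name is a placeholder, and whether OpenCV can open a given container depends on its FFmpeg build.

```python
# Generic sketch of automatic face tagging on a video file - not Axle Ai's
# actual pipeline. The file name is a placeholder.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("clip.mxf")      # container support depends on the FFmpeg build
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
tags = []                               # (timestamp in seconds, face count)
frame_no = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_no % int(fps) == 0:        # sample roughly one frame per second
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces):
            tags.append((frame_no / fps, len(faces)))
    frame_no += 1

cap.release()
print(tags)  # e.g. [(12.0, 2), (13.0, 2)] - ready to feed a searchable index
```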

Sam Bogoch, the CEO of Axle Ai, has a history of involvement with Artificial Intelligence that goes back to his childhood.

The Early Days of Artificial Intelligence

“For decades I have had a front row seat for a few really interesting things,” said Bogoch. “For instance, when I was young, our next-door neighbour in Brookline, Massachusetts was Marvin Minsky, one of the pioneers of AI. I was a little kid, and we would just wander over and learn stuff - he was our wild neighbour. He did probably the most famous early work in this area, a book called ‘Perceptrons.’ He wrote it to explore the way that neural networks can become intelligent. He was originally a believer that it was much more practical at that point to pursue LISP-style AI, which was essentially rule-based. For example, you say, ‘if it has a 90-degree angle the odds are much better that it’s a square than a circle, so guess square. You don’t have to see the thing, just pick some attributes, grab a stream of text, and essentially, do the best you can because you’re just a machine and you’re never going to figure this out.’ He wrote the book as an exploration of what it would take to use the connectomics approach, which is to take little things with two inputs and one output and build a brain out of that.”
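The “little things with two inputs and one output” Bogoch describes are perceptrons themselves. As a minimal illustration (not an example from Minsky’s book), here is a single two-input perceptron learning the linearly separable AND function:

```python
# A single perceptron - two inputs, one output - trained on AND.
# Purely illustrative; AND is linearly separable, so training converges.
def step(x):
    return 1 if x >= 0 else 0

w, b, lr = [0.0, 0.0], 0.0, 0.1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(20):                    # a few passes are enough
    for (x1, x2), target in data:
        err = target - step(w[0] * x1 + w[1] * x2 + b)
        w[0] += lr * err * x1          # classic perceptron update rule
        w[1] += lr * err * x2
        b += lr * err

# After training, the perceptron reproduces AND for all four inputs.
print([(x, step(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data])
```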


Sam Bogoch, Axle Ai


In the book, Minsky concluded that, given the computing power available in the late 1960s, neural networks would have to be impractically large to be as effective as the symbolic approach used by LISP. The book is credited with starting the first “AI winter,” a period of reduced research funding that lasted until the mid-1980s.

The Second Wave

Bogoch’s involvement with AI coincided with the beginning of the next wave of interest in the field.

“A book came out in 1986 called ‘Parallel Distributed Processing,’ and it laid out the big picture of why distributed systems could be powerful, and how you might someday be able to build artificial brains,” said Bogoch. “That started the second phase. People read the PDP book and went to conferences. It was all very nascent - I went to a few of those early conferences, and it’d be like 50 or 100 people. But they started building systems. By then you could already build what Minsky had made fun of in ‘Perceptrons.’”

This resurgence was short-lived. Bogoch attributes the beginning of the second “AI winter” to the rise of the personal computer. “There were companies doing parallel processing like Thinking Machines around that time,” he said. “And governments and big companies were building huge, expensive ones. And then it went quiet again and Thinking Machines went out of business. At that point, mainstream computing was really coming into its own with the IBM PC and the Macintosh. You see something like the Apple Lisa, and you just say, ‘Wow, this is the future of computing.’ That was a lot more visually attractive than thinking about how to build artificial brains. So, all the energy for the next 20 years went into improving the model of a single very fast computer. Of course, Moore’s Law kicked in, so that computer got twice as fast every year.”

Artificial Intelligence Today

Fast forward to 2010. The groundswell of processing power had reached a high point, and powerful GPUs from companies like Nvidia were beginning to be used for heavy-duty computational tasks.


Images of robots making TV created with the generative platform Stable Diffusion

Bogoch reflects on the conditions that led to the AI renaissance we are experiencing today. “Silicon Graphics had gone out of business and all the smartest people there went to Nvidia,” he said. “Nvidia had a singular focus, saying ‘we’re going to build these chips that are going to blow you away and have a zillion cores and do brute force stuff much faster than a CPU can.’ That’s when today’s wave started.”

Cloud computing has played a large part in the development of powerful AI platforms. “The rise of the cloud has meant that swarms of incredibly fast computers can be devoted dynamically to these tasks,” explained Bogoch. “If you have a problem that’s 1,000 times too hard, you just get an appropriate cloud instance going, run it for three hours, and solve that problem 1,000 times quicker than you would have.”
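As a sketch of what “getting an appropriate cloud instance going” can look like in practice, the fragment below requests a GPU instance through AWS’s boto3 SDK and terminates it when the job is done; the AMI ID, instance type, and key pair name are placeholders, not details from the interview.

```python
# Sketch of grabbing a GPU cloud instance for a burst of heavy computation,
# using the AWS boto3 SDK. AMI ID, instance type, and key name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical deep-learning AMI
    InstanceType="g4dn.xlarge",        # a GPU-equipped instance type
    KeyName="my-key-pair",             # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}; run the job, then terminate to stop billing.")

# ...after the three-hour job finishes:
ec2.terminate_instances(InstanceIds=[instance_id])
```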

AI in Broadcasting

The rollout of tapeless production, high-quality consumer cameras and smartphones, and cheap storage has resulted in a massive increase in the amount of media being created and stored. This, in turn, has made the logging and indexing of that media far more important.

“It crystallised for me when I was watching a World Series game,” said Bogoch. “Right at the beginning of the broadcast, the announcer bragged that this was such an important game that they had seven HD cameras at Yankee Stadium to record the event. And now with smartphones there are 70,000 4K cameras at Yankee Stadium to record an event. That really stuck with me, and I realised that media was going to democratise quickly, people were going to shoot a ton of media, and they were going to need better tools to understand what was in it and find the good stuff.”

“We have customers like music festivals and sports leagues that have tremendous buried gems of content, all in files named 00492-7.mxf,” he said. “If we can solve that for them, we’re unlocking a lot of value. And not just in this big strategic way, like, ‘I’m going to set up a FAST channel and put up all my highlights,’ but more along the lines of putting together a documentary - if you can find really compelling footage of a particular person or particular event, then you have a better documentary.”
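A toy illustration of the search side of this: once an AI pass has emitted tags, even anonymously named files become findable through an inverted index. The tags below are invented examples; only the file-naming style comes from the quote above.

```python
# Toy inverted index from AI-generated tags to (file, timecode) entries.
# The tags are invented; only the file-naming style comes from the article.
from collections import defaultdict

index = defaultdict(list)              # tag -> list of (filename, seconds)

def add_tag(tag, filename, seconds):
    index[tag.lower()].append((filename, seconds))

# Entries a tagging pass (face/object/speech recognition) might emit:
add_tag("crowd", "00492-7.mxf", 84.0)
add_tag("guitar", "00492-7.mxf", 312.5)
add_tag("interview", "00511-2.mxf", 45.0)

def search(*tags):
    """Return the files that carry every requested tag."""
    hits = [set(f for f, _ in index[t.lower()]) for t in tags]
    return set.intersection(*hits) if hits else set()

print(search("crowd", "guitar"))       # -> {'00492-7.mxf'}
```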

The Near Future

The pace of development in AI is increasing exponentially. Once the arcane province of researchers, it now makes daily headlines in mainstream media, with everyone from scientists to pop stars weighing in. But where are we, in practical terms, with AI as a tool in broadcasting?


Images of robots making TV created with the generative platform Stable Diffusion

“You need MAMs [media asset management systems] that have AI built in and that make it possible to find what you need quickly,” said Bogoch. “By 2026, everybody will have this technology available to them. The extent to which they use it will drive their competitiveness. A lot of people won’t do it and they’ll say ‘I’m the craftsman. I know where my good shots are, leave me alone.’ But I think by that point, we’ll be in an environment where the footage is coming in so fast and furious that everyone has to admit they need these tools, and they’ll start using them as a crutch without even realising it.

“It’s an interesting time to be part of it,” he said. “I can’t tell you 10 years out, but I have a pretty good looking-glass for the next three years. Everything that used to take years is taking months, there are hundreds of billions of dollars being poured into this instead of just billions, and computing power is already an order of magnitude or two bigger than it was five years ago. So, fasten your seat belts, because whatever does happen, it’s going to happen quickly.”
