This IBC Accelerator Challenge explores how AI can be used to fully automate the production of shot lists for raw and edited content for news agencies and broadcasters. 


Champions

Associated Press (Project Lead, Sandy MacIntyre), Al Jazeera Media Networks, ETC


Participants

Vidrovr, Metaliquid, Limecraft

Challenge & Innovation

Championed by world-leading news agencies and broadcasters, this Accelerator Challenge aims to deliver a proof-of-concept solution to what is today a manual and very costly process for media organisations: producing shot lists of raw and edited content, with written depictions stored alongside the video in the archive.  

The Challenge is to fully automate that process by exploiting key capabilities of AI, using face, object and voice recognition plus sentiment analysis and machine learning in place of labour-intensive manual shot-listing.  

AI is already applied to video - tags assist search and discovery, and semantic and sentiment analysis enrichment is widespread - but no single solution yet replaces the human effort of creating a shot-by-shot narrative depiction of an asset, which would free producer time for more creative work.  

This project needs leading AI technology experts to combine their resources and knowledge with those of the media organisation ‘Champions’, freeing video producers from the tens of thousands of hours spent manually creating shot lists in written narrative form. 

Vital creative resources are diverted to this semi-skilled but nonetheless essential task.  

The Associated Press (AP), as one of the Champions of this Challenge, is already working with two vendors and now - under the IBC Accelerator banner - we seek others to join the quest to solve this industry-wide problem. 

Key Deadlines

Participant application deadline: 7th May 2020

Proof of concept development: April - August 2020

IBC Showcase: September 2020

Why AI for automated shot list creation?

Without a proper shot list there is no record of what raw and edited video assets contain, and therefore no way downstream to retrieve footage from the archive or realise its commercial value. 

Marshalling the full suite of AI layers, from speech-to-text transcription to facial, object and voice recognition, alongside semantic and sentiment analysis, and then applying them to specific training sets of video assets, can allow us to build models that help solve this challenge.  
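To make the idea concrete, here is a minimal sketch of how per-shot outputs from separate recognition services might be merged into a written shot-list entry. Everything here is illustrative: the `Shot` structure, field names, and `describe` function are hypothetical, not part of any Champion's or vendor's actual system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Shot:
    """Hypothetical per-shot analysis results, as might be returned
    by separate face, object and speech recognition services."""
    start: str                              # timecode, e.g. "00:00:00:00"
    end: str
    faces: List[str] = field(default_factory=list)
    objects: List[str] = field(default_factory=list)
    transcript: str = ""

def describe(shot: Shot) -> str:
    """Combine the per-shot AI outputs into one written depiction."""
    parts = []
    if shot.faces:
        parts.append("featuring " + ", ".join(shot.faces))
    if shot.objects:
        parts.append("visible: " + ", ".join(shot.objects))
    if shot.transcript:
        parts.append(f'speech: "{shot.transcript}"')
    detail = "; ".join(parts) if parts else "no recognised elements"
    return f"{shot.start}-{shot.end}  {detail}."

def shot_list(shots: List[Shot]) -> str:
    """One narrative line per shot, in timeline order."""
    return "\n".join(describe(s) for s in shots)
```

In practice each field would be populated by a different AI layer (face recognition, object detection, speech-to-text), and the flat string join would be replaced by a trained language model producing fluent narrative, but the merge step is the same in shape.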

Join Associated Press, Al Jazeera and BBC in defining this solution and solving one of the biggest headaches for news agencies and broadcasters. 

For further information: