IBC2017: Although AI might be the future, exhibitors at IBC2017 are demonstrating more immediate uses for the fabled technology

AI and machine learning are without doubt the buzzphrases of the year, and encountering them as key technology elements in the future of media has been a feature of IBC2017.

From packed auditoriums for the conference tracks dealing with AI, to a wide range of interest from all disciplines, AI is clearly here to stay.

Sally Buzbee, Executive Editor, Associated Press, told delegates earlier in the week: “As a journalist, what most excites me is the potential for Artificial Intelligence (AI) to help us in deeper reporting.

“That could be something as simple as sorting through reams of data to find a campaign finance story or to report more deeply on pollution, for example. It helps us with fact-checking too, and with vetting user-generated content.

“There is a tonne of opportunity for us to tell deeper and more fact-checked stories with AI. But it’s important that it is in no way a substitute for journalists; it’s potentially a fantastic tool for journalists to deepen their work, not a cost-cutting measure.”

Maximising monetisation

However, exhibitors at IBC2017 are on the front foot, with a range of ingenious uses of AI technologies for today, not just the future. At the IBM Watson Media stand, the company is demonstrating its vision for optimising video performance, maximising monetisation opportunities, and unlocking new value for video content through its AI tools.

David Kulczar, Offering Manager, IBM Cloud Video, said: “We announced Watson Media as a brand two weeks ago at the US Open, for which we delivered highlight clipping.

“We took their live feeds, clipped them, then fed them back to editorial with an excitement factor rating. The editorial-approved results were shared on social channels and within the venue, and players had iPads to view their own results.”

“Watson video enrichment looks at thematic elements within the video files themselves. We run speech-to-text against each scene to generate a transcript, which is in turn fed through a natural language processing API that looks at all the characteristics to build a taxonomy.

“From this we look through to find the most relevant keywords, which we then categorise so you can search against them.”
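The pipeline Kulczar describes, scene-level speech-to-text, a natural language processing pass, then a searchable taxonomy of categorised keywords, can be sketched in miniature. The sketch below is purely illustrative: the speech-to-text and NLP steps are stubs rather than calls to any IBM Watson API, and every name and transcript in it is hypothetical.

```python
from collections import Counter

def speech_to_text(scene):
    """Stub: a real system would transcribe the scene's audio with a
    speech-to-text service; here the 'transcript' is supplied directly."""
    return scene["audio_text"]

# A tiny stopword list stands in for a full NLP API.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "at"}

def extract_keywords(transcript, top_n=3):
    """Toy NLP step: rank non-stopword terms by frequency in the transcript."""
    words = [w.strip(".,").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

def enrich(scenes):
    """Per-scene transcript -> keywords -> a simple keyword-to-scene index,
    so content can be searched against the extracted terms."""
    taxonomy = {}
    for i, scene in enumerate(scenes):
        transcript = speech_to_text(scene)
        for kw in extract_keywords(transcript):
            taxonomy.setdefault(kw, []).append(i)
    return taxonomy

# Hypothetical example clips, loosely echoing the US Open use case.
scenes = [
    {"audio_text": "Match point at the US Open, the crowd roars for the champion"},
    {"audio_text": "The champion serves an ace to close out the US Open final"},
]
index = enrich(scenes)
print(index.get("champion"))  # scene indices where this keyword ranked highly
```

In a production pipeline each stub would be replaced by a real transcription and NLP service; the structure, per-scene transcripts feeding a shared searchable taxonomy, is the part the quote describes.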

“That said, speech-to-text closed captioning is probably the thing we talk about the most, because it’s relevant to everyone at IBC2017,” finished Kulczar.

Meanwhile, over at Axle AI in Hall 14, a system for managing and tagging video now has an AI plugin that recognises items in scenes, such as famous individuals, objects and products.

Neil Blake, EMEA Business Development Manager, Axle AI, said it had been a busy show for the company, which recently rebranded from Axle Video.

“Our AI module means that rather than me having to go to files and add metadata manually, it does it for me. Once we’ve ingested the content and created a browse file, we analyse the content and it populates custom metadata fields with what it has found.”

“We’re different because of our simplicity – we have plenty of complex competitors, but they’ll probably take six months to set up and it’ll cost a fortune.

“Barring rendering time we’ll be up and working on the same day. We’re not doing anything proprietary either, so if somebody decides it’s not right for them anymore then it’s easy to export the data out again…”
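Blake’s workflow, ingest a clip, create a browse (proxy) file, analyse it, populate custom metadata fields, and keep an easy non-proprietary export path, might look like this in outline. Everything here is a hypothetical stand-in, not Axle AI’s actual product or API: the recogniser is a stub, and the field names and file paths are invented for illustration.

```python
import json

def recognise(browse_file):
    """Stub for the AI module: a real recogniser would detect famous
    individuals, objects and products in the proxy video's frames."""
    return {"people": ["Jane Example"], "objects": ["tennis racket"]}

def ingest(path):
    """Ingest a clip and create a low-resolution browse (proxy) file."""
    return {"source": path, "browse_file": path + ".proxy.mp4", "metadata": {}}

def analyse(asset):
    """Populate custom metadata fields with what the AI module found."""
    asset["metadata"].update(recognise(asset["browse_file"]))
    return asset

def export_catalogue(assets):
    """Nothing proprietary: the whole catalogue exports as plain JSON,
    so the data is easy to take elsewhere if the tool is dropped."""
    return json.dumps(assets, indent=2)

asset = analyse(ingest("us_open_final.mov"))
print(export_catalogue([asset]))
```

The open export format is the point of the last quote: because the metadata lives in an ordinary structure rather than a closed database, leaving the system is as simple as serialising it out.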

According to a recent study by Tata Consultancy Services, 80% of executives across 13 industries currently invest in AI and almost 100% plan to invest by 2020.

Media, entertainment and information services are set to see a 292% rise in investment over that period.