Social media firms report that the use of AI and digital fingerprinting to screen harmful content has led to an increase in its removal as they double down on efforts to make their sites safer – but are they doing enough, or will the duty of care soon become a legal requirement?

At IBC2019 YouTube’s EMEA chief Cécile Frot-Coutaz stated that protecting users from harmful and extremist content was the video-sharing platform’s “number one priority”.

Cécile Frot-Coutaz

Her comments come amid a surge in efforts by social media firms to make their platforms safer, following a string of controversies that have dogged the main sites over the last couple of years.

Policing content, however, is a complex business and one that currently relies largely, although not exclusively, on self-governance.

All platforms have their own rules about what is unacceptable and the way that users are expected to behave towards one another.

This includes content that promotes fake news, hate speech or extremism, or content that could trigger or exacerbate mental health problems.

Detection and removal
The identification and removal of such harmful content rely on a combination of technology, human review teams and flags from external bodies and agencies.

Recent statistics – available through quarterly transparency reports – show that firms are succeeding in screening and removing a large proportion of harmful content within a 24-hour period, much of it before a single view has been made.

In Q2 of this year, YouTube removed more than 9m videos that breached its community guidelines – an increase of almost 1m on the first quarter.

Of this content, 78 per cent was found via automated means and 81 per cent had not yet received a single view (compared with 70 per cent in Q1 2019).

Facebook too claims to have made significant investments to ensure it does a better job of removing content that shouldn’t be on its platforms.

It estimates that in Q1 2019, for every 10,000 times people viewed content on Facebook, fewer than three views contained violating content.

The latest metrics from Facebook’s Community Standards Enforcement Report also show that in Q1 of this year the network took action on 5.4m pieces of content relating to child nudity and exploitation.

Almost all of these pieces came via its internal flagging procedures rather than through user reporting.

Facebook also took action on 4m pieces of hate speech and 33.6m pieces relating to violence and graphic content.

Social media firms are also doubling down on raw manpower and review teams.

Facebook has now tripled the size of its safety and security team to 30,000 people worldwide, while Frot-Coutaz adds that YouTube has “vastly increased” the number of human reviewers it employs.

Digital fingerprinting
In recent years there has also been a growing reliance on automated screening technologies to weed out harmful content, and two main methods have emerged.

The first involves automated matching by machine and relies on identifying content that is already known to be harmful.

YouTube, for example, applies hashes (digital fingerprints) to harmful material that has previously broken its community guidelines so that it cannot be re-uploaded.

This technology is also used to prevent the upload of harmful images that have appeared on other sites, by enabling YouTube to interface with shared industry databases to increase the volume of content that its machines can catch at the upload stage.
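To make the mechanism concrete, the sketch below shows roughly how an upload-time hash check could work. It is illustrative only: the simple “average hash” here stands in for the proprietary fingerprinting used by platforms and shared industry databases, and the hash values and matching threshold are invented.

```python
# Illustrative sketch only: a simple "average hash" stands in for the
# proprietary fingerprinting used by platforms and shared industry databases.
from PIL import Image  # pip install Pillow

HASH_SIZE = 8  # 8x8 grid -> 64-bit fingerprint


def average_hash(path: str) -> int:
    """Shrink to an 8x8 greyscale image, then set a bit for each pixel above the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


# Hypothetical shared database of fingerprints of known-harmful images.
known_harmful_hashes = {0x81C3E7FF7E3C1800}  # placeholder value


def should_block_upload(path: str, max_distance: int = 5) -> bool:
    """Block the upload if its fingerprint is near any known-harmful hash."""
    h = average_hash(path)
    return any(hamming(h, known) <= max_distance for known in known_harmful_hashes)
```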

Microsoft’s PhotoDNA tool meanwhile – which is used by Snapchat, Gmail, Twitter, Facebook, OneDrive, Adobe and many others – also breaks video down into keyframes and creates hashes for those screenshots.

Unlike AI, it does not use facial recognition technology, nor can it identify a person or object in an image, but it is effective at finding harmful content that has been edited or spliced into video that might otherwise appear harmless.
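A rough sketch of the keyframe idea follows. PhotoDNA itself is proprietary, so this uses OpenCV frame sampling and the same toy average hash as above purely for illustration; the one-frame-per-second sampling interval is an assumption.

```python
# Illustrative sketch: sample roughly one frame per second from a video and
# fingerprint each sampled frame, loosely mirroring the keyframe-hashing idea.
# (PhotoDNA is proprietary; the hash and sampling interval here are stand-ins.)
import cv2  # pip install opencv-python


def frame_hash(frame) -> int:
    """64-bit average hash of a single video frame (BGR numpy array)."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(grey, (8, 8))
    mean = small.mean()
    bits = 0
    for value in small.flatten():
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits


def video_fingerprints(path: str, seconds_between_samples: int = 1):
    """Yield a fingerprint for roughly one frame per sampling interval."""
    capture = cv2.VideoCapture(path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 25  # fall back if FPS is unknown
    step = int(fps * seconds_between_samples)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            yield frame_hash(frame)
        index += 1
    capture.release()
```

Each fingerprint yielded this way could then be compared against a shared hash list exactly as in the previous sketch.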

According to Fred Langford, deputy chief executive and CTO of the UK-based Internet Watch Foundation, what used to take 30 minutes or several hours of work, now only takes a minute or two.

“It’s made a huge difference for us,” says Langford, whose organisation collaborates with sexual abuse reporting hotlines in 45 countries around the world.

“Until we had PhotoDNA, we would have to sit there and load a video into a media player and really just watch it until we found something, which is extremely time-consuming.”

Other technologies in this vein include Videntifier – a visual search engine from a small Icelandic startup that combines computer vision with database technology.

Its speed and scale allow for the processing and identification of tens of thousands of hours of video per day.

The patented technology has been used by law enforcement bodies including the UK’s Counter Terrorism Unit, Interpol and the National Center for Missing & Exploited Children in the US.

Last year the firm also licensed its technology to Facebook, with one of its co-founders, Friðrik Ásmundsson, leaving shortly afterwards to join the social media giant.

AI
Digital platforms are also starting to harness AI and machine learning in their bid to eliminate harmful content.

Facebook has said that advancements in AI mean that “this kind of automation is getting more accurate and impactful all the time.”

Snapchat meanwhile also confirmed in a statement that it is developing machine learning-driven tools to help it identify keywords and account behaviours that suggest abusive accounts or other suspicious activity.

A spokesperson from Snapchat added: “We intend to use these signals to flag high-risk accounts for suspicious activity review and will continue to aggressively develop this capability.”

“Machine automation simply cannot replace human judgment and nuance” – YouTube 2019 Transparency Report

YouTube first started using ML technology in 2017 to flag violent extremist content for human review. Buoyed by positive results, this has since expanded to include other content areas such as child safety and hate speech.

Like automated matching, the approach relies on a corpus of videos that have already been reviewed and removed, which is used to train models to flag new content that might also violate its community guidelines.
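As a loose illustration of that training loop, the toy classifier below learns from a handful of invented text descriptions of previously reviewed uploads and scores new ones; real systems rely on far richer signals (video, audio, metadata and behaviour) and vastly larger corpora, so this is a sketch of the principle rather than of any platform’s actual model.

```python
# Toy illustration only: hypothetical text labels from previously reviewed
# content train a simple classifier that scores new uploads for review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples: descriptions of previously reviewed uploads.
training_texts = [
    "graphic footage glorifying a terrorist attack",
    "recruitment video for an extremist group",
    "cookery tutorial for weeknight dinners",
    "highlights from a local football match",
]
training_labels = [1, 1, 0, 0]  # 1 = removed for violating guidelines

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_texts, training_labels)

# New uploads are scored, and high-probability cases go to human review.
new_text = "speech praising an extremist organisation"
probability = model.predict_proba([new_text])[0][1]
if probability > 0.7:  # threshold is an assumption
    print(f"Flag for human review (score {probability:.2f})")
```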

Detailing the use of AI in its Q2 2019 Transparency Report, YouTube makes the point that AI works best when used alongside its human review teams.

“They are most effective when there is a clearly defined target that is violative in any context. Machine automation simply cannot replace human judgment and nuance,” the report says.

“Algorithms cannot always tell the difference between terrorist propaganda and human rights footage or hate speech and provocative comedy. People are often needed to make the final call,” the report concludes.

Videntifier’s co-founder and former CEO, Herwig Lejsek, believes that both AI and automated matching technology have their merits and should be viewed as complementary technologies.

“The AI-based method allows you to identify things in a contextual and semantic way – with a probability – it will say that something is 80% likely - but you will always still have to look and check the picture because of that element of doubt,” he says.

“Matching technology applies more exact methods and so is far more accurate - but is not as powerful,” he adds.
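A minimal sketch of that complementary workflow might look like the following, where an exact fingerprint match triggers automatic action while a probabilistic classifier score is routed to human review; the matcher, scores and thresholds are hypothetical placeholders.

```python
# Sketch of the complementary workflow Lejsek describes: exact matching can
# act automatically, while probabilistic AI scores are routed to human review.
from dataclasses import dataclass


@dataclass
class Upload:
    fingerprint: int
    classifier_score: float  # probability of being harmful, from an ML model


KNOWN_HARMFUL = {0x81C3E7FF7E3C1800}  # placeholder shared hash list
REVIEW_THRESHOLD = 0.8  # "80% likely" still needs a human check


def triage(upload: Upload) -> str:
    if upload.fingerprint in KNOWN_HARMFUL:
        return "remove automatically"    # exact match: high confidence
    if upload.classifier_score >= REVIEW_THRESHOLD:
        return "queue for human review"  # probable, but a person decides
    return "allow"


print(triage(Upload(fingerprint=0x1234, classifier_score=0.85)))
```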

While AI might not deliver 100 per cent accuracy, the UK Government claims that it is almost there on one of its projects.

Last year the Home Office joined forces with London-based artificial intelligence company ASI Data Science – now called Faculty – to release a tool that it claims can detect 94 per cent of jihadist content with 99.995 per cent accuracy.

At that accuracy level, only around 50 out of one million randomly selected videos would be wrongly flagged and require additional human review.

The platform-agnostic tool, which was trained by processing thousands of hours of content posted by the Islamic State group, can be used to support the detection of terrorist propaganda across a range of video streaming and download sites.

The software was initially developed for smaller platforms, but the then Home Secretary Amber Rudd said that the government would “not rule out” legislative action forcing the bigger tech companies to use it if they did not take measures to make their sites safer.

Age restrictions
YouTube and other social platforms have also come under criticism for not fully embracing their duty of care towards young people.

Greg Childs, CMF: Online Harms Paper

Greg Childs, director of the Children’s Media Foundation (CMF), argues that the issue is not just about detecting and screening harmful content: YouTube must recognise that young people are using the platform even when they are not logged in, and act accordingly.

The CMF recently responded to the consultation on the Online Harms White Paper, launched by the UK Government earlier this year, which proposes a wider regulatory framework for online platforms.

In its response the Foundation called for clearer age restrictions to be placed on video-sharing site content, and for more family-friendly algorithms to be deployed.

The CMF also notes that children are eschewing YouTube Kids – the walled-garden platform set up in 2015 in an attempt to create a safer space for children – in favour of searching for videos via the main platform.

“YouTube Kids has failed because it is not a natural destination for all children’s content,” says Childs.

He adds: “So much material that is valuable for kids is also designed for adults – life hacks or natural history. All this means you effectively have two sites – one for cartoons and one for everything else.”

The CMF argues in its response to the White Paper that it is more effective to create universally safe spaces with specific restricted areas for more adult content, rather than the other way around.

“This is also the accepted societal norm in the offline world,” the response concluded.

“So much material that is valuable for kids is also designed for adults – life hacks or natural history” Greg Childs, CMF

While the White Paper still talks about voluntary self-regulation – with laws coming as a last resort – elsewhere in and around Europe the duty of care that social media firms owe their users is becoming a statutory requirement.

NetzDG
Since January 2018, Germany’s Network Enforcement Act – known as NetzDG – has required social media firms to remove hate speech and other illegal postings within 24 hours or face fines of up to €50m (£44m).

This has led to Facebook, Twitter and Google fitting their German websites with additional features for flagging up controversial content, and to the hiring and training of moderators to cope with these demands.

Facebook now has 1,200 people reviewing flagged content from “deletion centres” in Berlin and Essen, which make up a sixth of its global moderation team.

Despite these efforts, however, in August the social network was fined €2m in Germany for underreporting hate speech complaints.

EU legislation
The EU’s Audiovisual Media Services Directive (AVMSD) is also in the process of being updated and, from September 2020, will hand powers to local regulators to ensure the compliance of video-sharing platform services.

This includes significant powers to investigate, to protect children from harmful and violent content, and to enforce strong age verification checks.

What’s interesting about this law is that regulation does not solely depend on where the tech companies are registered (as with current tax laws) but on where the firms have a “significant presence”.

The bigger platforms such as Google, YouTube and Facebook might well fall under the Irish regulator’s jurisdiction, but tech firms with a substantial presence in the UK – including Snap, Tumblr and TikTok – could come under the UK’s remit.

While it’s not clear whether the AVMSD will apply to the UK because of Brexit (it still might, depending on the deal and the will of the UK Government), media regulator Ofcom has confirmed that it is “assuming that this regulation will become law next year” and is “preparing for it accordingly”.

An Ofcom spokesperson added: “We’re scoping for the AVMSD and looking at what extra resources would be needed.”

Self-regulation vs Law
Childs acknowledges that while social media platforms owe their users a greater duty of care, he doesn’t know how this is achievable beyond creating adult-content walls and developing “family-friendly” algorithms.

“It’s beyond my pay grade,” he says, “but there are some enormous pay grades in Silicon Valley so they ought to be able to work it out.”

Last month one Silicon Valley resident - Facebook’s chief executive and founder Mark Zuckerberg - attempted to address public and legal concerns by unveiling plans to create an independent ‘oversight’ board.

Dubbed ‘Facebook’s Supreme Court’, it will comprise between 11 and 40 members who will make decisions on how content on the social network is moderated.

Critics have dismissed the move as a bid by the world’s most powerful social network to shirk the scrutiny that its power attracts and to stall regulation.

But in a statement issued by the social media site, Facebook appears to welcome a combination of regulation and self-governance to improve its long-term duty of care.

“We know there’s more work to do and we’ll keep investing and innovating to keep people safe on our platform,” a spokesperson said.

“But we also believe that new regulations are needed in this area, so we have a standardised approach across different internet platforms, and so that private companies aren’t making so many important decisions alone.”