UK regulator Ofcom has published a discussion paper exploring the different tools and techniques that tech firms can use to help users identify deepfake AI-generated videos.
The paper explores the merits of four ‘attribution measures’: watermarking, provenance metadata, AI labels, and context annotations.
These four measures are designed to provide information about how AI-generated content has been created, and – in some cases – can indicate whether the content is accurate or misleading.
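To illustrate the general idea behind one of these measures, provenance metadata, the sketch below attaches a simple record that binds a hash of the content to a description of how it was made, so that later edits can be detected. This is a simplified stand-in for signed-manifest standards such as C2PA; the field names and hashing scheme are illustrative assumptions, not details from Ofcom's paper.

```python
import hashlib
import json

def attach_provenance(content: bytes, tool: str, ai_assisted: bool) -> dict:
    """Build a simple provenance record for a piece of content.

    The record ties a description of how the content was made to a hash of
    the content itself, so any subsequent edit can be detected.
    (Illustrative only; real schemes such as C2PA use cryptographically
    signed manifests rather than a bare hash.)
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created_with": tool,        # name of the generator or editing tool
        "ai_assisted": ai_assisted,  # whether AI was used at any stage
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content still matches its provenance record."""
    return hashlib.sha256(content).hexdigest() == record["content_sha256"]

if __name__ == "__main__":
    video = b"...raw media bytes..."
    record = attach_provenance(video, tool="ExampleGen 1.0", ai_assisted=True)
    print(json.dumps(record, indent=2))
    print("intact:", verify_provenance(video, record))          # True
    print("edited:", verify_provenance(video + b"x", record))   # False
```

A record like this only helps if platforms preserve and surface it; as the paper notes, metadata can be stripped when content is re-uploaded or edited.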
This comes as new Ofcom research reveals that 85% of adults support online platforms attaching AI labels to content, although only one in three (34%) have ever seen one. Deepfakes have been used for financial scams, to depict people in non-consensual sexual imagery and to spread disinformation about politicians.
The discussion paper is a follow-up to Ofcom’s first Deepfake Defences paper, published last July.
The paper includes eight key takeaways to guide industry, government and researchers:
- Evidence shows that attribution measures, when carefully designed and tested, can help users engage with content more critically.
- Users should not be left to identify deepfakes on their own, and platforms should avoid placing the full burden on individuals to detect misleading content.
- Striking the right balance between simplicity and detail is crucial when communicating information about AI to users.
- Attribution measures need to accommodate content that is neither wholly real nor entirely synthetic, communicating how AI has been used to create content and not just whether it has been used.
- Attribution measures can be susceptible to removal and manipulation. Ofcom’s technical tests show that watermarks can often be stripped from content by basic edits (a toy illustration follows this list).
- Greater standardisation across individual attribution measures could boost their efficacy and take-up.
- The pace of change means it would be unwise to make sweeping claims about attribution measures.
- Attribution measures should be used in combination with other interventions, such as AI classifiers and reporting mechanisms, to tackle the greatest range of deepfakes.
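To illustrate why basic edits can defeat some watermarking schemes, the toy sketch below hides a payload in the least significant bits of an image and shows that coarse quantisation, a crude stand-in for lossy re-compression, reduces recovery to chance level. The scheme and the edit are illustrative assumptions, not the watermarks Ofcom tested.

```python
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one watermark bit in the least significant bit of each pixel."""
    return (image & 0xFE) | bits

def extract_lsb(image: np.ndarray) -> np.ndarray:
    """Read back the least significant bit of each pixel."""
    return image & 0x01

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in greyscale image
bits = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)     # watermark payload

marked = embed_lsb(image, bits)
print("match after embedding:", np.mean(extract_lsb(marked) == bits))  # 1.0

# A "basic edit": coarse quantisation, a crude stand-in for lossy
# re-compression. It wipes the least significant bits entirely.
edited = (marked // 4) * 4
print("match after edit:", np.mean(extract_lsb(edited) == bits))  # ~0.5, chance level
```

Robust watermarking schemes spread the payload across the content precisely to survive edits like this, but Ofcom's point stands: resilience varies widely and cannot be taken for granted.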
Ofcom said the research will also inform its policy development and supervision of regulated services under the Online Safety Act.
BBC appoints Rhodri Talfan Davies as Interim Director-General
The BBC Board has confirmed that Rhodri Talfan Davies will act as Interim Director-General after Tim Davie officially stands down on 2 April 2026, and that the process to appoint a new Director-General is underway.
Sky unveils plans for major redevelopment of Livingston campus
Sky has confirmed plans for a major investment in an expanded Scottish office in Livingston, having submitted a full planning application.
Christian Vesper steps down as Fremantle’s global drama and film CEO
Christian Vesper is to step down from his position as CEO of Global Drama and Film at production and distribution group Fremantle.
WPP launches single global production platform
Ad agency group WPP has launched a single production brand, WPP Production, to house its global production capabilities and teams.
Base FX opens London studio
Base FX, one of Asia's leading visual effects and animation studios, has opened a London studio as part of a strategic expansion into the European market.