UK Tech Firms and Child Protection Officials to Examine AI's Capability to Create Exploitation Content
Technology companies and child safety agencies will receive permission to evaluate whether AI systems can produce child abuse material under new British legislation.
Substantial Increase in AI-Generated Harmful Material
The announcement coincided with findings from a safety watchdog showing that cases of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, the authorities will permit approved AI developers and child safety groups to examine AI models – the technology underlying conversational and visual AI tools – to ensure they have sufficient protective measures to stop them producing depictions of child exploitation.
The changes are "ultimately about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Specialists, under strict conditions, can now detect the danger in AI models early."
Tackling Regulatory Challenges
The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI creators and other parties cannot generate such content as part of a testing regime. Until now, officials had to wait until AI-generated CSAM was published online before addressing it.
This law is designed to avert that problem by enabling experts to halt the production of such material at its source.
Legislative Framework
The amendments are being introduced by the authorities as revisions to the crime and policing bill, which is also implementing a prohibition on possessing, creating or distributing AI models developed to generate exploitative content.
Practical Consequences
Recently, the official visited the London base of Childline and listened to a mock-up of a call to advisers featuring a report of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with a sexualised deepfake of themselves, created using AI.
"When I learn about children facing extortion online, it causes extreme frustration in me and rightful anger amongst parents," he said.
Concerning Statistics
A leading online safety organization stated that instances of AI-generated abuse content – such as online pages that may contain numerous images – had significantly increased so far this year.
Instances of category A material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, making up 94% of illegal AI images in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "constitute a crucial step to guarantee AI products are secure before they are released," stated the head of the online safety foundation.
"AI tools have made it so survivors can be targeted repeatedly with just a few clicks, providing offenders the ability to make possibly limitless quantities of advanced, lifelike exploitative content," she continued. "Material which additionally commodifies survivors' trauma, and renders children, especially female children, more vulnerable both online and offline."
Counseling Interaction Information
The children's helpline also released details of counselling interactions where AI has been referenced. AI-related risks mentioned in the sessions comprise:
- Employing AI to rate body size and appearance
- Chatbots discouraging young people from talking to safe adults about harm
- Being bullied online with AI-generated material
- Online blackmail using AI-manipulated pictures
Between April and September this year, the helpline conducted 367 counselling interactions where AI, conversational AI and related topics were discussed, four times as many as in the equivalent timeframe last year.
Half of the references to AI in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.