UK Tech Firms and Child Safety Agencies to Test AI's Ability to Generate Exploitation Images
Technology companies and child safety agencies will receive authority to evaluate whether artificial intelligence systems can generate child exploitation material under recently introduced British laws.
Significant Increase in AI-Generated Illegal Content
The announcement coincided with figures from a safety monitoring body showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the amendments, the government will permit designated AI developers and child safety groups to examine AI systems – the underlying technology for chatbots and image generators – and verify they have sufficient protective measures to stop them from creating images of child exploitation.
"Fundamentally about stopping exploitation before it happens," stated the minister for AI and online safety, noting: "Specialists, under strict protocols, can now detect the danger in AI models early."
Addressing Legal Challenges
The amendments have been introduced because it is against the law to produce and possess CSAM, meaning that AI developers and others cannot create such content as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.
This law is designed to avert that problem by helping to stop the creation of those images at their origin.
Legislative Framework
The government is introducing the changes as amendments to the criminal justice legislation, which also implements a ban on possessing, producing or distributing AI systems developed to generate child sexual abuse material.
Practical Impact
Recently, the minister visited the London base of Childline and listened to a simulated call with counsellors featuring a report of AI-based exploitation. The interaction portrayed an adolescent requesting help after facing extortion using an explicit AI-generated image of themselves.
"When I hear about young people facing extortion online, it is a source of extreme frustration in me and rightful anger amongst parents," he said.
Concerning Statistics
A leading internet monitoring foundation stated that instances of AI-generated abuse material – such as online pages that may include numerous files – had more than doubled so far this year.
Instances of category A content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.
- Female children were predominantly victimised, making up 94% of illegal AI depictions in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "constitute a vital step to guarantee AI tools are secure before they are released," stated the head of the internet monitoring foundation.
"Artificial intelligence systems have made it so survivors can be victimised repeatedly with just a few clicks, giving criminals the ability to make possibly limitless quantities of advanced, photorealistic child sexual abuse material," she continued. "Material which additionally exploits survivors' suffering, and renders children, particularly female children, less safe on and off line."
Counselling Session Details
The children's helpline also published details of counselling sessions in which AI has been referenced. AI-related risks mentioned in the conversations include:
- Using AI to rate weight, physique and looks
- AI assistants discouraging young people from talking to trusted adults about abuse
- Facing harassment online with AI-generated content
- Digital extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 support sessions in which AI, chatbots and associated terms were mentioned, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.