British Tech Firms and Child Safety Agencies to Examine AI's Capability to Create Abuse Content
Tech firms and child protection agencies will be granted authority to evaluate whether artificial intelligence systems can generate child abuse images under recently introduced UK laws.
Substantial Rise in AI-Generated Illegal Material
The declaration coincided with revelations from a protection monitoring body showing that cases of AI-generated child sexual abuse material have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the amendments, the government will permit designated AI developers and child protection groups to examine AI models – the foundational systems for conversational AI and image generators – and verify they have sufficient protective measures to stop them from producing depictions of child sexual abuse.
"This is fundamentally about preventing exploitation before it occurs," declared Kanishka Narayan, adding: "Experts, under strict conditions, can now detect the risk in AI systems early."
Tackling Legal Obstacles
The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI creators and others cannot generate such images as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.
This legislation is designed to prevent that problem by enabling experts to stop the creation of those materials at their origin.
Legislative Framework
The government is introducing the amendments as modifications to the criminal justice legislation, which also implements a prohibition on possessing, producing or distributing AI models developed to generate child sexual abuse material.
Real-World Consequences
This week, the minister toured the London headquarters of Childline and listened to a mock-up call to advisors featuring an account of AI-based exploitation. The interaction depicted a teenager requesting help after being blackmailed using an explicit deepfake of themselves, constructed using AI.
"When I hear about young people experiencing extortion online, it is a cause of intense anger in me and justified anger amongst parents," he said.
Alarming Statistics
A prominent internet monitoring organization reported that cases of AI-generated abuse content – such as online pages that may contain multiple files – had significantly increased so far this year.
- Cases of category A material – the most serious form of exploitation – rose from 2,621 images or videos to 3,086
- Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
- Depictions of infants and toddlers increased from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a vital step to ensure AI products are safe before they are released," stated the head of the internet monitoring foundation.
"Artificial intelligence systems have made it possible for victims to be targeted repeatedly with just a few clicks, giving criminals the capability to make possibly limitless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Content which further exploits victims' trauma, and renders children, especially female children, less safe online and offline."
Support Interaction Information
The children's helpline also released details of support interactions where AI has been mentioned. AI-related risks mentioned in the conversations include:
- Using AI to rate body size, shape and looks
- Chatbots discouraging children from consulting trusted guardians about abuse
- Being bullied online with AI-generated material
- Online blackmail using AI-manipulated images
Between April and September this year, the helpline delivered 367 counselling interactions where AI, conversational AI and associated terms were mentioned, significantly more than in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including turning to AI assistants for support and using AI therapy applications.