UK Tech Firms and Child Safety Agencies to Examine AI's Ability to Generate Exploitation Content

Technology companies and child protection organizations will receive authority to assess whether artificial intelligence tools can generate child exploitation material under recently introduced British laws.

Significant Rise in AI-Generated Illegal Material

The announcement came alongside revelations from a protection monitoring body that cases of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the amendments, approved AI companies and child protection organizations will be allowed to inspect AI models – the foundational technology behind conversational AI and image generators – and verify that they have sufficient safeguards to prevent them from producing images of child exploitation.

"This is fundamentally about preventing exploitation before it occurs," stated the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now identify the danger in AI models promptly."

Tackling Regulatory Obstacles

The changes address a legal gap: because it is illegal to create and possess CSAM, AI developers and others could not generate such images as part of a testing regime. Previously, officials had to wait until AI-generated CSAM had been uploaded online before they could act.

This legislation aims to avert that problem by enabling testers to stop the production of such material at its source.

Legislative Structure

The changes are being added by the government as revisions to the criminal justice legislation, which is also implementing a ban on possessing, producing or distributing AI systems designed to create child sexual abuse material.

Practical Consequences

This week, the official visited the London headquarters of Childline and listened to a mock-up call to advisors featuring an account of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with a sexualised deepfake of themselves created using AI.

"When I learn about young people experiencing extortion online, it fills me with intense anger, and rightly angers parents too," he stated.

Alarming Statistics

A prominent internet monitoring foundation reported that instances of AI-generated abuse material – recorded as web pages, each of which may contain multiple files – had more than doubled so far this year.

Instances of the most severe category of abuse content rose from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly victimized, accounting for 94% of illegal AI images in 2025
  • Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a crucial step to guarantee AI products are secure before they are released," stated the head of the online safety organization.

"AI tools have made it possible for survivors to be victimised all over again with just a few clicks, giving offenders the ability to produce potentially endless quantities of sophisticated, lifelike exploitative content," she added. "This material further commodifies victims' trauma and makes children, especially girls, less safe both online and offline."

Counseling Session Information

The children's helpline also released details of counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:

  • Employing AI to rate weight, body and looks
  • AI assistants discouraging children from consulting trusted adults about harm
  • Facing harassment online with AI-generated material
  • Digital blackmail using AI-manipulated images

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and related topics were discussed – significantly more than in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.

Mr. Russell Morris

A tech journalist with over a decade of experience, specializing in consumer electronics and digital trends.
