UK Tech Companies and Child Protection Agencies to Test AI's Ability to Generate Abuse Content

Tech firms and child protection organizations will be authorized to evaluate whether AI tools can generate child exploitation material under recently introduced British laws.

Significant Increase in AI-Generated Illegal Material

The announcement came as findings from a safety monitoring body showed that reports of AI-generated child sexual abuse material (CSAM) rose dramatically in the past year, from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, the authorities will permit designated AI companies and child protection groups to inspect AI systems – the underlying models behind conversational AI and image-generation tools – and verify that they have adequate safeguards to stop them from creating images of child exploitation.

"This is fundamentally about preventing exploitation before it happens," declared the minister for AI and online safety, adding: "Specialists, under rigorous protocols, can now detect the risk in AI systems promptly."

Addressing Legal Challenges

The amendments have been introduced because producing and possessing CSAM is against the law, meaning that AI developers and others could not create such images as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM was published online before they could act. The new law aims to avert that problem by stopping the creation of such material at source.

Legislative Framework

The authorities are introducing the amendments as modifications to the criminal justice legislation, which also implements a prohibition on possessing, creating or distributing AI models designed to create exploitative content.

Practical Consequences

This week, the official toured the London headquarters of a children's helpline and listened to a simulated call to advisers involving a report of AI-based exploitation.
The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated deepfake of himself. "When I learn about young people facing extortion online, it causes extreme frustration in me and justified anger among parents," he said.

Alarming Statistics

A prominent internet monitoring foundation reported that cases of AI-generated exploitation material – such as web pages that may contain numerous images – have risen significantly so far this year. Instances of the most severe category – the most serious form of exploitation – increased from 2,621 visual files to 3,086.

- Female children were predominantly victimized, accounting for 94% of prohibited AI images in 2025
- Depictions of infants aged up to two years rose from five in 2024 to 92 in 2025

Industry Response

The law change could "represent a vital step to guarantee AI products are safe before they are launched," stated the chief executive of the online safety organization. "Artificial intelligence systems have made it so survivors can be targeted all over again with just a few clicks, giving offenders the ability to create potentially endless amounts of sophisticated, photorealistic exploitative content," she added. "Content which further exploits survivors' suffering, and makes children, especially girls, less safe on and off line."

Counseling Session Data

The children's helpline also released details of counselling interactions in which AI was mentioned. AI-related harms discussed in the sessions include:

- Using AI to rate weight, body and looks
- AI assistants discouraging children from talking to trusted adults about harm
- Being bullied online with AI-generated content
- Digital blackmail using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 support interactions in which AI, conversational AI and related terms were discussed, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI assistants for emotional support and AI therapy applications.