A surge in AI-generated child sexual abuse material (CSAM) is raising alarm worldwide, as predators exploit cutting-edge technology to produce horrific content with unprecedented realism. According to the Internet Watch Foundation (IWF), the first half of 2025 saw 1,286 verified AI-made abuse videos—up from just two over the same period the previous year. More than 1,000 of these were classified as Category A, the most severe form of abuse.
The IWF, a UK-based internet safety watchdog, reports that criminals are fine-tuning publicly available AI models on real CSAM footage to produce lifelike videos. These videos are then circulated on dark web forums where offenders exchange methods and tools. Many of the tools involved are open-source and freely available, released amid billions of dollars in largely unregulated AI development.
IWF interim CEO Derek Ray-Hill issued a stark warning about the “incredible risk” these tools pose, cautioning that AI-generated CSAM could flood the open internet. He connected the rise in synthetic CSAM to broader crimes such as child trafficking and modern slavery, stating it has the potential to fuel a dramatic escalation in abuse-related offenses.
The UK is proposing legislation to combat this threat. New laws would make it a criminal offense to possess, develop, or share AI tools designed for abuse, with offenders facing up to five years in prison. Possessing manuals that instruct users on how to generate abusive content or exploit children using AI could carry a sentence of up to three years.
In the U.S., 38 states have enacted laws banning AI-generated CSAM. Twelve states and Washington, D.C. have yet to follow suit, leaving gaps in national protection. Conservative lawmakers are urging swift federal action to close these loopholes and establish a consistent legal framework across all states.
The rapid spread of this technology without adequate safeguards has placed vulnerable children at even greater risk. Faith-based and family-oriented advocacy groups are calling on tech companies and governments to take immediate action to shut down the development and distribution of AI tools that can be used for exploitation.