Press Release
Tuesday 10 February
- A growing global coalition, currently 107 major organisations, has united to fight AI nudification tools.
- The global coalition includes the European Commission, NSPCC (UK), Amnesty International, INTERPOL, Save the Children and Safe Online.
- Elon Musk’s AI system, Grok, and many other AI image tools are being used to generate non-consensual nude images.
- AI nudifying tools are increasingly linked to coercion, blackmail and child sexual abuse material.
- Most victims are women and children.
Elon Musk’s free and accessible AI system, Grok, has generated an estimated three million non-consensual nude images, triggering an urgent global response. A coalition of 107 leading child-protection and humanitarian organisations has united to confront what they describe as an unacceptable threat to human dignity and child safety.
The global coalition, which includes Safe Online, Child Helpline International, Offlimits, the National Center for Missing & Exploited Children, the WeProtect Global Alliance, the Internet Watch Foundation, INHOPE, the European Commission, NSPCC, Amnesty International, INTERPOL and 96 others, brings together regulators, child-protection experts, human rights advocates, and international law enforcement.
Nudifying tools allow users to digitally undress individuals using ordinary photographs. While often marketed as “adult” applications, they are increasingly used to target women and girls in particular and to generate illegal sexual imagery of children without consent, accountability, or effective barriers.
“Between 2023 and 2024 there was a 1,325% increase in AI-generated child sexual abuse imagery,” said Marija Manojlovic, Head of Safe Online, a US$100 million global fund dedicated to protecting children online. “The same technology that should expand human potential is being weaponized against children.”
She added that the framing of these harms obscures their severity. “We minimize harm by calling it ‘online,’ as if it is somehow less serious than what happens in the physical world, but the trauma is real,” Manojlovic said.
“Nudifying tools have created an unprecedented threat to our children. AI, the technology that should expand human potential, is being weaponized against children.
“Tech companies have the ability to detect and block nudified content of children. The distribution of child sexual abuse material is illegal in every jurisdiction and tech platforms should be brought in line with other creation and distribution channels.
“It’s frankly shocking that these platforms are monetized yet aren’t required to report offenders or work with industry partners to cut off payment flows. These are safeguarding tools used in the real world, and they need to be applied to online platforms too.”
The coalition is mobilising immediate, coordinated action to block access to nudification technologies, hold developers and platforms accountable, and accelerate protections to prevent further harm.
With AI abuse accelerating, the coalition is seeking broader global support and is opening membership to new organisations via https://forms.gle/uvYwAyDVQFCnAN3v7