“For analysts, for frontline responders and, most importantly, for the kid depicted in that abuse content, time is precious. With support from Safe Online, our team built the CSAM Classifier. Our classifiers are used to find new CSAM in the haystack of content: content that could show a child in an active abuse scenario.”
-Rebecca Portnoff, Head of Data Science, Thorn
The growing prevalence of child sexual abuse material (CSAM) on the internet is a pressing concern that urgently needs to be addressed to create a safer digital world for children. CSAM includes photos, videos, and digital images, which can be used for self-gratification or shared more widely online, further victimising the child. In 2023, there were over 36 million reports of suspected child sexual abuse material, a significant increase from the previous year. The sheer volume of CSAM available on the internet makes it nearly impossible for human moderators to sift through the ever-growing haystack.
“Thorn’s AI CSAM Classifier equips law enforcement, tech companies, and partner companies with the ability to rapidly prioritise and triage child sexual abuse material. Our goal is to leverage the power of classifiers to help identify victims faster and stop the viral spread of child sexual abuse material on platforms,” explains Portnoff. Thorn’s CSAM Classifier is unique in that it detects unknown CSAM: material that already exists online but has not yet been classified as CSAM, she adds, highlighting the added value of the initiative.
The collaboration between Safe Online and Thorn, backed by USD 1 million of support over the last four years, has led to the creation of advanced tools such as the AI CSAM Classifier, which significantly improves online safety by helping stop the viral spread of CSAM across the globe. Julie Cordua, CEO of Thorn, highlights the pressing threat of online child sexual abuse material and the need for improved technology on the front lines. Safe Online enabled Thorn to coordinate a global response among experts, creating standards for labelling data and training classifiers to identify new material at the point of upload.
Classifiers are algorithms that use machine learning to automatically categorise data. Your email’s spam filter is a classifier in action: trained on labelled examples, it learns which emails are probably spam and which aren’t, and with each new batch of emails and user feedback, its accuracy improves.
The AI CSAM Classifier works on a similar principle: it is a machine learning tool adept at identifying new or previously unseen CSAM in images and videos. When potential CSAM is flagged for moderator review and confirmed, the classifier learns from this feedback loop, continuously improving its detection capabilities.
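The spam-filter analogy above can be sketched in a few lines of code. The toy naive Bayes text classifier below is purely illustrative and is in no way Thorn’s model or code: `train` plays the role of the confirmed-review feedback loop, updating word counts for each labelled example, while `predict` scores new text against what has been learned so far.

```python
from collections import Counter
import math


class ToyTextClassifier:
    """Illustrative naive Bayes classifier over word counts.

    A sketch of the general idea behind classifiers such as spam
    filters -- NOT Thorn's actual classifier or any real product code.
    """

    def __init__(self):
        self.word_counts = {}   # label -> Counter of word frequencies
        self.label_counts = Counter()

    def train(self, text, label):
        # The "feedback loop": each confirmed example updates the model.
        self.label_counts[label] += 1
        self.word_counts.setdefault(label, Counter()).update(text.lower().split())

    def predict(self, text):
        words = text.lower().split()
        total_examples = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label, counts in self.word_counts.items():
            total_words = sum(counts.values())
            # Log-prior plus log-likelihood with add-one smoothing.
            score = math.log(self.label_counts[label] / total_examples)
            for w in words:
                score += math.log((counts[w] + 1) / (total_words + 1))
            if score > best_score:
                best_label, best_score = label, score
        return best_label


# Usage: a few confirmed examples, then a prediction on unseen text.
clf = ToyTextClassifier()
clf.train("win free money now", "spam")
clf.train("free prize claim now", "spam")
clf.train("meeting agenda for tomorrow", "ham")
clf.train("project status update", "ham")
print(clf.predict("claim your free money"))   # classified as spam
```

As in the article’s description, accuracy improves as more reviewed examples flow back into `train`; real production classifiers use far richer features and models, but the loop is the same.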
The impact has been immense, with the Classifier already deployed across the child safety ecosystem: 19 technology companies, 400 law enforcement agencies, 2 forensic software platforms, and 5 global non-profits. “This reach is largely due to the incredible support from Safe Online,” explains Portnoff.
Additionally, the AI CSAM Classifier has screened over 1.9 billion files and detected over 300,000 potential CSAM images via SAFER, Thorn’s all-in-one solution for CSAM detection, which combines advanced AI technology with a self-hosted deployment so organisations can find CSAM on their platforms and remove it at scale.
“This reach is largely due to the incredible support from Safe Online,”
-Rebecca Portnoff, Head of Data Science, Thorn
In conclusion, the AI CSAM Classifier developed by Thorn, in collaboration with Safe Online, is a pioneering tool in the critical fight against child sexual abuse material on the internet. Leveraging advanced machine learning algorithms, the classifier enhances the capability of law enforcement and technology platforms to swiftly identify and act upon new and previously unrecognized CSAM. This not only aids in the rapid rescue of victims but also helps prevent the further spread of such abusive content. With its impressive implementation across various organizations and the screening of billions of files, the AI CSAM Classifier exemplifies the potential of technology to protect children. This initiative is a hopeful step towards eliminating the digital exploitation of children and fostering a safer online environment for future generations.
Images: ©UNICEF