In response to the escalating risks children face in the digital world, Safe Online launched a global call for proposals in 2024 to identify and support the most promising solutions.
Following a rigorous selection process, 20 grantees were chosen from a pool of over 300 applications to receive a combined USD 6.7 million in funding – USD 1.7 million more than originally allocated. This significant increase reflects the urgent and growing need to strengthen global efforts to protect children from digital harms. These new projects, drawn from across the world, are designed to deliver maximum impact where it is needed most.
MindShield: Enhancing Mental Health and Resilience for Romania’s Child Sexual Abuse and Exploitation Professionals
MindShield enhances Romania’s capacity to address OCSEA by reinforcing mental health support for professionals and preventing re-victimization. Through training, digital tools, and systemic coordination, it delivers a scalable model to better protect children.
The Artemis Survivors Hub (ASH) is intended to give a voice to victims and survivors of online child sexual abuse and to ensure that they know their abusers are being targeted by law enforcement, that their images are being removed from circulation, and that their voices are heard when an offender is prosecuted.
ACT-PILOT creates the first global framework for interventions focused on preventing the perpetration of sexual harms. By combining this framework with specialist training and stronger implementation systems globally, we envisage more programs that protect children by addressing risks before harm occurs.
The project will integrate digital violence prevention, community-based protection systems, and advocacy in local policies to protect children online, empowering them as agents of digital safety and giving them access to non-punitive restorative practices, thereby contributing to the right to a life free of violence.
The CLEAN project proposes a groundbreaking approach to measuring, identifying, and preventing online child sexual exploitation and abuse in Lebanon, building on the latest WeProtect Global Threat Assessment and Safe Online's Disrupting Harm initiative.
The COR Sandbox is a first-of-its-kind mechanism for cross-border, cross-sector collaboration to advance child online safety. Through the participation of youth, platforms and regulators, this regulatory sandbox creates a blueprint for consistent systemic online care based on children’s rights.
This project strengthens national systems to prevent digital harms to children, enhances survivor support with trauma-informed services, and empowers families in prevention and recovery—filling critical gaps in coordinated child protection and community resilience.
The project is building the world’s first global research data hub and international cohort study focusing on the prevention of child sexual abuse. With these initiatives, we’ll improve data management, empower prevention efforts, and transform child protection worldwide.
Our project combines AI-powered detection systems with cultural research to identify and filter inappropriate ads on children’s video content. By analyzing 30+ countries and developing scalable solutions, we aim to protect vulnerable young users worldwide from exposure to violent and sexual content.
The project tackles the overlooked area of text-based child sexual exploitation and abuse material (CSEAM) by adapting INHOPE's global classification system and identifying potential behavioural links to visual CSEAM. In doing so, we aim to provide an evidence-based foundation for detection, prevention, and policy decisions.
There is a dire need for trauma-informed responses for child victims of “capping” (non-consensual screen captures of sexual acts, which may not surface until adulthood). The project aims to create practical, research-informed response frameworks that minimize harm to all capped and unknowing victims of CSAM.
Project Lens aims to generate robust evidence to inform a system-wide response to image-based sexual violence against children. It addresses critical gaps through a multi-faceted approach that includes landscape analysis, survivor experiences, family perspectives, and professional attitudes.
This project aims to transform current knowledge, understanding, and practice around the impact of technology-assisted child sexual abuse (TA-CSA) on children and young people by adopting a novel, multi-perspective methodological approach and developing a person-centred framework for responding to and supporting victim-survivors.
Child LENS leverages AI to combat Online Child Sexual Exploitation and Abuse (OCSEA) in Indonesia. Through child-led research, it examines online risks and AI’s role in enabling Child Sexual Abuse Materials (CSAM), bridging gaps between children’s experiences and solutions to shape policies for their digital well-being.
In the global fight against online sexual exploitation of children (OSEC), the Philippines presents a unique opportunity to protect hundreds of thousands of children. This project empowers financial institutions to stop payments fueling abuse and to generate intelligence that safeguards communities.
Apgard safeguards children in the AI era by flagging CSEA risks in AI systems. As a policy-based AI evaluation platform, we help organizations catch AI issues early—enabling responsible AI adoption that prioritizes children’s safety.
Parent Protect aims to create a unified mobile app and chatbot that equips parents and caregivers of children aged 2–17 with tools to prevent online abuse. Co-designed with families, it delivers engaging, localised content to build digital safety, trust, and resilience in South Africa and Mexico.
The project will deliver a tool that assists law enforcement in verifying a suspect's identity in cases involving child sexual abuse imagery through knuckle and fingernail bed biometrics. The project will also develop best practice guidelines for photographing suspects' hands for biometric comparison.
SafeGate School Plus by MSD and SCS equips 65 schools with co-created technology and digital literacy tools to protect 32,000+ students from online harms, while empowering children, parents, and teachers to build safer digital environments.
Law enforcement and industry Trust & Safety teams rely critically on the effectiveness of perceptual hashing tools, such as PhotoDNA, to identify known child sexual abuse materials (CSAM). Perceptual hashing is key for processing CyberTips, law enforcement forensic searches, and industry content moderation.
The project will develop and release a new machine learning-based method for detecting similar images, known as semantic hashing. We will provide free, open-source prototypes for generating, storing, and searching these semantic hashes. Our fingerprints will offer capabilities on par with existing digital fingerprints, while delivering enhanced performance. Additionally, these fingerprints will support text-based search by capturing the semantic meaning of images. The project will also create tools for generating hashes from images sampled as sequences. All tools will be built to leverage publicly available machine learning models (such as CLIP), ensuring that their performance continues to improve alongside advancements in these models.
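To illustrate the general idea described above (this is a minimal sketch, not the grantee's actual tooling), the snippet below uses the publicly available CLIP model via the open_clip package to turn images into normalized embedding vectors that act as "semantic hashes", then compares them by cosine similarity for both image-to-image and text-to-image search. The file paths and the query string are placeholders.

```python
# Illustrative sketch of semantic hashing with a public CLIP model (open_clip).
# Not the project's implementation; image paths and queries are placeholders.
import torch
from PIL import Image
import open_clip

# Load a publicly available CLIP model and its image preprocessing transforms.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

def image_hash(path: str) -> torch.Tensor:
    """Embed an image as a unit-length vector (the 'semantic hash')."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = model.encode_image(image)
    return feats / feats.norm(dim=-1, keepdim=True)

def text_hash(query: str) -> torch.Tensor:
    """Embed a text query into the same vector space as the images."""
    with torch.no_grad():
        feats = model.encode_text(tokenizer([query]))
    return feats / feats.norm(dim=-1, keepdim=True)

# Index a small set of placeholder images by stacking their embeddings.
paths = ["photo_1.jpg", "photo_2.jpg"]
index = torch.cat([image_hash(p) for p in paths])

# Similar-image search: cosine similarity between a query image and the index.
scores = (index @ image_hash("query_photo.jpg").T).squeeze(1)
best = int(scores.argmax())
print(f"closest match: {paths[best]} (score {scores[best].item():.3f})")

# Text-based search works the same way, because CLIP embeds text and images
# into a shared space.
text_scores = (index @ text_hash("a dog playing in a park").T).squeeze(1)
print(f"best text match: {paths[int(text_scores.argmax())]}")
```

In practice the stored embeddings would live in an approximate nearest-neighbour index rather than a brute-force matrix product, and the hashes would be regenerated as the underlying public models improve, as the project description notes.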