Safe Online awards $6.7 million to 20 new grantees to advance innovative and high-impact solutions to fight online child sexual exploitation and abuse (CSEA).

Meet the new Safe Online Grantees

In response to the escalating risks children face in the digital world, Safe Online launched a global call for proposals in 2024 to identify and support the most promising solutions.  

Following a rigorous selection process from a pool of over 300 applications, 20 grantees have been selected to receive a combined USD 6.7 million in funding, USD 1.7 million more than originally allocated. This significant increase reflects the urgent and growing need to strengthen global efforts to protect children from digital harms. These new projects, drawn from across the world, are designed to deliver maximum impact where it is needed most.

Networks and Systems

MindShield: Enhancing Mental Health and Resilience for Romania’s Child Sexual Abuse and Exploitation Professionals 

MindShield enhances Romania’s capacity to address OCSEA by reinforcing mental health support for professionals and preventing re-victimization. Through training, digital tools, and systemic coordination, it delivers a scalable model to better protect children. 

Artemis Survivors Hub

The Artemis Survivors Hub (ASH) is intended to give a voice to victims and survivors of online child sexual abuse and to ensure that they know their abusers are being targeted by law enforcement, that their images are being removed from circulation, and that their voices are heard when an offender is prosecuted.

ACT-PILOT (Advancing Capacity for Treatment - Prevention through International Learning and Operational Training)

ACT-PILOT creates the first global framework for interventions focused on preventing the perpetration of sexual harms. By combining this framework with specialist training and stronger implementation systems worldwide, we envisage more programs that ultimately protect children by addressing risks before harm occurs.

Safe Schools and Protective Communities for Children: Preventing CSEA and Digital Violence through Empowered Families and Educators

The project will integrate digital violence prevention, community-based protection systems, and advocacy in local policies to protect children online, empowering them as agents of digital safety, giving them access to non-punitive restorative practices, and contributing to their right to a life free of violence.

Child Online Safety through Local Empowerment and Appropriate Networks (CLEAN)

The CLEAN project proposes a groundbreaking approach to measuring, identifying, and preventing online child sexual exploitation and abuse in Lebanon, building on the latest WeProtect Global Threat Assessment and the Safe Online Disrupting Harm initiative.

COR: Sandboxing and Standardizing Child Online Redress

The COR Sandbox is a first-of-its-kind mechanism for cross-border, cross-sector collaboration to advance child online safety. Through the participation of youth, platforms and regulators, this regulatory sandbox creates a blueprint for consistent systemic online care based on children’s rights. 

Strengthening Partnerships to Accelerate Action on Child Online Protection in Pakistan

This project strengthens national systems to prevent digital harms to children, enhances survivor support with trauma-informed services, and empowers families in prevention and recovery—filling critical gaps in coordinated child protection and community resilience. 

Research and Data

Building a Global Data Infrastructure for Child Sexual Abuse Prevention

The project is building the world’s first global research data hub and international cohort study focusing on the prevention of child sexual abuse. With these initiatives, we’ll improve data management, empower prevention efforts, and transform child protection worldwide. 

Shielding Children from Violent and Sexual Video Ads: AI-Driven, Research-Informed Solutions in Global Contexts

Our project combines AI-powered detection systems with cultural research to identify and filter inappropriate ads on children’s video content. By analyzing 30+ countries and developing scalable solutions, we aim to protect vulnerable young users worldwide from exposure to violent and sexual content. 

WRITE: Written Risk Indicators Triggering Exploitation

The project tackles the overlooked area of text-based child sexual exploitation and abuse material (CSEAM) by adapting a global classification system by INHOPE and identifying potential behavioural links to visual CSEAM. In doing so, we aim to provide an evidence-based foundation for detection, prevention, and policy decisions.

There is a dire need for trauma-informed responses for child victims of “capping” (non-consensual screen captures of sexual acts, which may not surface until adulthood). The project aims to create practical, research-informed response frameworks that minimize harm to all capped and unknowing victims of CSAM.

Project Lens: Understanding and Preventing Emerging Forms of Image-Based Sexual Violence Against Children

Project Lens aims to generate robust evidence to inform a system-wide response to image-based sexual violence against children. It addresses critical gaps through a multi-faceted approach that includes landscape analysis, survivor experiences, family perspectives, and professional attitudes. 

The development of a multi-agency, victim-centred framework for responding to and supporting individuals who have experienced technology-assisted child sexual abuse (TA-CSA)

This project aims to transform current knowledge, understanding and practice around the impact of TA-CSA on children and young people by adopting a novel, multi-perspective methodological approach, and developing a person-centred framework for responding to and supporting victim-survivors. 

Child LENS: Leveraging AI to Empower Children's Talents and Safeguarding against Online Child Sexual Exploitation and Abuse (OCSEA) in Rural and Urban Indonesia

Child LENS leverages AI to combat Online Child Sexual Exploitation and Abuse (OCSEA) in Indonesia. Through child-led research, it examines online risks and AI’s role in enabling Child Sexual Abuse Materials (CSAM), bridging gaps between children’s experiences and solutions to shape policies for their digital well-being. 

Tech Tools

SPAC (Stop Payments Abusing Children)

In the global fight against OSEC, the Philippines presents a unique opportunity to protect hundreds of thousands of children. This project empowers financial institutions to stop payments fueling abuse and generate intelligence to safeguard communities. 

Automated AI CSEA Safety Evaluation

Apgard safeguards children in the AI era by flagging CSEA risks in AI systems. As a policy-based AI evaluation platform, we help organizations catch AI issues early—enabling responsible AI adoption that prioritizes children’s safety. 

PARENT PROTECT - (Parents Respond to New Threats: Preventing Online Technology-facilitated abuse and Exploitations of Children)

Parent Protect aims to create a unified mobile app and chatbot that equips parents and caregivers of children aged 2–17 with tools to prevent online abuse. Co-designed with families, it delivers engaging, localised content to build digital safety, trust, and resilience in South Africa and Mexico. 

Suspect Hand Biometrics Tool: Verifying the Identity of Suspects in Child Sexual Abuse Material

The project will deliver a tool that assists law enforcement in verifying a suspect’s identity in cases involving child sexual abuse imagery through knuckle and fingernail bed biometrics. The project will also develop best practice guidelines for photographing suspects’ hands for biometric comparison.

SafeGate School Plus: protecting children from online risks and harms

SafeGate School Plus by MSD and SCS equips 65 schools with co-created technology and digital literacy tools to protect 32,000+ students from online harms, while empowering children, parents, and teachers to build a safer digital environment.

Semantic Hashing: New Technologies for Enhancing Methods of Identifying Known CSAM for Law Enforcement and Industry

Law enforcement and industry Trust & Safety teams rely critically on the effectiveness of perceptual hashing tools, such as PhotoDNA, to identify known child sexual abuse materials (CSAM). Perceptual hashing is key for processing CyberTips, law enforcement forensic searches, and industry content moderation. 

  

The project will develop and release a new machine learning-based method for detecting similar images, known as semantic hashing. We will provide free, open-source prototypes for generating, storing, and searching these semantic hashes. Our fingerprints will offer capabilities on par with existing digital fingerprints, while delivering enhanced performance. Additionally, these fingerprints will support text-based search by capturing the semantic meaning of images. The project will also create tools for generating hashes from images sampled as sequences. All tools will be built to leverage publicly available machine learning models (such as CLIP), ensuring that their performance continues to improve alongside advancements in these models. 
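To make the idea more concrete, the sketch below shows roughly how a CLIP-style embedding can act as a semantic hash for both image-to-image and text-to-image matching. It is a minimal illustration only, assuming the publicly available clip-ViT-B-32 checkpoint via the sentence-transformers library; the file names and similarity threshold are hypothetical and do not describe the project's actual prototypes.

```python
# Minimal sketch of semantic hashing with an off-the-shelf CLIP model.
# Illustrative assumptions: model choice, file names, and threshold are
# placeholders, not details of the project's implementation.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP maps images and text into a shared embedding space, so one index
# of "semantic hashes" can serve image-to-image and text-to-image search.
model = SentenceTransformer("clip-ViT-B-32")

# Build semantic hashes (embeddings) for a set of reference images.
reference_paths = ["reference_001.jpg", "reference_002.jpg"]
reference_embeddings = model.encode(
    [Image.open(p) for p in reference_paths],
    convert_to_tensor=True,
    normalize_embeddings=True,
)

# Image query: flag a candidate image if it is semantically close to a reference.
candidate_embedding = model.encode(
    [Image.open("candidate.jpg")],
    convert_to_tensor=True,
    normalize_embeddings=True,
)
image_scores = util.cos_sim(candidate_embedding, reference_embeddings)[0]
matches = [
    (reference_paths[i], float(score))
    for i, score in enumerate(image_scores)
    if float(score) > 0.9  # illustrative threshold, not a tuned value
]

# Text query: the same reference index can be searched with a description,
# because images and text share the embedding space.
text_embedding = model.encode(
    ["a photo of a red bicycle leaning against a wall"],
    convert_to_tensor=True,
    normalize_embeddings=True,
)
text_scores = util.cos_sim(text_embedding, reference_embeddings)[0]
```

Unlike perceptual hashes such as PhotoDNA, which compare low-level image structure, embeddings of this kind compare semantic content, which is what enables the text-based search described in the project summary.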


Our purpose in detail

We are here to ensure every child and young person grows up in the digital world feeling safe and protected from harm.

We support, champion, and invest in innovative partners from the public, private, and third sectors working towards the same objective.

We believe in equipping guardians and young people with the skills to recognise danger for themselves when they access digital experiences without supervision.
