70 Million Warnings
Proof We Can Stop This Before It Starts
I want to start this one with something I don’t experience often enough.
I want to start with hope.
That doesn’t come easily in this work. After eight years investigating digital crimes involving children, years that cost me more than I knew at the time, I’ve got a fairly high threshold for optimism when it comes to what’s happening online. But every now and then, something happens that makes me think: right, this is proof that the fight is working.
This is one of those things.
But I’m also not going to let anyone off the hook, because buried inside the hopeful headline is a number that demands a conversation.
What Actually Happened
In the past two years, more than 70 million warning messages were sent to people who were searching online for illegal images of children.1
Seventy million. Let that number breathe for a second.
These warnings are part of something called Project Intercept, run by the Lucy Faithfull Foundation, a UK charity that has been doing critical child protection work for decades. The way it works is straightforward in principle, and genuinely impressive in practice. When someone uses search terms online that suggest they’re looking for illegal content involving children, or clicks on a link previously flagged as containing such material, they receive a message. Not a block, not an arrest. A message that says what they’re doing is illegal, explains the harm, and offers them a route towards help.
The tech partnership behind this includes Meta, TikTok, Google, and platforms spanning gaming, streaming, dating and AI. Twenty-two different warning messages, reaching users in 131 countries. At its peak, more than 95,000 alerts were triggered every single day.2
That is an extraordinary thing, and I want you to know it exists.
The AI Factor
I’ve spent a lot of time talking to parents, teachers, and anyone who’ll sit still long enough about what AI has done, and continues to do, to the threat landscape for children.
Not because I want to scare people. Because understanding what we’re dealing with is the only way to fight it properly.
The Lucy Faithfull Foundation’s chief executive said it best: “The need has never been more urgent, particularly as new AI technologies accelerate the spread of online child sexual abuse.”
I’ve been saying a version of this since I started blogging. The technology that used to make the creation of illegal imagery difficult, requiring real-world access to victims, has been fundamentally disrupted. AI has lowered the barrier so dramatically that the scale of what’s being produced and circulated is growing in ways that are genuinely hard to track, let alone address.
And here’s what I find both infuriating and galvanising in equal measure.
The same AI capabilities being used to create and spread harm are also being deployed to stop it. Project Intercept is living proof of that. Intelligent systems, running quietly in the background on the platforms your children use every day, identifying the patterns, catching the searches, stepping in before harm can progress.
This is what prevention at scale looks like. And I want more of it.
The Number I Can’t Walk Past
I always try to be straight with you; this occasion is no different.
Of those 70 million warnings sent, approximately 700,000 people clicked through to access Stop It Now self-help resources. That is around 1%.
Professor Sonia Livingstone from the London School of Economics was measured but clear: given the scale of the problem, 700,000 click-throughs from 70 million warnings is disappointingly low.3
She was also fair: four in five people who engage with those resources engage meaningfully. So the system works for the people who are ready to seek help.
But here are my honest thoughts.
One per cent is not nothing. 700,000 people diverted towards support is real harm reduction; I won’t diminish that.
But 70 million warnings tell me we are looking at an iceberg. The visible tip is being addressed. What sits beneath it, in terms of the scale of demand that exists online, is still largely beyond reach.
The messaging isn’t landing for 99% of people who trigger these alerts. That is a design problem, not an inevitability, and I believe the platforms have the resources and, more importantly, the capability to do better.
⚡Please don’t forget to react & restack if you appreciate my work. More engagement means more people might see it. ⚡
What This Means for Our Children
Here is what I want you to actually take from this, beyond the statistics.
This story is proof that technology can be deployed proactively, before harm happens, rather than just reactively after the damage is done. We spend a lot of time, rightly, holding platforms accountable when they fail. Project Intercept is what it looks like when they commit.
There is a version of the internet where the architecture itself pushes back. Where a journey towards harm is intercepted before it reaches a child. We are not there yet. But we are moving in the right direction.
It also confirms something I’ve believed since this blog began: the people who pose a risk to children online are not a fringe problem happening in places the rest of you will never see. They are on the same platforms your children use. They are triggering 95,000 warnings a day. That number is uncomfortable, and it is meant to be.
A Word on Accountability
The Lucy Faithfull Foundation has called on more tech companies to join and scale what works. I’m adding my voice to that.
Google, Meta, TikTok: you’re on the right side of this particular line. The partnership with Project Intercept represents a genuine commitment, and the results, imperfect as they are, represent real harm reduction.
But 1% is not a finish line. The click-through rate tells you the messaging isn’t reaching the people who most need to change their behaviour. The infrastructure exists. The intervention points exist. The follow-through needs work, and you have the capability to improve it.
To every platform not on that partnership list: under the Online Safety Act, you have obligations. Ofcom has powers it has not yet fully exercised. The question is whether it will use them.4
What You Can Do Right Now
I’m not going to pretend this is light reading. It isn’t. I know that.
But I want to leave you with something concrete.
Keep the conversation going at home. A child who feels they can talk to a trusted adult about anything they’ve seen or been asked online is safer. It sounds simple. It is, in my view, the single most protective thing you can do. Not monitoring software. Not content filters. Open, honest, non-judgmental conversation.
Know the resources exist. The Lucy Faithfull Foundation’s Stop It Now service (stopitnow.org.uk) is for anyone concerned about their own thoughts or behaviour, or about someone else’s. It is confidential. It exists because early intervention works.
Report what concerns you. If something on a platform worries you, report it through the platform’s own tools. If the response is inadequate, escalate to Ofcom. These systems only work when people use them.
Talk to your children about AI. Not to frighten them. To arm them. People online are not always who they claim to be, and AI has made that harder to spot than ever. That conversation needs to be happening now, if it isn’t already.
If any child is worried about something they’ve seen or been asked online, they don’t need to face it alone.
As always, thank you for your support. Please share this across your social media, and if you do have any comments, questions, or concerns, then feel free to reach out to me via the Social page, as I am always happy to spend some time helping to protect children online.
Remember that becoming a paid subscriber means supporting a charity very close to my heart and helping it do amazing things for people. That charity is Childline: I will donate 100% of paid subscriptions collected every six months, as I don’t do any of this for financial gain.
If you or a child you know needs support:
Childline: 0800 1111 | childline.org.uk
Available 24/7, 365 days a year. Free, confidential, and here for every child.
Sky News, “More than 70 million warnings sent to people searching for child sexual abuse content,” 13 May 2026. https://news.sky.com/story/more-than-70-million-warnings-sent-to-people-searching-for-child-sexual-abuse-content-13543484 (accessed 13 May 2026). Verified via multiple secondary sources.
Lucy Faithfull Foundation, Project Intercept data release, May 2026, as reported by BBC/Sky News/AOL. Figures cover 2024-2025. https://www.lucyfaithfull.org.uk (accessed May 2026).
Professor Sonia Livingstone, LSE Digital Futures for Children centre, quoted in Project Intercept coverage, May 2026. Via Sky News / BBC reporting.
UK Online Safety Act 2023. Ofcom enforcement framework for illegal content duties. https://www.ofcom.org.uk






