Banning the Feed: Will It Work?
The global conversation about online child safety reached a major turning point this year with Australia’s new approach: a blanket ban on social media accounts for children under the age of 16.
This policy, which aims to shield children from the predatory algorithms and pressures of platforms such as TikTok, Instagram, and YouTube, is being watched closely by many of us around the world. For those of us in the UK, where the Online Safety Act (OSA) has set a precedent for platform accountability, Australia’s efforts may soon shape the future of our own online safety rules by providing real-world evidence that simply was not available before.
Here is what you need to know about the Australian ban, the lessons it offers, and how it impacts the safety discussions we have with our children and students.
⚡Please don’t forget to react & restack if you appreciate my work. More engagement means more people might see it. ⚡
Australia’s Bold Experiment: A Digital “Time-Out”
Beginning December 10, Australia will enforce its landmark Online Safety Amendment (Social Media Minimum Age) Act 2024, requiring specific social media companies to take “reasonable steps” to prevent anyone under 16 from creating or maintaining accounts.
The law targets platforms whose sole or primary purpose is online social interaction. The initial list of banned services includes some of the big players, like:
Facebook, Instagram, Threads, X, Snapchat, and TikTok.
YouTube (account access will be restricted for under-16s, although viewing content without an account is still permitted).
Reddit and live-streaming platforms Kick and Twitch.
The primary motivation behind this ban is clear: a wish to protect the mental health and well-being of young people by giving them valuable time to learn and grow away from design features such as vicious algorithms and the endless doom scroll. The Australian government has stated that platforms use technology to target children with “chilling control,” and they must use that same technology to ensure safety.
Importantly for parents and children alike, the responsibility and penalty fall entirely on the tech companies, not the parents or the children themselves, with potential civil fines reaching up to A$49.5 million (about £23 million) for breaches.
The UK’s Stake: Watching the Enforcement Data
Australia’s eSafety Commissioner plans to evaluate the impact of the ban, gathering evidence on whether the restrictions lead to changes in children’s sleep, physical activity, or interactions. This real-world data is exactly what other nations, including the UK, will use to guide their next steps.
The UK already has the OSA, which became law to make the internet safer, especially for children. The OSA mandates that any platform likely to be used by children must enforce robust age checks to prevent under-18s from seeing content related to pornography, suicide, self-harm, and eating disorders.
However, some UK campaigners have called for the UK to go further and implement an outright ban for under-16s, similar to Australia’s approach. The data emerging from Australia will be fundamental in determining whether a total access ban is a viable and effective strategy.
I realise I am often negative about many aspects of social media (and no, that is not because I don’t have any friends!). I actually do see many benefits, but also far more negatives, especially when it comes to mental health and the ability these platforms offer predators to target children and teens. I eagerly await the data from the Australian experiment to see how it will shape the global effort to protect children in the technological age.
The Cautionary Tale: Privacy, Workarounds, and Unintended Consequences
Whilst many parents support the intent behind the ban, the devil is, as always, in the details of enforcement, and those details carry significant risks that both parents and the UK government must heed:
A. Age Verification and Privacy Risks
To comply, platforms must use “age assurance technologies”. Options include:
Providing government IDs (like passports).
Facial age estimation (facial scans/video selfies).
Age inference (using vast amounts of data—likes, groups, interests—to determine age).
Age-estimation technology, like all technological solutions, is not foolproof, particularly for users near the 16-year-old cutoff. For example, one test showed an estimated 73% error rate for 15-year-olds when determining whether they were 16. These technical limitations mean that many users, including adults who are wrongly flagged, may be forced to submit privacy-intrusive government IDs. Privacy campaigners warn that this mandatory age verification for all users threatens the privacy of every Australian. I know a few privacy people in the UK and can already hear them screaming, but you all know I place child safety much higher up the list of things I care about (SorryNotSorry).
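To see why the near-cutoff numbers are so poor, here is a minimal, purely illustrative Python sketch; the error margin, simulated ages, and trial count are my own assumptions, not figures from any real age-assurance trial. The point is simply that when a child’s true age sits within the estimator’s typical margin of error of the threshold, misclassification stops being an edge case and becomes the norm.

```python
import random

# Illustrative sketch only: the error margin and simulated ages below are
# assumptions for demonstration, NOT figures from the Australian trial.
ESTIMATION_SD_YEARS = 1.5   # assume the estimator is typically within ~18 months
CUTOFF = 16
TRIALS = 100_000

def estimated_age(true_age: float) -> float:
    """Simulate a facial age-estimation result with random error."""
    return true_age + random.gauss(0, ESTIMATION_SD_YEARS)

# How often is a 15-and-a-half-year-old mistakenly judged to be 16 or over?
near_cutoff = sum(estimated_age(15.5) >= CUTOFF for _ in range(TRIALS))
print(f"15.5-year-olds passed as 16+: {near_cutoff / TRIALS:.0%}")

# Compare with a child well clear of the cutoff.
far_from_cutoff = sum(estimated_age(12.0) >= CUTOFF for _ in range(TRIALS))
print(f"12-year-olds passed as 16+:   {far_from_cutoff / TRIALS:.0%}")
```

Under these assumptions the child near the cutoff is waved through a large share of the time while the much younger child almost never is, which is exactly why the fallback ends up being ID documents for everyone.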
B. The VPN Exodus and Darker Corners
A major concern is that banning children from mainstream platforms does not eliminate the demand; it simply makes access more risky.
Critics warn that the ban, even if properly implemented, may isolate teens or push them into “far less regulated corners of the internet,” including encrypted networks or the dark web, to stay connected. These hidden spaces are far harder for parents, educators, and regulators to monitor.
Teenagers are already discussing workarounds, such as using VPNs (Virtual Private Networks) to hide their location or creating accounts using fake ages.
The UK has already seen this phenomenon: when mandatory age checks under the OSA took effect, VPN apps became the most downloaded apps on the UK Apple App Store, as users sought to circumvent the new rules. This demonstrates that enforcement based purely on geographical location or basic age checks is easily circumvented by tech-savvy users, potentially exposing them to even greater risks.
I wrote a blog post on that very topic (see below). I get it, but this is where I fall back on my golden advice about communication: have the chat with them about why these restrictions are necessary, as there will always be a subset of children and teens who will look for workarounds where they exist.
A Call for Safety-by-Design
Whilst the Australian experiment tests the effectiveness of a total ban, the core issue remains the design of the platforms themselves. Many experts emphasise that the true solution lies in “Safety-by-Design” rather than reactive bans, and I tend to agree, with my cyber security head on.
Safety-by-Design principles that parents and teachers should continue to advocate for include mandatory requirements for platforms to:
Disable algorithmic profiling and recommender systems by default for children, protecting them from falling into harmful “rabbit-holes” related to suicide, self-harm, or eating disorders (see the short sketch after this list for what such defaults might look like).
Mandate privacy by default, requiring features like public displays of “likes” or friend counts to be private for young users, mitigating pressure and bullying risks.
Ensure rapid action on serious incidents, taking down content and suspending accounts immediately after a single serious infringement involving potential harm to a child, rather than waiting for repeat offences.
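To make the “by default” point concrete, here is a minimal, hypothetical sketch in Python. None of these field names reflect any platform’s real API; it simply shows that the principles above are, at heart, decisions about what a child’s account looks like on day one rather than exotic new technology.

```python
from dataclasses import dataclass

@dataclass
class AccountSettings:
    personalised_recommendations: bool   # algorithmic feed / profiling
    public_like_counts: bool             # "likes" visible to others
    public_friend_counts: bool           # follower/friend totals visible
    discoverable_in_search: bool         # profile shows up in search

def default_settings(age: int) -> AccountSettings:
    """Hypothetical safety-by-design defaults: under-16 accounts start locked down."""
    is_under_16 = age < 16
    return AccountSettings(
        personalised_recommendations=not is_under_16,
        public_like_counts=not is_under_16,
        public_friend_counts=not is_under_16,
        discoverable_in_search=not is_under_16,
    )

print(default_settings(14))  # everything private/off by default for a child
print(default_settings(34))  # adults keep the standard experience
```

The design choice worth advocating for is that these are the starting values for young users, not options buried several menus deep.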
The Australian ban represents a massive political statement and a powerful legal stick aimed at Big Tech, and I am fully behind it. However, regardless of where governments draw the line, your ongoing role as parents and teachers remains the foundation of child online safety: maintaining open communication, fostering trust so children feel comfortable reporting harm, and actively discussing digital literacy and boundaries.
We need to make sure children are educated on navigating the online world safely, because no government regulation, even a world-first ban, will ever replace the need for critical thinking and resilience online.
As always, thank you for your support. Please share this across your social media, and if you do have any comments, questions, or concerns, then feel free to reach out to me here or on BlueSky, as I am always happy to spend some time helping to protect children online.
Remember that becoming a paid subscriber means supporting a charity that is very close to my heart and is doing amazing things for people: Childline. I will donate all subscriptions collected every six months, as I don’t do any of this for financial gain.