Australia’s Under-16 Ban Forces Meta Into Tough Age-Verification Choices

By Libby Miles
November 23, 2025

When Australia announced a national ban barring citizens under the age of 16 from social media, many people rejoiced, while others, including the platforms themselves, flagged potential problems. As the ban's start date draws closer, Meta, the owner of some of the largest platforms, has shared concerns about privacy, compliance, and more.

The company says it will begin notifying suspected under-16 users in early December and provide verification and appeals pathways, but it also acknowledges that its systems cannot perfectly distinguish minors from adults on a large scale. That blunt admission comes as regulators, parents, and privacy advocates watch closely to see how far platforms will go to comply without overreaching.

The debate focuses on technology, ethics, and law. Age verification checks, which are designed to protect children from online predators and bullies, clash with ongoing concerns about data collection and digital privacy. Meta, the parent company of Facebook and Instagram, is trying to balance these concerns with the December deadline quickly approaching. How Meta navigates age verification in Australia could influence regulatory strategies and platform behavior around the world.

Why Enforcing the Ban Presents Some Challenges

Australia’s parliament announced the Online Safety Amendment on November 7, 2024. While the amendment is multifaceted, at its core, it’s about preventing citizens who are under 16 from being on social media platforms. The parliament passed the bill only three weeks later, and gave social media platforms until December 10, 2025, to enact the ban. With that date looming, Meta, the largest social media company in the world, has given some insight into the challenges posed by the ban.

The biggest problem is that any single verification method risks both wrongly blocking legitimate accounts and failing to identify underage users. For example, systems that estimate age from facial analysis have been shown to misjudge age by several years. Platforms have also tried automated behavioral inference, which produces high false-positive and false-negative rates when applied across different demographics.

Testing in Australia’s pilot programs showed that no single approach solved the problem. Government and industry trials have shown that robust safeguards require layered systems with multiple fallback options. For example, combining a preliminary selfie with document verification when there’s still uncertainty appears to check the necessary boxes. However, even with these layers, the margin of error is sizable enough to be a concern.

The technical hurdles are compounded by demographic bias and accessibility issues. Age-estimation models can perform unevenly across skin tones, ethnic backgrounds, and nontraditional gender expressions. This has the potential to lead to a range of problems, especially when dealing with groups that are already marginalized. Additionally, relying solely on IDs risks excluding young people who lack access to government documentation.

What Meta Is Doing, and What Could Go Wrong

Meta’s verification plan mixes automated scans and document uploads, yet risks false positives, appeals backlogs, and trust issues as the deadline nears. (Credit: Adobe Stock)

Meta has announced plans to notify users suspected of being under 16 during the first week of December. This window is designed to give them time to verify that they are of legal age to use the platforms or to remove their accounts. The company has described a layered verification process that starts with low-friction checks; more involved methods, including facial age estimation and document uploads, will be used only if necessary. Citing data privacy, Meta also said that document uploads will be requested only when the automated checks are inconclusive.
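Meta has not published implementation details, so the following is only a minimal sketch of what such an escalating, layered flow could look like. Every function name, field, and threshold here is a hypothetical illustration, not Meta's actual system: each check either decides confidently or escalates to the next, higher-friction check.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"        # confidently 16 or older
    BLOCK = "block"        # confidently under 16
    ESCALATE = "escalate"  # inconclusive; try the next, higher-friction check

def check_account_signals(user) -> Decision:
    """Hypothetical low-friction check: stated birthday plus a confidence score."""
    if user["signal_confidence"] < 0.8:
        return Decision.ESCALATE
    return Decision.ALLOW if user["stated_age"] >= 16 else Decision.BLOCK

def check_face_estimate(user) -> Decision:
    """Hypothetical facial age estimation; known to be off by several years."""
    est = user["face_age_estimate"]
    # Near the 16-year boundary the estimate is untrustworthy, so escalate.
    if user["face_confidence"] < 0.9 or abs(est - 16) < 2:
        return Decision.ESCALATE
    return Decision.ALLOW if est >= 16 else Decision.BLOCK

def check_document(user) -> Decision:
    """Document upload as a last resort, used only when earlier checks fail."""
    return Decision.ALLOW if user["document_age"] >= 16 else Decision.BLOCK

def verify_age(user) -> Decision:
    """Run checks lowest-friction first, escalating only when inconclusive."""
    for check in (check_account_signals, check_face_estimate, check_document):
        decision = check(user)
        if decision is not Decision.ESCALATE:
            return decision
    return Decision.BLOCK  # default to blocking if nothing was conclusive
```

The ordering reflects the privacy tradeoff described above: the most invasive step (document upload) is reached only when the cheaper checks cannot decide.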

Even with these safeguards in place, Meta has acknowledged the possibility of over-blocking or under-blocking users. False positives could cut off access for users who are 16 or 17, while false negatives could leave access open to 14- and 15-year-olds. Both outcomes carry risk: legitimate users may lose social connections, while underage users could continue to face the very harms the government is trying to protect them from.
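The over-blocking/under-blocking tradeoff is the classic false-positive versus false-negative tradeoff. As a small illustrative sketch (the sample data is invented, not Meta's), the two error rates can be computed from ground truth and classifier output:

```python
def error_rates(true_under16: list, flagged_under16: list):
    """Return (false_positive_rate, false_negative_rate) for an age classifier.

    A false positive wrongly flags a 16+ user as under 16 (over-blocking);
    a false negative misses a genuinely under-16 user (under-blocking).
    """
    fp = sum(1 for t, f in zip(true_under16, flagged_under16) if not t and f)
    fn = sum(1 for t, f in zip(true_under16, flagged_under16) if t and not f)
    negatives = sum(1 for t in true_under16 if not t)  # users 16 or older
    positives = sum(1 for t in true_under16 if t)      # users under 16
    return fp / negatives, fn / positives

# Hypothetical audit sample: 4 under-16 users followed by 4 adults.
truth   = [True, True, True, True, False, False, False, False]
flagged = [True, True, True, False, True, False, False, False]
fpr, fnr = error_rates(truth, flagged)  # one adult over-blocked, one minor missed
```

Tightening the classifier to push one rate down typically pushes the other up, which is why the article's "both outcomes carry risk" framing matters for regulators.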

Meta has also said that it expects the appeals process to become overwhelmed. Users who should be allowed on the platforms will file appeals when they lose access, and it is a foregone conclusion that users who are rightly banned will also fight the ruling to stay on. Any missteps in the appeals process will not only compound the technical issues but could also erode user trust and undermine Meta's compliance claims.

Bigger Risks for Meta

The Australian parliament has been clear about how it plans to punish noncompliance, and most of the burden will fall on the platforms. Social platforms face fines running into the tens of millions of Australian dollars if they fail to take “reasonable steps” to block accounts of users who are under 16. Not only does the threat of those fines place some pressure on platforms like Meta, but so too does the vagueness of the language in the law. There remains a lot of uncertainty around what is considered “reasonable steps,” which has left Meta and other platforms with more questions than answers.

A large-scale misclassification problem has the potential to damage Meta's reputation. If the systems put in place don't work as designed, it won't take long for users around the world to start looking for alternative platforms. Overreach will result in bad press as young people lose social connections online, while failing to adequately enforce Australia's social media ban will raise concerns about Meta's regulatory and security capabilities.

Finally, it’s also possible that the law in Australia will result in changes in other countries. If Meta’s approach in Australia is seen as a workable model, other countries could adopt similar rules, forcing platforms to scale age verification globally. Conversely, if the implementation of these security measures is seen as chaotic, it could result in increased calls for international standards and upgrades to existing technological systems.

What’s Next for Regulation and Tech

Right now, the ban on users under 16 appears to affect only Australia, but it's safe to assume that other nations are at least discussing what similar policies would look like. That means Meta and other platforms are likely also discussing how to scale the measures rolling out in December, should other nations enact similar laws. What's next for tech and regulation? Only time will tell.

Looking for stories that inform and engage? From breaking headlines to fresh perspectives, WaveNewsToday has more to explore. Ride the wave of what’s next.
