According to Neowin, Australia’s teen social media ban now includes Twitch, requiring the platform to prevent users under 16 from creating accounts starting December 10. Existing accounts belonging to teens will be deactivated from January 9, 2026. The ban covers major platforms including Facebook, Instagram, TikTok, Snapchat, YouTube, Reddit, Kick, Threads, and X. Companies must use government IDs, facial recognition, or age inference technology to verify users. Failure to comply could result in fines of up to AU$49.5 million (roughly US$32 million). Meta is already acting, closing teen accounts from December 4 to avoid penalties.
The Twitch surprise
Twitch being included in this ban is actually pretty significant when you think about it. Most people associate social media dangers with platforms where kids might encounter cyberbullying or harmful content directly. But Twitch? It’s primarily a streaming platform where gamers watch others play games. The government’s logic seems to be that any platform where the “main purpose is online social interaction” qualifies. That’s a pretty broad definition when you consider that Twitch’s core function is broadcasting, not necessarily social networking in the traditional sense. It makes you wonder where they’ll draw the line next.
The age verification reality check
Here’s the thing about these age verification requirements: they’re incredibly difficult to implement effectively. Government IDs? Many teens don’t have them. Facial recognition? That raises massive privacy concerns. Age inference through online behavior? That’s basically guessing based on how someone types or what they click, and we all know how accurate those algorithms tend to be. Companies are stuck between facing massive fines and implementing systems that either don’t work well or create new privacy nightmares. It’s a classic case of well-intentioned regulation meeting technological reality.
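To make concrete just how crude "age inference" can be, here's a minimal sketch in Python. Every signal, weight, and cutoff below is invented for illustration and bears no relation to any platform's actual system; real deployments train statistical models on far richer data, and they are still error-prone in exactly the way described above.

```python
# Toy behavioral age-inference sketch -- purely illustrative.
# All feature names and weights are made up for this example.

def infer_likely_minor(profile: dict) -> bool:
    """Score a few hypothetical behavioral signals and guess under-16 status.

    profile keys (all invented):
      avg_session_hour   -- typical hour of day the user is active (0-23)
      emoji_per_message  -- average emoji count per chat message
      account_age_days   -- how long the account has existed
    """
    score = 0.0
    # After-school / evening activity nudges the score up.
    if 15 <= profile.get("avg_session_hour", 12) <= 22:
        score += 0.3
    # Heavy emoji use is (stereotypically) weighted toward younger users.
    score += min(profile.get("emoji_per_message", 0) * 0.2, 0.4)
    # Very new accounts add a little extra weight.
    if profile.get("account_age_days", 365) < 30:
        score += 0.2
    # An arbitrary cutoff -- exactly the kind of guesswork the article describes.
    return score >= 0.5

print(infer_likely_minor({"avg_session_hour": 17,
                          "emoji_per_message": 3,
                          "account_age_days": 10}))    # True  (flagged)
print(infer_likely_minor({"avg_session_hour": 9,
                          "emoji_per_message": 0,
                          "account_age_days": 2000}))  # False (not flagged)
```

Note how a night-owl adult who likes emoji would be misclassified just as easily as a teen who browses at 9 a.m. would slip through; that's the accuracy problem in miniature.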
What’s notably absent
The most interesting part of this whole situation is what’s NOT on the banned list. Pinterest gets a pass because it’s considered more about “idea curation” than social interaction. But more importantly, artificial intelligence platforms aren’t included at all. Think about that for a second. While Australia is busy building walls around traditional social media, kids could potentially be forming relationships with AI chatbots that have zero safeguards. Is talking to an unfiltered large language model really safer than watching someone play Minecraft on Twitch? That seems like a massive oversight in an otherwise comprehensive approach.
Broader implications
This Australian move could become a template for other countries grappling with youth online safety. We’re already seeing similar discussions in the UK, US, and EU. But the enforcement mechanism here is what really stands out – those $49.5 million fines get companies’ attention fast. Meta’s preemptive action shows they’re taking this seriously rather than waiting for the deadline. The bigger question is whether this actually protects kids or just pushes them to less regulated corners of the internet. Will Australian teens suddenly start playing outside more, or will they just find new digital spaces that haven’t been regulated yet? Only time will tell, but this is definitely a watershed moment in how governments approach youth digital safety.
