According to Mashable, newsletter platform Substack will require UK users to verify their age to comply with the Online Safety Act, which went into effect in late July. The platform published a blog post on October 20 outlining its position, stating it “cautions against regulatory measures like these” while committing to comply with local laws. Substack’s Help Center was updated on Tuesday with details about the verification process, which includes facial scans, with government ID as a backup option. The platform will retain age estimates after verification, and content including chats, DMs, comments, and Notes may be blurred or blocked if deemed “potentially harmful content.” Paid subscribers are automatically verified through their banking information, while others may need to complete additional steps.
The age verification wave hits mainstream platforms
Here’s the thing – we’re seeing age verification requirements spread far beyond adult sites. YouTube’s already doing it, and now Substack joins the club. Basically, any platform that might host what regulators consider “potentially harmful content” is getting swept up in this global trend. The UK’s approach is particularly aggressive – platforms are demanding facial scans, government ID, or banking information just to read certain content. That’s a pretty high bar for casual browsing.
And honestly, Substack’s resistance is telling. Their position statement makes it clear they think these laws “come with real costs to free expression” and “introduce friction” to reading online. But they’re complying anyway because, well, they have to. It’s that classic tech company dilemma – do you fight the regulation or just implement it and move on?
What this means for Substack’s content problems
Now, this gets really interesting when you consider Substack’s recent controversies. Remember all those users leaving because the platform was hosting alt-right and Neo-Nazi content? Well, guess what’s likely to get blurred under the UK’s new rules? That content may now require age verification, which might actually reduce its visibility and impact.
But here’s my question: Is age verification really the right tool for combating harmful content? I mean, someone being 18 doesn’t make racist propaganda less harmful. It feels like regulators are using age verification as a catch-all solution when the real issue is content moderation. Substack’s own content policy has been under scrutiny for years, and this might be a way to offload some responsibility to users and regulators.
The privacy trade-off nobody’s talking about
Let’s talk about that facial scan requirement. Substack’s Help Center actually recommends having government ID ready “in case the selfie verification fails.” And they’re keeping the age estimate afterward. That’s a lot of biometric data floating around for a newsletter platform.
We’re basically training users to hand over facial data for everyday internet activities. What starts with adult content verification could easily expand to other uses. And Substack’s age verification help page says little about what happens to all this data long-term. It’s another case of privacy being the casualty in the name of safety.
So where does this leave us? More friction for readers, more data collection by platforms, and questionable effectiveness at actually addressing harmful content. The age verification trend is accelerating, and mainstream platforms are getting swept up whether they like it or not.
