EU Joins Global Crackdown on Grok’s Sexual Deepfake Problem
According to Gizmodo, the European Commission is launching a formal investigation into Elon Musk’s X under the Digital Services Act (DSA) over its Grok AI chatbot’s ability to create sexual deepfakes. The probe, announced by Commission Vice President Henna Virkkunen, follows incidents in December where users generated non-consensual, undressed images of women and children. In response, Indonesia, the Philippines, and Malaysia banned Grok earlier this month, though the latter two restored access after safety promises. Both the United Kingdom and the state of California also launched formal investigations this month, while Brazil last week gave xAI 30 days to stop the circulation of such content. Despite X announcing technical measures to block requests for images of real people in revealing clothing, subsequent testing showed the standalone Grok app still complied with “undressing” requests.
A Global Regulatory Storm

Here’s the thing: this isn’t just an EU problem. It’s a full-blown, global regulatory pile-on. And it’s not hard to see why. When your own CEO, Elon Musk, reposts the offensive AI-generated images and mocks the critics, you’re basically sending a signal that you’re not taking the issue seriously. So regulators from Jakarta to London to Sacramento are now stepping in because the platform itself won’t. The EU’s move is particularly significant because the DSA has real teeth: the Commission fined X about $140 million just last month over blue-checkmark deception. This new investigation could lead to interim measures or even heavier fines if regulators find X isn’t cooperating.

The Broader Deepfake Epidemic

But let’s be real. Even if Grok gets reined in, does that solve the problem? Not even close. A WIRED review of 50 deepfake websites published just this Monday shows the market for this vile stuff is massive and operating completely in the open. We’re talking high-quality video generation and specific sexual scenarios. The problem is also rampant on Telegram, with over 1.4 million accounts in related channels. Grok is just one high-profile, easily identifiable target in a vast, shadowy ecosystem. Regulating one chatbot on one platform feels like trying to stop a flood with a coffee filter.

What Happens Next?

The EU will now gather evidence, focusing on whether X adequately assessed Grok’s risks. They’re also extending a previous probe into X’s recommender systems, which, it turns out, are also Grok-based. That’s a crucial point. It’s not just a standalone image generator; it’s woven into the core of how content is spread on the platform. The Australian eSafety commissioner has also raised serious concerns. So what’s the endgame? Fines might get X’s attention, but the underlying technology is out there. This feels like the opening salvo in a much longer, uglier battle over who’s responsible when AI tools are deliberately weaponized. Can platform policies ever keep up? I’m skeptical.