According to The Wall Street Journal, the European Commission launched a formal investigation into Elon Musk’s X on Monday. The probe focuses on whether X’s Grok AI chatbot is violating the EU’s Digital Services Act by generating sexualized deepfake images, and it follows a significant public outcry over the chatbot’s output. The DSA requires platforms to mitigate risks from illegal content, with potential fines reaching up to 6% of a company’s annual global revenue if they fail to comply. The Commission’s investigation will specifically examine how X assesses and handles the systemic risks tied to how people use Grok. X did not immediately provide a comment on the new probe.
The DSA Meets AI
Here’s the thing: the Digital Services Act was built for a different internet. It was designed to police human-generated content, the hate speech, counterfeit goods, and election disinformation posted by people. Now it’s being tested against AI systems that can generate that same harmful content at an unimaginable scale and speed. The EU is no longer just asking whether X takes down bad posts fast enough. It’s asking whether the company’s own product, Grok, is fundamentally designed in a way that creates systemic risk. That’s a much deeper, more technical, and frankly more expensive question to answer, because it goes straight to the architecture of the AI itself.
The Impossible Moderation Task
So what’s the real challenge here? For X, it’s a brutal trade-off. Unlike most of its competitors, Grok is marketed as “unbiased” and deliberately less restricted; that’s its supposed selling point. But that very lack of guardrails is what makes it prone to generating harmful deepfakes when prompted. Tightening those controls might make Grok safer for the EU, but it could also make it behave more like ChatGPT or Gemini, undermining Musk’s “anti-woke” branding. And technically, can you even “assess and mitigate” this risk in real time? Filtering text prompts is one thing; verifying that every image output isn’t a non-consensual deepfake is a computational nightmare. The company is basically being asked to solve one of AI’s hardest content moderation problems, or face a fine that, at up to 6% of global revenue, could run into hundreds of millions.
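To make that asymmetry concrete, here is a minimal sketch of what a generation-time “assess and mitigate” pipeline can look like. It is a simplified illustration under assumed design choices, not anything X or xAI has described: the function names, the blocked terms, and the hypothetical_safety_model call are all placeholders. The structural point is that the prompt-side check is cheap text screening, while the output-side check has to run a safety classifier on every single generated image, and that per-output cost is what scales with volume.

```python
# Minimal sketch of a two-stage generation-time moderation pipeline.
# All names here are hypothetical placeholders, not X/Grok internals;
# a real deployment would swap in trained text and image safety models.

from dataclasses import dataclass


@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""


def screen_prompt(prompt: str) -> ModerationResult:
    """Cheap, text-only check that runs before any image is generated."""
    blocked_terms = {"deepfake", "non-consensual"}  # illustrative only
    if any(term in prompt.lower() for term in blocked_terms):
        return ModerationResult(False, "prompt matched blocked terms")
    return ModerationResult(True)


def hypothetical_safety_model(image_bytes: bytes) -> float:
    """Stand-in for a real image-safety classifier (returns a risk score)."""
    return 0.0


def screen_image(image_bytes: bytes) -> ModerationResult:
    """Expensive, per-output check: every generated image is classified
    before it is returned. This is the step that scales with volume."""
    score = hypothetical_safety_model(image_bytes)
    if score > 0.8:  # illustrative threshold
        return ModerationResult(False, f"image flagged (score={score:.2f})")
    return ModerationResult(True)


def generate_with_guardrails(prompt: str) -> ModerationResult:
    """Run the cheap prompt filter first, then the costly output filter."""
    pre = screen_prompt(prompt)
    if not pre.allowed:
        return pre
    image = b"..."  # stand-in for the image the model would generate
    return screen_image(image)
```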
A Warning Shot For Everyone
Look, this isn’t just about X. This probe is a massive warning shot to every AI developer operating in Europe. The EU is making it crystal clear that the DSA’s “systemic risk” provisions absolutely apply to generative AI tools. If your model can easily be used to create illegal content, you are liable. It doesn’t matter that the user typed the prompt; you built the tool. That shifts the burden of compliance way upstream, from content moderation teams to AI research and safety teams. For businesses relying on industrial computing and hardware to develop or deploy AI, designing systems with these regulatory frameworks in mind is no longer optional. It’s becoming a core requirement for market access, just like any other safety standard.
