According to The Wall Street Journal, the world is entering a new imperial age defined by artificial intelligence, with sovereignty and power concentrating into just two poles: the United States and China. The piece, co-authored by British MP Tom Tugendhat and Recorded Future CEO Christopher Ahlberg, argues that true AI sovereignty—the ability to design, train, and deploy foundational AI systems for national security without external dependence—rests on three scarce resources: elite technical competence, energy at massive scale, and extraordinary financial depth for long-term investment. The analysis notes that this concentration became starkly visible after frontier systems such as ChatGPT became widely available starting in 2022, as control over the underlying models quickly consolidated in a handful of U.S. firms, with China as the sole rival ecosystem. The authors conclude that for other nations, the strategic task is now managing dependence and retaining agency; the race for full AI independence is effectively over.
The brutal math of AI sovereignty
Here’s the thing: the WSJ piece cuts through a lot of the fluffy talk about “every nation having an AI strategy.” It makes a brutally materialist argument. This isn’t about writing a clever app or fine-tuning an open-source model. It’s about the physical and human capital required to build the foundational engines from scratch, repeatedly, and at the bleeding edge. Elite talent? Globally mobile and clustering in maybe a dozen cities. Energy? We’re talking about power grids, not policy papers. Financial depth? It’s the ability to burn billions for a decade with no guarantee of a commercial product. When you frame it that way, the list of contenders gets very short, very fast. It’s a stark reminder that in the 21st century, power might be digital, but its prerequisites are intensely physical.
The DeepMind paradox and the illusion of success
The example of the U.K. and DeepMind is painfully instructive. We often point to it as a British tech triumph—and it is, in terms of pure intellectual achievement. But as the authors note, “The building is in King’s Cross. The sovereignty is in Mountain View.” That’s a devastatingly elegant way to put it. The talent, the IP, the operational control, and ultimately the strategic benefit now serve American corporate and, by extension, national interests. In a crisis, export controls would apply. It’s the perfect case study in how a country can excel at the “elite competence” part but still lose the sovereignty game entirely because it lacks the other pillars. It raises the question: how many other “national champion” startups are just future acquisitions waiting to happen, functionally outsourcing a country’s technological future?
A strange, contradictory world
The context the article sets is crucial. We’re living through a simultaneous deglobalization of *stuff*—supply chains, manufacturing, even migration—and a hyper-connection of *data and influence*. That’s a weird, unstable state. Borders are hardening for physical goods and people, but algorithms, disinformation, and software vulnerabilities flow freely. AI emerges right in the middle of this contradiction. Nations want to wall themselves off for security, but the very tools that promise security (or dominance) are built on globally connected research and, often, globally scraped data. So you get this push-pull: a desperate need for technological autonomy, born in a system that was, until about five minutes ago, profoundly interdependent. No wonder everyone’s scrambling.
The trust factor is the killer
The fourth constraint the authors mention—trust—might be the most important one long-term. You can, in theory, license a powerful AI model from another country for civilian use. But embedding it into your military command, your intelligence cyber-operations, or your critical infrastructure? That’s a completely different level of risk. Would the Pentagon ever run its logistics on a model hosted and fundamentally controlled by a firm in a rival nation? Of course not. And that logic applies to every other state. This need for trusted autonomy in national security applications is what will truly cement the bipolar divide. It means that even if another nation or bloc somehow cobbles together the talent, energy, and capital, it will *still* face massive internal pressure to “go it alone” for the most sensitive systems. The incentive is for fragmentation, not convergence.
So where does this leave everyone else? Basically, aligning, partnering, or accepting a kind of permanent vassal status in the AI realm. The EU might grumble and regulate, but without retaining its top talent and marshaling continent-scale resources, it’s on the sidelines. The WSJ’s conclusion is bleak but probably correct: the fiction of full independence is becoming unsustainable. The new game is about managing dependence without being consumed by it.
