According to DCD, liquid cooling firm Submer has launched a new cloud unit called InferX that aims to deliver sovereign AI cloud services through a dual-plane strategy connecting core and edge infrastructure. The company, founded in 2015 by Daniel Pope and Pol Valls Soler, plans to use core data centers for large-scale training in high-density GPU clusters, while edge AI infrastructure will handle real-time inference through telco and regional networks. Submer recently announced its first 56MW facility in Barcelona, Spain, and signed a Memorandum of Understanding with the government of Madhya Pradesh in India to develop up to 1GW of liquid-cooled AI data centers. The company emphasized that InferX will leverage Submer’s immersion cooling expertise while focusing on “real AI use-case enablement,” though specific facility details weren’t disclosed. The move marks a significant evolution for the cooling technology provider.
The Vertical Integration Gamble
Submer’s move from cooling provider to full-stack cloud computing operator is one of the most ambitious vertical integration plays in recent infrastructure history. Historically, cooling specialists have stayed in their lane, supplying technology to data center operators rather than competing with them. By launching InferX, Submer is essentially betting that its expertise in immersion cooling provides such a fundamental advantage for AI workloads that it justifies entering the highly competitive cloud market dominated by hyperscalers. The risk is substantial – Submer is now competing with AWS, Google Cloud, and Microsoft Azure while potentially alienating its existing customer base of data center operators, who may come to view the company as a competitor rather than a partner.
Sovereign AI’s Infrastructure Demands
The emphasis on “sovereign AI cloud services” taps into a growing global trend in which nations want to maintain control over their artificial intelligence infrastructure and data. The concept of sovereign computing has gained traction as governments recognize AI’s strategic importance and seek to reduce dependence on foreign cloud providers. Submer’s approach appears well-suited to this market: its liquid cooling technology enables higher-density deployments that can be placed closer to population centers, addressing both performance and data residency requirements. The India partnership suggests the company is targeting emerging markets where sovereign AI concerns are particularly acute and infrastructure gaps create openings for new entrants.
Technical Advantages and Limitations
Liquid cooling’s primary advantage for AI workloads lies in its ability to handle the extreme thermal density of modern GPU clusters. Traditional air cooling struggles with racks exceeding 40kW, while immersion systems can manage 100kW or more per rack – crucial for the dense AI training clusters InferX promises. However, the transition from cooling provider to cloud operator requires mastering multiple new domains simultaneously: networking, storage, orchestration, and the complex software stack needed for AI workload management. Submer’s success will depend on whether its cooling advantage outweighs the decades of operational experience established cloud providers have accumulated in managing global-scale infrastructure.
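To see why modern GPU racks blow past air cooling’s limits, here is a back-of-the-envelope estimate. The figures are illustrative assumptions, not Submer or InferX specifications: an H100-class accelerator draws roughly 700W, and the per-node overhead and nodes-per-rack counts below are hypothetical round numbers chosen for the sketch.

```python
# Back-of-the-envelope rack thermal density estimate.
# All figures are illustrative assumptions, not vendor specifications.
GPU_TDP_W = 700          # roughly an H100 SXM-class accelerator
GPUS_PER_NODE = 8        # typical dense training node
NODE_OVERHEAD_W = 4600   # assumed CPUs, NICs, fans, storage per node
NODES_PER_RACK = 4       # assumed rack configuration

node_power_w = GPUS_PER_NODE * GPU_TDP_W + NODE_OVERHEAD_W
rack_power_kw = NODES_PER_RACK * node_power_w / 1000

print(f"Per-node draw: {node_power_w / 1000:.1f} kW")
print(f"Per-rack draw: {rack_power_kw:.1f} kW")
```

Under these assumptions, just four such nodes already put a rack at about 41kW, past the ~40kW point where air cooling strains; denser configurations climb toward the 100kW range where immersion cooling becomes attractive.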
Significant Execution Challenges
The most immediate challenge facing InferX is capital intensity. Building and operating AI cloud infrastructure requires billions in investment, particularly for the GPU inventories needed to compete. Submer’s Barcelona facility and India MoU suggest the company is pursuing a capital-light partnership model, but scaling to compete with hyperscalers will require massive continued investment. The dual-plane edge strategy also introduces operational complexity – managing distributed inference infrastructure across multiple regions and telco partnerships is fundamentally different from operating centralized data centers. Success will hinge on executing this distributed model while maintaining the reliability standards enterprise AI workloads demand.
Market Timing and Competitive Landscape
Submer enters an increasingly crowded specialized AI infrastructure market that includes both established cloud providers and newer entrants such as CoreWeave and Lambda Labs. The timing is both opportune and challenging – AI compute demand continues to outstrip supply, creating openings for new providers, but the competitive intensity means margins are already under pressure. InferX’s liquid cooling differentiation could appeal to cost-sensitive AI startups and enterprises, but the company will need to demonstrate superior price-performance against established alternatives. The coming 12-18 months will be critical for proving whether this technology-first approach can translate into a sustainable market position against well-funded competitors.