According to TheRegister.com, Norway’s new Olivia supercomputer just went live this week, delivering a 16-fold boost to the nation’s computing capacity. Built by Hewlett Packard Enterprise for Sigma2, the system combines 504 AMD Epyc Turin CPUs with 304 Nvidia Grace Hopper Superchips in an underground datacenter within the Lefdal Mines. The GPU partition alone scored 13.2 petaFLOPS on the High-Performance Linpack benchmark, earning 134th place in the latest Top500 ranking. Each Grace Hopper Superchip is rated at up to 1,000 watts while delivering up to 67 teraFLOPS of FP64 performance for scientific workloads. The system cuts power consumption by 30% compared to its predecessor and will eventually use waste heat from its liquid-cooled Cray EX4000 compute blades to warm water for local salmon farms.
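Those figures let you sanity-check the efficiency story with some quick arithmetic. A minimal sketch, assuming every superchip draws its full 1,000-watt rating during the Linpack run and ignoring CPU, network, and cooling overhead:

```python
# Back-of-envelope efficiency estimate from the figures quoted above.
# Assumption: all 304 superchips draw their full 1,000 W rating during the
# HPL run; CPU, network, and cooling power are ignored.

hpl_pflops = 13.2         # measured High-Performance Linpack score (petaFLOPS)
chips = 304               # Grace Hopper Superchips in the GPU partition
watts_per_chip = 1_000    # rated power per superchip
peak_fp64_tflops = 67     # quoted FP64 peak per superchip (teraFLOPS)

delivered_gflops_per_watt = (hpl_pflops * 1e6) / (chips * watts_per_chip)
peak_gflops_per_watt = (peak_fp64_tflops * 1e3) / watts_per_chip
hpl_fraction_of_peak = (hpl_pflops * 1e3) / (chips * peak_fp64_tflops)

print(f"Delivered on HPL: ~{delivered_gflops_per_watt:.0f} GFLOPS/W")
print(f"Per-chip FP64 peak: ~{peak_gflops_per_watt:.0f} GFLOPS/W")
print(f"HPL reaches ~{hpl_fraction_of_peak:.0%} of aggregate FP64 peak")
```

Under those assumptions that works out to roughly 43 GFLOPS per watt delivered on Linpack, or about two-thirds of the partition’s aggregate FP64 peak.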
Why this matters
Here’s the thing about supercomputers – they’re absolute power hogs, and that’s becoming a real problem as we push for more AI and scientific computing. Norway’s approach is actually pretty brilliant when you think about it. Instead of just dumping all that heat into the atmosphere (or a nearby river, which is what many data centers do), they’re putting it to work in a way that makes sense for their local economy. Salmon farming is huge in Norway, and keeping those fish tanks at the right temperature isn’t cheap. Basically, they’re turning a waste product into something valuable.
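To put a rough number on the heating side: the spec sheet above implies the GPU partition alone turns roughly 300 kW into heat when busy. A back-of-envelope sketch, assuming only that heat is captured and the water loop takes a modest 10 °C lift (both assumptions on my part, not Sigma2 figures):

```python
# Rough estimate of how much water the GPU partition's waste heat could warm.
# Assumptions (mine, not Sigma2's): heat captured = 304 superchips x 1 kW,
# a 10 degC temperature lift on the water, and lossless heat transfer.

SPECIFIC_HEAT_WATER = 4186        # J per kg per degC

recovered_heat_w = 304 * 1_000    # watts of heat assumed captured
delta_t_degc = 10.0               # assumed temperature lift

flow_kg_per_s = recovered_heat_w / (SPECIFIC_HEAT_WATER * delta_t_degc)
flow_m3_per_hour = flow_kg_per_s * 3600 / 1000   # ~1,000 kg per cubic metre

print(f"~{flow_kg_per_s:.1f} kg/s of water, i.e. roughly {flow_m3_per_hour:.0f} m^3 per hour")
```

Tens of cubic metres of warmed water per hour from the accelerators alone is exactly the sort of steady, low-grade heat a fish farm can put to work.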
The tech behind it
What’s interesting here is the hardware mix. They’re using Nvidia’s Grace Hopper Superchips – which aren’t even Nvidia’s latest accelerators anymore – alongside AMD’s newest Turin CPUs. That tells me they’re optimizing for efficiency rather than just raw performance. Grace Hopper has proven to be one of the most energy-efficient chips Nvidia has shipped (GH200-based systems have topped the Green500 efficiency rankings), and when you’re dealing with a 30% power reduction target, every watt counts. And let’s be real – for the kind of research they’re doing (climate modeling, renewable energy, marine science), you don’t always need the absolute latest AI accelerators. Sometimes proven, efficient hardware is the smarter play.
This hybrid approach with separate GPU and CPU partitions is actually quite practical. Not every scientific workload benefits from GPU acceleration, and having dedicated CPU resources means researchers aren’t fighting over GPU nodes for tasks that would run just fine on CPUs. It’s a more balanced architecture that reflects real-world research needs rather than just chasing benchmark scores.
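To make that concrete, here’s a toy sketch of the routing logic a dual-partition design implies. The partition names and job fields are hypothetical and purely illustrative; in practice this is handled by the batch scheduler, not application code:

```python
# Toy illustration of why separate CPU and GPU partitions reduce contention:
# jobs that don't benefit from accelerators never occupy, or queue behind,
# GPU nodes. Partition names and Job fields are invented for this sketch.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    uses_gpu: bool     # does the code actually benefit from accelerators?
    cpu_cores: int

def pick_partition(job: Job) -> str:
    """Send accelerated work to the GPU partition, everything else to CPU nodes."""
    return "accel" if job.uses_gpu else "cpu"

jobs = [
    Job("ocean-circulation-model", uses_gpu=False, cpu_cores=512),
    Job("climate-ml-training", uses_gpu=True, cpu_cores=64),
]

for job in jobs:
    print(f"{job.name} -> partition '{pick_partition(job)}'")
```

The point is simply that CPU-only work never queues behind, or squats on, scarce accelerator nodes.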
The bigger picture
Now, waste heat reuse isn’t a new idea – we’ve seen data centers heating swimming pools, greenhouses, and even entire districts. But the salmon farm application is particularly clever because it aligns with Norway’s existing industries. The question is whether this can scale. How many salmon farms can one supercomputer realistically heat? And what happens during summer when heating demand drops?
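Those questions are answerable, at least in back-of-envelope terms. Sticking with the earlier assumption that only the GPU partition’s roughly 304 kW of heat is recovered, and leaving the per-farm demand as an unknown (the source gives no figure for it):

```python
# How far does the heat stretch over a year? Same assumption as before:
# only the GPU partition's ~304 kW is captured. Per-farm demand is left as a
# parameter because the source gives no figure for it.

recovered_heat_kw = 304
hours_per_year = 24 * 365

annual_heat_mwh = recovered_heat_kw * hours_per_year / 1000
print(f"~{annual_heat_mwh:,.0f} MWh of low-grade heat per year")

def farms_served(per_farm_demand_mwh_per_year: float) -> float:
    """Farms covered for a given annual heat demand (a number you'd have to source)."""
    return annual_heat_mwh / per_farm_demand_mwh_per_year
```

Call it on the order of 2,600 MWh of low-grade heat a year under those assumptions; how many farms that covers, and what happens to it in summer, comes down to the demand curve on the other end of the pipe.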
Still, it’s a step in the right direction. As computing demands continue to explode, we’re going to see more creative approaches to managing the energy footprint.
The real test will be whether other countries follow Norway’s lead. We’ve seen plenty of “green computing” announcements that turned out to be more marketing than substance. But when you’re actually piping waste heat to local businesses? That’s measurable, tangible sustainability. Let’s hope this becomes the new normal rather than just a novelty.
