Quantum #108
Issue #108 of the weekly HPC newsletter by HMx Labs. Slurm is still in the limelight, with the US Gov finally wondering how Nvidia’s acquisition will affect national supercomputers. Data for AI might need another look beyond just scraping the internet, and sovereign compute is still a thing, it seems.
It appears the US Government suddenly woke up about four months late and realised that Nvidia drinking all the Slurm might affect their national supercomputers. Senator Warren has written to the DOE about this, but might not get the alarmed response she’s hoping for, given that El Capitan and most of the other largest supercomputers under the DOE now run on the Flux Framework. Just as well really, as they’re also based on AMD silicon. Coincidentally (or maybe not!?), Todd Gamblin from Lawrence Livermore National Laboratory was kind enough to speak at the Google Cloud Advanced Computing Community last week too.
A lot of the news last week was around using AI to accelerate quantum computing. Nothing like powering your AI hype train with some quantum fuel, right? And I think I’ll stop there before I say something that upsets people.
The slightly more interesting bit of news that caught my eye, though, was a piece from HPCwire on data lakes for scientific data. The idea is to retain far more experimental output data than has traditionally been kept, so that it can be used in AI models. I think the idea is actually broader than just the scientific community. In a world where almost anything could potentially form useful training data, retaining, categorising and cataloguing that data is suddenly useful. Important even.
This applies to so much that we traditionally ignored: multiple categories of enterprise data; intermediate calculation results that were often used for debugging or compute optimisation but never kept long term (I’m thinking financial risk analytics here); even telemetry data from production systems.
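To make that concrete, here’s a minimal sketch of what “retain and catalogue” could look like for intermediate results that would normally be thrown away. Everything here (the catalogue directory, field names, tags) is hypothetical illustration, not any particular product’s API.

```python
# Sketch: instead of discarding intermediate results, write them alongside a small
# metadata record so they can be found, filtered and reused as training data later.
import hashlib
import json
import time
from pathlib import Path

CATALOGUE_DIR = Path("catalogue")  # hypothetical local catalogue location


def retain_intermediate(name: str, payload: bytes, tags: dict) -> Path:
    """Store an intermediate result plus a metadata record describing it."""
    CATALOGUE_DIR.mkdir(exist_ok=True)
    digest = hashlib.sha256(payload).hexdigest()
    (CATALOGUE_DIR / f"{digest}.bin").write_bytes(payload)  # content-addressed blob
    record = {
        "name": name,              # logical name, e.g. "var_scenario_pnl_vectors"
        "sha256": digest,          # content hash for de-duplication
        "created": time.time(),
        "size_bytes": len(payload),
        "tags": tags,              # provenance: model, run id, retention intent, etc.
    }
    meta_path = CATALOGUE_DIR / f"{digest}.json"
    meta_path.write_text(json.dumps(record, indent=2))
    return meta_path


# Example: keep the per-scenario PnL vectors a risk run would normally discard.
retain_intermediate(
    "var_scenario_pnl_vectors",
    payload=b"...serialised vectors...",
    tags={"model": "historical_var", "run": "2026-04-17", "keep_for": "training"},
)
```

The point isn’t the storage format, it’s the metadata record: without it, a pile of retained intermediates is just a bigger pile.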
I think this is a space that hasn’t really been explored. HPC and AI data planes are still pretty dumb. They might be bigger and faster, but they’re still not really any smarter. Getting the right data, from the right authoritative source, from the nearest possible location? Yeah, that’s still not happening. Not even close. Agentic access to data is going to be pretty meaningless if the human still has to figure out where to get the data from. That’s 80% of the problem most of the time!
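For what a slightly smarter data plane might mean in practice, here’s a tiny sketch: resolve a logical dataset name to the closest replica, falling back to the authoritative copy. The catalogue, dataset names, regions and URIs are all made up for illustration.

```python
# Sketch: a data plane that answers "where should this job read dataset X from?"
from dataclasses import dataclass


@dataclass
class Replica:
    uri: str             # where this copy lives
    region: str          # e.g. "eu-west", "us-east", "onprem-ldn"
    authoritative: bool  # is this the system of record?


# Hypothetical catalogue: logical dataset name -> known replicas.
CATALOGUE = {
    "market_data/eod_curves": [
        Replica("s3://bucket-us/eod_curves", "us-east", False),
        Replica("s3://bucket-eu/eod_curves", "eu-west", False),
        Replica("posix:///data/golden/eod_curves", "onprem-ldn", True),
    ],
}


def resolve(dataset: str, compute_region: str) -> str:
    """Prefer a replica in the caller's region; otherwise use the authoritative copy."""
    replicas = CATALOGUE[dataset]
    local = [r for r in replicas if r.region == compute_region]
    if local:
        return local[0].uri
    return next(r.uri for r in replicas if r.authoritative)


print(resolve("market_data/eod_curves", "eu-west"))   # nearby copy
print(resolve("market_data/eod_curves", "ap-south"))  # falls back to the system of record
```

Trivial on its own, but that lookup is exactly the bit agents (and humans) are left to guess at today.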
Lastly, the idea of sovereign HPC (funny how much more normal it seems when you don’t call it AI) hasn’t really gone anywhere. Both Canada and EuroHPC made a bit of noise, and some advances, in this space last week.
In The News
Updates from the big three clouds on all things HPC.

Did US Gov just notice Nvidia drinking all the Slurm?
What’s the best way to keep the AI hype train running? Fuel it with quantum, of course.

What does data mean in an AI age? How much more do we need to keep? How much more metadata do we need for it to make sense? And how do we catalogue, categorise and manage all of it?
https://www.hpcwire.com/2026/04/17/the-rise-of-experimental-data-lakes/
Sovereign compute capacity hasn’t gone away
From HMx Labs
I’ve been a little too busy to write anything interesting and am also trying to focus on a couple of longer articles, including one on using AI within HPC and financial services.
Here’s a little amusement to keep you occupied instead.

Know someone else who might like to read this newsletter? Forward this on to them or even better, ask them to sign up here: https://cloudhpc.news

