Quantum #111

Issue #111 of the weekly HPC newsletter by HMx Labs. Supercomputing Top Trumps. It's not new, but have we finally got to the end?


Supercomputing is weird. For all its technical excellence, its intricacies and complexities, it's amazing how often it just becomes a game of Top Trumps. It doesn't matter what it is, be it FLOPs, core counts, tasks per second, GWs or any other metric we can game to have the highest number. As someone who's written a benchmark or two in the past, I'll put my hands up and say mea culpa, I guess I'm guilty of this too… but it does get kinda boring after a while, right?

Anyway, I guess it should come as no surprise then that the latest niche use for supercomputing (AI) behaved the same way. Also perhaps, to those of us who have seen this play out before, not much of a surprise that bigger might not always be better, as asserted in a recent blog post by Glenn Lockwood. And given what he's been working on for the last few years, I would guess he should know. Much of it certainly rings true, as anyone who's run compute at scale (especially on new hardware!) can attest. What was coincidentally funny though was Microsoft's own blog post about its Fairwater supercomputer released in the same week, as well as Next Platform's piece on Azure's ambition to double its AI infrastructure in the next two years.

Sure, Glenn isn't asserting that we don't need lots more compute (I mean, he does still work for VAST so I doubt he could say such a thing 😁), but while inference may demand lots of GPUs, what it doesn't need in the same way is a single networked fabric of them. And how many people need to train frontier-class LLMs? Hmm. Maybe Ed Zitron has a point?

Anyway, all of this raises an interesting question: have we hit the limit on Top Trumps? Will we make an attempt to actually work out what's better rather than just bigger? Nah! We'll just find another metric to game, right? 🤣


In The News

Updates from the big three clouds on all things HPC now come via Noteworthy! Let me know what you think of this compared to our previous format. More details about this below.

Noteworthy by HMx Labs

Some interesting information on Microsoft's Fairwater supercomputer, both from Microsoft themselves and from Glenn Lockwood. The differences between the two are telling.

AI doesn’t need giant supercomputers after all
I attended the 2026 Salishan Conference on High Speed Computing last month, and it was a week well spent in coastal Oregon hearing what man…
Building resilient networks for AI supercomputers | Microsoft Community Hub
By Valerie Cutts and Jithin Jose   Last fall we introduced Fairwater, the world’s most powerful AI datacenter. Delivering a system of this scale…
Microsoft Committed To Doubling AI Infrastructure In Two Years
Microsoft built a systems software platform – Windows Server and its zillions of add-ons and extensi…

Meanwhile, AMD seems to be taking the bet that FP64 and more normal-sized supercomputers are still important.

https://www.hpcwire.com/2026/05/08/amd-delivers-plug-in-ai-power-with-pci-based-gpu/


From HMx Labs

Time to up our game in handling HPC (and AI) release notes.

Noteworthy: Release Notes Worth Your Time
Introducing Noteworthy, release notes about HPC and AI that are actually worth your time to read.

Old man doesn’t shout at clouds but just reminisces instead

Reflections of an old man: Learning to code
Not sure this anecdote necessarily has a point, just sharing a story for Friday afternoon.

Know someone else who might like to read this newsletter? Forward this on to them or even better, ask them to sign up here: https://cloudhpc.news