Quantum #88
Issue #88 of the weekly HPC newsletter by HMx Labs. A quiet week, so I thought I’d share some of my thoughts on the last HPC Club and the future of computing in HPC.
The HPC world is still recovering from SC25 and there’s not a great deal to report from last week. Even HPC Club is over for this year. Things over at HMx Towers are still busy though. Not only are we hiring, but we’re also at that point near the end of several projects where I’m just itching for them to be complete so I can share the final versions!
Thinking back on HPC Club (and I could well be wrong here), for me the interesting difference between AMD and Nvidia wasn’t CUDA vs ROCm or the capabilities of their hardware. It was how each of them sees what’s coming. Nvidia sees the future as very much one of accelerated compute, with AI (though not necessarily in exactly today’s form) increasingly solving the problems of the future in potentially different ways, ways that are likely to require lower precision (FP4/8/16) compute. AMD, by contrast, is hedging against such a future: it still caters for higher precision accelerated compute, but it’s also invested in a wider diversity of compute, not just in its CPUs but in APU architectures in addition to GPUs.

As ever, my crystal ball is no better than yours (it’s probably worse), but if history is anything to go by, any complex operation that can be hardware accelerated and is in high demand tends to end up in the CPU eventually. History gives us examples of this in both encryption (see AES-NI in x86, for example) and the SIMD instructions that accelerate not only scientific compute but also media playback.
Maybe we’re already seeing the beginning of this with things like Apple’s Neural Engine in its SoCs? Certainly, power-constrained devices have historically been earlier adopters of hardware acceleration than data centres. If so, then this version of the future still has accelerated compute, but the acceleration just moves back into the CPU. Honestly, that’s a good thing, as it makes life easier for the software stack.
LLMs are still evolving far too rapidly right now for that transition to happen, but who knows…
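As a concrete reminder of what “ending up in the CPU” looks like in practice, here’s a minimal sketch (my own illustration, nothing from HPC Club) of the SIMD acceleration that has been baked into commodity x86 CPUs for years: adding two float arrays eight lanes at a time with AVX intrinsics, with a plain scalar loop as the fallback.

```c
// Minimal sketch: SIMD acceleration living directly in the CPU.
// Adds two float arrays eight lanes at a time using AVX intrinsics.
// Build with: gcc -O2 -mavx simd_add.c -o simd_add
#include <immintrin.h>
#include <stddef.h>
#include <stdio.h>

static void add_arrays(const float *a, const float *b, float *out, size_t n)
{
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);                 /* load 8 floats from a */
        __m256 vb = _mm256_loadu_ps(b + i);                 /* load 8 floats from b */
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));   /* 8 additions in one instruction */
    }
    for (; i < n; i++)          /* scalar tail for the leftover elements */
        out[i] = a[i] + b[i];
}

int main(void)
{
    float a[10], b[10], c[10];
    for (int i = 0; i < 10; i++) { a[i] = (float)i; b[i] = 2.0f * i; }
    add_arrays(a, b, c, 10);
    for (int i = 0; i < 10; i++) printf("%g ", c[i]);
    printf("\n");
    return 0;
}
```

The same pattern, a demanding operation getting its own CPU instructions, is exactly what AES-NI did for encryption.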
In The News
Updates from the big three clouds on all things HPC.

Oh look, El Reg is catching up to what I’ve been talking about since January.

The US government launches Genesis, a plan to power scientific research with AI and quantum computing by combining HPC resources across the US Department of Energy (home to the largest supercomputers in the US, including El Capitan).

If you thought this meant the US government would own all of that compute infrastructure, though, you’d be wrong: it seems they’re planning to farm a large portion of it out to AWS.

From HMx Labs
We’re growing and looking for someone to join the team. Interested? Know anyone who fits the bill?

Some days it seems like AI is getting really good, and then it does something really stupid and I’m back to thinking it’s a gimmick.

Know someone else who might like to read this newsletter? Forward this on to them or, even better, ask them to sign up here: https://cloudhpc.news

