Quantum #100

Issue #100 of the weekly HPC newsletter by HMx Labs. AWS gives us two gifts in the form of a new HPC VM type and nested virtualisation. A heterogeneous CPU market becomes a baked-in certainty with Meta buying ARM CPUs. Microsoft becomes an Nvidia Exemplar.


It seems that AWS got the news that this week’s edition is number 100 and gave us all a new AMD Turin-powered, HPC-optimised VM type in the form of the new HPC8a. They also gave us nested virtualisation, which is great news, but not on AMD CPUs it seems. It’s an Intel-only feature for now.
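Whether nested virtualisation is usable comes down to whether the hypervisor exposes the CPU’s hardware virtualisation extensions to the guest: on Linux, Intel advertises the vmx flag and AMD the svm flag in /proc/cpuinfo. A minimal sketch of a check (the helper name is my own, not from any AWS tooling):

```python
def has_virt_extensions(cpuinfo_text: str) -> bool:
    """True if any CPU's flags line advertises Intel VMX or AMD SVM."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

if __name__ == "__main__":
    # On a guest where nested virtualisation isn't enabled,
    # neither flag appears and KVM inside the guest won't work.
    with open("/proc/cpuinfo") as f:
        capable = has_virt_extensions(f.read())
    print("nested-capable" if capable else "no virtualisation extensions exposed")
```

Run inside the VM itself; the same flags are what tools like kvm-ok look for.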

Microsoft isn’t resting on its laurels though, and is showing off its new Exemplar status with Nvidia and the installation of its new GB300s.

My predictions a year ago of the CPU market becoming increasingly heterogeneous got a big confirmation with Meta inking a deal with Nvidia, not for GPUs but CPUs. Much is being made of this in the press as being a death knell for Intel, or as showing Meta’s leadership as the first hyperscaler to buy Nvidia CPUs. I don’t think either is true. Intel (despite its woes) doesn’t seem to be going anywhere just yet. It’s no surprise that none of the big three cloud providers have bought Nvidia CPUs either. They already have Graviton, Axion and Cobalt. They don’t need Nvidia for ARM CPUs. Meta on the other hand made a play on RISC-V, but I think we’re still a little too early in that game for serious adoption. Nvidia also makes sense: much like Isambard AI (and probably others), if you have a Grace Hopper GPU side to your supercomputer, it makes sense to adopt the same CPU for the CPU-only side. This is a relatively common pattern in the HPC world.

We also saw the first LLM ASIC. I’ll be honest, I’m surprised it’s come this early in the game. I didn’t expect something like this till LLMs became more stable, and we seem to be a long way from that! It does provide an early glimpse, though, into what future supercomputers might look like. If your only GPU workload is an LLM, do you buy GPUs anymore? Will we see ASICs for LLMs eventually just integrated into SoCs? Probably. But I think we’re some way off that.

In our own news, we’re hiring, and in somewhat of a first for us, this time it’s not an engineering role! Also, if you’re looking for an update on HAL, our vibe-coded scheduler, that’s down below too.


In The News

Updates from the big three clouds on all things HPC.

HPC Cloud Updates WE 22 Feb 2026
Updates to AWS, Azure & GCP in the last week relevant for HPC practitioners. AWS gives us a new HPC-specific, AMD-powered VM type and the ability to run nested virtualisation (but not on AMD). Azure and Nvidia get even cosier.

Meta buys ARM CPUs

Meta already deploying Nvidia’s standalone CPUs at scale
CPU adoption is part of a deeper partnership between the Social Network and Nvidia which will see millions of GPUs deployed over the next few years.

The first LLM ASIC

Taalas Etches AI Models Onto Transistors To Rocket Boost Inference
Adding big blocks of SRAM to collections of AI tensor engines, or better still, a waferscale collection of such engines, turbocharges AI inference…

From HMx Labs

We’re hiring for a non-engineering role in what is probably a first for us:

Hiring: Growth & Relationships Coordinator
We’re looking to hire someone to help us get our house in order in terms of managing our client and partner relationships and our sales and marketing functions.

Vibe coding, dopamine factories and trusting big tech

Vibing Dopamine
Vibe coding, dopamine circuits and continuous integration.

Know someone else who might like to read this newsletter? Forward this on to them or even better, ask them to sign up here: https://cloudhpc.news