Quantum #105

Issue #105 of the weekly HPC newsletter by HMx Labs. ARM makes a CPU, llm-d goes open source, HPSFCon gives us hours of useful content, we give you a way to save money on cloud VMs, and there's some vibe coding amusement.

As the dust settles on the hype created by GTC, we saw that NemoClaw wasn’t meaningfully more secure than its ancestor OpenClaw, and that the shine of growth forever in GPU clusters is starting to fade (in the financial markets at least). A week after HPSFCon, though, we have hours of footage with actually useful information. Awesome.

Last week ARM told us that it will start making its own CPUs. Wait. Haven’t they already been doing that? No, they’ve just been designing chips for other people. So you don’t run on an ARM CPU but on an AWS Graviton, a Google Axion or possibly an Ampere. Now you’ll get to run on a chip that identifies not only its architecture but also its manufacturer as ARM in the lscpu output. It’s an interesting development and potentially puts ARM into competition with its own customers, a strategy that isn’t working out so well for Intel right now. At least they’ve got Meta backing them, I guess.
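For the curious, this is where that identification shows up today. A quick illustration (the exact fields and values vary by platform and kernel version, so treat the grep pattern as a sketch rather than a recipe):

```shell
# List what the kernel reports about the local CPU. On current ARM-based
# cloud instances the Architecture line says aarch64 while the vendor and
# model lines reflect whoever designed and shipped the part -- which is
# exactly the identification the story above is about.
lscpu | grep -E 'Architecture|Vendor ID|Model name'
```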

What I’m looking forward to, though, is (hopefully) finally being able to test ARM against AMD EPYC and see how well the power efficiency claims hold up, not only when sitting idle babysitting a GPU but also when running full tilt on some HPC workload. We already have the numbers on AMD; anyone want to give us access to some ARM kit?

At a time when people are questioning what happens to open source code in the face of the AI assault, IBM has just donated llm-d to the Cloud Native Computing Foundation (CNCF) 🎉. Now if only they would open source their good HPC schedulers too, and we could build a decent inference stack on open source IBM software.

Does anyone know what happened to insidehpc.com? All I get is a 404 error from WP Engine. It’s been like that for a couple of weeks now.

Aside from making Lego jokes, last week we’ve also been busy giving you a way to look up and compare the details, performance and cost of various cloud VMs, whilst continuing to vibe code (I mean agentically engineer) a new HPC workload manager and scheduler for giggles. Or, more accurately, failing to vibe code one. More details on that below, along with all the links for the news above.


In The News

Updates from the big three clouds on all things HPC.

HPC Cloud Updates WE 29 Mar 2026
Updates to AWS, Azure & GCP in the last week relevant for HPC practitioners. AWS gives us updates to both PCS and the other one (Parallel Cluster). Learn how to use TPUs from Google.

HPSF have made the videos from HPSFCon available

ARM gets into the CPU game directly

Arm Comes Full Circle With Homegrown, AI-Tuned Server CPU
It has been nearly five decades since British workstation maker Acorn Computer was founded, and near…
Meta Partners With Arm to Develop New Class of Data Center Silicon
We’re partnering with Arm to develop a new class of CPUs purpose-built to support data centers and large-scale AI deployments.

llm-d goes open source. CNCF style.

Donating llm-d to the Cloud Native Computing Foundation
llm-d offers a replicable blueprint for developers and researchers to deploy inference stacks for any model, on any accelerator, in any cloud.

From HMx Labs

AI is like a rusty old saw blade. Totally jagged.

The Jagged Intelligence of AI
A non-update but still an update on HAL, our vibe coded HPC scheduler experiment

Want to know what’s in a cloud VM, how it performs and what it costs?

Announcing FLOPx: Data Discovery for Cloud VMs
Want to know what you’re getting when you spin up a cloud VM? How many cores, threads, memory? Actual benchmark performance numbers? We’ve got your back.

Make sure you keep your HPC sysadmin on side

And definitely play nice with your AWS account manager


Know someone else who might like to read this newsletter? Forward this on to them or even better, ask them to sign up here: https://cloudhpc.news