Nvidia Drinks (Acquires) Slurm
Some thoughts and questions on what Nvidia’s acquisition of Slurm means for both AI and traditional HPC.
Nvidia has acquired SchedMD, the company behind the open source HPC scheduler Slurm. I have… thoughts….
Well, I’m afraid I’m probably going to disappoint. I have more questions than answers and I have spoken to precisely zero people to gain insight. Means I can’t get in any trouble, right? 😁
Over the past few days I saw a few social media posts from various companies that provide alternatives to Slurm, highlighting their vendor neutrality (in relation to hardware vendors, I guess?). Personally, I don’t think Slurm users have much to worry about.
Nvidia, so far at least, does not have an appalling record of playing nicely with the open source community, especially in the recent past. Slurm is also GPL licensed and has a large and fairly technically adept user base. I don’t doubt that a number of large users have their fingers hovering over the fork button already. I know we’ve seen a few OSS licence rug pulls in recent history, but I’m quietly confident that we won’t see that in this space.

That’s not to say we won’t see new features developed in Slurm made available only for Nvidia hardware.
While it deserves a longer, dedicated post of its own that I’ve been meaning to write for some time, I’ll say a few words here. HPC schedulers are rather lacking when it comes to managing GPU workloads, be that AI or anything else quite frankly. Slurm, and many others, lack basic functionality that I find hard to believe is still missing: features such as the ability to schedule work based on both memory and processor utilisation, or to checkpoint and consolidate workloads.
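To illustrate the gap: a typical Slurm GPU job today requests a static device count and host resources up front. Nothing in the request expresses GPU memory or live utilisation, so the scheduler has no signal to pack or consolidate work on those dimensions. A minimal sketch (job name and script are placeholders; the `#SBATCH` directives are standard Slurm options):

```shell
#!/bin/bash
#SBATCH --job-name=train        # placeholder job name
#SBATCH --gres=gpu:2            # a static GPU count is all you can ask for
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G               # host RAM; GPU memory is not a schedulable resource here
#SBATCH --time=04:00:00

srun python train.py            # placeholder workload
```

Note that the GPU request is all-or-nothing at submit time: there is no way to say "schedule me alongside jobs whose GPUs are underutilised" or to checkpoint and migrate the job onto fewer devices later.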
I’m hoping that Nvidia sees this too and will provide that functionality, and I won’t be in the least bit surprised if that support is limited to Nvidia GPUs. My next guess would be that Nvidia acquires one of the many startups selling GPU workload checkpointing and migration solutions, to integrate with both Run:ai and Slurm.
I’m also quietly encouraged by this move. Are AI bros finally waking up to the fact that they should stop reinventing the wheel and buy in some HPC expertise? We can only hope. I suspect that might be a trend we see continue into 2026. As economic pressure mounts on the AI companies, I think we’ll see increased focus on improving operational efficiency, and who better to do that than HPC folk who have been doing it for 30+ years.
Oh, and the image? If you know, you know! And if you don’t, then a quick read of Wikipedia will clear things up for you.
Nvidia’s press release:

