HAL Update: No More Tokens
Not enough tokens for a real update, but the AI does seem to be struggling now
I was hoping to provide an update today saying that the caching component of HAL (the AI-first-coded HPC workload manager/scheduler) is done. I can’t. I ran out of tokens.
I asked Codex to create the actual implementation, which it kind of did…with “kind of” doing the heavy lifting in that sentence. When I asked it to actually run the tests and make sure they pass, it got into a bit of a funk, spun its wheels for ages, and then spat out:
“Selected model is out of usage You've hit your usage limit for GPT-5.3-Codex-Spark.”
All that while trying to fix a bash script (which it wrote):
hamza@devbox tests % ./run-full-cache-tests.sh
./run-full-cache-tests.sh: line 218: unexpected EOF while looking for matching `"'
🤣
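For what it’s worth, that particular bash error almost always means an unterminated double quote somewhere above the reported line: the parser scans to end-of-file looking for the closing quote and gives up. A minimal reproduction (the script name here is made up, not the actual HAL test script):

```shell
# Create a script with an unterminated double quote.
# The quoted 'EOF' delimiter stops the shell from expanding anything,
# so the broken quote lands in the file literally.
cat > /tmp/broken.sh <<'EOF'
#!/bin/bash
echo "this quote is never closed
EOF

# Running it reproduces the same class of error Codex was fighting:
#   unexpected EOF while looking for matching `"'
bash /tmp/broken.sh 2>&1 | grep -c 'unexpected EOF'   # prints 1

# `bash -n` parses without executing, which is a cheap way to catch
# this before running a script you (or an AI) didn't write carefully.
bash -n /tmp/broken.sh || echo "syntax error found"
```

`bash -n` (or a linter like ShellCheck) in the loop would have surfaced this immediately instead of burning tokens on trial-and-error runs.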
Oh well. I have both a Copilot and a Claude subscription I could point at it, I suppose. Or I might just wait.
I have to say, though, the implementation it has generated feels a little sparse. Either this thing is amazing and can reimplement ValKey++ in about one-tenth the code, or (more likely) it just doesn’t actually do most of what is needed.
I might need to take a step back, generate smaller work items, and then set it off to work on each item in turn (with less context in each one). I think I was perhaps a little ambitious in attempting to one-shot the implementation, even with all the specification and tests in place.
I do really wonder about some of the claims out in the wild about what people have created with AI at this point, though. Maybe Claude’s uptime metrics and bug count are better indicators of AI’s true capability.
