Show HN: Self-growing neural networks via a custom Rust-to-LLVM compiler
Hi HN,
I built NOMA (Neural-Oriented Machine Architecture), a systems language where reverse-mode autodiff is a compiler pass (lowered to LLVM IR).
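To make "autodiff as a compiler pass" concrete, here is a minimal Rust sketch of what such a pass conceptually synthesizes for a toy function. This is illustrative only: the names (f, f_vjp) are not NOMA syntax or code from the repo, and NOMA does this transformation at the IR level rather than in source.

    // Original function the user writes.
    fn f(x: f64) -> f64 {
        x * x + 3.0 * x
    }

    // Adjoint the AD pass would conceptually generate:
    // returns (value, df/dx), with the reverse sweep seeded by dy = 1.
    fn f_vjp(x: f64) -> (f64, f64) {
        let y = x * x + 3.0 * x; // forward pass
        let dx = 2.0 * x + 3.0;  // reverse pass for this expression
        (y, dx)
    }

    fn main() {
        let (y, dx) = f_vjp(2.0);
        assert_eq!(y, f(2.0)); // 10.0
        assert_eq!(dx, 7.0);   // 2*2 + 3
    }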
My goal is to treat model parameters as explicit, growable memory buffers. Since NOMA compiles to standalone native binaries (no Python runtime), weight buffers can be realloc'd mid-training. This makes "self-growing" architectures a system primitive rather than a complex framework hack.
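As a rough illustration of the "growable buffer" idea, here is a plain Rust sketch. This is not NOMA's actual API; Vec stands in for the raw realloc the compiled binary would use, and ParamBuffer is a hypothetical name.

    // Hypothetical sketch: parameters as an explicit buffer that can grow
    // mid-training while existing weights keep their values.
    struct ParamBuffer {
        weights: Vec<f64>,
        grads: Vec<f64>,
    }

    impl ParamBuffer {
        fn new(n: usize) -> Self {
            Self { weights: vec![0.0; n], grads: vec![0.0; n] }
        }

        // Grow in place: Vec may reallocate and move the storage (like
        // realloc in a native binary), old weights are preserved, new
        // slots are zero-initialized, and the gradient buffer follows.
        fn grow(&mut self, extra: usize) {
            let new_len = self.weights.len() + extra;
            self.weights.resize(new_len, 0.0);
            self.grads.resize(new_len, 0.0);
        }
    }

    fn main() {
        let mut p = ParamBuffer::new(4);
        p.weights[0] = 0.5;
        p.grow(4);                      // the network "grows" mid-training
        assert_eq!(p.weights.len(), 8);
        assert_eq!(p.grads.len(), 8);
        assert_eq!(p.weights[0], 0.5);  // existing weight preserved
    }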
I just pushed a reproducible benchmark (Self-Growing XOR) to validate the methodology: it compares NOMA against PyTorch and C++, specifically testing how preserving optimizer state (Adam moments) during growth affects convergence.
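For anyone curious about the methodology, here is a hedged sketch of the two growth policies being compared. AdamState, grow_preserving, and grow_resetting are hypothetical names for illustration, not code from the repo.

    // Two ways to handle Adam's moment buffers when the parameter count grows.
    struct AdamState {
        m: Vec<f64>, // first-moment estimates
        v: Vec<f64>, // second-moment estimates
    }

    impl AdamState {
        // "Preserve": surviving parameters keep their moments,
        // new slots start at zero, so training resumes where it left off.
        fn grow_preserving(&mut self, new_len: usize) {
            self.m.resize(new_len, 0.0);
            self.v.resize(new_len, 0.0);
        }

        // "Reset": all accumulated moments are discarded on resize.
        fn grow_resetting(&mut self, new_len: usize) {
            self.m = vec![0.0; new_len];
            self.v = vec![0.0; new_len];
        }
    }

    fn main() {
        let mut kept = AdamState { m: vec![0.1; 4], v: vec![0.2; 4] };
        kept.grow_preserving(8);
        assert_eq!(kept.m[0], 0.1); // old moment carried over
        assert_eq!(kept.m[4], 0.0); // new slot starts cold

        let mut wiped = AdamState { m: vec![0.1; 4], v: vec![0.2; 4] };
        wiped.grow_resetting(8);
        assert_eq!(wiped.m[0], 0.0); // history lost
    }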
I am looking for contributors! If you are into Rust, LLVM, or SSA, I’d love help on the harder parts (control-flow AD and memory safety).
Repo: https://github.com/pierridotite/NOMA

A quick note on the implementation details for those interested in compilers: the hardest part wasn't the AD itself, but managing memory safety during the "growth" phase. Since NOMA compiles to native code (LLVM), I had to ensure that when a weight buffer gets realloc'd (moved in memory):

1. The gradient tape updates its pointers.
2. The optimizer state (Adam moments) is correctly mapped to the new indices.

The benchmark I linked shows the result: preserving this state lets the model keep converging immediately after resizing, whereas resetting it causes a massive performance regression.

I'm specifically curious whether anyone here has experience handling SSA Phi nodes during reverse-mode AD on the control-flow graph. That's my next big hurdle for supporting complex control flow.

Hey! I saw the post you made on Reddit. How can I help if I want to contribute?

Come join our Discord :) We can help you find an issue, or an implementation that would be useful as a demo or something similar x)