Show HN: Kernel-level LLM inference via /dev/llm0
I saw an April Fools joke and decided to implement it.
This is a rough port of llm.c into a Linux kernel module. A number of hacks were needed to make it work, so plenty of performance is left on the table. Nevertheless, it is a minimally functional GPT-2 inference loop running entirely in the kernel, exposed as a character device.