Show HN: Kernel-level LLM inference via /dev/llm0

github.com

2 points by RandomBK 9 months ago · 0 comments

I saw an April Fools' joke and decided to implement it.

This is a rough port of llm.c into a kernel module. A number of hacks were needed to make it work, so plenty of performance was left on the table. Nevertheless, it is a minimally functional GPT-2 inference loop running in the kernel.
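
The repository itself isn't reproduced here, but the /dev/llm0 path in the title suggests the model is exposed as a character device. Below is a minimal sketch of what such an interface could look like, assuming a Linux misc device named llm0 whose write() accepts a prompt and whose read() returns output; the buffer sizes, handler names, and the point where the GPT-2 forward pass would run are illustrative assumptions, not the project's actual code.

    /*
     * Minimal sketch of a /dev/llm0 character device (an assumption, not the
     * repository's actual implementation). A real port of llm.c would run the
     * GPT-2 forward pass where the placeholder comment sits.
     */
    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/uaccess.h>

    static char prompt_buf[4096];      /* hypothetical prompt/output buffer */
    static size_t prompt_len;

    /* write(): userspace sends a prompt, e.g. echo "hello" > /dev/llm0 */
    static ssize_t llm0_write(struct file *f, const char __user *buf,
                              size_t len, loff_t *off)
    {
            if (len > sizeof(prompt_buf))
                    len = sizeof(prompt_buf);
            if (copy_from_user(prompt_buf, buf, len))
                    return -EFAULT;
            prompt_len = len;
            /* ...tokenize and run GPT-2 inference here... */
            return len;
    }

    /* read(): userspace reads back generated text, e.g. cat /dev/llm0 */
    static ssize_t llm0_read(struct file *f, char __user *buf,
                             size_t len, loff_t *off)
    {
            /* Placeholder: return the stored buffer instead of model output. */
            return simple_read_from_buffer(buf, len, off, prompt_buf, prompt_len);
    }

    static const struct file_operations llm0_fops = {
            .owner = THIS_MODULE,
            .read  = llm0_read,
            .write = llm0_write,
    };

    static struct miscdevice llm0_dev = {
            .minor = MISC_DYNAMIC_MINOR,
            .name  = "llm0",           /* registers as /dev/llm0 */
            .fops  = &llm0_fops,
    };

    module_misc_device(llm0_dev);
    MODULE_LICENSE("GPL");

With a device like this loaded, interaction from userspace would look roughly like echo "some prompt" > /dev/llm0 followed by cat /dev/llm0; again, this is an assumption about the interface rather than documented behavior of the project.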

No comments yet.
