Show HN: Moondream, a small vision language model that runs on 8GB of RAM (github.com)
I've been working on training this small vision language model for the last month, and I'm excited to release the first prototype today! It is based on SigLIP (image encoder) and Phi-1.5 (text model), and was trained using the LLaVA-1.5 training dataset.
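
For anyone curious how the pieces fit together, here's a minimal sketch of the LLaVA-style wiring: SigLIP embeds the image, a learned projection maps those embeddings into Phi-1.5's token embedding space, and the combined sequence is fed to the language model. The checkpoint names, the single linear projection, and the prompt format below are illustrative assumptions, not the actual Moondream code:

    # Sketch only: LLaVA-style composition of a SigLIP vision encoder and Phi-1.5.
    # Checkpoints, projection, and prompt format are placeholders, not Moondream's code.
    import torch
    from PIL import Image
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              SiglipImageProcessor, SiglipVisionModel)

    vision = SiglipVisionModel.from_pretrained("google/siglip-so400m-patch14-384")
    processor = SiglipImageProcessor.from_pretrained("google/siglip-so400m-patch14-384")
    lm = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
    tok = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

    # Projection from SigLIP's hidden size into Phi-1.5's embedding space.
    # In practice this is trained during visual instruction tuning; here it's
    # randomly initialized, so the output will be nonsense.
    proj = torch.nn.Linear(vision.config.hidden_size, lm.config.hidden_size)

    image = Image.open("example.jpg")  # placeholder image path
    pixels = processor(images=image, return_tensors="pt").pixel_values

    with torch.no_grad():
        patch_embeds = vision(pixel_values=pixels).last_hidden_state  # (1, patches, d_vision)
        image_tokens = proj(patch_embeds)                             # (1, patches, d_text)

        prompt_ids = tok("Question: What is in this image? Answer:",
                         return_tensors="pt").input_ids
        prompt_embeds = lm.get_input_embeddings()(prompt_ids)

        # Prepend the projected image tokens to the text prompt and generate.
        inputs_embeds = torch.cat([image_tokens, prompt_embeds], dim=1)
        out = lm.generate(inputs_embeds=inputs_embeds, max_new_tokens=64)

    print(tok.decode(out[0], skip_special_tokens=True))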
It runs reasonably fast on CPU with ~8GB of RAM in full 32-bit precision. There's plenty of room to speed it up and reduce memory consumption by quantizing the model.
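
As a concrete example of what quantization could look like (just a sketch with a stand-in module, not something wired into the released code), PyTorch's dynamic int8 quantization stores nn.Linear weights in int8 and dequantizes them on the fly during CPU inference:

    # Dynamic int8 quantization sketch. The tiny Sequential model is a stand-in
    # for the real one; only the nn.Linear layers get quantized.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(2048, 8192), nn.GELU(), nn.Linear(8192, 2048))
    model.eval()

    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    def param_bytes(m):
        # fp32 weights take 4 bytes per parameter
        return sum(p.numel() * p.element_size() for p in m.parameters())

    print(f"fp32 weights: {param_bytes(model) / 1e6:.1f} MB")  # ~134 MB for this toy model
    x = torch.randn(1, 2048)
    print(quantized(x).shape)  # same forward interface, int8 weights under the hood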
To demonstrate inference speed, I posted a video on Twitter of it running on my M2 MacBook Air (on CPU, not MPS, so performance should be comparable on other hardware): https://twitter.com/vikhyatk/status/1740910503323734448