Fast Llama inference in pure, modern Java
Features:
- Single file, no dependencies
- GGUF format parser
- Llama 3 tokenizer
- Support for Llama 3, 3.1 (ad-hoc RoPE scaling), and 3.2 (tied word embeddings)
- Fast matrix-vector multiplication routines for Q4_0 and Q8_0 quantized tensors using Java's Vector API
- GraalVM Native Image support
- AOT model preloading for instant time-to-first-token
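To illustrate the quantized matmul feature: in GGUF, a Q8_0 tensor stores weights in blocks of 32, each block holding one float16 scale plus 32 signed int8 values, and the dot product dequantizes on the fly. Below is a minimal scalar sketch of that idea (not the project's actual code, which vectorizes the inner loop with `jdk.incubator.vector`); the class and method names are made up for illustration.

```java
// Sketch of a Q8_0-style dot product: per-block scale times
// a sum of int8-weight * float-activation products.
public class Q8DotSketch {
    static final int BLOCK = 32; // Q8_0 block size in GGUF

    // quants: int8 weights; scales: one float per block; x: activations
    static float dotQ8(byte[] quants, float[] scales, float[] x) {
        float acc = 0f;
        for (int b = 0; b < scales.length; b++) {
            float sum = 0f;
            for (int i = 0; i < BLOCK; i++) {
                sum += quants[b * BLOCK + i] * x[b * BLOCK + i];
            }
            acc += scales[b] * sum; // dequantize once per block
        }
        return acc;
    }

    public static void main(String[] args) {
        byte[] q = new byte[BLOCK];
        float[] x = new float[BLOCK];
        for (int i = 0; i < BLOCK; i++) { q[i] = 2; x[i] = 0.5f; }
        // 32 * (2 * 0.5) = 32, scaled by 0.1
        System.out.println(dotQ8(q, new float[]{0.1f}, x));
    }
}
```

The per-block scale is why the inner loop maps so well onto the Vector API: the int8 products accumulate in a lane-parallel sum, and the scale is applied just once per block.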
Quite good stuff. I had been looking for something like this for a long time.