Show HN: Airgapped Offline RAG – Run LLMs Locally with Llama, Mistral, & Gemini

github.com

9 points by koconder a year ago · 1 comment

I've built an airgapped Retrieval-Augmented Generation (RAG) system for question answering over documents, running entirely offline with local inference. Using Llama 3, Mistral, and Gemini, this setup enables secure, private NLP on your own machine. Perfect for researchers, data scientists, and developers who need to process sensitive data without cloud dependencies. Built with llama.cpp, LangChain, and Streamlit, it supports quantized models and provides a sleek UI for document processing. Check it out, contribute, or suggest new features!
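
For anyone curious how the pieces fit together, here's a minimal sketch of this kind of pipeline using llama-cpp-python through LangChain with a FAISS index. The model and document paths are placeholders, and the actual repo may wire things up differently:

    # Minimal offline RAG sketch: llama.cpp for inference, FAISS for retrieval.
    # All file paths below are placeholders -- any local GGUF files will do.
    from langchain_community.document_loaders import TextLoader
    from langchain_community.embeddings import LlamaCppEmbeddings
    from langchain_community.llms import LlamaCpp
    from langchain_community.vectorstores import FAISS
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from langchain.chains import RetrievalQA

    # 1. Load and chunk the document (no network access needed).
    docs = TextLoader("my_document.txt").load()
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=1000, chunk_overlap=100
    ).split_documents(docs)

    # 2. Embed the chunks with a local GGUF model and index them in FAISS.
    embeddings = LlamaCppEmbeddings(model_path="models/embedding-model.gguf")
    index = FAISS.from_documents(chunks, embeddings)

    # 3. Answer questions with a local quantized LLM (e.g. Llama 3 or Mistral).
    llm = LlamaCpp(
        model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # quantized GGUF
        n_ctx=4096,
        temperature=0.1,
    )
    qa = RetrievalQA.from_chain_type(llm=llm, retriever=index.as_retriever())
    print(qa.invoke({"query": "What does the document say about X?"})["result"])

Once the GGUF files are on disk, every step here runs without network access, which is what makes the airgapped setup possible.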

novitzmann a year ago

Hey, we were ready to build something similar for a "shadow client". What's the main language used? We're all about C++: https://github.com/docwire/docwire
