Shown HN: Cria – Run LLMs locally and programmatically, as easy as possible
Hello Hacker News! My name is Anonyo, and I am a seventeen-year-old from Southeast Michigan. This is my second open source project.
I built Cria, a Python library for running LLMs programmatically. Cria is designed with as little friction as possible: getting started takes just five lines of code.
I created this library because I was using OpenAI in a project and kept running into rate limits. With local LLMs getting better and better, I sought to switch, but found command-line configuration to be limiting. Running and configuring LLMs is far from trivial, but programs like ollama make it easier. The only problem I found with ollama, though, was the lack of features in its Python client.
To address this, I built Cria: a library that is concise, efficient, and, most importantly, programmable. The name comes from Meta's Llama, as baby llamas are called crias. Cria solves ollama's pain points: it manages the ollama server for you, saves your message history, streams responses by default, and makes it easier to manage multiple LLMs at once.
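To give a sense of what that feature set implies, here is a minimal, hypothetical sketch of the bookkeeping such a wrapper has to do; the class, method, and backend names here are illustrative stand-ins, not Cria's actual API or internals:

```python
from typing import Callable, Iterator

class LocalChat:
    """Illustrative wrapper: keeps message history and streams replies.

    `generate` stands in for a call into a local model server
    (e.g. ollama); it takes the full history and yields text chunks.
    """

    def __init__(self, generate: Callable[[list[dict]], Iterator[str]]):
        self.generate = generate
        self.messages: list[dict] = []  # saved conversation history

    def chat(self, prompt: str) -> Iterator[str]:
        # Record the user turn, stream the reply chunk by chunk,
        # then record the assistant turn, so the next call
        # automatically carries the whole conversation.
        self.messages.append({"role": "user", "content": prompt})
        reply: list[str] = []
        for chunk in self.generate(self.messages):
            reply.append(chunk)
            yield chunk  # streamed to the caller by default
        self.messages.append({"role": "assistant", "content": "".join(reply)})

# A fake backend so the sketch runs without a real model server.
def echo_backend(messages: list[dict]) -> Iterator[str]:
    yield "You said: "
    yield messages[-1]["content"]

ai = LocalChat(echo_backend)
print("".join(ai.chat("hello")))
```

The point of the sketch is the design choice: because history lives inside the wrapper, the caller never has to re-send prior messages, and streaming falls out naturally from yielding chunks as they arrive.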
While this isn't my first open source project, it's my first Python library. The code may be a little rough around the edges, so I would appreciate any and all suggestions.
Thank you!