Lightweight backend service for a grammar assistant app. It can also serve as a reference for streaming LLM tokens with the OpenAI SDK and FastAPI.
Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
Prerequisites
The project requires Python and pip to be installed on your system. The required Python packages are listed in the requirements.txt file.
Environment
Copy the .env.example file to .env and fill in the required values.
cp .env.example .env
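Since the service uses the OpenAI SDK, the file will most likely need an API key at minimum; the variable name below is an assumption for illustration, not taken from this repository's .env.example:

```shell
# Hypothetical .env contents -- check .env.example for the actual variables.
OPENAI_API_KEY=sk-...
```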
Config
To configure the application, especially the LLM prompts, copy the config.example.yaml file to config.yaml and fill in the required values.
cp config.example.yaml config.yaml
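As a rough sketch, a prompt-centric config.yaml might look like the fragment below; the key names and model are assumptions for illustration, not the repository's actual schema:

```yaml
# Hypothetical structure -- check config.example.yaml for the real keys.
llm:
  model: gpt-4o-mini
  prompts:
    system: "You are a grammar assistant. Correct the user's text."
```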
Installing
- Clone the repository to your local machine.
- Navigate to the project directory.
- Install the required packages using pip:
pip install -r requirements.txt
Running the Application
To run the application, use the following command:
uvicorn main:app --reload
Alternatively, you can run the application with Docker. The application will then be available at http://localhost:80, exposed through Nginx.
Project Structure
The project is structured into several modules and services. For readers interested only in the LLM integration, the most interesting parts are:
Endpoint documentation is available at /docs when the application is running.
