High-performance, AI-native server built from scratch in C + Assembly that handles heavy AI payloads with minimal latency.
🚀 Quick Start
1️⃣ AI Provider Setup (Optional)
NeuroHTTP is provider-agnostic and does not require a specific AI vendor.
You can run the server against any OpenAI-compatible API, Groq, or even a local AI model.
If your setup requires an API key, export it as an environment variable:
```bash
export OPENAI_API_KEY="gsk_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
```
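If a key is provided, the server can pick it up at startup with standard `getenv`. A minimal sketch of that pattern in C, assuming NeuroHTTP reads the variable this way (the actual startup code may differ):

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: read an optional API key from the environment.
 * NeuroHTTP's real startup code may handle this differently. */
int main(void) {
    const char *key = getenv("OPENAI_API_KEY");
    if (key == NULL || key[0] == '\0') {
        fprintf(stderr, "warning: OPENAI_API_KEY not set; "
                        "only keyless/local backends will work\n");
    }
    return 0;
}
```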
2️⃣ Install Dependencies
On Debian / Ubuntu / Kali:
```bash
sudo apt-get update
sudo apt-get install -y libcurl4-openssl-dev build-essential
```
3️⃣ Clone the Repository & Build the Server
```bash
git clone https://github.com/okba14/NeuroHTTP.git
cd NeuroHTTP
make rebuild
```
The `make rebuild` command compiles the server from scratch.
4️⃣ Run the Server
The server will run on port 8080 by default. Logs are displayed in the same terminal.
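The exact launch command and binary name come from the repository's Makefile and are not shown here. For intuition only, below is a minimal C sketch of a TCP server bound to the default port 8080; NeuroHTTP's real core is a far more optimized C + Assembly event loop:

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Conceptual sketch of a server listening on port 8080.
 * Error handling omitted; not NeuroHTTP's actual event loop. */
int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);          /* default port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(fd, SOMAXCONN);
    for (;;) {
        int client = accept(fd, NULL, NULL);
        if (client < 0) continue;
        /* ... parse the HTTP request and route it ... */
        close(client);
    }
}
```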
5️⃣ Send a Test Request (curl)
Open a second terminal and send a POST request:
```bash
curl -X POST http://localhost:8080/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello."}'
```
6️⃣ Example Response
```json
{
  "response": "Hello! AI server received your prompt."
}
```
Users can now send any prompt to the AI server.
Screenshot: a real AI inference request. POST /v1/chat is handled by NeuroHTTP, parsed at low level, and routed to a real LLM backend (a LLaMA-based API). The capture shows the full request lifecycle, logging, and a successful 200 OK response.
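To make that forwarding hop concrete, here is a minimal libcurl sketch that POSTs a chat request to an OpenAI-compatible endpoint. The URL, model name, and JSON shape are illustrative assumptions, not NeuroHTTP's actual routing code:

```c
#include <curl/curl.h>
#include <stdio.h>

/* Illustrative forwarding hop: POST a chat request to an
 * OpenAI-compatible backend with libcurl. URL, model name, and
 * JSON shape are assumptions, not NeuroHTTP's real routing code. */
int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    struct curl_slist *hdrs = NULL;
    hdrs = curl_slist_append(hdrs, "Content-Type: application/json");
    /* In a real build the key would come from OPENAI_API_KEY. */
    hdrs = curl_slist_append(hdrs, "Authorization: Bearer <key>");

    const char *body =
        "{\"model\":\"llama3\","
        "\"messages\":[{\"role\":\"user\",\"content\":\"Hello.\"}]}";

    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://api.example.com/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

    /* The default write callback prints the response to stdout. */
    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```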
🧠 Important Notes
- If your provider requires an API key, make sure the OPENAI_API_KEY environment variable is set before starting the server.
- To change the port or other server options, edit include/config.h (a hypothetical sketch of such a header follows this list).
- The server uses libcurl to communicate with the AI backend.
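For orientation, here is a hypothetical sketch of what such a header could look like. The macro names below are illustrative assumptions, not the actual contents of include/config.h:

```c
/* Hypothetical sketch of include/config.h; the real header in the
 * repository may use different names and options. */
#ifndef CONFIG_H
#define CONFIG_H

#define SERVER_PORT      8080      /* default listening port */
#define MAX_CONNECTIONS  40000     /* upper bound on open sockets */
#define AI_BACKEND_URL   "https://api.example.com/v1/chat/completions"

#endif /* CONFIG_H */
```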
Benchmark Comparison
For detailed benchmark results comparing NeuroHTTP and NGINX, see benchmark.md.
🧩 Visual Benchmark Evidence
Below are live screenshots from the actual benchmark runs.
🔹 NeuroHTTP – 40,000 Connections
🔹 NGINX – 40,000 Connections
🧪 Performance Highlights
| Server | Conns | Requests/sec | Avg Latency | Transfer/sec |
| --- | --- | --- | --- | --- |
| NGINX 1.29.3 | 10k | 8,148 | 114 ms | 1.2 MB/s |
| NeuroHTTP | 10k | 2,593 | 57 ms | 7.9 MB/s |
💡 Insight
At 10k connections, NeuroHTTP serves heavier, AI-rich payloads with half the average latency (57 ms vs. 114 ms) and more than six times the transfer rate (7.9 MB/s vs. 1.2 MB/s); NGINX completes more requests per second, but on much lighter payloads.
🌟 Why Star?
- Low latency, high throughput
- Compact C + Assembly core
- Open-source & extensible
If you love high-performance AI servers, consider giving the project a ⭐ and sharing it with others.
Β© 2025 GUIAR OQBA
Licensed under the MIT License.
⭐ Support the Project
If you believe in the vision of a fast, AI-native web layer, your support helps NeuroHTTP evolve. 🚀
🧬 Author
👨‍💻 GUIAR OQBA 🇩🇿
Creator of NeuroHTTP, focused on low-level performance, AI infrastructure, and modern web systems.
“Building the next generation of AI-native infrastructure, from El Kantara, Algeria.”

