Show HN: LynxDB – Log analytics in a single Go binary
Hey HN, I spent the last year building LynxDB because I got tired of the gap between grep and Splunk. At work I deal with ClickHouse and Splunk daily: Splunk's query language is great for log analysis, but running it costs a fortune and takes a dedicated team. On the other end, grep works until you need aggregations.

LynxDB is a log analytics engine that ships as a single binary with zero dependencies. It has two modes:

Pipe mode — works like grep. No server, no config.

kubectl logs deploy/api | lynxdb query '| group by endpoint compute avg(duration_ms)'
Server mode — persistent storage with full-text search (FST inverted index + roaring bitmaps), columnar segments with dictionary encoding and LZ4, and materialized views for precomputed aggregations.

The query language is called Lynx Flow — a pipeline language where data flows left to right through |. If you know SPL, you'll feel at home. It also has partial SPL2 compatibility. For example:
from nginx | parse combined(_raw) | status >= 500 | group by uri compute count() as hits, avg(duration_ms) as latency | order by hits desc | take 10
Quickest way to try it:
curl -fsSL https://lynxdb.org/install.sh | sh
lynxdb demo    # streams sample logs from 4 sources
lynxdb query 'from nginx | group by status compute count()'
Idle memory is around 50 MB. It accepts Elasticsearch _bulk, OpenTelemetry OTLP, and Splunk HEC, so you can point existing pipelines at it without changing anything.

Fair warning: this is v0.1.3 and not production-ready. Storage format and APIs may change between releases. I'm using it for my own log analysis workflows and it works well enough there, but I wouldn't run it in prod yet.

Written in Go. Interested to hear what you think — especially about the query language design and what you'd want from a tool like this.