Show HN: I built a database for AI agents

github.com

11 points by Kappa90 3 hours ago · 10 comments

Hey HN,

I just spent the last few weeks building a database for agents.

Over the last year I built PostHog AI, the company's business analyst agent, where we experimented with giving the agent raw SQL access to PostHog's databases vs. exposing tools/MCPs. Needless to say, SQL won.

I left PostHog 3 weeks ago to work on side-projects. I wanted to experiment more with SQL+agents.

I built an MVP exposing business data through DuckDB + annotated schemas, and ran a benchmark with 11 LLMs (from Kimi 2.5 to Claude Opus 4.6) answering business questions with either 1) per-source MCP access (e.g. one Stripe MCP, one Hubspot MCP) or 2) my annotated SQL layer.

My solution consistently reached 2-3x the accuracy (correct vs. incorrect answers), used 16-22x fewer tokens per correct answer, and was 2-3x faster. Benchmark in the repo!

The insight is that tool calls/MCPs/raw APIs force the agent to join information in-context. SQL does that natively.

What I have today:

- 101 connectors (SaaS APIs, databases, file storage) sync to Parquet via dlt, locally or in your S3/GCS/Azure bucket
- DuckDB is the query engine — cross-source JOINs work natively, plus guardrails for safe mutations / reverse ETL
- After each sync a Claude agent annotates the schema: table descriptions, column docs, PII flags, relationship maps
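To make the cross-source JOIN point concrete, here's a minimal sketch of the kind of query this enables once two sources land in one SQL engine — using Python's built-in sqlite3 as a stand-in for DuckDB, with hypothetical stripe_charges / hubspot_contacts tables and made-up data:

```python
import sqlite3

# Stand-in for DuckDB: two synced "sources" living as tables in one engine.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE stripe_charges (email TEXT, amount_usd REAL);
CREATE TABLE hubspot_contacts (email TEXT, company TEXT);
INSERT INTO stripe_charges VALUES ('a@acme.com', 120.0), ('a@acme.com', 80.0), ('b@beta.io', 50.0);
INSERT INTO hubspot_contacts VALUES ('a@acme.com', 'Acme'), ('b@beta.io', 'Beta');
""")

# One SQL query joins billing and CRM data -- no in-context stitching needed.
rows = con.execute("""
    SELECT h.company, SUM(s.amount_usd) AS revenue
    FROM stripe_charges s
    JOIN hubspot_contacts h ON h.email = s.email
    GROUP BY h.company
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # → [('Acme', 200.0), ('Beta', 50.0)]
```

With per-source MCPs, the agent would instead have to fetch charges and contacts separately and do that join inside its context window.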

It works with all major agent frameworks (LangChain, CrewAI, LlamaIndex, Pydantic AI, Mastra), and local agents like Claude Code, Cursor, Codex and OpenClaw.

I love dinosaurs and the domain was available, so it's called Dinobase.

It's not bug free and I'm here to ask for feedback or major holes in the project I can't see, because the results seem almost too good. Thanks!

igrvs 7 minutes ago

Interesting approach, makes a lot of sense. Looks promising!

peterbuch 3 hours ago

Nice work. One thing I'd love to see in the benchmark: a breakdown by question type (aggregations vs. multi-hop joins vs. lookups). My guess is the SQL approach pulls ahead hardest on the join-heavy ones, and showing that explicitly would make the "too good to be true" results feel more grounded. Either way, the token efficiency numbers sound intriguing.

  • Kappa90OP 3 hours ago

    It's not explicitly stated in the benchmarks README, good catch.

    80% of the benchmark questions are aggregations, 16% are multi-hop, 4% are lookups/subqueries.

    Multi-hop is where LLMs struggle the most (hallucinations, partial answers), and aggregations are where you get the most token efficiency, since you skip the pagination you'd need with APIs/MCPs that don't provide filters.
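    A toy back-of-the-envelope illustration of why aggregations pay off (all numbers here are made up, not from the benchmark):

```python
# Hypothetical workload: 10,000 charge records, 100-item API pages.
records = 10_000
page_size = 100

# MCP/API path: the agent pages through raw records and sums them in-context.
pages_fetched = records // page_size   # 100 tool calls
rows_in_context = records              # every record enters the prompt

# SQL path: the engine does the SUM; one query, one row comes back.
sql_calls = 1
sql_rows_in_context = 1

print(pages_fetched, rows_in_context, sql_calls, sql_rows_in_context)
```

    The gap grows linearly with dataset size on the API side while staying constant on the SQL side.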

c6d6 2 hours ago

How does it handle schema drift (e.g. a SaaS vendor changes a column)? Does the annotation agent mark breaking changes in some way, or just describe the current state of the world? With that many connectors, you'll hit a bunch of weird edge cases, especially with things like Salesforce custom objects.

  • Kappa90OP 2 hours ago

    Right now schema changes create new columns; I'm working on reconciling old columns, which currently aren't dropped.

    The annotation/semantic-layer agent writes a fresh description of the schema on every sync, representing the current state — but as of today that includes stale columns, since data is never dropped.

    I’ll implement automated schema migrations in the next week or so!

federiconitidi 3 hours ago

Could you give more context on the benchmarks included in the repo?

  • Kappa90OP 3 hours ago

    It's an experimental benchmark, I couldn't find any off-the-shelf benchmarks to use this with. There's Spider 2.0 but it's for text-to-SQL. I'm planning to run this [1] next but it's quite expensive.

    There are 75 questions, divided into 5 use case groups: revenue ops, e-commerce, knowledge bases, devops, support.

    I then generated a synthetic dataset with data mimicking APIs ranging from Stripe to Hubspot to Shopify to Zendesk, etc.

    I expose all the data through Dinobase vs. having one MCP per source, e.g. one MCP for Stripe data, one MCP for Hubspot data, etc.

    I tested this with 11 models, ranging from Kimi 2.5 to Claude Opus 4.6.

    Finally there's an LLM-as-a-judge that decides if the answer is correct, and I log latency and tokens.
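    The harness loop described above (ask each model, judge the answer, log latency and tokens) might look roughly like this — every function name and signature here is a hypothetical stand-in, not the repo's actual code, and the judge is reduced to a string comparison:

```python
import time

def ask_agent(model, question):
    # Placeholder for a real model/API call; returns answer + token count.
    return {"answer": "42", "tokens": 1200}

def judge(expected, answer):
    # LLM-as-a-judge stand-in: here, a plain exact-match comparison.
    return answer.strip() == expected.strip()

def run_benchmark(models, questions):
    results = []
    for model in models:
        for q in questions:
            start = time.monotonic()
            out = ask_agent(model, q["text"])
            results.append({
                "model": model,
                "question": q["text"],
                "correct": judge(q["expected"], out["answer"]),
                "tokens": out["tokens"],
                "latency_s": time.monotonic() - start,
            })
    return results

results = run_benchmark(["model-a"], [{"text": "How many?", "expected": "42"}])
print(results[0]["correct"])  # True
```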

    [1] https://arxiv.org/abs/2510.02938

tosh 3 hours ago

Which LLM is best at driving DuckDB currently?

  • Kappa90OP 3 hours ago

    DuckDB speaks a largely Postgres-compatible SQL dialect, and most coding LLMs have been trained heavily on Postgres.

    Of the small models I tested, Qwen 3.5 is the clear winner. Going to larger LLMs, Sonnet and Opus lead the charts.
