Giving AI Agents Read-Only Database Access
TL;DR: I built dbridge to give safe, read-only database access either from the terminal or through MCP for AI agents. Credentials stay in a dedicated OS keychain, never in config files or chat. PostgreSQL and MySQL enforce read-only at the connection/session level, while MongoDB blocks write stages and server-side JavaScript operators before a query is sent.


When working with an AI agent on a backend feature, I kept running into the same gap: the agent could help write the code, but it couldn't verify whether things actually worked. Did the record get created? Is that field set correctly? What does the current state of the table look like? Every time I wanted the agent to confirm something, I had to query the database myself and paste the result.

The natural fix was to let the agent query the database directly. But that comes with uncomfortable questions: how do you share credentials without putting them in a config file that might end up in git? How do you trust that the agent won't run a DELETE by accident? The fastest workarounds are brittle. You paste a connection string into chat, or add a .env file and hope nobody commits it, or create a read-only user but still expose credentials through environment variables that end up in shell history.

I wanted something where the agent could check data safely, with the security properties built in by default, not bolted on.

Read-only enforced by default

For PostgreSQL, dbridge appends default_transaction_read_only=on to the connection string. For MySQL, it issues SET SESSION TRANSACTION READ ONLY immediately after connecting. MongoDB is different: dbridge validates the JSON query before sending it and rejects write pipeline stages like $out and $merge, plus server-side JavaScript operators like $where, $function, and $accumulator. There is no configuration option to disable any of this.
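The MongoDB validation can be sketched as a recursive walk over the query document. This is an illustrative Python sketch of the idea, not dbridge's actual implementation; the function and set names are made up here:

```python
# Operators rejected before the query ever reaches the server.
BLOCKED_OPERATORS = {
    "$out", "$merge",                        # pipeline stages that write
    "$where", "$function", "$accumulator",   # server-side JavaScript
}

def validate_read_only(query):
    """Recursively reject any blocked stage or operator in a query document."""
    if isinstance(query, dict):
        for key, value in query.items():
            if key in BLOCKED_OPERATORS:
                raise ValueError(f"operator not allowed in read-only mode: {key}")
            validate_read_only(value)
    elif isinstance(query, list):
        for item in query:
            validate_read_only(item)

# A plain read passes; an aggregation ending in $out would raise ValueError.
validate_read_only({"collection": "events", "filter": {"type": "signup"}})
```

Walking the whole document, rather than checking top-level keys, matters because a `$where` can hide inside a nested filter or an aggregation stage.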

Credentials in the keychain, not the config

When you add a connection, dbridge prompts for the password once and stores it in the operating system's credential store under a dedicated dbridge entry: the Keychain on macOS, Credential Manager on Windows, and the Secret Service API on Linux.

dbridge config add production \
  --driver=postgres \
  --host=db.example.com \
  --database=myapp \
  --username=analyst

No password appears in shell history, environment variables, or any file that could be accidentally committed. The config file still contains connection metadata like hostnames, usernames, and database names, but it never stores the password.

Running dbridge config without arguments opens an interactive TUI for managing connections:

dbridge  Manage Connections

  name              env         connection

  production    pg  production  db.example.com:5432/myapp
  analytics     my  production  db.example.com:3306/analytics
  staging       pg  staging     staging.example.com:5432/myapp
  local         pg  local       localhost:5432/myapp

  a add · enter edit · d delete · t enable/disable · esc quit

The agent never sees the credentials. When it calls list_connections, it gets lightweight metadata it can use to choose a connection by name:

[
  {
    "name": "production",
    "driver": "postgres",
    "environment": "production",
    "description": "Main app database"
  },
  {
    "name": "analytics",
    "driver": "mysql",
    "environment": "production",
    "description": "Analytics database"
  },
  {
    "name": "staging",
    "driver": "postgres",
    "environment": "staging"
  },
  {
    "name": "local",
    "driver": "postgres",
    "environment": "local"
  }
]

The agent picks a connection by name. dbridge resolves the rest internally, and credentials never cross into the agent's context.
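The flow is easy to see in miniature. Here is a hedged Python sketch of the resolution pattern (dbridge is not written in Python, and the dicts below are stand-ins for the config file and the OS keychain):

```python
# Stand-in for the config file: metadata only, no passwords.
CONFIG = {
    "production": {"driver": "postgres", "host": "db.example.com",
                   "database": "myapp", "username": "analyst"},
}
# Stand-in for the OS keychain.
KEYCHAIN = {"production": "s3cret"}

def list_connections():
    # What the agent sees: names and metadata, never a secret.
    return [{"name": name, "driver": meta["driver"]}
            for name, meta in CONFIG.items()]

def connect(name):
    # Resolved internally at query time; the password goes straight
    # to the database driver and never enters the agent's context.
    meta = CONFIG[name]
    return {**meta, "password": KEYCHAIN[name]}
```

The point of the split is that `list_connections` is the only surface exposed to the agent, so the secret lives exclusively on the `connect` path.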

Output optimized for token efficiency

When an AI agent runs a query, the output goes straight into the context window. By default, dbridge returns compact JSON that separates column names from row data, so schema information isn't repeated for every row:

{
  "cols": ["id", "name", "active"],
  "rows": [
    [1, "Alice", true],
    [2, "Bob", false]
  ]
}

A count(*) returns just the number. A single-column result collapses to a plain array:

["alice@example.com","bob@example.com","carol@example.com"]

When I want something easier to read in a terminal, I can switch to --format table. For agents, the compact JSON default is usually what I want.
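The output rules above can be summarized in a few lines. This is a sketch of the behavior as described, not dbridge's actual encoder:

```python
import json

def compact(cols, rows):
    """Compact result encoding: split cols from rows; collapse a
    single-column result to a plain array, and a single value to a scalar."""
    if len(cols) == 1:
        values = [row[0] for row in rows]
        return values[0] if len(values) == 1 else values
    return {"cols": cols, "rows": rows}

print(json.dumps(compact(["id", "name", "active"],
                         [[1, "Alice", True], [2, "Bob", False]])))
# {"cols": ["id", "name", "active"], "rows": [[1, "Alice", true], [2, "Bob", false]]}
print(json.dumps(compact(["count"], [[42]])))
# 42
```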

Two ways to use it

dbridge works as a plain CLI tool or as an MCP server for AI agents. Both modes use the same connections and the same read-only guarantees.

From the terminal, you can query directly:

dbridge query production \
  "SELECT count(*) FROM users
   WHERE created_at > now() - interval '7 days'"

For MongoDB, queries use a JSON format instead of SQL:

dbridge query mongo '{
  "collection": "events",
  "filter": {"type": "signup"},
  "limit": 10
}'

You can also inspect schemas and tables without writing any SQL:

dbridge schema list-schemas production
# ["public"]

dbridge schema list-tables production
# ["users","accounts","transactions","categories"]

And describe any table to inspect its full structure:

dbridge schema describe production users
{
  "Schema": "public",
  "Name": "users",
  "Columns": [
    {
      "Name": "id",
      "Type": "integer",
      "Nullable": false,
      "Default": "nextval('users_id_seq'::regclass)"
    },
    {
      "Name": "email",
      "Type": "character varying",
      "Nullable": false,
      "Default": null
    },
    {
      "Name": "created_at",
      "Type": "timestamp with time zone",
      "Nullable": false,
      "Default": "now()"
    }
  ],
  "Indexes": [
    {
      "Name": "users_pkey",
      "Columns": ["id"],
      "Unique": true,
      "Primary": true
    }
  ],
  "Constraints": [
    {
      "Name": "users_pkey",
      "Type": "PRIMARY KEY",
      "Def": "PRIMARY KEY (id)"
    }
  ]
}

For AI agents, register it with Claude Code or Codex:

claude mcp add dbridge -- $(which dbridge) mcp
codex mcp add dbridge -- $(which dbridge) mcp

From there, the agent has access to query, list_connections, list_schemas, list_tables, describe_table, and explain_query as native tools. No credentials in the config, no pasting, no stale schema dumps.
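Under MCP, each of those tools is invoked with a JSON-RPC tools/call request. A sketch of what a query call might look like on the wire; the argument names here are assumptions for illustration, not dbridge's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query",
    "arguments": {
      "connection": "staging",
      "query": "SELECT count(*) FROM users"
    }
  }
}
```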

Trade-offs

dbridge is read-only by design with no override. MongoDB's enforcement is also narrower than the SQL drivers: Postgres and MySQL enforce it at the connection/session level, while MongoDB blocks dangerous pipeline stages and operators during query validation. Distribution is currently Homebrew or build from source.

Try it

brew tap thalesfp/dbridge
brew install dbridge

Then add your first connection from the command line:

dbridge config add local \
  --driver=postgres \
  --host=localhost \
  --database=mydb \
  --username=me

Or open the TUI and press a to add one interactively:

dbridge config

Either way, dbridge prompts for the password and stores it in the keychain. After that, dbridge query local "SELECT now()" should work.

The README covers PostgreSQL, MySQL, and MongoDB setup, plus the full MCP configuration for Claude Code and Codex.