My AI Speaks `curl` and `jq` — MCP Can Catch Up Later


I asked my AI buddy what would help it dig into my DB. It arrived with a daemon, two port collisions, and a success message that only works on Gary’s Intel Mac — also known as “MCP server installation instructions.” So I taught it a simpler habit: Markdown guides + tiny curl/jq scripts. I get answers; my AI learns its tools; MCP can join when installs stop feeling like escape rooms.

A quick vibe check

You could build your own MCP server, park it in a private repo, and feel accomplished — until you need everyone else to use it. Now you’re writing install notes for mismatched runtimes, opening ports — wait a minute. Congrats: you’re Head of Ports & Feelings. New job description includes explaining why your MCP only works on :4312.
Are MCPs impressive? Yes. Lightweight for teammates and AI? Not really.

Here’s the deliberately boring alternative

A short guide in the repo.
And a handful of tiny scripts that do nothing fancy except work.

No daemons. No discovery layer. Copy, paste, run. AI follows the same guide; humans can audit every line.
All the guides are generated by AI, for AI.

The stack, in layers

  • Data Provider’s API
  • curl + jq boilerplate scripts — hide headers/auth/quoting; emit raw JSON
  • Guides — AI-runnable Markdown with tenant/time-scoped recipes
  • AI — reads the guide → runs scripts → pipes JSON through small jq formatters
  • Human — skims the summary, clicks links, enjoys blissful ignorance

Humans struggle with fussy detail; AI thrives on it. Give AI small, deterministic commands and it assembles complex workflows while you sip coffee.
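
To make that concrete, here’s a sketch of what one of those boilerplate scripts can look like. It assumes a Metabase-style `/api/dataset` endpoint and API-key auth via an `x-api-key` header; the `MB_HOST` and `MB_API_KEY` variable names are illustrative, not gospel — check your own instance.

```bash
#!/usr/bin/env bash
# scripts/run_metabase_query.sh — illustrative sketch, not the exact script.
# Assumes MB_HOST and MB_API_KEY are exported and the instance accepts API keys.
# Usage: ./run_metabase_query.sh <database_id> "SELECT ..."
set -euo pipefail

db_id="$1"
sql="$2"

curl --silent --fail \
  --header "x-api-key: ${MB_API_KEY}" \
  --header "Content-Type: application/json" \
  --data "$(jq --null-input --arg q "$sql" --argjson db "$db_id" \
      '{database: $db, type: "native", native: {query: $q}}')" \
  "${MB_HOST}/api/dataset"
# Raw JSON goes to stdout; making it readable is the formatters' job.
```

The whole point is that the script stays dumb: auth, quoting, and the endpoint live here exactly once, and everything downstream is just pipes.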

A repo shape AI can walk without tripping

repo/
├─ guides/
│ ├─ dev.md # developer's guide
│ └─ testing.md # conventions, guidelines, examples
├─ integrations/
│ ├─ metabase.md # read-only DB access
│ ├─ kafka.md # event investigations
│ ├─ linear.md # Linear access
│ └─ coralogix.md # logs, traces, and metrics
├─ scripts/
│ ├─ run_metabase_query.sh # query runner
│ ├─ logs_query.sh # log search wrapper
│ └─ tickets_graphql.sh # ticket system GraphQL wrapper
├─ formatters/
│ ├─ bi_summarize.jq
│ ├─ logs_top_errors.jq
│ └─ tickets_summary.jq
├─ playbooks/
│ ├─ investigate_ticket.md # relevant data collection, code look-up
│ └─ incident_triage.md
└─ README.md # safety, quick start, conventions
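
The formatters are equally boring. Here’s roughly what `formatters/logs_top_errors.jq` could contain — the `.severity` and `.message` field names are assumptions about the log JSON, not any vendor’s actual schema:

```jq
# formatters/logs_top_errors.jq — sketch; adapt field names to your log shape.
# Expects a top-level array of log entries on stdin.
[ .[] | select(.severity == "ERROR") ]
| group_by(.message)
| map({message: .[0].message, count: length})
| sort_by(-.count)
| .[:10]
```

The AI (or a skinbag) then runs something like `./scripts/logs_query.sh 'service:payments last 24h' | jq -f formatters/logs_top_errors.jq` and gets the ten noisiest errors, no daemon required.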

How to help your AI help you

Prompt

Create a repo with `guides/`, `scripts/`, and `formatters/`. Each script wraps a service with `curl` and outputs raw JSON. Each guide is runnable Markdown. Each `jq` file turns JSON into a skinbag-readable format.
Search online (or ask your AI) how to access each tool’s API and which environment variables to set.
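
What the AI typically comes back with is a short prerequisites block at the top of each guide, something along these lines (the variable names here are illustrative; use whatever each API actually requires):

```bash
# Prerequisites a guide might list up front — names are illustrative.
export MB_HOST="https://metabase.internal.example.com"
export MB_API_KEY="…"         # read-only Metabase key
export CORALOGIX_API_KEY="…"  # logs/traces query key
export LINEAR_API_KEY="…"     # GraphQL key used by tickets_graphql.sh
```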

A real AI run (obfuscated, because we’re civilized)

Prompt

Using `integrations/metabase.md`: collect organizations (anonymized) and how many banks each has.
Then, using `integrations/coralogix.md`: compute how many logs each org generated in the last month.

Result

organizations=redacted, avg_active_banks≈redacted, logs_30d≈redacted
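
For the curious, the chain the AI assembled looked roughly like the sketch below. The SQL, the Metabase `.data.rows` row shape, and the `.count` field from the logs wrapper are stand-ins, not the real run:

```bash
# Hypothetical reconstruction — tables, fields, and output shapes are invented.
./scripts/run_metabase_query.sh "$DB_ID" \
  "SELECT org_id, COUNT(*) AS active_banks FROM banks GROUP BY org_id" \
  > /tmp/orgs.json

# One log-count call per org for the last 30 days, stitched into one JSON array.
jq -r '.data.rows[][0]' /tmp/orgs.json | while read -r org; do
  ./scripts/logs_query.sh "org_id:${org} last 30d" \
    | jq --arg org "$org" '{org: $org, logs_30d: .count}'
done | jq -s '.'
```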

Timeline

  • Discover guides & scripts — ~2 min
  • Skim conventions & prep env — ~6 min
  • Run queries & log counts (parallelized) — ~9 min
  • Join, anonymize, export, summarize — ~5 min

Total: ~22 minutes. AI did the meticulous parts; the human was on a toilet break.


Closing

MCP might solidify one day. When installation and discovery get consistent across vendors, I’ll happily wrap these same HTTP calls behind it. Until then, a Markdown page and a handful of tiny, strict scripts are the shortest distance between “what the AI should do” and “the AI did it.”
Small pieces. Loosely joined. Shipped today.