# Write Once, Run Everywhere
One action class. It's your HTTP endpoint, your CLI command, your WebSocket handler, and your background task — all at the same time.
```ts
export class Status implements Action {
  name = "status";
  description = "Return the status of the server";
  inputs = z.object({});
  web = { route: "/status", method: HTTP_METHOD.GET };
  task = { queue: "default", frequency: 60000 };

  async run() {
    return {
      name: api.process.name,
      uptime: Date.now() - api.bootTime,
    };
  }
}
```

That's it. Same validation, same error handling, same response shape — whether the request comes from a browser, a CLI, or a cron job.
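Because the response is plain JSON, every consumer can check the same shape the same way. A minimal sketch of what a client-side check might look like (the `StatusResponse` interface and `isStatusResponse` guard are illustrative names, not part of Keryx):

```ts
// Illustrative only: the shape a /status consumer might expect.
interface StatusResponse {
  name: string;   // api.process.name
  uptime: number; // milliseconds since boot
}

// Narrow an unknown JSON payload to StatusResponse.
function isStatusResponse(value: unknown): value is StatusResponse {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.name === "string" && typeof v.uptime === "number";
}
```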
## One Action, Every Transport
This is one action:
```ts
export class UserCreate implements Action {
  name = "user:create";
  description = "Create a new user";
  inputs = z.object({
    name: z.string().min(3),
    email: z.string().email(),
    password: secret(z.string().min(8)),
  });
  web = { route: "/user", method: HTTP_METHOD.PUT };
  task = { queue: "default" };

  async run(params: ActionParams<UserCreate>) {
    const user = await createUser(params);
    return { user: serializeUser(user) };
  }
}
```

That one class gives you:
**HTTP** — `PUT /api/user` with JSON body, query params, or form data:

```bash
curl -X PUT http://localhost:8080/api/user \
  -H "Content-Type: application/json" \
  -d '{"name":"Evan","email":"evan@example.com","password":"secret123"}'
```

**WebSocket** — send a JSON message over an open connection:
```json
{
  "messageType": "action",
  "action": "user:create",
  "params": {
    "name": "Evan",
    "email": "evan@example.com",
    "password": "secret123"
  }
}
```

**CLI** — flags are generated from the Zod schema automatically:
```bash
./keryx.ts "user:create" --name Evan --email evan@example.com --password secret123 -q | jq
```

**Background Task** — enqueued to a Resque worker via Redis:
```ts
await api.actions.enqueue("user:create", {
  name: "Evan",
  email: "evan@example.com",
  password: "secret123",
});
```

**MCP** — exposed as a tool for AI agents automatically:
```json
{
  "mcpServers": {
    "my-app": {
      "url": "http://localhost:8080/mcp"
    }
  }
}
```

Same validation, same middleware chain, same `run()` method, same response shape. The only thing that changes is how the request arrives and how the response is delivered.
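The dispatch idea can be reduced to a few lines: one object owns `run()`, and each transport is only a thin adapter that collects params and hands them over. This is a deliberate simplification of Keryx's internals with made-up adapter names, not its actual code:

```ts
// Simplified sketch: one handler, many thin transport adapters.
const action = {
  name: "user:create",
  // The same run() serves every transport.
  async run(params: { name: string }) {
    return { user: { name: params.name } };
  },
};

// HTTP adapter: params arrive as a JSON body.
async function fromHttpBody(body: string) {
  return action.run(JSON.parse(body));
}

// CLI adapter: params arrive as parsed flags.
async function fromCliFlags(flags: { name: string }) {
  return action.run(flags);
}
```

Whatever the adapter, validation and error handling run once, in front of the shared `run()`.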
## Why Keryx?
Most backends start simple — an HTTP framework — then bolt on a WebSocket server, a CLI tool, a job queue, and now an MCP layer. Each one has its own handler, its own validation, its own auth. You end up maintaining five implementations of the same logic.
Keryx flips that: write your controller once, and the framework delivers it across every transport.
| Feature | Keryx | Hono | Elysia | NestJS | FastAPI | Django |
|---|---|---|---|---|---|---|
| HTTP | yes | yes | yes | yes | yes | yes |
| WebSocket | yes | adapter | yes | yes | yes | channels |
| CLI commands | yes | — | — | limited | — | yes |
| Background tasks | yes | — | — | Bull | Celery | Celery |
| MCP tools | yes | — | — | — | — | — |
| Unified controller | yes | — | — | — | — | — |
| Type-safe responses | yes | yes | yes | partial | yes | — |
| OAuth 2.1 built-in | yes | — | — | Passport | — | allauth |
| Per-session agents | yes | — | — | — | — | — |
The MCP SDK gives you the protocol. Keryx gives you the framework — actions, validation, middleware, auth, database, and background tasks alongside your MCP tools.
## Built for AI Agents
Keryx is the only TypeScript framework where your API is automatically an MCP server. No separate tool definitions, no duplicated auth, no schema mapping.
- Zero-config tool registration — write an action, it's an MCP tool. The Zod schema becomes the tool's input schema automatically.
- OAuth 2.1 + PKCE built-in — agents authenticate the same way browser clients do. One auth layer, not two.
- Dynamic OAuth forms — login and signup pages are generated from your Zod schemas. Change a field, the form updates.
- Per-session MCP servers — each agent connection gets isolated state. No cross-session leaks.
- Typed errors — agents get structured `ErrorType` values, not generic failure messages. They can distinguish validation errors from auth failures.
- Real-time notifications — PubSub events are forwarded to connected agents as MCP logging messages.
- `llms.txt` included — AI agents and LLMs can discover optimized Markdown documentation at `/llms.txt`, no scraping needed.
Claude Desktop, VS Code Copilot, Cursor, Windsurf, and any other MCP client can discover and call your actions out of the box.
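Structured errors make agent-side branching trivial: the agent switches on the error type instead of parsing prose. A hypothetical client-side handler, where the specific `ErrorType` string values and field names are assumptions for illustration rather than taken from Keryx's source:

```ts
// Hypothetical error shape; real ErrorType values come from Keryx itself.
type TypedError =
  | { type: "ACTION_PARAM_VALIDATION"; key: string; message: string }
  | { type: "SESSION_NOT_FOUND"; message: string };

// An agent can decide its next step from the type alone.
function nextStep(error: TypedError): string {
  switch (error.type) {
    case "ACTION_PARAM_VALIDATION":
      return `fix input "${error.key}": ${error.message}`;
    case "SESSION_NOT_FOUND":
      return "re-authenticate and retry";
  }
}
```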
## Why Bun?
- Native TypeScript — no compilation step, no `tsconfig` gymnastics
- Built-in test runner — `bun test` with watch mode, no extra dependencies
- Fast startup — sub-second cold starts for dev and production
- Module resolution that works — ESM, CommonJS, and `.ts` imports without configuration
- `fetch` included natively — great for testing your own API
## Quick Start
```bash
bunx keryx new my-app
cd my-app
cp .env.example .env
bun install
bun dev
```

Requires Bun, PostgreSQL, and Redis. See the Getting Started guide for full setup instructions.
## Built With
Bun · Zod · Drizzle · Redis · PostgreSQL · OpenTelemetry