Show HN: AgentKit – JavaScript Alternative to OpenAI Agents SDK with Native MCP
Hi HN! I’m Tony, co-founder of Inngest. I wanted to share AgentKit, our TypeScript multi-agent library that we’ve been cooking and testing with some early users in prod for months.
Although OpenAI’s Agents SDK has since launched, we think an agent framework should offer more deterministic and flexible routing, work with multiple model providers, embrace MCP (for rich tooling), and support the growing community of TypeScript AI developers by enabling a smooth transition to production use cases.
This is why we are building AgentKit, and we’re really excited about it for a few reasons:
Firstly, it’s simple. We embrace the KISS principles championed by Anthropic and Hugging Face, letting you gradually add autonomy to your AgentKit program using a few primitives:
- Agents: LLM calls that can be combined with prompts, tools, and MCP native support.
- Networks: a simple way to get Agents to collaborate with a shared State, including handoff.
- State: combines conversation history with a fully typed state machine, used in routing.
- Routers: where the autonomy lives, from code-based to LLM-based (e.g., ReAct) orchestration.
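To make these primitives concrete, here is a hypothetical sketch of how they fit together (illustrative types only, not AgentKit’s actual API): agents run and update shared state, a code-based router inspects that state to pick the next agent, and the network loops until the router returns nothing.

```typescript
// Hypothetical sketch of the primitives (NOT the real AgentKit API):
// agents update shared State; a Router inspects State to pick the next
// Agent; the Network loops until the Router returns undefined.

interface State {
  history: string[];                            // conversation history
  data: { planned?: boolean; done?: boolean };  // typed state data
}

interface Agent {
  name: string;
  run: (state: State) => void;  // a real agent would call an LLM here
}

type Router = (state: State) => Agent | undefined;

function runNetwork(router: Router, state: State): State {
  // The network loop: pick an agent, run it, then re-inspect state.
  for (let agent = router(state); agent; agent = router(state)) {
    agent.run(state);
  }
  return state;
}

const planner: Agent = {
  name: "planner",
  run: (s) => { s.history.push("plan created"); s.data.planned = true; },
};
const executor: Agent = {
  name: "executor",
  run: (s) => { s.history.push("plan executed"); s.data.done = true; },
};

// Code-based (deterministic) router: plan first, then execute, then stop.
const router: Router = (s) => {
  if (!s.data.planned) return planner;
  if (!s.data.done) return executor;
  return undefined;
};

const final = runNetwork(router, { history: [], data: {} });
```

Because the router is plain code over typed state, each routing decision can be unit-tested without touching a model.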
The routers are where the magic happens: they let you build deterministic, reliable, testable agents.
AgentKit routing works as follows: the network calls itself in a loop, inspecting the State to determine which agents to call next using a router. The returned agent runs, then optionally updates state data using its tools. On the next loop, the network inspects state data and conversation history, and determines which new agent to run.
This fully typed state machine routing allows you to deterministically build agents using any of the effective agent patterns — which means your code is easy to read, edit, understand, and debug.
This also makes handoff incredibly easy: you define when agents should hand off to each other using regular code and state (or by calling an LLM in the router for AI-based routing). This is similar to the OpenAI Agents SDK but easier to manage, plan, and build.
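As an example of that handoff style, here is a hypothetical sketch (names and shapes are illustrative, not the AgentKit API): a triage step classifies the request into state, and a code-based routing function hands off to the matching specialist, returning nothing once the request is resolved.

```typescript
// Hypothetical handoff sketch (illustrative, not the AgentKit API):
// a triage agent classifies the request into state, and regular code
// decides the handoff. No LLM call is needed for the routing itself.

type Category = "billing" | "technical" | undefined;

interface SupportState {
  input: string;
  category: Category;  // set by the triage agent's tool
  resolved: boolean;
}

type AgentName = "triage" | "billing" | "technical";

function route(state: SupportState): AgentName | undefined {
  if (state.resolved) return undefined;  // nothing left to do: stop
  if (!state.category) return "triage";  // classify first
  return state.category;                 // hand off to the specialist
}

// Simulate one pass of the loop: triage sets the category (a real
// triage agent would use an LLM tool call to update state instead
// of this keyword check).
const state: SupportState = {
  input: "My invoice is wrong",
  category: undefined,
  resolved: false,
};
const first = route(state);
state.category = state.input.toLowerCase().includes("invoice")
  ? "billing"
  : "technical";
const second = route(state);
```

Swapping `route` for a function that asks an LLM which agent should go next gives you AI-based routing with the same loop.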
Next: local development and the path to production.
AgentKit is compatible with Inngest’s tooling, meaning you can test agents using Inngest’s local DevServer, which provides traces, inputs and outputs, replay, tool and MCP inputs/outputs, and (soon) a step-over debugger so you can easily understand and visually see what’s happening in the agent loop.
In production, you can also optionally combine AgentKit with Inngest for fault-tolerant execution. Each agent’s LLM call is wrapped in a step, and tools can use multiple steps to incorporate things like human-in-the-loop. This gives you native orchestration, observability, and out-of-the-box scale.
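The durable-step idea can be illustrated with a toy memoizing wrapper (this is not Inngest’s API; Inngest persists step results externally rather than in memory): each step’s result is recorded by id, so replaying the function after a crash skips completed steps, such as an expensive LLM call or a tool waiting on human approval, instead of re-running them.

```typescript
// Toy illustration of durable steps (NOT Inngest's API): results are
// memoized by step id, so a replay after a failure skips work that
// already completed. Inngest persists this state externally.

const completed = new Map<string, unknown>();

let llmCalls = 0;  // counts "expensive" calls to show replay skips them

async function step<T>(id: string, fn: () => Promise<T>): Promise<T> {
  if (completed.has(id)) return completed.get(id) as T;  // replay: skip
  const result = await fn();
  completed.set(id, result);  // record the result before moving on
  return result;
}

async function agentRun(): Promise<string> {
  // Each agent LLM call or tool call is wrapped in a step.
  const draft = await step("draft", async () => {
    llmCalls++;  // stands in for a real model call
    return "draft answer";
  });
  const review = await step("review", async () => "approved");
  return `${draft} (${review})`;
}

// First run executes both steps; a replay reuses the memoized results,
// so the "LLM" is only ever called once.
const out1 = await agentRun();
const out2 = await agentRun();
```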
In the documentation you’ll find an AgentKit SWE-bench example and multiple coding-agent examples.
It’s fully open-source under the Apache 2 license.
If you want to get started:
- npm: npm i @inngest/agent-kit
- GitHub: https://github.com/inngest/agent-kit
- Docs: https://agentkit.inngest.com/overview
We’re excited to finally launch AgentKit; let us know what you think!

---

Comment: AgentKit and Inngest are awesome. Thank you @tonyhb and the Inngest team for building such an amazing DX. I've been testing Inngest realtime with AgentKit and it's an awesome combo already despite being a few weeks old. I highly recommend checking out their SWE-bench example and the E2B example (https://github.com/inngest/agent-kit/tree/main/examples/e2b-...) to build a network of coding agents. I've also got this working with Daytona sandboxes. Some feedback:
- Would love an llms.txt for your docs like https://developers.cloudflare.com/agents/llms-full.txt
- A clear way of accessing realtime publishing from within AgentKit.

Reply (Inngest): You can grab the llms-full.txt file here: https://agentkit.inngest.com/llms-full.txt More realtime examples coming soon!

Comment: I've been using AgentKit + Inngest to build out an agentic customer support network for the past week and the experience so far has been wonderful. I've been evaluating n8n and Mastra.ai at the same time to determine the best platform for my use cases, and AgentKit + Inngest has been the clear winner IMO. The fact that AgentKit is able to leverage Inngest's durable workflow execution engine is awesome, as it makes the interaction with the agentic network waaaayy more reliable.

Comment: How's the performance compared to LangGraph? I'm working on a project that needs to handle high-throughput agent interactions.

Comment (Akka): Akka focuses on enterprise agentic systems, with a focus on creating certainty and solving scale problems. We have a customer, Swiggy, running >3M inferences per second across a blended set of models, both ML and LLMs, with a p99 latency of roughly 70ms. This level of throughput is achieved by including a memory database within the agentic process; the clustering system then automatically shards and balances memory data across nodes, with end-user routing built in. Combined with non-blocking ML invocations with backpressure, you get the balance for performance.

Reply (Inngest): The framework itself is super low overhead. You can deploy this anywhere, and if you deploy to inngest.com the p99 latency of starting agents is sub-50ms (and you can also realtime-stream steps, tools, or model responses to the browser). One of the main differences is the DX: _how_ you define the agentic workflows is far cleaner, so it's both faster to build and fast in production.

Comment: Great stuff. Would be neat to have an adapter to make the output stream match the ai-sdk format so it plugs in to existing UX (https://sdk.vercel.ai/docs/ai-sdk-ui/stream-protocol)

Comment: Interesting, have you considered adding benchmarks comparing AgentKit to other frameworks? Would help teams evaluating options.

Comment: This looks really good. I'm going to take a detailed look at this for sure. Thanks!

Comment: Really love the decoupling of the logic and the runtime for the actual tool calls. That is great! How does debugging work with the local DevServer?

Reply (Inngest): If you use AgentKit + Inngest, you can do all the things you normally do with the Inngest dev server, like observe runs with AI metadata and rerun functions, or rerun from steps with edited inputs. We do have a step-through debugger coming pretty soon as well. Note you don't have to use AgentKit with Inngest, though.

Comment: good stuff