A1: The agent-to-code compiler that optimizes AI for speed 🏎️ and safety 🏁



The agent compiler framework

Documentation · Discord · Twitter

A1 is a new kind of agent framework. It takes an Agent (a set of tools and a description) and compiles it either AOT (ahead-of-time) into a reusable Tool or JIT (just-in-time) for immediate execution, optimized for each unique agent input.

uv pip install a1-compiler
# or
pip install a1-compiler

🏎️ Why use an agent compiler?

An agent compiler is a direct replacement for agent frameworks such as Langchain or aisdk, where you define an Agent and run it. The difference is:

  1. Safety - Minimizes exposure of sensitive data to an LLM.
  2. Speed - Up to 10x faster code generation.
  3. Determinism - Code is optimized for minimal non-deterministic behavior (e.g. LLM calls replaced with code where possible).
  4. Flexibility - Skills and Tools can be built from any existing OpenAPI spec, MCP server, database, fsspec path, or Python function.

Agent compilers emerged from frustration with the MCP protocol and SOTA agent frameworks, where every agent runs as a static while-loop program: slow, unsafe, and highly nondeterministic.

An agent compiler can reproduce the same while loop (just set Verify=IsLoop()), but it also has the freedom to explore superoptimal execution plans while remaining subject to engineered constraints (e.g. type safety).
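
For intuition, here is a minimal sketch of such a constraint. This README doesn't show Verify's exact interface, so the plain-Python check below is an assumption for illustration only:

# Hypothetical sketch: the real Verify interface may differ.
# Reject any plan that calls an LLM before the deterministic `add` tool,
# i.e., enforce an order-of-tool-calling constraint.
def verify_add_before_llm(tool_calls: list[str]) -> bool:
    if "llm" in tool_calls and "add" in tool_calls:
        return tool_calls.index("add") < tool_calls.index("llm")
    return True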

Ultimately the goal is "determinism-maxing": specifying as much of your task as possible as fully deterministic code (100% accuracy) and gradually reducing non-deterministic LLM calls to the bare minimum.
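
To make determinism-maxing concrete, here is a hand-written sketch of what the end state might look like for a simple math task. This is not actual A1 compiler output, and `extract_ints` is a hypothetical helper assumed for illustration:

# Hypothetical sketch of determinism-maxed code (NOT actual A1 output).
# The only remaining non-deterministic step is parsing free-form text;
# the arithmetic itself is ordinary, fully deterministic Python.
async def solve_math(problem: str, llm) -> int:
    # Minimal LLM call: extract the two operands from natural language.
    # `extract_ints` is a hypothetical helper, assumed for illustration.
    a, b = await llm.extract_ints(problem)
    # Deterministic code replaces "ask the LLM to add the numbers".
    return a + b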

🚀 How to get started?

from a1 import Agent, tool, LLM
from pydantic import BaseModel
import asyncio

# Define a simple tool
@tool(name="add", description="Add two numbers")
async def add(a: int, b: int) -> int:
    return a + b

# Define input/output schemas
class MathInput(BaseModel):
    problem: str

class MathOutput(BaseModel):
    answer: int

# Create an agent with tools and LLM.
# Like DSPy modules, A1 agent behavior is specified via schemas. The
# difference is that in A1, an engineer may implement a Verify function to
# enforce agent-specific constraints such as order of tool calling.
agent = Agent(
    name="math_agent",
    description="Solves simple math problems",
    input_schema=MathInput,
    output_schema=MathOutput,
    tools=[add, LLM(model="gpt-4.1")],  # in A1, LLMs are tools!
)

async def main():
    # Compile ahead-of-time into a reusable tool
    compiled = await agent.aot()
    result = await compiled.execute(problem="What is 2 + 2?")
    print(f"AOT result: {result}")

    # Or execute just-in-time, optimized for this specific input
    result = await agent.jit(problem="What is 5 + 3?")
    print(f"JIT result: {result}")

if __name__ == "__main__":
    asyncio.run(main())

See the tests/ directory for extensive examples of everything A1 can do. Docs are coming soon at docs.a1project.org.

✨ Features

  • Import any Langchain agent
  • Observability via OpenTelemetry
  • Tools instantiated from MCP or OpenAPI (see the sketch after this list)
  • RAG instantiated given any SQL database or fsspec path (e.g. s3://my-place/here, gs://..., or local filesystem)
  • Skills defined manually or by crawling online docs
  • Context engineering via a simple API that lets compiled code manage multi-agent behavior
  • Zero lock-in: use any LLM, any secure code execution cloud
  • Only gets better as researchers develop increasingly powerful methods to Generate, Cost estimate, and Verify agent code
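
As a rough illustration of the tool and RAG bullets above, the sketch below shows what wiring these sources together might look like. Every `from_*` constructor here is an assumed name, not the documented A1 API; consult docs.a1project.org for the real interface.

# Hypothetical sketch: all constructors below are assumptions, for
# illustration of the feature list only, not confirmed A1 API.
from a1 import Tool, RAG  # assumed imports, for illustration

# Assumed: tools instantiated from an MCP server or an OpenAPI spec
mcp_tools = Tool.from_mcp("http://localhost:8000/mcp")
api_tools = Tool.from_openapi("https://api.example.com/openapi.json")

# Assumed: RAG instantiated over an fsspec path or a SQL database
docs_rag = RAG.from_path("s3://my-bucket/docs/")
sql_rag = RAG.from_sql("postgresql://user:pass@host/db")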

🙋 FAQ

Should I use A1 or Langchain/aisdk/etc?

Prefer A1 if your task is latency-critical, works with untrusted data, or may need to run code.

Is A1 production-ready?

Yes, in terms of API stability. The caveat is that A1 is new.

Can we get enterprise support?

Please don't hesitate to reach out (calebwin@stanford.edu).

🤝 Contributing

Awesome! See our Contributing Guide for details.

📄 MIT License

As it should be!

📜 Citation

Paper coming soon! Reach out if you'd like to contribute.