Show HN: Output.ai - OSS framework we extracted from 500+ production AI agents

output.ai

40 points by bnchrch a month ago · 19 comments

danielvlopes2 a month ago

Hey HN! I'm Daniel, cofounder of GrowthX and a colleague of Ben's (who posted this). We have about 20 engineers building AI agents and workflows for companies like Lovable, Webflow, and Airbyte. Output is the framework we extracted from that work: it runs our AI infrastructure, and we've open-sourced it.

We kept hitting the same problems: writing and iterating on prompts at scale, orchestrating API calls that fail unpredictably, tracking costs, testing non-deterministic code, building datasets from production data, organizing repos so coding agents perform well. And every piece of tooling was a different SaaS product that didn't talk to the others.

We built Output around three ideas:

1. Make it easy for devs and coding agents to create and modify workflows in one or a few shots.

Filesystem first: everything your agent needs lives in self-contained folders, so the full context is visible without hunting. TypeScript and Zod provide the first validation layer for whether your workflow is correct.
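A minimal sketch of what that validation layer can look like; the folder, file, and schema names here are illustrative, not Output's actual conventions:

```typescript
// workflows/summarize-article/schema.ts (hypothetical layout)
import { z } from "zod";

// Zod catches malformed inputs at the workflow boundary,
// before any orchestration or LLM call happens.
export const InputSchema = z.object({
  url: z.string().url(),
  maxWords: z.number().int().positive().default(200),
});

export type Input = z.infer<typeof InputSchema>;

// At the entry point, parse rather than cast:
// InputSchema.parse(rawInput) throws a descriptive error
// if a coding agent wired the workflow up incorrectly.
```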

2. One framework, minimal tooling sprawl.

We got tired of scattering data across SaaS products that don't talk to each other. Prompt files, evals, tracing, cost tracking, and credentials all live in one place.

Your data stays on your infrastructure. Under the hood, we built on Temporal for orchestration: durable execution is a hard problem, and we weren't going to reinvent the wheel they've perfected. Temporal is open source and self-hostable, or you can use Temporal Cloud. We wrapped it so you don't need to learn Temporal upfront, but the full power is there underneath.
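For a sense of what sits underneath, here is roughly what the raw Temporal TypeScript SDK looks like if you drop down to it; this is plain Temporal, not Output's wrapper API:

```typescript
// Raw Temporal TypeScript SDK -- the layer Output wraps.
import { proxyActivities } from "@temporalio/workflow";
import type * as activities from "./activities";

// Activities are the retryable, side-effecting units; Temporal
// persists workflow state, so a crashed run resumes where it left off.
const { fetchPosts, summarize } = proxyActivities<typeof activities>({
  startToCloseTimeout: "1 minute",
});

export async function digestWorkflow(topic: string): Promise<string> {
  const posts = await fetchPosts(topic);
  return summarize(posts);
}
```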

3. A flat learning curve.

Our team is made up of web engineers at different levels. We didn't want anyone to have to learn Python, five different tools, or the nuances of workflow idempotency before they could ship. We baked in conventions: the same folder structure, file names, and patterns across every workflow. Advanced features like Temporal primitives, evals, and LLM-as-a-judge stay out of the way until you reach for them.
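As a hypothetical illustration of what those conventions can mean in practice (check the docs for Output's actual layout):

```
workflows/
  hn-digest/
    workflow.ts      # orchestration logic
    activities.ts    # side-effecting steps
    prompts/         # versioned prompt files
    evals/           # datasets and LLM-as-a-judge checks
    schema.ts        # Zod input/output contracts
```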

We've been building production workflows this way for over a year.

We extracted it, cleaned it up, and wanted to put it in front of people who'd push on it.

Docs and a video building an HN AI digest newsletter from scratch: https://output.ai

Happy to answer questions.

chirdeeps 23 days ago

Extracting a framework from production experience is the right way to build one. The failure modes you encode are the ones that actually matter.

Curious which three failure modes caused the most incidents before you extracted the framework; those tend to reveal the assumptions baked into the design.

dangent a month ago

This looks really interesting - appreciate you sharing. Is it only API-key driven, or is there a way to try it out with a Claude/Anthropic subscription?

  • bnchrchOP 25 days ago

    Hey dangent! Glad you find it interesting!

    So the API keys during setup are entirely optional. They're used in the example workflow that evaluates blog posts for clarity and provides feedback on how to improve.

    You're more than free to ignore or delete the example workflow and create your own that doesn't use an LLM at all, for example (sketched below):

    1. Fetching trending HN posts

    2. Pulling Reddit posts that match keywords

    3. Transforming daily calendar events into an HTML page
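    A minimal sketch of a no-LLM step along those lines, hitting the public HN Algolia API; the function name and wiring are illustrative:

    ```typescript
    // Plain TypeScript activity: fetch front-page HN story titles.
    // No LLM and no API key -- hn.algolia.com is a public endpoint.
    export async function fetchTrendingPosts(): Promise<string[]> {
      const res = await fetch(
        "https://hn.algolia.com/api/v1/search?tags=front_page"
      );
      if (!res.ok) throw new Error(`HN API failed: ${res.status}`);
      const data = await res.json();
      return data.hits.map((hit: { title: string }) => hit.title);
    }
    ```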

    And the Claude Code plugins (which are installed for you) all work with your Anthropic subscription, no problem.

kawi12 a month ago

This is awesome!

dp05 a month ago

Looks great. Sharing with my team

danelliot a month ago

Interesting that this came out of 500 agents in production. The hardest part I've seen with agent tool calls is handling partial failures gracefully — the tool returns something but it's incomplete or stale. Do you bake retry/fallback logic into the framework itself or leave that to individual tool implementations?

  • bnchrchOP 25 days ago

    Oh I can answer that.

    So we had a few goals here:

    1. Be opinionated about best practices, tools, and libraries

    2. Not get in the way of what the developer wants to do

    To that end, the core is built on top of Temporal, and our llm package is a thin wrapper around ai-sdk that provides QoL enhancements (prompt files, tracing, cost tracking, etc.).

    So for failures in general, and tool calling specifically, there are two levels of retries:

    1. ai-sdk-level tool retries: the library handles tool call failures by default and will retry if the LLM deems the issue transient; it will never hard fail just because one of its tool calls is unsuccessful (unless perhaps you instruct it to).

    2. Temporal-level activity retries: our workflows and steps are all configured with a baseline policy that reattempts failed steps. As the developer you can change this: make a step never retry, or retry, say, 100 times with exponential backoff (see the sketch below).
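    Concretely, at the Temporal layer a per-step retry policy looks like this (raw Temporal TypeScript SDK; the exact surface Output exposes may differ):

    ```typescript
    import { proxyActivities } from "@temporalio/workflow";
    import type * as activities from "./activities";

    // Up to 100 attempts with exponential backoff; set
    // maximumAttempts: 1 to disable retries for a step entirely.
    const { scrapeSource } = proxyActivities<typeof activities>({
      startToCloseTimeout: "30 seconds",
      retry: {
        initialInterval: "1 second",
        backoffCoefficient: 2,
        maximumAttempts: 100,
      },
    });
    ```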

    Hope that helps!
