Tambo 1.0: Open-source toolkit for agents that render React components

github.com

101 points by grouchy 2 months ago · 26 comments

Hi HN — cofounder Michael here.

We've been building Tambo for about a year, and just released our 1.0.

We're making it easier to register React components with Zod schemas, so an agent can pick the right one and render it with the right props.

We handle many of the complications of building generative user interfaces: managing state between the user, the agent, and the React component; rendering partial props; and handling auth between your user and MCP. We also support adding MCP servers and most of the spec.
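
To make the registration idea concrete, here is a minimal sketch of the pattern in plain TypeScript. None of these names come from Tambo's actual SDK; the registry, the hand-written validator, and the string-based render are stand-ins for the real React/Zod machinery.

```typescript
// Hypothetical sketch of "register components, let the agent pick one".
// A registry maps component names to a description (shown to the model)
// and a prop validator; the agent proposes a name plus raw props, and we
// validate before rendering.

type Validator<T> = (input: unknown) => T;

interface RegisteredComponent<T = any> {
  description: string;          // what the model reads when choosing
  validateProps: Validator<T>;  // throws on bad props
  render: (props: T) => string; // stand-in for a React render
}

const registry = new Map<string, RegisteredComponent>();

function registerComponent<T>(name: string, entry: RegisteredComponent<T>) {
  registry.set(name, entry);
}

// Given the agent's chosen component name and raw props, validate and
// render, or fall back to a text marker for unknown components.
function renderFromAgent(name: string, rawProps: unknown): string {
  const entry = registry.get(name);
  if (!entry) return `[unknown component: ${name}]`;
  const props = entry.validateProps(rawProps);
  return entry.render(props);
}

registerComponent("WeatherCard", {
  description: "Shows current weather for a city",
  validateProps: (input) => {
    const obj = input as { city?: unknown };
    if (typeof obj?.city !== "string") throw new Error("city must be a string");
    return { city: obj.city };
  },
  render: ({ city }) => `<WeatherCard city="${city}" />`,
});

console.log(renderFromAgent("WeatherCard", { city: "Berlin" }));
// → <WeatherCard city="Berlin" />
```

In the real SDK the validator role is played by a Zod schema and the render by an actual React component; the shape of the flow is the point here.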

We are 100% open-source and currently have 8k+ GitHub stars, thousands of developers, and over half a million messages processed by our hosted service.

If you're building AI agents with generative UI, we'd like to hear from you.

avaer 2 months ago

Since I didn't see it in the Readme, how does this compare to something like Google's A2UI? Seems like it's doing more, but could e.g. Tambo work on top of A2UI protocol or is it a different beast?

My agents need a UI and I'm in the market for a good framework to land on, but as is always the case in these kinds of interfaces I strongly suspect there will be a standard inter-compatible protocol underlying it that can connect many kinds of agents to many kinds of frontends. What is your take on that?

  • lachieh 2 months ago

    Hey! I'm on the Tambo team, so I'll chip in. There isn't really any reason we couldn't support A2UI. It's a great way to let models describe generative UIs. We could add an A2UI renderer.

    The way we elevator-pitch Tambo is "an agent that understands your UI" (which, admittedly, isn't very descriptive of the implementation details). We've spent our time letting components (whether pre-existing or purpose-built) be registered as tools that can be controlled and rendered either in-chat or out within your larger application. The chat box shouldn't be the boundary.

    Personally, my take on standards like A2UI is that they could prove useful but the models have to easily understand them or else you have to take up additional context explaining the protocol. Models already understand tool-calling so we're making use of that for now.

dzogchen 2 months ago

I don’t understand what this does. Who would use this, and why? I need an ELI5.

Edit: Announcement was more clear https://tambo.co/blog/posts/introducing-tambo-generative-ui

Can it also generate new components?

  • grouchyOP 2 months ago

    You install the React SDK, register your React components with Zod schemas, and then the agent responds to users with your UI components.

    Developers are using it to build agents that actually solve user needs with their own UI elements, instead of responding with text instructions or taking actions with minimal visibility for the user.

    We're building out a generative UI library, but as of right now it doesn't generate any code (that could change).

    We do have a skill you can give your agent to create new UI components:

    ```
    npx skills add tambo-ai/tambo
    ```

    /components

    • oulipo2 2 months ago

      Okay but I fail to see how this is "new tech"?

      Basically it's just... agreeing upon a description format for UI components ("put the component C with params p1, p2, ... at location x, y") using JSON / zod schema etc... and... that's it?

      Then the agent just uses a tool `putComponent(C, params, location)` which just renders the component?

      I'm failing to understand how it would be more than this?

      On one hand I agree that if we "all" find a standard way to describe those components, then we can integrate them easily in multiple tools so we don't have to do it again each time. At the same time, it seems like this is just a "nice render-based wrapper" over MCP / tool calls, no? am I missing something?
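
      For what it's worth, the "thin wrapper" reading above can be sketched in a few lines. This is a hypothetical illustration of the tool-call shape being described in the comment, not Tambo's implementation:

```typescript
// Hypothetical single-tool wrapper: the model is advertised one tool,
// and its handler just records which component to render where.

interface PutComponentArgs {
  component: string;               // registered component name
  params: Record<string, unknown>; // props chosen by the model
  location: string;                // slot id in the host app
}

// Tool schema as it might be advertised to a tool-calling model.
const putComponentTool = {
  name: "putComponent",
  description: "Render a registered UI component with params at a location",
  parameters: {
    type: "object",
    properties: {
      component: { type: "string" },
      params: { type: "object" },
      location: { type: "string" },
    },
    required: ["component", "params", "location"],
  },
};

// The host keeps a map of slot -> pending render instruction.
const slots = new Map<string, { component: string; params: Record<string, unknown> }>();

function handlePutComponent(args: PutComponentArgs): string {
  slots.set(args.location, { component: args.component, params: args.params });
  return `ok: ${args.component} -> ${args.location}`;
}

console.log(handlePutComponent({
  component: "Chart",
  params: { series: "sales" },
  location: "sidebar",
}));
// → ok: Chart -> sidebar
```

      As milst's reply notes, the hosted pieces (thread storage, auth, observability) sit on top of this core loop.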

      • milst 2 months ago

        It's that plus the hosted service which interacts with the LLM, stores threads, handles auth, gives observability of interactions in your app, etc.

fitzgera1d 2 months ago

There’s an MCP Apps version of this that is interesting: https://creature.run

Maybe I’m misunderstanding, but isn’t generating UI just-in-time kind of risky because AI can get it wrong? Whereas you can generate/build an MCP App once that is deterministic, always returns a working result, and is just as AI-native.

  • milst 2 months ago

    With this you build your own React components and register them with the AI. The AI chooses which to use and what props to pass into them, so it's not generating UI from scratch, if that's what you mean - other Michael from the Tambo team

    • fitzgera1d 2 months ago

      I see. How often does AI get the components wrong? This is my big question wrt just-in-time generative UI. UI built beforehand (also likely generated by AI but vetted by an eng team) feels like it has the win here. Why take on the risks of non-determinism if you don’t have to? This feels like the big pro for MCP Apps.

      • grouchyOP 2 months ago

        It gets it wrong sometimes, but I think the alternative is the user getting it wrong trying to navigate your site.

        I like to think about how much time I spend clicking different nav links and opening different dropdowns trying to find the functionality I need.

        It's just a new way for the app to surface what the user needs when they need it.

jauntywundrkind 2 months ago

Is there any interest or discussion about finding a way to use these tools to work with MCP Apps?

Release: http://blog.modelcontextprotocol.io/posts/2026-01-26-mcp-app... . Announcement: http://blog.modelcontextprotocol.io/posts/2025-11-21-mcp-app... . Submission: https://news.ycombinator.com/item?id=46020502

  • grouchyOP 2 months ago

    Yeah, we've been following it closely. We already support the majority of the MCP spec and plan to add support for UI over MCP.

    But our use case is a little different. MCP Apps embed interfaces into other agents. Tambo is an embedded agent that can render your UI. There's overlap for sure, but many of the developers using us don't see themselves putting their UI inside ChatGPT or Claude. That's just not how users use their apps.

    That said, we're thinking about how we could make it easy to build an embedded agent and then selectively expose those UI elements over MCP Apps where it makes sense.

_the_inflator 2 months ago

I've had you on my watch list for quite some time and will definitely give you a try.

It sounds promising because, from the outside, it looks like reproducible, deterministic component generation in a modern fashion, as far as I understood it.

I built a large platform using a methodically comparable approach, I suppose, albeit in the pre-AI era, and that's why I want to take a closer look at the inner workings and results of your project - curiosity, so to say.

You appear to be the only solid and promising endeavor in the GenUI domain with an approach that goes beyond simply relying on an LLM, using math in combination with AI.

Good luck!

  • grouchyOP 2 months ago

    Glad you like the approach. When you give it a spin or look into the implementation please let us know what you think.

    We're constantly improving Tambo. It's crazy to see how much it's improved since we first started.

deep_origins 2 months ago

Big fan of Tambo and what the team has built. Started using it on a couple of side projects, and being able to use Zod schemas as the source of truth for LLM structured outputs is handy.

  • grouchyOP 2 months ago

    Awesome to meet another tambonaut.

    We love Zod; we also support Standard Schema and thus most other popular typing libraries.

    I'm curious how you found us?

svrma 2 months ago

I love the idea! I want to use it for a commercial product (start with self-hosting). However, I'm unsure how the License on the website (tambo.co/license) interacts with the MIT/Apache 2.0 on the GitHub repo. Could you shed some light on this?

cjonas 2 months ago

All these libraries are trying to do too much. The "batteries included" approach makes for great demos, but falls apart for any real application.

  • grouchyOP 2 months ago

    I'm curious what makes you say that? We haven't experienced this - we're being used by a Fortune 1000 fintech in production.

    Any specific experience you had, or more specifics on where batteries-included went too far?

krashidov 2 months ago

congrats on the launch! we're building type.com and we would love to use this - shoot me an email: k at type dot com

our use case is to allow other users to build lightweight internal apps within your chat workspace (say like an applicant tracking system per hire etc.)

  • grouchyOP 2 months ago

    Thank you. I just sent you an email. Looking forward to learning more about what you are building.

danialtz 2 months ago

impressive!

is this the same category as CopilotKit? CPK is an AG-UI proxy for similar topics, but here there seems to be more emphasis on linked components?

  • grouchyOP 2 months ago

    There's overlap for sure. I'd say we've built a more drop-in solution. We actually migrated to AG-UI events under the hood, and we have plans to expand cross-compatibility across standards.

    The major difference is we provide an agent. You don't need to bring your own agent or framework. A lot of our developers are using our agent and are really happy with it, and we have a bunch of upcoming features to make it even better out of the box.

eagleinparadise 2 months ago

Does this support standard schema?

  • grouchyOP 2 months ago

    It does!

    ```
    import { z } from "zod";

    inputSchema: z.object({ query: z.string() });
    ```

    or

    ```
    import * as v from "valibot";

    inputSchema: v.object({ query: v.string() });
    ```

    or

    ```
    import { type } from "arktype";

    inputSchema: type({ query: "string" });
    ```
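
    The reason one integration covers all three libraries is that each exposes the same Standard Schema interface. Here is a rough sketch of a consumer, with a hand-rolled schema standing in for a real library; the `~standard` shape follows the public Standard Schema spec, while `acceptSchema` and the demo schema are hypothetical names, not Tambo's API.

```typescript
// Minimal slice of the Standard Schema v1 interface: any conforming
// library (Zod, Valibot, ArkType, ...) exposes ~standard.validate.
interface StandardSchemaV1<T = unknown> {
  "~standard": {
    version: 1;
    vendor: string;
    validate: (value: unknown) =>
      | { value: T }
      | { issues: ReadonlyArray<{ message: string }> };
  };
}

// A consumer (like an SDK) only needs this one entry point, so all
// Standard Schema libraries work interchangeably.
function acceptSchema<T>(schema: StandardSchemaV1<T>, input: unknown): T {
  const result = schema["~standard"].validate(input);
  if ("issues" in result) {
    throw new Error(result.issues[0]?.message ?? "invalid input");
  }
  return result.value;
}

// Hand-rolled schema implementing the interface (no library needed).
const querySchema: StandardSchemaV1<{ query: string }> = {
  "~standard": {
    version: 1,
    vendor: "demo",
    validate: (value) => {
      if (value && typeof (value as any).query === "string") {
        return { value: { query: (value as any).query } };
      }
      return { issues: [{ message: "query must be a string" }] };
    },
  },
};

console.log(acceptSchema(querySchema, { query: "weather" }).query);
// → weather
```

    Note that the real spec also allows `validate` to return a Promise for async validation; the sketch keeps it synchronous for brevity.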
