Show HN: AI Proxy with Support for OpenAI, Anthropic, LLaMa2, Mistral

braintrustdata.com

10 points by ankrgyl 2 years ago · 19 comments

quickthrower2 2 years ago

I prefer something open source, if the idea is to reduce dependencies on services. It's not that I'm against paying; I'm more against taking on another corporate dependency.

kiwicopple 2 years ago

At first glance, this looks similar to LiteLLM[0]. Are there any key differences?

Also, in light of this weekend's madness, are you reconsidering using the OpenAI API as the canonical interface? Do you think there could be changes in the future?

[0] https://github.com/BerriAI/litellm

  • ankrgylOP 2 years ago

    LiteLLM is a great project. The key differences (from what I can tell) are that:

    a) This is hosted
    b) Supports caching and load balancing
    c) Can manage multiple providers behind a single API key
    d) Implemented in TypeScript (vs. Python)

    On the other hand, LiteLLM is a more mature project and supports significantly more model providers than we currently do!
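
    In practice, using a proxy like this means pointing the standard OpenAI client at a different base URL and passing a single key. A rough sketch in TypeScript (the base URL below is a placeholder for illustration, not the proxy's real endpoint):

        import OpenAI from "openai";

        // Point the standard OpenAI client at the proxy instead of
        // api.openai.com. The URL here is a placeholder.
        const client = new OpenAI({
          baseURL: "https://your-proxy.example.com/v1",
          apiKey: process.env.PROXY_API_KEY, // one key for every provider
        });

        // Switching providers is just a different model string; the
        // request/response shape stays OpenAI-compatible.
        const completion = await client.chat.completions.create({
          model: "gpt-3.5-turbo", // or e.g. a Claude or Mistral model
          messages: [{ role: "user", content: "Hello!" }],
        });

        console.log(completion.choices[0].message.content);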

bryanh 2 years ago

We've been using Braintrust for evals at Zapier and it's been really great -- pumped to try out this proxy (which should be able to replace some custom code we've written internally for the same purpose!).

ibolmo 2 years ago

This looks great! The interface is really simple, and I love that I can abstract away LLM provider concerns. I can see this being a nice way to get around reliability issues with LLM providers.

nikita 2 years ago

I've been thinking that with OpenAI introducing more and more workflow features, there will be a company that turns open-source models into API + workflow. I'm wondering if that's the Braintrust vision?

  • ankrgylOP 2 years ago

    We think a lot about this. It's very important to us to foster a more open ecosystem, and part of that is providing building blocks (like the proxy) that don't lock you too much into "our" way of doing things either.

spdustin 2 years ago

Do you have plans to support Bedrock? In a hosted environment, I recognize it's a little more... involved. But still, I'm curious.

  • ankrgylOP 2 years ago

    We absolutely can. The proxy already supports running in multiple environments, including your own AWS VPC deployed on Lambda, so I'm sure we could add support for Bedrock (Anthropic or other models).

    Feel free to ping me at ankur@braintrustdata.com and we can chat more!
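
    To give a flavor of the Lambda setup, a bare-bones forwarding handler might look something like the sketch below (illustrative only, assuming Node 18+ for the global fetch; the real proxy layers caching, key management, and load balancing on top):

        // Minimal OpenAI-compatible pass-through on AWS Lambda.
        // This is a sketch, not the proxy's actual code.
        export const handler = async (event: { body: string }) => {
          const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
              Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
            },
            body: event.body, // forward the OpenAI-shaped request as-is
          });

          return {
            statusCode: upstream.status,
            headers: { "Content-Type": "application/json" },
            body: await upstream.text(),
          };
        };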

azhang36 2 years ago

Very timely release in terms of interoperability. Auto caching seems like a no-brainer too. Nice job, team, shipping this past weekend.
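
A common way to implement this kind of automatic caching is to key responses on a hash of the exact request body, so identical calls skip the provider entirely. A sketch of that idea in TypeScript (an assumed design for illustration, not the proxy's actual implementation):

    import { createHash } from "node:crypto";

    // In-memory cache keyed on a hash of the request body.
    // Assumed design for illustration; a real cache would add
    // TTLs, streaming support, and per-key scoping.
    const cache = new Map<string, string>();

    async function cachedCompletion(body: string): Promise<string> {
      const key = createHash("sha256").update(body).digest("hex");
      const hit = cache.get(key);
      if (hit !== undefined) return hit; // identical request: no provider call

      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body,
      });
      const text = await res.text();
      cache.set(key, text);
      return text;
    }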

theiceberg 2 years ago

This is neat. It's more obvious than ever that everyone should be building in redundancy and testing across models.

tuhins 2 years ago

this is great. redundancy across models (closed and open-source) seems like an obvious need, esp in light of everything this weekend

alanaan 2 years ago

how does this differ from a model “router” (e.g. withmartian.com)?

  • ankrgylOP 2 years ago

    Model routers are a "semantic" abstraction -- they analyze the contents of a prompt and automatically determine which provider to use. This proxy is (currently) much simpler: you pick the provider you want, and we add a bunch of nice features like caching and low latency on top.

    We will likely work with or build some routing capabilities in the future!
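
    The contrast is easy to see in code: a semantic router inspects the prompt and chooses a model for you, while with the proxy the caller names the model explicitly. A toy keyword-based router, purely to illustrate the distinction (real routers use learned classifiers; this is not how either product works internally):

        // Toy "semantic" router: picks a model by inspecting the prompt.
        function routeModel(prompt: string): string {
          if (/code|function|bug/i.test(prompt)) return "gpt-4"; // code-heavy
          if (prompt.length < 200) return "mistral-7b";          // short and cheap
          return "claude-2";                                     // long context
        }

        // With the proxy, there is no inference step: the caller
        // states the model, and the proxy adds caching and low
        // latency on top.
        const model = "gpt-4"; // explicit choice, not inferred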

mdev23 2 years ago

where is the github?
