Your LLM provider will go down, but you don't have to

assembled.com

24 points by johnjwang 7 months ago · 3 comments

ceebzilla 7 months ago

This is interesting. The core models are clearly doing well as standalone businesses and have started to establish lock-in with end consumers (I've invested enough time tailoring GPT to me that I'm wary to switch to Claude or Gemini now). But as a business leveraging these models, yeah, I think they're fairly commoditized, and why wouldn't you swap them out willy-nilly based on best performance?

  • johnjwangOP 7 months ago

    From the API standpoint, it makes a lot of sense for us to support multiple providers. We've also found that different models/providers are better at different types of tasks. For example, the Gemini models have really great latency, which is good for tasks that are very latency-sensitive, but we've found reasoning to be quite strong with OpenAI/Anthropic.
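    The routing-plus-fallback idea described here can be sketched in a few lines. This is a hypothetical illustration, not Assembled's actual implementation: the provider names, task types, and preference orders are assumptions, and the providers are stand-in callables rather than real API clients.

    ```python
    # Hypothetical sketch: route each task type to a preferred provider,
    # falling back down the list when a provider errors out or is down.
    # Preference orders below are illustrative assumptions only.

    ROUTES = {
        # latency-sensitive tasks try a low-latency provider first
        "autocomplete": ["gemini", "openai", "anthropic"],
        # reasoning-heavy tasks try providers assumed stronger at reasoning
        "analysis": ["openai", "anthropic", "gemini"],
    }

    def complete(task_type, prompt, providers):
        """Try providers in preference order for this task type.

        `providers` maps provider name -> callable(prompt) -> completion.
        Returns (provider_name, completion); raises if every provider fails.
        """
        errors = {}
        for name in ROUTES.get(task_type, list(providers)):
            call = providers.get(name)
            if call is None:
                continue
            try:
                return name, call(prompt)
            except Exception as exc:  # provider outage, timeout, API error
                errors[name] = exc
        raise RuntimeError(f"all providers failed: {errors}")
    ```

    With the routing table made explicit, swapping providers "willy-nilly" as commenters suggest is just a config change, and an outage at one provider degrades to a fallback rather than an incident.
    
    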

attaboy 7 months ago

With apologies to Randall Munroe, it sometimes feels like LLM providers are the new "project from some random person in Nebraska" https://imgur.com/a/qjAinj2
