Intent-tuned LLM router that selects the best LLM for a user query (github.com)
An example with different fine-tuned models (especially smaller/cheaper ones) would probably be more interesting than running against a bunch of similar foundation models. For example, throwing in some code-generation models and demonstrating that it picks those for coding problems.
That's an interesting use case. We've thought about a composition-of-experts router that dispatches to specialized models, e.g. code-generation models for coding tasks. Currently we encourage users to experiment with their own data, and we're happy to help. If you're interested in applying this to smaller LLMs and specialized adapters, let's connect!
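To make the composition-of-experts idea concrete, here is a rough sketch of intent-based dispatch: classify the query's intent, then hand it to a specialized model for that intent. The zero-shot classifier and the model names below are illustrative placeholders, not the router's actual pipeline or API.

    # Rough sketch of intent-based routing; the classifier and model choices
    # are placeholders, not the project's actual implementation.
    from transformers import pipeline

    # Any small intent classifier works; zero-shot keeps the example self-contained.
    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    # Map each intent to a specialized (or cheaper) model.
    MODEL_BY_INTENT = {
        "coding": "Qwen/CodeQwen1.5-7B-Chat",               # code-generation specialist
        "general": "meta-llama/Meta-Llama-3-8B-Instruct",   # general-purpose fallback
    }

    def route(query: str) -> str:
        """Return the model best suited to the query's predicted intent."""
        result = classifier(query, candidate_labels=list(MODEL_BY_INTENT))
        return MODEL_BY_INTENT[result["labels"][0]]  # labels are sorted by score

    print(route("Write a Python function that merges two sorted lists"))
    # -> Qwen/CodeQwen1.5-7B-Chat (assuming the query is classified as 'coding')

In practice you would swap the off-the-shelf classifier for the intent-tuned router itself; the point here is just the intent-to-model dispatch.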
I was shocked at how much better CodeQwen1.5 was at Python compared to ChatGPT 4.
Very cool... nice work.