Saving $750K on AI inference with one line of code and no quality loss
notdiamond.ai

This is neat -- how would you think about evaluating the quality loss as you change to more efficient models? I saw you did an analysis on the number of messages, but I'm wondering if there are more robust methods?
In offline training of our router, we run extensive cross-domain evaluations to determine when a smaller model can handle a request without any quality loss relative to more powerful models. In an online setting like our chat app, there's probably some more rigorous post-hoc analysis we could do on response quality -- could make for a good follow-up post.
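
Roughly, the offline side looks something like this (heavily simplified sketch, not our production code -- the function names, judge, and threshold are made up for illustration): run each prompt through both a small and a large model, have a judge compare the responses pairwise, and only route a domain to the small model if its win rate shows no measured quality loss.

    # Heavily simplified sketch -- model calls stubbed out, names hypothetical.
    from typing import Callable

    Judge = Callable[[str, str, str], float]  # (prompt, resp_a, resp_b) -> score for resp_a

    def win_rate(prompts: list[str],
                 small: Callable[[str], str],
                 large: Callable[[str], str],
                 judge: Judge) -> float:
        """Average judged score of the small model vs. the large one
        (1.0 = small wins, 0.5 = tie, 0.0 = small loses)."""
        scores = [judge(p, small(p), large(p)) for p in prompts]
        return sum(scores) / len(scores)

    def pick_model(domain_prompts, small, large, judge, threshold=0.5):
        """Route a domain to the small model only if it holds its own
        offline; threshold=0.5 means ties count as no measured loss."""
        return small if win_rate(domain_prompts, small, large, judge) >= threshold else large

    # Toy usage with stubbed models and a judge that always calls a tie:
    route = pick_model(["What's 2+2?"], lambda p: "4", lambda p: "Four.", lambda p, a, b: 0.5)

A real version wants a stronger judge than a stub (blind, position-swapped pairwise comparisons, or human raters) and per-domain thresholds, but the decision rule is the same: prefer the cheaper model wherever the measured gap is zero.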