Show HN: Beam – Find Better Answers with Multi-Model AI Reasoning
big-agi.com

HN, first things first: one year ago you made me believe in my open-source AI project, and I'm forever grateful[1].
I am back with Beam - a technique that uses diverse LLMs to generate responses and then merges them, all within a snappy UX. I am no researcher, so you'll find a dark-mode blog, not a light-mode PDF on arXiv :) Blog, open code, and live hosted demo are all published.
You can use Beam early in a chat, where seeing more options is key to being more confident in the answer, but also when no single answer is perfect and fusing several together works well. Take a look and let me know what you think!
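The fan-out-and-fuse idea described above can be sketched in a few lines. This is a minimal illustration, not big-AGI's actual implementation: the model callables and names are hypothetical stand-ins for real LLM API calls.

```python
from concurrent.futures import ThreadPoolExecutor

def beam(prompt, models, merge_model):
    """Fan one prompt out to several models, then fuse their answers.

    `models` maps a model name to a callable(prompt) -> str; `merge_model`
    is the callable chosen to perform the fusion. Both are stand-ins for
    real LLM calls in this sketch.
    """
    # Fan out: query every model with the same prompt in parallel.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        texts = list(pool.map(lambda call: call(prompt), models.values()))
    drafts = dict(zip(models.keys(), texts))
    # Merge: ask the chosen merge model to fuse the drafts into one answer.
    fusion_prompt = "Fuse these candidate answers into one:\n" + "\n".join(
        f"[{name}] {text}" for name, text in drafts.items()
    )
    return merge_model(fusion_prompt)

# Toy usage with stub "models" (real usage would wrap API clients):
stubs = {
    "model-a": lambda p: "answer A",
    "model-b": lambda p: "answer B",
}
merged = beam("some question", stubs, merge_model=lambda p: p)
```

The user stays in the loop at the merge step (picking which drafts and which merge model), which is the manual control the UX is built around.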
[1]: The good reception of my first HN post and the 300 GitHub stars gave me the courage to follow my passion and double down on my project, which users now love.

With Beam, we can easily experiment with approaches such as Chain-of-Thought with Self-Consistency (CoT-SC) and other reasoning meta-frameworks, but with more manual control. I've always had issues using LLMs' chat-driven interfaces to figure out and explore the issues I'm interested in, since conversations/chats are always linear, while reasoning and working through ideas is structural. Beam seems to be a much better UX than the linear chat UX, and it saves me a lot of copy-paste and retrying. Awesome work!

Yes, the only issue is token usage, which is obviously greater since we are sampling more of the solution space. But it's a compromise to get GPT-4.5-level intelligence out of GPT-4. Probably an even bigger jump, since the models have some amount of unique training data and fact-check each other toward a more common "truth", so hallucinations are weeded out.

Awesome feature! Quick question: how do you choose which model is used when you fuse multiple beams back into one?

There's a combo box on the right side, and when you click the green "Add Merge" button, the currently active model is selected.

Got it! This is a very clever approach… I am super interested.

One person on Discord called this "taking the idea of self-consistency forward to ensemble model usage". I guess that is, technically, what this approach is about :)

I can't believe how much quicker it is to get to an optimal, comprehensive outcome using Beam. This is a groundbreaking way of interacting with LLMs.

Exactly my thought. As mentioned in the other thread, chat's linear conversation style is not fit for reasoning/exploration tasks, while Beam's fan-out -> select -> merge is a much better and more natural flow!

Same experience. Once you beam, you look for it everywhere!

Wow, this is awesome. I tried Beam mode. It does produce better results!

Same. I like using Opus | GPT-4 | Gemini Pro (I don't have Ultra) | Mistral Large. Interestingly, Mistral Large "wins" sometimes, and at least provides unique results in comparison. Huge fan, thank you for your hard work!

This is outstanding!!

Thank you so much - there's much more and much better coming ;)
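For readers unfamiliar with the CoT-SC technique mentioned in the thread: classic self-consistency samples one stochastic model several times and majority-votes the final answers. A minimal sketch, where the `sample` callable stands in for a temperature-above-zero LLM call:

```python
from collections import Counter

def self_consistency(prompt, sample, n=5):
    # Classic CoT-SC: draw n stochastic samples from ONE model,
    # then keep the most common final answer (majority vote).
    answers = [sample(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Toy usage: a stub sampler that returns "42" three times out of five.
replies = iter(["42", "17", "42", "42", "17"])
winner = self_consistency("some question", lambda p: next(replies), n=5)
# winner == "42"
```

Beam generalizes this: the n same-model samples become drafts from different models, and the mechanical majority vote becomes a user-guided select/merge step.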