Show HN: Beam – Find Better Answers with Multi-Model AI Reasoning

big-agi.com

18 points by enricoros 2 years ago · 20 comments

HN, first things first: one year ago you made me believe in my open-source AI project, and I'm forever grateful [1].

I am back with Beam - a technique that uses diverse LLMs to generate responses and then merges them - all within a snappy UX. I am no researcher, so you'll find a dark-mode blog, not a light-mode PDF on arXiv :) Blog, open code, and live hosted demo are all published.

You can use Beam early on in a chat, where looking at more options is key to being more confident in the answer, but also when no single answer is perfect and fusing several together works well. Take a look and let me know what you think!

[1]: The good reception of my first HN post and the 300 GH stars gave me the courage to follow my passion and double down on my project, which users now love.
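
For readers curious about the mechanics, here is a minimal sketch of the fan-out/merge idea, not Beam's actual implementation (that lives in the open-source repo). It assumes an OpenAI-compatible /v1/chat/completions endpoint; callModel, API_BASE, API_KEY, and the model names are placeholders.

```typescript
// Sketch only: fan out one prompt to several models, then fuse the answers.
// Assumes an OpenAI-compatible /v1/chat/completions endpoint (Node 18+ fetch).

async function callModel(model: string, prompt: string): Promise<string> {
  const res = await fetch(`${process.env.API_BASE}/v1/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.API_KEY}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Fan out: ask several diverse models the same question in parallel.
async function beam(prompt: string, models: string[]): Promise<string[]> {
  return Promise.all(models.map((m) => callModel(m, prompt)));
}

// Merge: ask one model to fuse the candidate answers into a single response.
async function merge(prompt: string, candidates: string[], mergeModel: string): Promise<string> {
  const fusionPrompt =
    `Question:\n${prompt}\n\n` +
    candidates.map((c, i) => `Candidate answer ${i + 1}:\n${c}`).join('\n\n') +
    '\n\nFuse these candidates into a single, best answer.';
  return callModel(mergeModel, fusionPrompt);
}

// Usage (model names are placeholders):
// const answers = await beam('Explain beam search.', ['gpt-4', 'claude-3-opus', 'mistral-large']);
// const fused = await merge('Explain beam search.', answers, 'gpt-4');
```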

fredliu 2 years ago

With Beam, we can easily experiment with approaches such as Chain-of-Thought with Self-Consistency (CoT-SC) and other reasoning meta-frameworks, but with more manual control. I always had issues using LLMs' chat-driven interfaces to figure out and explore the problems I'm interested in, since conversations/chats are always linear while reasoning and working on ideas is structural. Beam seems to be a much better UX than the linear chat UX and saves me a lot of copy-and-paste and save-and-retry. Awesome work!

  • enricorosOP 2 years ago

    Yes, the only issue is token usage, which is obviously greater since we are sampling more of the solution space. But it's a compromise worth making to get GPT-4.5-level intelligence out of GPT-4.

    • keithc24 2 years ago

      Probably an even bigger jump, since the models have some amount of unique training data, they fact-check each other toward a more common “truth”, and hallucinations get weeded out.
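
For context on the CoT-SC comparison above: plain self-consistency samples one model several times and majority-votes the final answer, whereas Beam samples across different models and lets the user inspect or fuse the candidates. Below is a minimal sketch of the voting step; it reuses the hypothetical callModel helper from the earlier snippet, and the token figures in the closing comment are illustrative only.

```typescript
// Plain self-consistency (CoT-SC), sketched with the hypothetical callModel()
// from the earlier snippet: sample the same model N times, then majority-vote.
async function selfConsistency(prompt: string, model: string, n = 5): Promise<string> {
  const samples = await Promise.all(
    Array.from({ length: n }, () =>
      callModel(model, `${prompt}\nThink step by step, then end with "Answer: <answer>".`)
    )
  );
  // Extract the final answers and pick the most frequent one.
  const answers = samples.map((s) => s.match(/Answer:\s*(.*)$/m)?.[1]?.trim() ?? s.trim());
  const counts = new Map<string, number>();
  for (const a of answers) counts.set(a, (counts.get(a) ?? 0) + 1);
  return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

// Rough token math for the trade-off discussed above (illustrative numbers only):
// 4 beams x ~800 output tokens, plus one merge call that re-reads all candidates,
// costs roughly 5-6x the tokens of a single chat turn.
```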

fredliu 2 years ago

Awesome feature! Quick question: how do you choose which model to use when you "fuse" multiple beams back into one?

  • enricorosOP 2 years ago

    There's a combo box on the right side, and when you click on the "Add Merge" (green) button, the currently active model will be selected.

sdeshcloud 2 years ago

This is a very clever approach… I am super interested

  • enricorosOP 2 years ago

    One person on Discord has called this 'taking the idea of self-consistency forward to ensemble model usage'. I guess this is, technically, what this approach is about :)

arbiterFF 2 years ago

I can't believe how much quicker it is to get to an optimal, comprehensive outcome using Beam. This is a groundbreaking way of interacting with LLMs.

  • fredliu 2 years ago

    Exactly my thought. As mentioned in the other thread, chat's linear conversation style is not a fit for reasoning/exploration-type tasks, while Beam's fan-out -> select -> merge is a much better and more natural flow!

  • enricorosOP 2 years ago

    Same experience. Once you beam you look for it everywhere!
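
As a rough sketch of the "select" step in that fan-out -> select -> merge flow (again reusing the hypothetical callModel helper, and not Beam's actual code), a judge model can be asked to pick the strongest candidate before, or instead of, fusing:

```typescript
// 'Select' step: ask a judge model to pick the strongest candidate by index.
async function selectBest(prompt: string, candidates: string[], judgeModel: string): Promise<string> {
  const judgePrompt =
    `Question:\n${prompt}\n\n` +
    candidates.map((c, i) => `Candidate ${i + 1}:\n${c}`).join('\n\n') +
    `\n\nReply with only the number (1-${candidates.length}) of the best candidate.`;
  const reply = await callModel(judgeModel, judgePrompt);
  const idx = parseInt(reply.match(/\d+/)?.[0] ?? '1', 10) - 1;
  // Clamp the index so a malformed judge reply still returns a candidate.
  return candidates[Math.min(Math.max(idx, 0), candidates.length - 1)];
}
```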

keithc24 2 years ago

Wow, this is awesome. I tried Beam mode. It does produce better results!

  • enricorosOP 2 years ago

    Same. I like using Opus | GPT-4 | Gemini Pro (I don't have Ultra) | Mistral Large.

    • keithc24 2 years ago

      Interestingly, Mistral Large “wins” sometimes, and at least provides unique results in comparison.

molander 2 years ago

Huge fan, thank you for your hard work! This is outstanding!!
