Zephyr 141B, a Mixtral 8x22B fine-tune, is now available in Hugging Chat

huggingface.co

30 points by osanseviero 2 years ago · 12 comments

osanseviero (OP) 2 years ago

Zephyr 141B is a Mixtral 8x22B fine-tune. Here are some interesting details:

- Base model: Mixtral 8x22B, 8 experts, 141B total params, 35B activated params

- Fine-tuned with ORPO, a new alignment algorithm with no separate SFT step (hence much faster than DPO/PPO); a loss sketch follows this list

- Trained on 7K open data instances: high-quality, synthetic, multi-turn

- Apache 2.0 license
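For intuition, here is a minimal sketch of the ORPO objective as described in the paper (Hong et al., 2024), not the exact H4 training code: a single loss adds an odds-ratio preference term to the usual SFT loss, so no separate SFT stage or reference model is needed.

  # Hedged sketch of the ORPO loss; tensor names are illustrative.
  import torch
  import torch.nn.functional as F

  def orpo_loss(chosen_logps, rejected_logps, nll_chosen, lam=0.1):
      # chosen_logps / rejected_logps: mean per-token log-probs of the
      # chosen and rejected completions; nll_chosen: the standard SFT loss.
      # odds(y|x) = p / (1 - p), computed in log space for stability.
      log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
      log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))
      log_odds_ratio = log_odds_chosen - log_odds_rejected
      l_or = -F.logsigmoid(log_odds_ratio)   # odds-ratio preference term
      return nll_chosen + lam * l_or.mean()  # one combined objective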

Everything is open:

- Final Model: https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v...

- Base Model: https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1

- Fine-tune data: https://huggingface.co/datasets/argilla/distilabel-capybara-...

- Recipe/code to train the model: https://github.com/huggingface/alignment-handbook

- Open-source inference engine: https://github.com/huggingface/text-generation-inference

- Open-source UI code: https://github.com/huggingface/chat-ui

Have fun!
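If you want to try it outside Hugging Chat, here is a minimal local-inference sketch with transformers, assuming the full repo id behind the truncated link above is HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1 and that you have roughly 280 GB of GPU memory for the bf16 weights:

  # Sketch of local inference; the repo id is an assumption based on the
  # truncated link above. Requires a recent transformers release with
  # chat-message support in the text-generation pipeline.
  import torch
  from transformers import pipeline

  pipe = pipeline(
      "text-generation",
      model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
      torch_dtype=torch.bfloat16,
      device_map="auto",  # shard the 141B MoE across available GPUs
  )

  messages = [
      {"role": "system", "content": "You are Zephyr, a helpful assistant."},
      {"role": "user", "content": "Explain what a mixture-of-experts model is."},
  ]
  out = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
  print(out[0]["generated_text"][-1]["content"])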

adt 2 years ago

Added, thanks.

https://lifearchitect.ai/models-table/

mjewkes 2 years ago

My current favorite “LLM breaker” is below. GPT-4, Claude, and this model all fail it.

---

Apples are better than bananas. Cherries are worse than apples. Are cherries better than bananas?

  • cowanon44 2 years ago

    FWIW, Claude Opus (the paid model) gets the correct answer, and explains it well:

    Based on the information provided, we cannot definitively determine whether cherries are better than bananas. The statements given only provide relative comparisons between apples and bananas, and apples and cherries, but do not directly compare cherries and bananas.

    Here's what we know:

      1. Apples are better than bananas.
      2. Cherries are worse than apples.

    However, these statements do not necessarily imply a transitive relation. In other words, just because A is better than B, and C is worse than A, it doesn't always mean that C is worse than B.

    For example, if we assign numerical values to represent the quality of each fruit (higher numbers indicate better quality), we could have a scenario like this:

    - Apples: 8
    - Bananas: 6
    - Cherries: 7

    In this case, apples (8) are better than bananas (6), and cherries (7) are worse than apples (8), but cherries (7) are still better than bananas (6).

    Therefore, more information would be needed to determine the relative quality of cherries compared to bananas.
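    Claude's counterexample is easy to check mechanically. A few lines of Python (helper names are mine, not from the thread) confirm that both premises hold under those scores while the "cherries are worse than bananas" conclusion fails:

      # Verify the counterexample: both premises hold, yet cherries
      # still beat bananas, so the premises underdetermine the answer.
      quality = {"apples": 8, "bananas": 6, "cherries": 7}

      def better(a, b):
          return quality[a] > quality[b]

      assert better("apples", "bananas")    # premise 1
      assert better("apples", "cherries")   # premise 2 (cherries worse than apples)
      assert better("cherries", "bananas")  # the assumed conclusion fails here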

  • loudmax 2 years ago

    To be fair, a lot of humans fail that, including people who should know better.

    • mjewkes 2 years ago

      For sure. It's not a fair prompt at all. I'm super bullish on LLMs and am using GPT-4 in production right now. This stuff is magic.

      It's actually hard to find short, simple, "plain english" failure cases like the above.

      The "chain of reasoning" that the modern models deploy before the fail is funny too. This is GPT-4:

      ---

      To determine the relationship between cherries and bananas based on your statements, let's break it down:

        1. Apples are better than bananas.
        2. Cherries are worse than apples.
      
      From statement 1, we know apples rank higher than bananas. Statement 2 tells us cherries rank lower than apples. By this logic, since cherries are lower than apples, which are higher than bananas, it follows that cherries are also lower than bananas.

      Therefore, based on these comparisons, cherries are not better than bananas.

      • mjewkes 2 years ago

        Notably, if you ask it to transform the statements to formal logic, you get a correct response! This stuff is truly magic.

        https://chat.openai.com/share/81e45fef-a72b-4258-98d6-5c8190...
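        The shared transcript is truncated, but a formalization along these lines makes the indeterminacy explicit (this reconstruction is mine, not the linked chat; it assumes the z3-solver package is installed):

          # Encode "better than" as an integer order and show the query is
          # independent: each direction is satisfiable given the premises.
          from z3 import Ints, Solver, sat

          apples, bananas, cherries = Ints("apples bananas cherries")
          premises = [apples > bananas, apples > cherries]

          for query in (cherries > bananas, bananas > cherries):
              s = Solver()
              s.add(premises + [query])
              assert s.check() == sat  # consistent either way

          print("Neither answer follows; the premises underdetermine it.")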

        • anon373839 2 years ago

          This makes sense to me. If you think about the training data, texts working through problems using formal predicate logic are likely to be correct, and much more likely to be precise about what information is (or isn’t) contained in the propositions. So if you formulate the problem in this language, you’re prompting the model to sample from patterns that are more likely to give you the result you want. Whereas if you use regular English, it could be sampling from cooking blogs or who knows what.
