Mistral AI Launches New 8x22B MOE Model

twitter.com

379 points by varunvummadi 2 years ago · 160 comments

freeqaz 2 years ago

What's the easiest way to run this assuming that you have the weights and the hardware? Even if it's offloading half of the model to RAM, what tool do you use to load this? Ollama? Llama.cpp? Or just import it with some Python library?

Also, what's the best way to benchmark a model to compare it with others? Are there any tools to use off-the-shelf to do that?

  • fbdab103 2 years ago

    I think the llamafile[0] system works the best. Binary works on the command line or launches a mini webserver. Llamafile offers builds of Mixtral-8x7B-Instruct, so presumably they may package this one up as well (potentially a quantized format).

    You would have to confirm with someone deeper in the ecosystem, but I think you should be able to run this new model as is against a llamafile?

    [0] https://github.com/Mozilla-Ocho/llamafile
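A llamafile binary run with `--server` exposes the llama.cpp HTTP server, which includes an OpenAI-compatible chat endpoint. A minimal stdlib-only sketch of talking to it (the port, path, and payload shape below are based on llamafile's documented defaults; verify against your version):

```python
import json
import urllib.request

def build_chat_request(prompt, host="http://localhost:8080"):
    """Build an OpenAI-style chat completion request for a local llamafile server."""
    payload = {
        "model": "local",  # the server uses whatever weights it was launched with
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Assumes a llamafile is already running locally in server mode.
    req = build_chat_request("Explain mixture-of-experts in one sentence.")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same request works with curl; the Python wrapper is just for illustration.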

    • jart 2 years ago

      llamafile author here. I'm downloading Mixtral 8x22b right now. I can't say for certain it'll work until I try it, but let's keep our fingers crossed! If not, we'll be shipping a release as soon as possible that gets it working.

      My recent work optimizing CPU evaluation https://justine.lol/matmul/ may have come at just the right time. Mixtral 8x7b always worked best at Q5_K_M and higher, which is 31GB. So unless you've got 4x GeForce RTX 4090's in your computer, CPU inference is going to be the best chance you've got at running 8x22b at top fidelity.

      • moffkalast 2 years ago

        Correct me if I'm wrong, but in the tests I've run, the matmul optimizations only have an effect if there's no other BLAS acceleration. If one can at least offload the KV cache to cuBLAS or run with OpenBLAS it's not really used, right? At least I didn't see any speedup with that config when comparing that PR to the main llama.cpp branch.

        • jart 2 years ago

          The code that launches my code (see ggml_compute_forward_mul_mat) comes after CLBLAST, Accelerate, and OpenBLAS. The latter take precedence. So if you're not seeing any speedup in enabling them, it's probably because tinyBLAS has reached terms of equality with the BLAS. It's obviously nowhere near as fast as cuBLAS, but maybe PCIE memory transfer overhead explains it. It also really depends on various other factors, like quantization type. For example, the BLAS doesn't support formats like Q4_0 and tinyBLAS does.

    • noman-land 2 years ago

      +1 on llamafile. You can point it to a custom model.

  • varunvummadiOP 2 years ago

    The easiest is to use vllm (https://github.com/vllm-project/vllm) to run it on a Couple of A100's, and you can benchmark this using this library (https://github.com/EleutherAI/lm-evaluation-harness)
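As a rough sketch of the sizing this implies (the vLLM calls in the comments are illustrative, and the model id is an assumption since no official card exists yet):

```python
import math

def a100s_needed(n_params_b, bits=16, vram_gb=80, overhead=1.2):
    """Rough count of 80GB A100s needed to hold the weights,
    with ~20% headroom for KV cache and activations."""
    weight_gb = n_params_b * bits / 8  # 1B params at 16-bit = 2 GB
    return math.ceil(weight_gb * overhead / vram_gb)

if __name__ == "__main__":
    # ~141B params at fp16 -> ~282 GB of weights alone.
    print(a100s_needed(141))

    # Hedged sketch of the actual run (model id and args are assumptions):
    # from vllm import LLM, SamplingParams
    # llm = LLM(model="mistral-community/Mixtral-8x22B-v0.1",
    #           tensor_parallel_size=8)
    # print(llm.generate(["The capital of France is"],
    #                    SamplingParams(max_tokens=16)))
    #
    # Benchmarking via lm-evaluation-harness would then be something like:
    #   lm_eval --model vllm \
    #     --model_args pretrained=mistral-community/Mixtral-8x22B-v0.1 \
    #     --tasks hellaswag,mmlu
```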

    • sheepscreek 2 years ago

      In that regard, it’s even easier to use one Mac Studio with sufficient RAM and llama.cpp or even PyTorch for inference.

  • hmottestad 2 years ago

    LM Studio is a great way to test out LLMs on my MacBook: https://lmstudio.ai/

    Really easy to search huggingface for new models to test directly in the app.

    • LeoPanthera 2 years ago

      Make sure you get the prompt template set correctly, the defaults are wrong for a lot of models.

      • unifer1 2 years ago

        Could you explain how to do this properly? I've been having problems with the app and am wondering if this is why

        • LeoPanthera 2 years ago

          Look at the HuggingFace page for the model you are using. (The original page, not the page for the GGUF conversion, if necessary.) This will explain the chat format you need to use.
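As a concrete illustration of why the template matters, here is a hedged sketch of the Mistral-Instruct-style format as commonly documented on their model cards (always check the actual card; exact whitespace and BOS/EOS handling vary between models):

```python
def mistral_instruct_prompt(messages):
    """Format a chat as a Mistral-Instruct-style prompt:
    <s>[INST] user text [/INST] assistant text</s>
    This is the commonly documented form; verify against the model card."""
    out = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            out += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            out += f" {msg['content']}</s>"
    return out

# A model expecting this template will behave badly if fed, say, ChatML
# ("<|im_start|>user ...") instead -- which is the failure mode described above.
```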

  • bevekspldnw 2 years ago

    There is a user called TheBloke on Hugging Face; they release pre-quantized models pretty soon after the full-size drop. Just watch their page and pray you can fit the 4-bit in your GPU.

    I’m sure they are already working on it.

  • mritchie712 2 years ago

SushiHippie 2 years ago

[dupe] https://news.ycombinator.com/item?id=39986047

Which has the link to the tweet instead of the profile:

https://twitter.com/MistralAI/status/1777869263778291896

mlsu 2 years ago

8x22b. If this is as good as Mixtral 8x7b we are in for a wonderful time.

  • cchance 2 years ago

    I've heard command-r is first opensource to beat gpt4 in benchmarks

    • jxy 2 years ago

      It's "Command R+". "Command R" is a smaller model.

    • varunvummadiOP 2 years ago

      It beats the old GPT4 version in the lmsys benchmark, you can check it out here https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar... but Command R is commercially licensed. We can assume that Mistral will do a better job.

      • skissane 2 years ago

        > but Command R is commercially licensed

        It is licensed under CC-BY-NC-4.0. That license means you are free to use, modify and redistribute it, so long as you aren't doing so "commercially". What exactly counts as "commercial" use is a complex legal question, and the answer may vary from jurisdiction to jurisdiction (different courts may interpret the phrase differently). But, for example, if you are just using it at home for private experimentation on your own personal time, with no plans to make money from doing so (whether now or in the future), I think pretty much everyone will agree that counts as "non-commercial".

        Other cases – e.g., if a government agency uses the software to provide some government function, is that "non-commercial"? – are far less clear. Those are really the kind of questions you need to ask a lawyer (which I am not).

        • ryao 2 years ago

          I am not a lawyer, but lately, I have been wondering whether the contra proferentem rule interacts with these licenses.

          • column 2 years ago

            For anybody else not in the know : "Contra proferentem is a legal principle that suggests when there is ambiguity in the terms of a contract, the ambiguity should be resolved against the party that drafted the contract."

          • skissane 2 years ago

            I think the correct answer to your question is almost certainly some combination of "it depends on the jurisdiction" and also (in many cases) "nobody can be entirely sure because no court has considered the issue yet"

            There have been a handful of court decisions on what "non-commercial" use means – the Creative Commons legal case database records [0] records three cases involving non-commercial CC licenses in the US, one in Belgium, one in Israel, plus I also know of one in Germany [1] which their database seems to be missing. I don't know if any of them addressed the contra proferentem rule which you mention.

            The German and US cases on this topic appear contradictory – from what I understand, the German case assumed that all government use is commercial, interpreting "non-commercial" to basically mean "private home use", whereas two of the US cases (Great Minds v FedEx Office and Great Minds v Office Depot) were about use by commercial entities acting under contract to public school districts, and the holdings of those cases assume that government-operated schools are "non-commercial" (and furthermore, the commercial entities were engaging in "non-commercial" use, even though they were acting commercially, because they were doing so on behalf of a "non-commercial" customer).

            That said, all these cases have somewhat limited precedential value – the US cases are binding precedent in two federal judicial circuits (2nd and 9th) but have merely persuasive value in the remainder of the US; I don't know what the ultimate outcome of the German case was (Deutschlandradio said they were going to appeal but I don't know if they did and what the outcome was if they did), and German law doesn't view precedent as "binding" in quite the same sense that common law systems do anyway.

            [0] https://legaldb.creativecommons.org/cases/?keywords=&tags%5B...

            [1] https://www.techdirt.com/2014/03/27/german-court-says-creati... and if you can read German, here is the actual court judgement: https://netzpolitik.org/wp-upload/OLG-K%C3%B6ln-CC-NC-Entsch...

        • refulgentis 2 years ago

          I have a weird problem where I want to charge per month for you to use my app that allows you to use N different paid models and any llama.cpp model you want. I'm curious if you have any thoughts on what situation I'm in if it's one of 5 built-in local options highlighted in the app.

          Morally I feel 100% fine, because the app would be just as appealing without it, and subscribing means you get sync; you could theoretically not pay me and use Command R.

        • cyanydeez 2 years ago

          This website tends to move towards things that can make money.

          That's typically synonymous with commercial.

  • moralestapia 2 years ago

    You mean better, right?

    Why would you want another 8x7b, if you already have it ...

nazka 2 years ago

Off topic, but are we now back at the same performance as ChatGPT-4 at the time people said it worked like magic (meaning before the nerf to make it more politically correct, which made its performance crash)?

  • hmottestad 2 years ago

    I’ve been testing a lot of LLMs on my MacBook and I would say that all of them are far away from being as good as GPT-4, at any time. Many are as good as GPT-3 though. There are also a lot of models that are fine tuned for specific tasks.

    Language support is one big thing that is missing from open models. I’ve only found one model that can do anything useful with Norwegian, which has never been an issue with GPT-4.

    • Eisenstein 2 years ago

      Which ones have you tested? There were some huge ones released recently.

      • hmottestad 2 years ago

        Samantha, llama 2 pubmed, marcoroni, openchat, fashiongpt, falcon 180B, deepseek llm chat, orca 2, orca 2 alpaca uncensored, meditron, tigerbot, mixtral instruct, wizardcoder, gemma, nous hermes 2 solar, yarn solar 64k, nous hermes 2 yi, nous hermes 2 mixtral, nous hermes llama 2, starcoder2, hermes 2 pro mistral, norskgpt mistral and norskgpt llama.

        Nous Hermes 2 Solar is the best model for Norwegian that I've tried so far. It's much better than NorskGPT Mistral/Llama. I actually got it to make fairly decent summaries of news articles, though it wouldn't follow any stricter commands like producing 5 keywords in a JSON list. It kept producing more than 5 keywords, and if I doubled down on the restriction on the number of keywords it would start messing up the JSON.

        The best competitor to GPT-4 was falcon 180b, it's still terrible compared to GPT-4. Mixtral is my new favourite though, it's faster than falcon and in general as good or better. Though I would still pick GPT-4 over Mixtral any day of the week, it's leagues ahead of Mixtral.

        Tigerbot has a very interesting trait. It tends to disagree when you try to convince it that it's wrong.

        I haven't been able to test out the new 8x22 mixtral or command r plus. These are the next ones on my list!
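The "exactly 5 keywords as a JSON list" failure described above is at least easy to detect mechanically; a small validator sketch (the function name and retry strategy are just illustrative):

```python
import json

def valid_keyword_list(text, n=5):
    """Return True iff `text` parses as a JSON array of exactly n strings."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    return (isinstance(data, list)
            and len(data) == n
            and all(isinstance(k, str) for k in data))

# Typical use: re-generate until the model's output validates,
# rather than fighting the model with ever-stricter prompt wording.
```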

        • hmottestad 2 years ago

          Just tested out Command R+ with some niche SHACL constraint questions and it performs considerably worse than GPT-4. Might be a bit better than GPT-3.5 though, which is actually pretty amazing.

          • Eisenstein 2 years ago

            You need to use their beginning and end token scheme and set rep pen to 1 to get good quality out of cr+.

  • segmondy 2 years ago

    With open models, yes we are at the performance of at least the first release of ChatGPT 4.

    • sp332 2 years ago

      Could you recommend one or a few in particular?

      • sanjiwatsuki 2 years ago

        The current best open weights model is probably Cohere Command-R+. The memory requirements on it are quite high, though.

        • bevekspldnw 2 years ago

          I really want to see some benchmarks with performance weighted by energy use. I think Mistral 7B performance per watt would be the leader by a huge margin. On many zero-shot classification tasks I get equal performance from Mistral and bigger models.

zmmmmm 2 years ago

A pre-Llama3 race for everyone to get their best small models on the table?

  • moffkalast 2 years ago

    262 GB is not exactly small. But yes it seems they're all getting them out the door in case they end up being worse than llama-3 in which case it'll be too embarrassing to release later.

    • hmottestad 2 years ago

      Since it’s a MOE model it will only need to load a few of the 8 sub models into vram in order to answer a query. So it may look large, but I think a quantized model will easily fit on a Mac with 64GB of memory and maybe even a bit fewer bits and it’ll fit into 32GB.

      I think it might be the end for 24GB 4090 cards though :(

      • dragonwriter 2 years ago

        MOE models don’t, in practice, selectively load experts on activation (and if a runtime for them could be designed that would do that, it would make them perform worse, since the experts activated may differ from token to token, so you’d be churning a whole lot, swapping portions of the model into and out of VRAM). But they do less computation per token for their size than monolithic models, so you can often get tolerable performance on CPU or split between GPU/CPU at a ratio that would work poorly with a similarly-sized monolithic model.

        But, still, it's going to need 262GB for weights + a variable amount based on context without quantization, and 66GB+ at 4-bit quantization.
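The figures in this comment follow directly from bits per weight; a quick sketch of the arithmetic:

```python
def weight_memory_gb(n_params_b, bits):
    """Memory for the weights alone: params (in billions) * bits / 8 -> GB."""
    return n_params_b * bits / 8

# ~131B total params: 262 GB at 16-bit, 65.5 GB at 4-bit --
# matching the "262GB for weights ... 66GB+ at 4-bit" figures above.
# (Context/KV cache comes on top and scales with sequence length.)
```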

      • brandall10 2 years ago

        Unless something has changed, it needs to load the full 8 models at the same time. During inference it performs like a 2 x base model.

        Mixtral 8x7B @ 5-bit takes up over 30GB on my M3 Max. That's over 90GB for this at the same quantization. Realistically you probably need a 128GB machine to run this with good results.

        • fzzzy 2 years ago

          A 4 bit quant of the new one would still be about 70 gb, so yeah. Gonna need a lot more ram.

      • Kubuxu 2 years ago

        The 8x is misleading; there are 8 sets of weights (experts) per token and per layer. If it is similar to the previous MoE Mistral models, then two experts get activated per token per layer. This reduces the amount of compute and memory bandwidth you need to perform inference but doesn't reduce the amount of memory you need as you cannot load the experts into GPU memory on demand without performance impact.

      • mark_l_watson 2 years ago

        I think you are an optimist here. I can barely run mixtral-8x-7B on my M2 Pro 32G Mac, but I am grateful to be able to run it at all.

  • swyx 2 years ago

    this is likely very true given llama 3 is rumored to release in the next 2 weeks

nen-nomad 2 years ago

Mixtral 8x7b has been good to work with, and I am looking forward to trying this one as well.

ZeljkoS 2 years ago

Here is the unofficial benchmark: https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1/...

  • bevekspldnw 2 years ago

    Wish it had GPT-4, that’s the one to beat still.

    • GuB-42 2 years ago

      It is there, not for all the benchmarks, but for those where it is included, GPT-4 scores much higher.

      Not surprising since GPT-4 is still state-of-the-art and much bigger. Where Mistral has been particularly impressive is when you take the size of the model into account.

      • mirekrusin 2 years ago

        GPT-4 is instruct tuned model, of course it's going to score higher, apples and oranges.

deoxykev 2 years ago

4 bit quants should require 85GB VRAM, so this will fit nicely on 4x 24G consumer GPUs, plus some leftover for KV cache optimization.

  • qeternity 2 years ago

    4bit should take up less than this, there are quite a few shared parameters between experts.

    But unless you’re running bs=1 it will be painful vs 8x GPU as you’re almost certain to be activating most/all of the experts in a batch.

  • hedgehog 2 years ago

    I've found the 2 bit quant of Mixtral 8x7B is usable for some purposes with an 8GB GPU. I'm curious how this new model will work in similar cheap 8-16GB GPU configurations.

    • reissbaker 2 years ago

      16GB will be way too small unfortunately — this has over 3x the param count, so at best you're looking at a 24GB card with extreme 2bit quantization.

      Really though if you're just looking to run models personally and not finetune (which requires monstrous amounts of VRAM), Macs are the way to go for this kind of mega model: Macs have unified memory between the GPU and CPU, and you can buy them with a lot of RAM. It'll be cheaper than trying to buy enough GPU VRAM. A Mac Studio with 192GB unified RAM is under $6k — two A6000s will run you over $9k and still only give you 96GB VRAM (and God help you if you try to build the equivalent system out of 4090s or A100s/H100s).

      Or just rent the GPU time as needed from cloud providers like RunPod, although that may or may not be what you're looking for.

      • timschmidt 2 years ago

        Reasonably priced Epyc systems with up to 12 memory channels and support for several TB of system memory are now available. Used datacenter hardware is even less expensive. They are on par with the memory bandwidth available to any one of the CPU, GPU, or NPU in the highest end Macs, but capable of driving MUCH more memory. And much simpler to run Linux or Windows on.

        • reissbaker 2 years ago

          I would be very curious to see pricing on Epyc systems with terabytes of RAM that cost less than $6k including the RAM...

        • hmottestad 2 years ago

          Do you have any feel for the performance compared to the M3 Max?

          • Manabu-eo 2 years ago

            LLM inference is mostly memory bound. A 12-channel Epyc Genoa with 4800MT/s DDR5 RAM clocks in at 460.8 GB/s. That's more than the 400GB/s of the M3 Max, only part of which is accessible to the CPU.

            And on the Epyc system you can plug in much more memory when you need capacity, plus PCI-E GPUs for when you need less memory but more speed.

            Threadripper PRO is only 8-channel, but with memory overclocking it might reach numbers similar to those too.
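The bandwidth figures in this subthread come straight from channels x transfer rate x 8 bytes per 64-bit transfer; a quick sketch:

```python
def ddr_bandwidth_gbs(channels, mt_per_s, bus_width_bits=64):
    """Theoretical DDR bandwidth: channels * (mega)transfers/s * bytes per transfer.
    Input is in MT/s, output in GB/s."""
    return channels * mt_per_s * (bus_width_bits / 8) / 1e3

# 12-channel DDR5-4800 (Epyc Genoa): 460.8 GB/s.
# 2-channel DDR5-4800 (consumer Ryzen): 76.8 GB/s -- the gap discussed below.
```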

            • hedgehog 2 years ago

              I'm curious how the newer consumer Ryzens might fare. With LPDDR5X they have >100 GB/s memory bandwidth and the GPUs have been improved quite a bit (16 TFLOPS FP16 nominal in the 780M). There are likely all kinds of software problems but setting that aside the perf/$ and perf/watt might be decent.

              • cjbprime 2 years ago

                Consumer Ryzens only have two-channel memory controllers. Two dual-rank (double sided) DIMMs per channel, which you would need to use to get enough RAM for LLMs, drops the memory bandwidth dramatically -- almost all the way back down to DDR4 speeds.

                • timschmidt 2 years ago

                  Yup. Strix Halo will change this, with a 256bit memory bus (4 channel) which CPU and GPU have access to. However it is only likely to be available in laptop designs and probably with soldered-down RAM to reduce timing and board space issues. So it won't be easy to get enough memory for large LLMs with either. But it should be faster than previous models for LLM work.

                • hedgehog 2 years ago

                  For consumer Ryzen to pencil out it would require a cluster of APU-equipped machines with the model striped across them. Given say 16GB of model per machine and 60GBps actual memory bandwidth @ $500 it's favorable vs A100s if the software is workable (which my guess is it's not today due to AMD's spotty support). This is for inference, training probably would be too slow due to interconnect overhead.

            • sliken 2 years ago

              If Epycs are too pricey, there's the Threadripper Pro with 8 channels, the AMD Siena/8000 series with 6 channels, and Threadripper with 4 channels.

            • hmottestad 2 years ago

              That's interesting. It's about the same speed as the M3 Max then.

              Have you tested it yourself?

              • Manabu-eo 2 years ago

                Nope, but this guy has a similar build: https://www.reddit.com/r/LocalLLaMA/comments/1bt8kc9/compari...

                It seems to reach only a little above half the theoretical speed, and scale only up to 32 threads for some reason. Might be a temporary software limitation or something more fundamental.

              • timschmidt 2 years ago

                Should be at least twice the speed of the M3 Max, as the M3 CPU or GPU only get about half the memory bandwidth available to the package each. M3 Max can't take full advantage of its memory bandwidth unless CPU, GPU, and NPU are all working at the same time.

                • hmottestad 2 years ago

                  I tried looking for some info on this but could only find the M1 Max review over at anandtech that managed to push 200 GB/s when using multiple cores on the CPU, but couldn’t really get any numbers for just the GPU that seemed realistic.

                  Do you have a source for the GPU only having access to half the bandwidth of the memory?

      • dannyw 2 years ago

        You can QLoRA decent models on 24GB VRAM. There’s also optimised kernels like Unsloth that are really VRAM efficient and good for hobbyists.

        • reissbaker 2 years ago

          Yes, but I still don't think you'll be able to run Mixtral 8x22b with 16GB VRAM, or QLoRA it, even with Unsloth. It's much bigger than the original Mixtral.

    • aydyn 2 years ago

      AFAIK, 2-bit quant leads to too much loss of performance, such that you're better off using a different smaller model altogether. See here:

      https://www.reddit.com/r/LocalLLaMA/comments/18ituzh/mixtral...

    • cjbprime 2 years ago

      Wouldn't expect that to work at all.

      • hedgehog 2 years ago

        Ollama (which wraps llama.cpp) supports splitting a model across devices so you get some acceleration even on models too big to fit entirely in GPU memory.
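With llama.cpp-based runtimes, the split is typically controlled by how many transformer layers are offloaded to the GPU (llama-cpp-python exposes this as `n_gpu_layers`; the per-layer size below is a made-up illustrative number, not a measurement):

```python
def layers_that_fit(vram_gb, per_layer_gb, reserve_gb=1.0):
    """How many transformer layers fit in VRAM, keeping headroom for the KV cache."""
    usable = max(vram_gb - reserve_gb, 0.0)
    return int(usable // per_layer_gb)

if __name__ == "__main__":
    # Hypothetical numbers: an 8GB card and ~0.6 GB per quantized layer.
    n = layers_that_fit(8, 0.6)
    # Hedged sketch of how you'd then use it with llama-cpp-python:
    # from llama_cpp import Llama
    # llm = Llama(model_path="mixtral-8x22b.Q2_K.gguf", n_gpu_layers=n)
    print(n)
```

Layers that don't fit stay in system RAM and run on the CPU, which is the partial-acceleration behavior described above.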

zone411 2 years ago

Very important to note that this is a base model, not an instruct model. Instruct fine-tuned models are what's useful for chat.

  • haolez 2 years ago

    What's the feeling of playing with a powerful base model? Will it just complete the prompt text like a continuation of it?

    • MPSimmons 2 years ago

      Generally, yes, it literally just tries to predict the next token again and again and again.

      This model is apparently surprisingly good at chat, even though it is a base model, and will take part in it to some extent. It should be really interesting once it's fine-tuned.
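That next-token loop can be shown in miniature with a toy stand-in for the model (everything here is illustrative; a real base model scores the whole vocabulary at each step):

```python
def greedy_continue(prompt_tokens, next_token_scores, n_new, eos=None):
    """The base-model loop in miniature: repeatedly append the highest-scoring
    next token. `next_token_scores(tokens) -> dict[token, score]` stands in
    for the real model's forward pass."""
    tokens = list(prompt_tokens)
    for _ in range(n_new):
        scores = next_token_scores(tokens)
        best = max(scores, key=scores.get)
        if best == eos:
            break
        tokens.append(best)
    return tokens

# Toy "model" that just continues an a/b alternation:
toy = lambda toks: {"a": 0.1, "b": 0.9} if toks[-1] == "a" else {"a": 0.9, "b": 0.1}
```

Instruct tuning doesn't change this loop; it changes which continuations the model scores highly, so the completion looks like a reply instead of more prompt.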

talsperre 2 years ago

Right on time as LLama 3 is released.

  • jimmySixDOF 2 years ago

    And the same day Google Gemini Pro gets almost completely open long-context multimodal access and OpenAI upgrades GPT4-Turbo. It was a big day in general for news drops, that's for sure!

abdullahkhalids 2 years ago

Why are some of their models open, and others closed? What is their strategy?

  • Jackson__ 2 years ago

    My personal speculation is that their closed models are based on other companies' models.

    For example on EQbench[0], Miqu[1], a leaked continued pretrain based on LLama2, performs extremely similar to the mistral medium model their API offers.

    Maybe they're thinking it'd be bad PR for them to release models they didn't create from scratch, or there is some contractual obligation preventing the release.

    [0]https://eqbench.com/index.html

    [1]https://huggingface.co/miqudev/miqu-1-70b

    • moffkalast 2 years ago

      That's quite likely, some have also speculated that Mistral 7B got some EU grant funding that stipulated it had to be openly released later, and Mixtral is based on Mistral 7B so it would likely be subject to the same terms. I haven't found any source to substantiate it though.

  • unraveller 2 years ago

    Mistral have stated they want to chase the fine-tune dollar to support le research. We should get thrown a bone of hard to tune mid-range stuff occasionally. Especially when big announcements about small models are expected later in the week (llama3) or when haiku is stealing the thunder from mixtral 8x7b.

  • kvmet 2 years ago

    It's gotta be either perceived value or training data/licensing restrictions.

  • blackeyeblitzar 2 years ago

    I am not sure why some are open and some are closed - if I had to speculate, it’s perhaps that the commercial models help fund the team. They come with safety features built-in as well as API-based access (instead of needing to self-host). They word their mission (https://mistral.ai/company/#missions) as follows:

    > Our mission is to make frontier AI ubiquitous, and to provide tailor-made AI to all the builders. This requires fierce independence, strong commitment to open, portable and customisable solutions, and an extreme focus on shipping the most advanced technology in limited time.

wkat4242 2 years ago

Weird, the last post I see at that link is from the 8th of December 2023 and it's not about this.

Edit: Ah, it's the wrong link. https://news.ycombinator.com/item?id=39986047

Thanks SushiHippie!

intellectronica 2 years ago

It's weird that more than a day after the weights dropped, there still isn't a proper announcement from Mistral with a model card. Nor is it available on Mistral's own platform.

ein0p 2 years ago

To this day 8x7b Mixtral remains the best model you can run on a single 48GB GPU. This has the potential to become the best model you can run on two such GPUs, or on an MBP with maxed out RAM, when 4-bit quantized.

varunvummadiOP 2 years ago

They just announced their new model on Twitter, which you can download using torrent

aurareturn 2 years ago

Might be a dumb question but does this mean this model has 176B params?

  • idiliv 2 years ago

    In Mixtral 8x7B, the 8 means that the model uses Mixture-of-Experts (MoE) layers with 8 experts. The 7B means that if you were to remove 7 of the 8 experts in each layer, then you would end up with a 7B model (which would have exactly the same architecture as Mistral 7B). Therefore, a 1x7B model has 7B params. An 8x7B model has 1 * 7B + (8-1) * sz_expert params, where sz_expert is some constant value that the MoE layers increase by when adding one expert. In the case of Mixtral 8x7B the model has 46.3B params, so sz_expert ≈ 5.6B.

    If these assumptions port over to 8x22B, then 8x22B has, at 281GB, sz_expert ≈ 13.8B.
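The arithmetic in this comment is easy to reproduce (using the comment's own figures; the follow-up replies below recheck the 8x22B number):

```python
def expert_increment(total_b, base_b, n_experts=8):
    """Per-expert parameter increment implied by
    total = base + (n_experts - 1) * sz_expert, all in billions."""
    return (total_b - base_b) / (n_experts - 1)

# Mixtral 8x7B: (46.3 - 7) / 7  ~ 5.6B per extra expert.
# 8x22B, ~140.5B total: (140.5 - 22) / 7 ~ 16.9B.
```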

    • KTibow 2 years ago

      I tried to check this for myself.

      I agreed for the first one, (46.3 - 7) / 7 = 5.61b.

      The second one doesn't match up, (281 - 22) / 7 = 37b or (140.5 - 22) / 7 = 16.92b. Am I doing something wrong?

      • idiliv 2 years ago

        Just tried this again and I also arrive at 16.92B. Not sure what I did wrong the first time, thanks for double-checking this!

    • idiliv 2 years ago

      Oh, and to answer your actual question: assuming that the model is released with 16 bits per parameter, then it has 281GB / (16 bits per parameter) = 140.5B parameters.

  • hovering_nox 2 years ago

    8x7 had 46B or so.

resource_waste 2 years ago

What is the excitement around models that arent as good as llama?

This is clearly an inferior model that they are willing to share for marketing purposes.

If it was an improvement over llama, sure, but it seems like just an ad for bad AI.

  • Me1000 2 years ago

    Mixtral 8x7B was way better than llama2 70b and used less RAM and compute at the same time. This model is way better than llama.

    In fact I would go as far as saying llama2 isn’t that good compared to some of the most recent models.

  • jeppebemad 2 years ago

    We use their earlier Mixtral model because it outperforms llama for our use case. They do not release full models for marketing purposes, though it definitely grabs attention! You may need to revise your views.

  • cma 2 years ago

    It beats llama on the benchmark posted below (though maybe it leaked into the training data). But you can also run it on cheaper, split-up hardware with less individual VRAM than the big llama needs.

  • zone411 2 years ago

    What makes you think it's not as good as LLaMA? It's likely much better. There are multiple open-weight models that are better than LLaMA 2 out there already.

swalsh 2 years ago

Is this Mistral large?

  • Jackson__ 2 years ago

    Unlikely, this model has a max sequence length of 65k, while mistral large is 32k.

  • varunvummadiOP 2 years ago

    Not sure, trying to download the torrent and checking it out

    • fbdab103 2 years ago

      For those of us without twitter, how many GB is the model?

      • KTibow 2 years ago

        (hope this isn't against rules but) If you don't have Twitter, the magnet link is

          magnet:?xt=urn:btih:9238b09245d0d8cd915be09927769d5f7584c1c9&dn=mixtral-8x22b&tr=udp%3A%2F%2Fopen.demonii.com%3A1337%2Fannounce&tr=http%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
      • confused_boner 2 years ago

        262 gb

        • fbdab103 2 years ago

          Ooof. I really need to pick up another HD, these model sizes are killer.

          Lacking a godly GPU, I will probably hold off for a quantized version which has the potential to run okish on CPU or my modest GPU, but really appreciate the info.

stainablesteel 2 years ago

has anyone had success making an auto-gpt concept for mistral/llama models? i haven't found one

angilly 2 years ago

The lack of a corresponding announcement on their blog makes me worry about a Twitter account compromise and a malicious model. Any way to verify it’s really from them?

tjtang2019 2 years ago

What are the advantages compared to GPT? Looking forward to using it!

  • qball 2 years ago

    >What are the advantages compared to GPT?

    It actually does what you tell it, and won't try to silently change your prompt to conform to a specific flavor of Californian hysterics, which is what OpenAI's products do.

    Also, since it's a local model, your queries aren't being datamined nor can access to the service be revoked on a whim.
