Mistral releases Pixtral 12B, its first multimodal model

techcrunch.com

163 points by jerbear4328 a year ago · 43 comments

buran77 a year ago

The "Mistral Pixtral multimodal model" really rolls off the tongue.

> It’s unclear which image data Mistral might have used to develop Pixtral 12B.

The days of free web scraping, especially for the richer sources of material, are almost gone, with everything from technical measures (API restrictions) to legal ones (copyright) building deep moats. I also wonder what they trained it on. They're not Meta or Google with endless supplies of user content, or exclusive contracts with the Reddits of the internet.

  • simonw a year ago

    What do you mean by copyright measures? Has anything changed on that front in the last two years?

    My hunch is that most AI labs are already sitting on a pretty sizable collection of scraped image data - and that data from two years ago will be almost as effective as data scraped today, at least as far as image training goes.

    • dartos a year ago

      The issue with image models is that their style becomes identifiable and stale quite quickly, so you’ll need a fresh intake of different, newer styles every so often, and that’s going to be harder and harder to get.

      • GaggiX a year ago

        The style becoming identifiable and stale has mostly to do with CFG and almost nothing to do with the dataset; the heavy use of CFG by most models trades diversity for coherence. You don't need a constant intake of new images and styles; it's like saying that an image created two years ago is stale because it doesn't follow a new style or something.

        Also Pixtral is not a text-to-image model.

        • p0rkbelly a year ago

          There is the problem of literal style, though. The aesthetics of, say, clothes do evolve over time: not big year-to-year changes, but every 3-5 years? Sure. Just laughing at the thought of a model where any generated image is, say, stuck in 1990s grunge attire.

        • esafak a year ago

          CFG for Classifier-Free Guidance?

          • GaggiX a year ago

            Exactly, https://arxiv.org/abs/2207.12598

            Jonathan Ho, one of the authors of the CFG paper, now works for Ideogram, and Ideogram 2 is one of the very few models (or perhaps the only one) where I don't see the artifacts caused by CFG; maybe he has achieved a breakthrough.
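
            For anyone unfamiliar with the acronym, here is a minimal sketch of the guidance step being discussed; `denoise`, `prompt_emb`, and `null_emb` are illustrative placeholders for the diffusion model call and its conditioning, not any particular library's API:

              # One classifier-free guidance step (Ho & Salimans, linked above).
              # guidance_scale is the knob that trades diversity for coherence.
              def cfg_step(denoise, x_t, t, prompt_emb, null_emb, guidance_scale=7.5):
                  eps_uncond = denoise(x_t, t, null_emb)    # prediction with an empty prompt
                  eps_cond = denoise(x_t, t, prompt_emb)    # prediction with the actual prompt
                  # scale 1.0 disables guidance; large scales give the familiar "CFG look"
                  return eps_uncond + guidance_scale * (eps_cond - eps_uncond)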

      • Eisenstein a year ago

        > Built on one of Mistral’s text models, Nemo 12B, the new model can answer questions about an arbitrary number of images of an arbitrary size given either URLs or images encoded using base64, the binary-to-text encoding scheme. Similar to other multimodal models such as Anthropic’s Claude family and OpenAI’s GPT-4o, Pixtral 12B should — at least in theory — be able to perform tasks like captioning images and counting the number of objects in a photo.

        This is not a diffusion model -- it doesn't create images, it answers questions.
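
        For a concrete picture of the "URLs or base64" part, a request to an OpenAI-style chat endpoint typically looks roughly like the sketch below. The endpoint, model id, and field names here are illustrative assumptions, not Mistral's documented API:

          # Send a local image as a base64 data URL to a hypothetical chat endpoint.
          import base64, requests

          with open("photo.png", "rb") as f:
              b64 = base64.b64encode(f.read()).decode()

          payload = {
              "model": "pixtral-12b",  # placeholder model id
              "messages": [{
                  "role": "user",
                  "content": [
                      {"type": "text", "text": "How many birds are in this photo?"},
                      {"type": "image_url",
                       "image_url": {"url": f"data:image/png;base64,{b64}"}},
                  ],
              }],
          }
          resp = requests.post("https://api.example.com/v1/chat/completions",
                               headers={"Authorization": "Bearer <API_KEY>"},
                               json=payload)
          print(resp.json()["choices"][0]["message"]["content"])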

      • namlem a year ago

        Train LoRAs for models that can take them.
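
        For readers new to the term, a LoRA adds a small trainable low-rank update on top of a frozen weight matrix. A minimal PyTorch sketch of the idea, not any particular trainer's implementation:

          import torch
          import torch.nn as nn

          class LoRALinear(nn.Module):
              """Frozen base layer plus a trainable low-rank update: base(x) + (alpha/r) * x A^T B^T."""
              def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
                  super().__init__()
                  self.base = base
                  for p in self.base.parameters():
                      p.requires_grad = False      # original weights stay frozen
                  self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
                  self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: starts as a no-op
                  self.scale = alpha / r

              def forward(self, x):
                  return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)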

        • dartos a year ago

          The issue is getting the data on newer aesthetic styles.

          The more platforms lock down access to their data, the harder it’ll be for models to stay up to date on art trends.

          We just haven’t had image gen around long enough to witness a major style change, like the shift from the skeuomorphic iPhone icons of old to the modern flat ones.

      • whimsicalism a year ago

        solvable without additional images

        • dartos a year ago

          It’s literally not.

          If an artist born today develops their own style that takes the world by storm in 20 years, the image generators of the time (for this thought experiment, imagine we’re using the same image gen techniques as today) would not know about it. They wouldn’t be able to replicate it until they get enough training data on that style.

  • bronco21016 a year ago

    At what point does an agent sitting at a browser collecting information differ from a human?

    I have multiple ad-blockers running; how am I different from a bot scouring the “free” web? I get the idea of copyright and creators wanting to be paid for their content. However, I think there are plenty of human users out there not “paying” for “free” content either. Which one is the greater loss of revenue? A collection of over a million humans? Or 100 or so corporate bots?

    • a2128 a year ago

      Humans use Google Chrome from their home IP address that isn't on any blacklists, and they're always happy to make an account and download an app instead of accessing a website. Or at least that's what companies think humans are.

  • GaggiX a year ago

    >The days of free web scraping especially for the richer sources of material are almost gone

    I would say the opposite: it has never been easier to collect a huge amount of data, particularly if you have a specific target. You don't even need to write a line of code if you are good at explaining to Claude 3.5 Sonnet what you want to achieve and the details.

  • jazzyjackson a year ago

    You don't need a contract with Reddit to scrape it; you can just add `.json` to any URL and you'll get the entire thread as one object.
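
    For example, a minimal sketch (the thread URL is made up, and Reddit throttles default user agents, so a descriptive User-Agent header is assumed):

      import requests

      url = "https://www.reddit.com/r/MachineLearning/comments/abc123/some_thread/"
      resp = requests.get(url.rstrip("/") + ".json",
                          headers={"User-Agent": "research-scraper/0.1"})
      thread = resp.json()            # a two-element list: [post listing, comment listing]
      post = thread[0]["data"]["children"][0]["data"]
      print(post["title"])
      for child in thread[1]["data"]["children"]:
          if child["kind"] == "t1":   # t1 = comment
              print("-", child["data"]["body"][:80])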

  • htrp a year ago

    There are torrents all over the internet of AI training data for images and video...

    img2dataset also exists
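
    For reference, img2dataset exposes a Python entry point along these lines; the parameter names below are from memory, so check the project README before relying on them:

      from img2dataset import download

      # Download and resize every image listed in urls.txt into ./images as webdataset shards.
      download(
          url_list="urls.txt",
          output_folder="images",
          output_format="webdataset",
          image_size=256,
          processes_count=8,
          thread_count=32,
      )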

reissbaker a year ago

Couple notes for newcomers:

1. This is a VLM, not a text-to-image model. You can give it images, and it can understand them. It doesn't generate images back.

2. It seems like Pixtral 12B benchmarks significantly below Qwen2-VL-7B [1], so if you want the best local model for understanding images, probably use Qwen2. If you want a large open-source model, Qwen2-VL-72B is most likely the best option.

1: https://qwenlm.github.io/blog/qwen2-vl/

  • Jackson__ a year ago

    >If you want a large open-source model, Qwen2-VL-72B is most likely the best option.

    Only the 2B and 7B have been "open sourced". From your link:

    >We opensource Qwen2-VL-2B and Qwen2-VL-7B with Apache 2.0 license, and we release the API of Qwen2-VL-72B!

aucisson_masque a year ago

Mistral being more open than 'openai' is kind of a meme. How can a company call itself open while it refuses to openly distribute its product when competitors are actually doing it?

ChrisArchitect a year ago

Related earlier:

New Mistral AI Weights

https://news.ycombinator.com/item?id=41508695

azinman2 a year ago

I’d love to know how much money Mistral is taking in versus spending. I’m very happy for all these open weights models, but they don’t have Instagram to help pay for it. These models are expensive to build.

wruza a year ago

A question for SD LoRA trainers: is this usable for making captions, and what are you using, apart from BLIP?

Also, can your model of choice understand your requests to include/omit particular nuances of an image?

  • Jackson__ a year ago

    I like Qwen2-VL 7B because it outputs shorter captions with less fluff. But if you need to do anything advanced that relies on reasoning and instruction following, the model completely falls flat on its face.

    For example, I have a couple of way-too-wordy captions made with another captioner, which I'd like to cut down to the essentials while correcting any mistakes. Qwen2 completely ignores the image with this approach and focuses only on the given caption, which makes it unable to even remotely fix issues in said caption.

    I am really hoping Pixtral will be better for instruction following. But I haven't been able to run it because they didn't prioritize transformers support, which in turn has hindered the release of any quantized versions to make it fit on consumer hardware.

  • AuryGlenz a year ago

    I’m no expert but Florence2 has been my go-to. It’s pretty great at picking up art styles and IP stuff - “The image depicts Goku from the anime series Dragonball Z…”

    I don’t believe you can really prompt it, though; the other models that I could prompt didn’t work well on that front anyway.

    TagGui is an easy way to try out a bunch of models.
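
    For what it's worth, Florence-2 is driven by task tokens rather than free-form prompts, which matches the "can't really prompt it" observation above. A rough transformers sketch, assuming the microsoft/Florence-2-large checkpoint (it needs trust_remote_code):

      from PIL import Image
      from transformers import AutoModelForCausalLM, AutoProcessor

      model_id = "microsoft/Florence-2-large"
      model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
      processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

      image = Image.open("sample.png").convert("RGB")
      task = "<DETAILED_CAPTION>"      # a task token, not a free-form prompt
      inputs = processor(text=task, images=image, return_tensors="pt")
      ids = model.generate(input_ids=inputs["input_ids"],
                           pixel_values=inputs["pixel_values"],
                           max_new_tokens=256, num_beams=3)
      raw = processor.batch_decode(ids, skip_special_tokens=False)[0]
      print(processor.post_process_generation(raw, task=task, image_size=image.size))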

    • wruza a year ago

      Yeah, BLIP mostly ignores the prompt too. I tried to disassemble it and feed in my prompts, to no avail. I did find that the default kohya GUI arguments are not even remotely the best. Here are my args:

        finetune/make_captions.py ... \
          --num_beams=12 \
          --top_p=0.9 \
          --max_length=75 \
          --min_length=24 \
          --beam_search \
          ...
      
      With this, I very often just take its caption as-is, or add very little (a plain-transformers equivalent is sketched below).

      > TagGui

      Oh, interesting, thanks!
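
      The kohya script above wraps BLIP, so roughly the equivalent call in plain transformers looks like the sketch below; the checkpoint id is an assumption, and the beam/length values mirror the args above:

        from PIL import Image
        from transformers import BlipProcessor, BlipForConditionalGeneration

        ckpt = "Salesforce/blip-image-captioning-large"
        processor = BlipProcessor.from_pretrained(ckpt)
        model = BlipForConditionalGeneration.from_pretrained(ckpt)

        image = Image.open("sample.png").convert("RGB")
        inputs = processor(image, return_tensors="pt")
        out = model.generate(**inputs, num_beams=12, max_length=75, min_length=24)
        print(processor.decode(out[0], skip_special_tokens=True))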

Flockster a year ago

Could this be used for a self-hosted handwritten text recognition instance?

Like writing on an ePaper tablet, exporting the PDF, and feeding it into this model to extract todos from notes, for example.

Or what would be the SotA for this application?

edude03 a year ago

12B is pretty small, so I doubt it’ll be anywhere close to InternVL2. However, Mistral does great work, and this model is likely still useful for on-device tasks.

  • Jackson__ a year ago

    It appears to be slightly worse than Qwen2-VL 7B, a model almost half its size, if you look at Qwen's official benchmarks instead of Mistral's.

    https://xcancel.com/_philschmid/status/1833954941624615151

    • kaoD a year ago

      But Qwen is not multimodal, or is it?

      • Jackson__ a year ago

        https://qwen2.org/vl/

        >Qwen2-VL is the latest addition to the vision-language models in the Qwen series, building upon the capabilities of Qwen-VL. Compared to its predecessor, Qwen2-VL offers:

        >State-of-the-Art Image Understanding

        >Extended Video Comprehension

        Besides, it'd have been pretty silly for them to mention it on their slides if it wasn't.

  • jazzyjackson a year ago

    I've found Llama 3.1 8B to be effective at transforming unstructured text into structured data, now that LM Studio accepts a JSON schema parameter.

    As a general-knowledge chatbot it doesn't know much, of course, but it's a good worker bee.
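
    A rough sketch of that setup, assuming LM Studio's local OpenAI-compatible server on its default port and OpenAI-style structured-output fields (the exact parameter shape should be checked against the LM Studio docs):

      import requests

      schema = {
          "name": "todo_list",
          "schema": {
              "type": "object",
              "properties": {"todos": {"type": "array", "items": {"type": "string"}}},
              "required": ["todos"],
          },
      }

      resp = requests.post(
          "http://localhost:1234/v1/chat/completions",   # LM Studio's default local server
          json={
              "model": "meta-llama-3.1-8b-instruct",     # whatever model is loaded locally
              "messages": [{"role": "user",
                            "content": "Extract the todos from: 'buy milk, then call Bob tomorrow'"}],
              "response_format": {"type": "json_schema", "json_schema": schema},
          },
      )
      print(resp.json()["choices"][0]["message"]["content"])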
