'Western Qwen': IBM Wows with Granite 4 LLM Launch and Hybrid Mamba/Transformer

venturebeat.com

83 points by 2bluesc 2 months ago · 25 comments

danielhanchen 2 months ago

I made some dynamic GGUFs for the 32B MoE model! Try:

./llama.cpp/llama-cli -hf unsloth/granite-4.0-h-small-GGUF:UD-Q4_K_XL

Also a support agent finetuning notebook with granite 4: https://colab.research.google.com/github/unslothai/notebooks...
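
If you'd rather hit it over HTTP, here's a minimal sketch assuming you serve the same GGUF with ./llama.cpp/llama-server -hf unsloth/granite-4.0-h-small-GGUF:UD-Q4_K_XL on its default port 8080 (the prompt and max_tokens are just placeholders):

import json, urllib.request

# Query llama-server's OpenAI-compatible chat endpoint
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({
        "messages": [{"role": "user", "content": "Explain MoE in one sentence."}],
        "max_tokens": 128,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])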

incomingpain 2 months ago

"Small" is 32b a9b for 19GB @ Q4_K_XL

20GB @ 100,000 context.
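
That 19GB figure roughly checks out as a back-of-envelope, if you assume UD-Q4_K_XL averages somewhere near 4.7 bits per weight (the exact average is my guess):

# Rough size estimate: 32B params at an assumed ~4.7 bits/weight average
params = 32e9
bits_per_weight = 4.7
print(f"{params * bits_per_weight / 8 / 1e9:.1f} GB")  # ~18.8 GB, close to the ~19GB on disk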

But for some reason LM Studio isn't loading it onto the GPU for me?

I just updated to 0.3.28 and it still won't load onto the GPU.

Switched from Vulkan to ROCm, and now it's working properly.

https://docs.unsloth.ai/new/ibm-granite-4.0

Fantastic work from the Unsloth folks, as usual.

Running it in Roo Code, it's using more like 26GB of VRAM.

~30 TPS.

Roo Code does not work with it, though.

Kilo Code next: it seems to use about 22GB of VRAM.

Kilo Code works great.

The model didn't one-shot my first benchmark, however. That's pretty bad news for this model, given that Magistral 2509 and Apriel 15B do better.

Better on pass 2, still not 100%.

Passed on the third attempt.

I'm predicting it'll land around 30% on LiveCodeBench, and probably around 15% on Aider Polyglot. Very disappointed in its coding capability.

I just found:

https://artificialanalysis.ai/models/granite-4-0-h-small

25.1% on LiveCodeBench. Absolutely deserved.

2% on Terminal-Bench.

16% on the coding index. Completely deserved.

baobun 2 months ago

The IBM announcement post is more informative than the VentureBeat article:

IBM Granite 4.0: hyper-efficient, high performance hybrid models for enterprise

https://www.ibm.com/new/announcements/ibm-granite-4-0-hyper-...

  • flowerthoughts 2 months ago

    ISO 42001 certified.

    > ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems.

    https://www.iso.org/standard/42001

    If anyone has access to ISO standards, I'm really curious what the practical effects of that certification are, i.e. what things Granite has that others don't because they had to add or do them to fulfill the certification.

    The committee was formed in 2017, chaired by an AI expert: https://www.iso.org/committee/6794475.html

    • PeterStuer 2 months ago

      Depends. In my experience, some countries, e.g. Spain, are very into certifications, while others just ignore them.

  • magicalhippo 2 months ago

    They also have a nice write-up on the Mamba architecture:

    https://www.ibm.com/think/topics/mamba-model
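
    For intuition: the Mamba half of the hybrid replaces attention with a selective state-space recurrence, so per-token cost grows linearly with sequence length instead of quadratically. Here's a toy sketch of the idea, illustrative only and not the actual Granite/Mamba implementation:

    import numpy as np

    def toy_selective_ssm(x, A, Wb, Wc):
        # x: (T, d) sequence. A fixed decay A plus input-dependent gates B_t, C_t
        # (the "selective" part) let the state choose what to remember, in O(T) time.
        h = np.zeros(x.shape[1])
        out = np.empty_like(x)
        for t, x_t in enumerate(x):
            B_t = np.tanh(x_t @ Wb)   # input-dependent input gate
            C_t = np.tanh(x_t @ Wc)   # input-dependent output gate
            h = A * h + B_t * x_t     # linear recurrence carries state across time
            out[t] = C_t * h
        return out

    rng = np.random.default_rng(0)
    x = rng.normal(size=(16, 8))
    y = toy_selective_ssm(x, A=np.full(8, 0.9),
                          Wb=0.1 * rng.normal(size=(8, 8)),
                          Wc=0.1 * rng.normal(size=(8, 8)))
    print(y.shape)  # (16, 8)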

aetherspawn 2 months ago

I really just want to know how it compares to ChatGPT and Claude at various tasks, but there aren’t any graphs for that.

  • KronisLV 2 months ago

    It will probably take a few days to a week for in-depth benchmarks to start popping up.

    The IBM article has this image showing that it's supposed to be a bit ahead of GPT OSS 120B for at least some tasks (horrible URL but oh well): https://www.ibm.com/content/dam/worldwide-content/creative-a...

    So in general it's going to be worse than GPT-5 and Sonnet 4.5, but closer to GPT-5 mini. At least you can run this one on-prem, unlike the others. Pretty good; it could possibly replace Qwen3 for quite a few use cases!

EagnaIonat 2 months ago

Tried out the Ollama version and it's insanely fast, with really good results for a 1.9GB size. It's supposed to have a 1M context window; I'd be interested to see where the speed goes then.

No Mamba in the Ollama version though.

  • mehdibl 2 months ago

    Ollama usually defaults to Q4 and an 8k/16k context, not the 1M context.
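
    You can override that per request, though. A minimal sketch against a local Ollama (the granite4:micro tag is an assumption, use whatever "ollama list" shows):

    import json, urllib.request

    # Ask Ollama for a completion with a larger context window via num_ctx
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "granite4:micro",       # assumed tag; check with: ollama list
            "prompt": "Say hi.",
            "stream": False,
            "options": {"num_ctx": 131072},  # raise the default context window
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])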

  • Flere-Imsaho 2 months ago

    (I've only just started running local LLMs, so excuse the dumb question.)

    Would Granite run with llama.cpp and use Mamba?

    • RossBencina 2 months ago

      Last I checked Ollama inference is based on llama.cpp so either Ollama has not caught up yet, or the answer is no.

      EDIT: Looks like Granite 4 hybrid architecture support was added to llama.cpp back in May: https://github.com/ggml-org/llama.cpp/pull/13550

      • magicalhippo 2 months ago

        > Last I checked Ollama inference is based on llama.cpp

        Yes and no. They've written their own "engine" using the GGML libraries directly, but they fall back to llama.cpp for models the new engine doesn't yet support.

thawab 2 months ago

After getting burned by Watson, I am not touching any AI from IBM.

anshumankmr 2 months ago

Also worth checking out is Codestral... I think it had a 256k context and used Mamba, even if it's a slightly older model now. It worked great for a Text2SQL use case we worked on.

  • incomingpain 2 months ago

    Magistral 2509 just came out. It slows down a lot once you go over 40,000 context, but it's quite a fantastic model.

serioussecurity 2 months ago

Every technical paper I've read that IBM has published at an ML conference has been p-hacked to hell. Stay away.
