Yet another reminder why you should not use Ollama
Georgi's relevant comment: https://github.com/ggml-org/llama.cpp/pull/19324#issuecommen...
Can someone add some context as to what that diff is showing?
and use the original llama.cpp directly. It's infinitely easier to set up and use now
Setting up ollama is 2 steps:
1. yay -S ollama
2. systemctl enable --now ollama
How is llama.cpp infinitely easier to set up?
Infinitely easier relative to what it used to be.
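For comparison, a minimal llama.cpp setup on Linux can be sketched roughly as follows (assumes git, cmake, and a C++ toolchain are already installed; `model.gguf` is a placeholder for a model you have downloaded, not a real file):

```shell
# Clone and build llama.cpp (default CPU build; GPU backends need extra
# cmake flags, e.g. -DGGML_CUDA=ON for NVIDIA GPUs)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Serve a local GGUF model over an OpenAI-compatible HTTP API.
# model.gguf is a placeholder path; point it at a real downloaded model.
./build/bin/llama-server -m model.gguf --port 8080
```

This is a few more steps than the two-line Ollama install above, but it is a one-time build rather than the multi-stage manual setup llama.cpp used to require.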