Llamafile – The easiest way to run LLMs locally on your Mac

ppaolo.substack.com

27 points by paolop 2 years ago · 18 comments

wokwokwok 2 years ago

Why?

It's unsafe and it takes all the choice and control away from you.

You should, instead:

1) Build a local copy of llama.cpp (literally clone https://github.com/ggerganov/llama.cpp and run 'make').

2) Download the model version you actually want from Hugging Face (for example, from https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGU..., which clearly indicates the required RAM for each variant)

3) Run the model yourself.

I'll say this explicitly: these llamafile things are stupid.

You should not download arbitrary user-uploaded binary executables and run them on your local laptop.

Hugging Face may do its best to prevent people from taking advantage of this (heck, they literally invented safetensors), but long story short: we can't have nice things because people suck.

If you start downloading random executables from the internet and running them, you will regret it.

Just spend the extra 5 minutes to build llama.cpp yourself. It's very, very easy to do and many guides already exist for doing exactly that.
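
Concretely, something like this (the download link and model file name are placeholders - copy the real ones from the Hugging Face repo's file listing):

# 1) build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# 2) download whichever GGUF variant fits your RAM (copy the file's direct link from the repo above)
curl -LO <direct-gguf-download-link>

# 3) run it
./main -m <downloaded-model>.gguf -p "Hello"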

  • superkuh 2 years ago

    It only takes away choice if you use the demo files with the models baked in. There are versions under Releases -> Assets that are just the plain, OS-portable llama.cpp binaries; you pass them the model file path as normal.

    Compiling llama.cpp is relatively easy. Compiling llama.cpp for GPU support is a bit harder. I think it's nice that these OS-portable binaries of llama.cpp applications like main, server, and llava exist. Too bad there are no OpenCL ones. The only problem was baking in the models. Downloading applications off the internet is not that weird. After all, it's the recommended way to install Rust, etc.
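
    For reference, a rough sketch of what "a bit harder" looks like (these are the llama.cpp Makefile flags as of late 2023; they have changed in later versions, so check the current README):

    # CPU-only build (what plain "make" gives you)
    make

    # NVIDIA GPU build via cuBLAS (needs the CUDA toolkit installed first)
    make LLAMA_CUBLAS=1

    # OpenCL build via CLBlast (the variant missing from the portable binaries)
    make LLAMA_CLBLAST=1

    # the OS-portable binaries from Releases -> Assets work like the normal
    # llama.cpp tools: pass the model path, e.g. the server on localhost:8080
    ./server -m path/to/model.gguf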

    • wokwokwok 2 years ago

      > Compiling llama.cpp is relatively easy. Compiling llama.cpp for GPU support is a bit harder.

      It is not.

      For a Mac with an M1, as per this specific post, you run “make”.

      If you have an older Intel Mac and have to run on the CPU, you run “make”.

      > Downloading applications off the internet is not that weird. After all, it's the recommended way to install Rust, etc.

      Downloading applications from trusted sources is not that weird.

      E.g. Rust, from the Rust organisation.

      Downloading and running user-uploaded binaries is a security nightmare.

      • paolopOP 2 years ago

        While in general I agree with your security concerns, here the links are from very trusted sources (Mozilla Internet Ecosystem and Mozilla's innovation group) and the user is well known (present on X too with a large following).

        Re: "simplicity", sure for you and I it's simple to compile llama.cpp, but it's like asking a regular user to compile their applications themselves. It's not that simple for them, and should not be required if we want to make AI and OSS AI in particular more mainstream.

        • wokwokwok 2 years ago

          The steps to run it are:

          Open terminal

          curl -LO https://huggingface.co/jartine/llava-v1.5-7B-GGUF/resolve/ma...

          chmod 755 …

          ./…

          Open localhost:8080 in browser
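
          Spelled out with placeholders, since the link above is truncated (the file names here are illustrative only):

          # download the single self-contained llamafile executable
          curl -LO <llamafile-download-url>

          # mark it executable, then run it; it starts a local web server
          chmod 755 <downloaded-llamafile>
          ./<downloaded-llamafile>

          # then open http://localhost:8080 in a browser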

          To make this accessible to a broader cohort you would package it into an app and put it somewhere with provenance, e.g. a well-known GitHub account or the App Store.

          The solution, as shown, doesn’t solve either of the problems you say it attempts to solve.

          It is a bad solution.

          • paolopOP 2 years ago

            Totally agreed, it's not yet ideal. But I feel we are expanding the pie of users with this intermediate step. Do you want to work on that packaging ;-)?

senthil_rajasek 2 years ago

See also,

Llamafile is the new best way to run a LLM on your own computer (simonwillison.net)

https://news.ycombinator.com/item?id=38489533

And

https://news.ycombinator.com/item?id=38464057

Akashic101 2 years ago

I'd like to train one of the provided LLMs on my own data; I heard that RAG can be used for that. Does anyone have pointers on how this could be achieved with llamafiles, all locally on my server?
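
For context, the rough shape I have in mind is something like this (assuming the llamafile server is already running and exposes llama.cpp's /completion endpoint on localhost:8080; the file paths and the grep-based "retrieval" are placeholders - a real setup would use an embedding index):

QUESTION="What does our backup policy say about retention?"

# naive "retrieval": pull a few matching lines out of local notes
CONTEXT=$(grep -ih "retention" ~/notes/*.txt | head -n 20)

# stuff the retrieved context into the prompt
PROMPT="Context:
$CONTEXT

Question: $QUESTION
Answer:"

# "generation": ask the local server to complete the stuffed prompt
jq -n --arg p "$PROMPT" '{prompt: $p, n_predict: 256}' \
  | curl -s http://localhost:8080/completion -H 'Content-Type: application/json' -d @- \
  | jq -r '.content'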

paolopOP 2 years ago

What's your experience with open source LLMs like LLaVA 1.5 or Mistral 7B?

aldarisbm 2 years ago

Why does this keep popping up on here?

  • gapchuboy 2 years ago

    Because people on Hacker News are more interested in the prompt engineering. Convenience and satisfaction > 5 minutes of git pull and make.

    • paolopOP 2 years ago

      Agreed - and ultimately, you remove the need to have a git client and git knowledge to pull and compile... it's not just the 5 minutes; you open up the market to way more people. Ideally it should be as simple as installing an app, but this is a good step in that direction.
