How to Run Local LLMs with Claude Code
A guide to using open models with Claude Code on your local device.
This step-by-step guide shows you how to connect open LLMs and APIs to Claude Code entirely locally, complete with screenshots. You can run any open model, such as Qwen3.5, DeepSeek, and Gemma.
For this tutorial, we'll use Qwen3.5, a strong MoE agentic and coding model (it works on a device with 24GB RAM/unified memory). For inference, we'll use Unsloth Studio and llama.cpp, which let you run and serve LLMs on macOS, Linux, and Windows. You can swap in any other model; just update the model names in your scripts.
Claude Code Setup
🦥 Use open LLMs via Unsloth
For model quants, we'll use Unsloth Dynamic GGUFs, which let you run any LLM in quantized form while retaining as much accuracy as possible.
Claude Code has changed quite a lot since January 2026. There are now many more settings and necessary features you will need to toggle.
Claude Code Tutorial
Before we start setting up your local LLMs, we first need to set up Claude Code. Claude Code is Anthropic's agentic coding tool that lives in your terminal, understands your codebase, and handles complex Git workflows via natural language.
Install Claude Code and run it locally
```bash
curl -fsSL https://claude.ai/install.sh | bash
# Or via Homebrew: brew install --cask claude-code
```
Configure
Set the ANTHROPIC_BASE_URL environment variable to redirect Claude Code to your local llama.cpp server.
```bash
export ANTHROPIC_BASE_URL="http://localhost:8001"
```
You might also need to set ANTHROPIC_API_KEY depending on the server. For example:
```bash
export ANTHROPIC_API_KEY='sk-no-key-required' # or 'sk-1234'
```
Session vs Persistent: The commands above apply to the current terminal only. To persist across new terminals:
Add the export line to ~/.bashrc (bash) or ~/.zshrc (zsh).
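For example, a minimal sketch for zsh (swap in ~/.bashrc for bash, and adjust the values to match your setup):

```bash
# Append the variables so every new terminal picks them up
echo 'export ANTHROPIC_BASE_URL="http://localhost:8001"' >> ~/.zshrc
echo 'export ANTHROPIC_API_KEY="sk-no-key-required"' >> ~/.zshrc
source ~/.zshrc  # apply to the current shell too
```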
If you see Unable to connect to API (ConnectionRefused), your local server is probably not running. Start it, or revert to Anthropic's API by running unset ANTHROPIC_BASE_URL.
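To check whether the server is actually up before restarting Claude Code, you can hit llama-server's /health endpoint (a quick sketch, assuming the server runs on port 8001 as in this guide):

```bash
# Should print {"status":"ok"} once the model has finished loading
curl http://localhost:8001/health
```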
Missing API key
If you see this, set export ANTHROPIC_API_KEY='sk-no-key-required' ## or 'sk-1234'
If Claude Code still asks you to sign in on first run, add "hasCompletedOnboarding": true and "primaryApiKey": "sk-dummy-key" to ~/.claude.json. For the VS Code extension, also enable Disable Login Prompt in settings (or add "claudeCode.disableLoginPrompt": true to settings.json).
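For reference, the relevant keys in ~/.claude.json would look something like this; merge them into your existing file rather than overwriting it:

```json
{
  "hasCompletedOnboarding": true,
  "primaryApiKey": "sk-dummy-key"
}
```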
Use PowerShell for all commands below:
```powershell
irm https://claude.ai/install.ps1 | iex
```
Configure
Set the ANTHROPIC_BASE_URL environment variable to redirect Claude Code to your local llama.cpp server. You must also set $env:CLAUDE_CODE_ATTRIBUTION_HEADER=0; see below.
```powershell
$env:ANTHROPIC_BASE_URL="http://localhost:8001"
```
Claude Code recently started prepending a Claude Code Attribution header to requests, which invalidates the KV cache. See this LocalLlama discussion.
To solve this, run $env:CLAUDE_CODE_ATTRIBUTION_HEADER=0 or edit ~/.claude/settings.json with:
```json
{
    ...
    "env": {
        "CLAUDE_CODE_ATTRIBUTION_HEADER": "0",
        ...
    }
}
```
Session vs Persistent: The commands above apply to the current terminal only. To persist across new terminals:
Run setx ANTHROPIC_BASE_URL "http://localhost:8001" once, or add the $env: line to your $PROFILE.
If Claude Code still asks you to sign in on first run, add "hasCompletedOnboarding": true and "primaryApiKey": "sk-dummy-key" to ~/.claude.json. For the VS Code extension, also enable Disable Login Prompt in settings (or add "claudeCode.disableLoginPrompt": true to settings.json).
🕵️ Fixing 90% slower inference in Claude Code
Claude Code recently started prepending a Claude Code Attribution header to requests, which invalidates the KV cache and makes inference up to 90% slower with local models.
To solve this, edit ~/.claude/settings.json and set CLAUDE_CODE_ATTRIBUTION_HEADER to "0" inside the "env" section.
Using export CLAUDE_CODE_ATTRIBUTION_HEADER=0 DOES NOT work!
For example, run cat > ~/.claude/settings.json, paste the JSON below, then press ENTER followed by CTRL+D to save it. If you already have a ~/.claude/settings.json file, just add "CLAUDE_CODE_ATTRIBUTION_HEADER": "0" to the "env" section and leave the rest of the settings file unchanged.
```json
{
    "promptSuggestionEnabled": false,
    "env": {
        "CLAUDE_CODE_ENABLE_TELEMETRY": "0",
        "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1",
        "CLAUDE_CODE_ATTRIBUTION_HEADER": "0"
    },
    "attribution": {
        "commit": "",
        "pr": ""
    },
    "plansDirectory": "./plans",
    "prefersReducedMotion": true,
    "terminalProgressBarEnabled": false,
    "effortLevel": "high"
}
```
🌟Running Claude Code locally on Linux / Mac / Windows
We used unsloth/GLM-4.7-Flash-GGUF, but you can use any model, such as unsloth/Qwen3.5-35B-A3B-GGUF.
Navigate to your project folder (mkdir project ; cd project) and run:
```bash
claude --model unsloth/GLM-4.7-Flash
```
To use Qwen3.5-35B-A3B, simply change it to:
```bash
claude --model unsloth/Qwen3.5-35B-A3B
```
To set Claude Code to execute commands without any approvals, run the following (BEWARE: this will let Claude Code run and execute code however it likes, without any approvals!):
```bash
claude --model unsloth/GLM-4.7-Flash --dangerously-skip-permissions
```
Try this prompt to install and run a simple Unsloth finetune:
```
You can only work in the cwd project/. Do not search for CLAUDE.md - this is it. Install Unsloth via a virtual environment via uv. Use `python -m venv unsloth_env` then `source unsloth_env/bin/activate` if possible. See https://unsloth.ai/docs/get-started/install/pip-install on how (get it and read). Then do a simple Unsloth finetuning run described in https://github.com/unslothai/unsloth. You have access to 1 GPU.
```
After waiting a bit, Unsloth will be installed in a venv via uv, and loaded up:

and finally you will see a successfully finetuned model with Unsloth!

IDE Extension (VS Code / Cursor)
You can also use Claude Code directly inside your editor via the official extension:
Alternatively, press Ctrl+Shift+X (Windows/Linux) or Cmd+Shift+X (Mac), search for Claude Code, and click Install.
If you see Unable to connect to API (ConnectionRefused), your local server is probably not running. Start it, or revert to Anthropic's API by running unset ANTHROPIC_BASE_URL.
If you find open models to be 90% slower, see the Claude Code section above first to fix the Attribution header invalidating the KV cache.
📖 Quickstart Inference Tutorials
Before we begin, we first need to complete setup for the specific model you're going to use. We use Unsloth (a web UI) and llama.cpp, open-source frameworks for running and serving LLMs on your Mac, Linux, and Windows devices. Unsloth also has unique self-healing tool-calling and web-search capabilities.
🦥 Unsloth Tutorial · llama.cpp Tutorial · Video Tutorial
For this tutorial, we will use Unsloth with Claude Code. Claude Code talks to Unsloth over the Anthropic-compatible `/v1/messages` endpoint.
Search, download, run GGUFs and safetensor models
Fast CPU + GPU inference via llama.cpp

You can connect Claude Code to the Unsloth API with three environment variables. Get started with the following steps:
Run in your terminal:
MacOS, Linux, WSL:
```bash
curl -fsSL https://unsloth.ai/install.sh | sh
```
Windows PowerShell:
```powershell
irm https://unsloth.ai/install.ps1 | iex
```
Then launch Unsloth via unsloth studio -H 0.0.0.0 -p 8888 and open your specified URL.
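Optionally, once the server is up (and you've grabbed an API key from Settings → API Keys, covered below), you can verify the Anthropic-compatible endpoint with a minimal request. A sketch, assuming port 8888 and a loaded model named qwen-local:

```bash
curl http://localhost:8888/v1/messages \
  -H "content-type: application/json" \
  -H "x-api-key: sk-unsloth-xxxxxxxxxxxx" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "qwen-local",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```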
Set environment variables
In the same terminal you’ll run Claude Code from, copy your API key and model name from Unsloth, then paste the variables below.
Variables Reference:

| Variable | Value |
| --- | --- |
| ANTHROPIC_BASE_URL | http://localhost:8888 (replace with your Unsloth port) |
| ANTHROPIC_AUTH_TOKEN | Your Unsloth API key (found under Settings → API Keys) |
| ANTHROPIC_MODEL | Your model name as set in Unsloth |
Export Variables:
MacOS, Linux, WSL:

```bash
export ANTHROPIC_BASE_URL=http://localhost:8888
export ANTHROPIC_AUTH_TOKEN=sk-unsloth-xxxxxxxxxxxx
export ANTHROPIC_MODEL=qwen-local
```

Windows PowerShell:

```powershell
$env:ANTHROPIC_BASE_URL = "http://localhost:8888"
$env:ANTHROPIC_AUTH_TOKEN = "sk-unsloth-xxxxxxxxxxxx"
$env:ANTHROPIC_MODEL = "qwen-local"
```
Replace: 8888 with your Unsloth port, sk-unsloth-xxxxxxxxxxxx with your API key (found in Unsloth under Settings), and qwen-local with the name of your loaded model.
Where to find your API keys in Unsloth:
Open Unsloth and open Settings → API Keys to view or create your API key.

Run this command in the same terminal:
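With the three variables exported, the launch command should simply be claude, since ANTHROPIC_MODEL already selects your Unsloth-hosted model:

```bash
claude
```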
Claude Code should start and display the model name you chose. Type /model inside Claude Code to double-check it's pointing at your Unsloth-hosted model.

This confirms Claude Code is connected to your Unsloth-hosted model.
Before we begin, we first need to complete setup for the specific model you're going to use. We use llama.cpp, an open-source framework for running LLMs on your Mac, Linux, and Windows devices. llama.cpp includes llama-server, which lets you serve and deploy LLMs efficiently. The model will be served on port 8001, with all agent tools routed through a single OpenAI-compatible endpoint.
We'll be using Qwen3.5-35B-A3B with specific settings for fast, accurate coding tasks. If you want a smarter model or don't have enough VRAM, Qwen3.5-27B is a great choice, but it will be ~2x slower; you can also use other Qwen3.5 variants like 9B, 4B, or 2B.
Use Qwen3.5-27B if you want a smarter model or if you don't have enough VRAM. It will be ~2x slower than 35B-A3B however. Or you can use Qwen3-Coder-Next which is fantastic if you have enough VRAM.
We need to install llama.cpp to deploy/serve local LLMs for use in Claude Code and other tools. We follow the official build instructions for correct GPU bindings and maximum performance. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference. For Apple Mac / Metal devices, set -DGGML_CUDA=OFF then continue as usual; Metal support is on by default.
```bash
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev git-all -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-mtmd-cli llama-server llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```
Download and use models locally
Download the model with the huggingface_hub CLI (after installing it via pip install huggingface_hub hf_transfer). We use the UD-Q4_K_XL quant for the best size/accuracy balance. You can find all Unsloth GGUF uploads in our Collection here. If downloads get stuck, see Hugging Face Hub, XET debugging.
```bash
hf download unsloth/Qwen3.5-35B-A3B-GGUF \
    --local-dir unsloth/Qwen3.5-35B-A3B-GGUF \
    --include "*UD-Q4_K_XL*" # Use "*UD-Q2_K_XL*" for Dynamic 2bit
```
We used unsloth/Qwen3.5-35B-A3B-GGUF, but you can use another variant like 27B or any other model like unsloth/Qwen3-Coder-Next-GGUF.

To deploy Qwen3.5 for agentic workloads, we use llama-server. We apply Qwen's recommended sampling parameters for thinking mode: temp 0.6, top_p 0.95, top_k 20. Keep in mind these numbers change if you use non-thinking mode or other tasks.
Run this command in a new terminal (use tmux or open a new terminal). The command below should fit perfectly on a 24GB GPU such as an RTX 4090 (it uses ~23GB). --fit on will also auto-offload, but if you see bad performance, reduce --ctx-size.
```bash
./llama.cpp/llama-server \
    --model unsloth/Qwen3.5-35B-A3B-GGUF/Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf \
    --alias "unsloth/Qwen3.5-35B-A3B" \
    --temp 0.6 \
    --top-p 0.95 \
    --top-k 20 \
    --min-p 0.00 \
    --port 8001 \
    --kv-unified \
    --cache-type-k q8_0 --cache-type-v q8_0 \
    --flash-attn on --fit on \
    --ctx-size 131072 # change as required
```
We used --cache-type-k q8_0 --cache-type-v q8_0 to quantize the KV cache for lower VRAM usage. For full precision, use --cache-type-k bf16 --cache-type-v bf16. According to multiple reports, Qwen3.5 degrades in accuracy with an f16 KV cache, so do not use --cache-type-k f16 --cache-type-v f16, which is also the default in llama.cpp. Note that a bf16 KV cache might be slightly slower on some machines.
You can also disable thinking for Qwen3.5, which can improve performance for agentic coding tasks. To disable thinking with llama.cpp, add this to the llama-server command:
```bash
--chat-template-kwargs "{\"enable_thinking\": false}"
```
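Once llama-server is running, you can sanity-check it over its OpenAI-compatible endpoint before pointing Claude Code at it. A quick sketch, assuming port 8001 and the alias set above:

```bash
curl http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "unsloth/Qwen3.5-35B-A3B",
    "messages": [{"role": "user", "content": "Write a one-line hello world in Python"}],
    "max_tokens": 128
  }'
```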

We need to install llama.cpp to deploy/serve local LLMs for use in Claude Code and other tools. We follow the official build instructions for correct GPU bindings and maximum performance. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference. For Apple Mac / Metal devices, set -DGGML_CUDA=OFF then continue as usual; Metal support is on by default.
```bash
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev git-all -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-mtmd-cli llama-server llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```
Download and use models locally
Download the model via huggingface_hub in Python (after installing via pip install huggingface_hub hf_transfer). We use the UD-Q4_K_XL quant for the best size/accuracy balance. You can find all Unsloth GGUF uploads in our Collection here. If downloads get stuck, see Hugging Face Hub, XET debugging
We used unsloth/GLM-4.7-Flash-GGUF, but you can use anything like unsloth/Qwen3-Coder-Next-GGUF - see Qwen3-Coder-Next.
```python
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/GLM-4.7-Flash-GGUF",
    local_dir = "unsloth/GLM-4.7-Flash-GGUF",
    allow_patterns = ["*UD-Q4_K_XL*"],
)
```
To deploy GLM-4.7-Flash for agentic workloads, we use llama-server. We apply Z.ai's recommended sampling parameters (temp 1.0, top_p 0.95).
Run this command in a new terminal (use tmux or open a new terminal). The command below should fit perfectly on a 24GB GPU such as an RTX 4090 (it uses ~23GB). --fit on will also auto-offload, but if you see bad performance, reduce --ctx-size.
```bash
./llama.cpp/llama-server \
    --model unsloth/GLM-4.7-Flash-GGUF/GLM-4.7-Flash-UD-Q4_K_XL.gguf \
    --alias "unsloth/GLM-4.7-Flash" \
    --temp 1.0 \
    --top-p 0.95 \
    --min-p 0.01 \
    --port 8001 \
    --kv-unified \
    --cache-type-k q8_0 --cache-type-v q8_0 \
    --flash-attn on --fit on \
    --batch-size 4096 --ubatch-size 1024 \
    --ctx-size 131072 # change as required
```
We used --cache-type-k q8_0 --cache-type-v q8_0 for KV cache quantization to reduce VRAM usage. If you see reduced quality, you can instead use bf16, but it will double the KV cache's VRAM use: --cache-type-k bf16 --cache-type-v bf16
You can also disable thinking for GLM-4.7-Flash, which can improve performance for agentic coding tasks. To disable thinking with llama.cpp, add this to the llama-server command:
```bash
--chat-template-kwargs "{\"enable_thinking\": false}"
```
