Running DeepSeek R1 Models Locally on NPU

blogs.windows.com

37 points by doomroot13 a year ago · 16 comments

jokowueu a year ago

How much more efficient are NPUs than GPUs? What are the limitations? It seems it will have support for DeepSeek R1 soon.

  • tamlin a year ago

    A decent chunk of AI performance comes down to doing matrix multiplication fast. Part of that is reducing the amount of data transferred to and from the matrix multiplication hardware on the NPU and GPU; memory bandwidth is a significant bottleneck. The article highlights the use of 4-bit formats.
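
    A rough way to see the bandwidth limit: during single-stream decoding, every generated token has to read essentially all of the weights once, so tokens/s is capped at roughly bandwidth divided by weight bytes. A quick back-of-envelope sketch in Python (the 7B model size and ~100 GB/s shared-memory bandwidth are illustrative assumptions, not measured figures):

        # Bandwidth-bound ceiling on decode speed: every token reads all weights once.
        model_params = 7e9         # assumed 7B-parameter model
        bytes_per_weight = 0.5     # 4-bit quantization
        bandwidth_bytes_s = 100e9  # assumed ~100 GB/s shared LPDDR bandwidth

        weight_bytes = model_params * bytes_per_weight       # ~3.5 GB
        max_tokens_per_s = bandwidth_bytes_s / weight_bytes  # ~28 tokens/s, best case
        print(f"ceiling: {max_tokens_per_s:.1f} tokens/s")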

    GPUs are an evolving target. New GPUs have tensor cores and support all kinds of interesting numeric formats; older GPUs don't support any of the formats that AI workloads are using today (e.g. BF16, int4, all the various smaller FP types).

    An NPU will be more efficient because it is much less general than a GPU and doesn't spend any gates on graphics. However, it is also fairly restricted. Cloud hardware is orders of magnitude faster (due to much higher compute resources and I/O bandwidth), e.g. https://cloud.google.com/tpu/docs/v6e.

    • justincormack a year ago

      The NPU also has no more memory bandwidth than the CPU, but then the GPU on these machines doesn't either.

      • tamlin a year ago

        Agree on NPU vs CPU memory bandwidth, but not sure about characterizing the GPU that way. GDDR is usually faster than DDR of the same generation, and higher-end graphics cards have a wider bus. A few GPUs have HBM, as do pretty much all datacenter ML accelerators (NVIDIA B200 / H100 / A100, Google TPU, etc.). The PCIe bus between host memory and GPU memory is a bottleneck for intensive workloads.

        To perform a multiplication on a CPU, even with SIMD, the values have to be fetched and converted to a form the CPU has multipliers for. This means smaller numeric types are penalised. For a 128-bit memory bus, an NPU can fetch 32 4-bit values per transfer; the best case for a CPU is 16 8-bit values.

        Details are scant on Microsoft's NPU, but it probably has many parallel multipliers, either in the form of tensor cores or a systolic array. The effective number of matmuls per second (or per memory operation) is higher.
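
        As a toy illustration of the systolic-array idea (not Microsoft's actual design, which isn't public): each processing element holds one output and accumulates as operands stream past, so all of the multiply-accumulates in a step run in parallel.

            import numpy as np

            def systolic_matmul(A, B):
                # Output-stationary toy model: PE (i, j) owns C[i, j]. A's rows stream
                # in from the left, B's columns from the top; the loop over k models
                # the K streaming cycles, and the M*N MACs inside each step would all
                # execute in parallel in hardware.
                M, K = A.shape
                _, N = B.shape
                C = np.zeros((M, N), dtype=np.float32)
                for k in range(K):
                    C += np.outer(A[:, k], B[k, :])
                return C

            A = np.random.rand(4, 8).astype(np.float32)
            B = np.random.rand(8, 3).astype(np.float32)
            assert np.allclose(systolic_matmul(A, B), A @ B)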

        • justincormack a year ago

          Yeah, standalone GPUs do indeed have more bandwidth, but most of these Copilot PCs that have NPUs just have shared memory for everything, I think.

          Fetching 16 8-bit values vs 32 4-bit values is the same; that is the form they are stored in memory. Doing some unpacking into more registers and back is more or less free anyway if you are memory-bandwidth bound. On these lower-end machines everything is largely memory bound, not compute bound, although in some systems (e.g. the Macs) the CPU often can't use the full memory bandwidth but the GPU can.
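
          To make the packing point concrete: int4 weights are typically stored two per byte, so the packed bytes are what cross the bus either way, and the unpack is a couple of cheap shift/mask ops. A small NumPy sketch (the low-nibble-first layout is an assumption; real kernels vary):

              import numpy as np

              packed = np.array([0x4C, 0x2F], dtype=np.uint8)  # 2 bytes hold 4 int4 weights

              low = packed & 0x0F  # first value in each byte (assumed low nibble first)
              high = packed >> 4   # second value in each byte
              vals = np.stack([low, high], axis=1).reshape(-1).astype(np.int8)
              vals = np.where(vals > 7, vals - 16, vals)  # sign-extend 4-bit two's complement
              print(vals)  # [-4  4 -1  2]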

          • tamlin a year ago

            Yes, agree. Probably the main thing is that the NPU is just a dedicated unit without the generality/complexity of a CPU, and so it's able to crunch matmuls more efficiently.

RandomBK a year ago

Reminder: DeepSeek distilled models are better thought of as fine-tunes of Qwen/Llama using DeepSeek output, and are not the same as actual DeepSeek v3 or R1.

This unfortunate naming has sown plenty of confusion around DeepSeek's quality and resource requirements. Actual DeepSeek v3/R1 continues to require at least ~100GB of VRAM/Mem/SSD, and this does not change that.

  • bestouff a year ago

    Out of curiosity, would an A100 80GB work for this ?

    • bestouff a year ago

      Replying to myself: apparently it's not 100GB of VRAM but more like 700GB of VRAM that's needed to run DeepSeek R1. The gear needed to run that would cost something in the vicinity of €100K!

      • RandomBK a year ago

        Yup. I was referring to the 1.58-bit quant, which seemed to be performing alright and would be the smallest real-DeepSeek model. That requires ~140GB, which is just barely doable on a 128GB RAM + 24GB VRAM setup plus a lot of patience. Others have made it work with 64GB RAM + a fast SSD.

        The true minimally-quantized DeepSeek experience will need one or possibly two 8xH100 nodes, so well upwards of $100K in CapEx.

  • darthrupert a year ago

    Wait, what am I running on my 32GB Macbook then? I thought it was the 32b version of deepseek-r1.

    • RandomBK a year ago

      The only 32B distill I'm aware of is `DeepSeek-R1-Distill-Qwen-32B`, which would be a base model of `Qwen-32B` distilled (further trained) on outputs from the full R1 model.

    • rahimnathwani a year ago

      Deepseek R1 has 671 billion parameters. Even if you could quantize each parameter to just 1 bit (from 8 bits), you'd still need 84GB of RAM just for the weights. There is no 32B parameter version of the V3/R1 model architecture.
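
      For reference, the weights-only back-of-envelope math (ignoring KV cache and activations):

          params = 671e9  # DeepSeek V3/R1 parameter count

          for bits in (16, 8, 4, 1.58, 1):
              gb = params * bits / 8 / 1e9
              print(f"{bits:>5} bits/weight -> ~{gb:,.0f} GB of weights")
          # 16 -> ~1,342 GB; 8 -> ~671 GB; 4 -> ~336 GB; 1.58 -> ~133 GB; 1 -> ~84 GB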

    • Plankaluel a year ago

      You are running Qwen2.5 32B that has been fine-tuned on data generated by R1.
