55x Speedup of Andrej Karpathy's Minbpe LLM Tokenizer with PyTorch/CUDA

github.com

19 points by kuprel 2 years ago · 9 comments

kuprelOP 2 years ago

This adds PyTorch/CUDA training support to Andrej Karpathy's minbpe. It takes 2min 28sec (148 seconds) on an RTX 4090 to train the BasicTokenizer with a vocab_size of 512 on 307MB of Enron emails. The original code takes 2hrs 15min (8076 seconds) on an M2 Air with Python 3.11 to do the same. That is a 55x speedup.
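
The core of a vectorized BPE trainer is replacing the Python loop over adjacent token pairs with a single array-counting pass. A minimal CPU sketch of that idea using NumPy (illustrative only; the repo does this with PyTorch tensors on the GPU, and the function name here is made up):

```python
import numpy as np
from collections import Counter

def count_pairs_vectorized(ids: np.ndarray) -> Counter:
    """Count all adjacent token pairs in one vectorized pass."""
    # Form every adjacent pair at once instead of looping in Python
    pairs = np.stack([ids[:-1], ids[1:]], axis=1)
    # Pack each pair into one integer so np.unique can count them
    # (safe while token ids stay below 65536; vocab_size here is 512)
    keys = pairs[:, 0].astype(np.int64) * 65536 + pairs[:, 1]
    uniq, counts = np.unique(keys, return_counts=True)
    return Counter({(int(k // 65536), int(k % 65536)): int(c)
                    for k, c in zip(uniq, counts)})

ids = np.array([1, 2, 1, 2, 3], dtype=np.int64)
print(count_pairs_vectorized(ids))  # (1, 2) appears twice, (2, 1) and (2, 3) once
```

The most frequent pair from such a count is then merged into a new token, exactly as in minbpe's training loop, but the counting step runs as a handful of array ops instead of millions of Python iterations.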

  • threesevenths 2 years ago

    Am I reading this right? A 55x improvement while also going from an M2 Air to an RTX 4090?

    If so, this doesn't seem like a fair comparison, and the 55x claim likely wouldn't hold on the same hardware.

    • lostmsu 2 years ago

      Why is it surprising? CPU-only, the M2 probably delivers under 1 teraop/s while the RTX 4090 delivers about 77. The M2's GPU was not used, but even that only provides around 4 teraop/s, so it would have been ~20x slower than the 4090.
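
Those figures back out to roughly the ratios claimed (the teraop numbers are the commenter's estimates, not measurements):

```python
rtx4090_tops = 77.0  # commenter's estimate for the RTX 4090
m2_cpu_tops = 1.0    # "under 1 teraops" for the CPU-only M2
m2_gpu_tops = 4.0    # "around 4 teraops" for the M2's GPU

gpu_ratio = rtx4090_tops / m2_gpu_tops  # the "~20x slower" figure
cpu_ratio = rtx4090_tops / m2_cpu_tops  # ceiling vs the CPU-only path
print(round(gpu_ratio), round(cpu_ratio))  # → 19 77
```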

    • kuprelOP 2 years ago

      The M2 Air was actually much faster than whatever CPU was on the cloud RTX 4090 machine I rented. I chose the stronger benchmark to compare against.

      • kuprelOP 2 years ago

        Using int16 and an H100, the speedup is actually 108x over the M2 Air.

Havoc 2 years ago

> 307MB of Enron emails

Wait what?

Is that some sort of inside joke?

erichocean 2 years ago

Now someone needs to do a Mojo version, and write up the blog post.
