Train Your Own LLM from Scratch
If you're interested in this resource, I highly recommend checking out Stanford's CS336 class. It covers this whole curriculum in a lot more depth, introduces you to a lot of the theoretical aspects (scaling laws, intuitions) and systems thinking (kernel optimization/profiling). For that, you have to do the assignments, of course... https://cs336.stanford.edu/
how does one get the lectures? I don't see the option for any lectures.
One goes to youtube and searches for cs336?
shameless plug:
A series of Jupyter notebooks explaining the whole machine-learning machinery, from the very beginning:
https://github.com/nickyreinert/DeepLearning-with-PyTorch-fr...
and of course also how to build an LLM from scratch:
https://github.com/nickyreinert/basic-llm-with-pytorch/blob/...
Coincidentally, I just started on Build a Large Language Model (From Scratch), a repo/book/course by Sebastian Raschka [0][1][2]. Maybe it's a good problem to have, having to decide which learning resource to use.
[0] https://github.com/rasbt/LLMs-from-scratch
[1] https://www.manning.com/books/build-a-large-language-model-f...
[2] https://magazine.sebastianraschka.com/p/coding-llms-from-the...
I really enjoyed the book. Great for people who want to understand the real nuts and bolts and have worked examples of all of the calculations.
Been doing it since the day I was born. The beginnings were hard but I’m getting there.
You've actually been primarily training a physics model, with an LLM attached to it.
Good point, and I'm actually not sure that there is a clear dividing line. I expect that once we achieve capable world models and are able to analyze their internals, we'll find that the prediction mechanisms for purely physical and for verbal/behavioral responses to the agent's actions are at least partially colocated.
As particular motivation for my intuition: I expect there was evolutionary pressure to adapt our mechanisms for predicting the movements of predators and prey so they could also handle human opponents.
I did it back in the day when fast.ai was relatively new, with ULMFiT. This must have been when BERT was SOTA. The architecture allows you to train a base and then specialize it with a head. I used all of Wikipedia for the base and then some GBs of tweets I had collected through the firehose. I had access to a lab with 20 game-dev computers, roughly RTX 2080s. One training cycle took about half a day for the tokenized Wikipedia, so I hyperparameter-tuned by running a different setting on each computer and then moving on with the winner as the starting point for the next day. It was always fun to come to work the next morning and check the results.
The engineering was horrible and very ad-hoc but I learned a lot. Results were ok-ish (I classified tweets) but it gave me a good perspective on the sheer GPU power (and engineering challenges) one would need to do this seriously. I didn't fully grasp the potential of generating output but spent quite some time chuckling at generated tweets (was just curious to try it).
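For anyone wondering what "train a base and specialize with a head" looks like in code, it's roughly this (a plain-PyTorch sketch with made-up sizes and class names, not the actual fastai/ULMFiT implementation):

    import torch
    import torch.nn as nn

    DIM, VOCAB = 256, 30000

    class TinyLM(nn.Module):
        """Stand-in for the language-model 'base' pretrained on Wikipedia."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, DIM)
            self.encoder = nn.LSTM(DIM, DIM, num_layers=2, batch_first=True)
            self.lm_head = nn.Linear(DIM, VOCAB)   # only used during LM pretraining

        def features(self, tokens):
            hidden, _ = self.encoder(self.embed(tokens))
            return hidden[:, -1]                   # last hidden state summarizes the sequence

    class TweetClassifier(nn.Module):
        """Drop the LM head, bolt on a small classification head, fine-tune on tweets."""
        def __init__(self, base):
            super().__init__()
            self.base = base
            self.head = nn.Linear(DIM, 2)          # e.g. two tweet classes

        def forward(self, tokens):
            return self.head(self.base.features(tokens))

    base = TinyLM()                                # in practice: load the pretrained base weights
    clf = TweetClassifier(base)
    logits = clf(torch.randint(0, VOCAB, (4, 32))) # batch of 4 token sequences, length 32
    print(logits.shape)                            # torch.Size([4, 2])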
This looks like an exact copy of this Andrej Karpathy video (https://youtu.be/kCc8FmEb1nY), but in written format. Am I wrong?
The page describes its relationship to nanoGPT:
> ...nanoGPT targets reproducing GPT-2 (124M params) and covers a lot of ground. This project strips it down to the essentials and scales it to a ~10M param model that trains on a laptop in under an hour...
Yes, you are.
Context: he is one of the MLX developers, a skilled ML researcher.
Source? I think that's not correct.
Google the name of the author.
I did. I think you are confusing him for someone else, so provide a source for your claim.
If you want to be snarky, it helps if you are right.
You are right, sorry; the name is very similar and I thought it was this person: https://x.com/angeloskath
I don't think GP was being snarky, how else would you expect someone to cite a name he recognized?
He was being snarky. He does actually end up citing who he thought the author was, and in doing so realized he was wrong.
He could have done that initially instead of saying "Google the name of the author."
It was not my best (nor normal) behavior, but the point in this case is that the OP offered very little in his rebuttal. A more contextualized reply would have improved mine as well. I believe the person who published this LLM course on GitHub actually works at ElevenLabs, as Google shows. So the reply could have been: "Are you sure? I googled and apparently he works for ElevenLabs." That would have triggered a different reply. So I was not polite enough, and I said sorry, but given the exchange, saying "google it" was not terrible; it was exactly how I thought I had found it (I googled the wrong name, citing MLX, plus X, and Google returned the wrong result). So it was a matter of "I did it this way".
> A hands-on workshop where you write every piece of a GPT training pipeline yourself, understanding what each component does and why.
I see torch in the dependencies, so most likely tensors and backpropagation are not implemented but rather taken for granted. Does it count then as writing it "from scratch"?..
I did something similar (in Rust, AI-assisted), but I restricted myself to not use any dependencies, only the standard library. As a result, I had to implement much more myself: the tensor design, a kernels concept, a simple gradient-descent optimizer, even a custom JSON parser, CPU data-parallelism abstractions similar to rayon, etc. It was quite fun when I got everything wired up and working - soo sloooow, but working.
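If you're curious what that feels like, here's the flavor of "no autograd, no dependencies" in a few lines of plain Python (a toy linear model with hand-derived gradients; the Rust version is the same idea with far more plumbing):

    # Fit y = w*x + b by gradient descent, with the gradients derived by hand.
    # No torch, no numpy - the kind of machinery autograd normally hides.
    xs = [0.0, 1.0, 2.0, 3.0]
    ys = [1.0, 3.0, 5.0, 7.0]           # true relationship: y = 2x + 1

    w, b, lr = 0.0, 0.0, 0.05
    for step in range(500):
        preds = [w * x + b for x in xs]              # forward pass
        errs = [p - y for p, y in zip(preds, ys)]    # residuals
        n = len(xs)
        dw = 2.0 / n * sum(e * x for e, x in zip(errs, xs))  # dMSE/dw
        db = 2.0 / n * sum(errs)                             # dMSE/db
        w -= lr * dw
        b -= lr * db

    print(round(w, 2), round(b, 2))     # converges to roughly 2.0 and 1.0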
I'm not sure using pytorch counts as "from scratch" anymore. I'm not saying you should avoid the stdlib or anything crazy, but at the point where you're pulling in for-purpose libraries it really doesn't seem like "from scratch" to me.
Point taken, but I think to most ML folks, PyTorch basically is the stdlib.
Train your LM from scratch*
I doubt you have a machine big enough to make it "Large".
If you have a credit card with a "normal" ceiling, you can probably rent enough compute from neocloud providers like HuggingFace or Mistral Forge.
I'm not saying it's worth it but you don't need to buy a GPU yourself to be able to train.
This is the whole point of Karpathy's nanochat, which OP refers to: train a GPT-2-level LLM for under $100 by renting an 8xH100 VM.
You can fully train a 1.6b model on a single 3090. That’s a reasonably big model.
you can train it, but not fully
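For what it's worth, some back-of-the-envelope memory math on why a 1.6B model on a 24GB 3090 is borderline (my assumptions, not from the thread: AdamW, bf16 weights and gradients, fp32 optimizer state, activations and overhead not counted):

    # Rough VRAM budget for training a 1.6B-parameter model with AdamW.
    # Assumptions: bf16 weights/gradients, fp32 Adam moments, optional fp32
    # master weights; activation memory and framework overhead ignored.
    params = 1.6e9
    gib = 1024 ** 3

    weights_bf16 = params * 2 / gib       # ~3.0 GiB
    grads_bf16 = params * 2 / gib         # ~3.0 GiB
    adam_moments_fp32 = params * 8 / gib  # m and v buffers, ~11.9 GiB
    master_fp32 = params * 4 / gib        # optional fp32 copy, ~6.0 GiB

    print(f"without fp32 master: {weights_bf16 + grads_bf16 + adam_moments_fp32:.1f} GiB")
    print(f"with fp32 master:    {weights_bf16 + grads_bf16 + adam_moments_fp32 + master_fp32:.1f} GiB")
    # ~17.9 GiB vs ~23.8 GiB: doable on a 24 GiB 3090 with tricks like 8-bit
    # optimizers or gradient checkpointing, very tight with a vanilla setup.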
Hey now! I've got a half terabyte of RAM at my disposal! I mean, it's DDR4 but... it's RAM!
And it's paired with 48 processor cores! I mean, they don't even support AVX512 but they can do math!
I could totally train an LLM! Or at least my family could... might need my kid to pick up and carry on the project.
But in all seriousness... you either missed the point, are being needlessly pedantic, or are... wrong?
This is about learning concepts, and the rest of this is mostly moot.
On the pedantic or wrong notes: what is the documented cut-off for a "large" language model? Because GPT-2 was and is described as a "large" language model, and it had 1.5B parameters. You can just about get a consumer GPU capable of training that for about $400 these days.
Yeah, it's just a semantic pet peeve. Let me ask you this: what is a "Language Model", if this is a "Large Language Model"? Inversely, if a 1.5B model is "Large", then what are the recent 1T-param models? "Superlarge"?
In my own very humble opinion, it becomes "Large" when it outgrows non-specialized hardware. So currently, a model which requires more than 32GB of VRAM is large (as that's roughly where the high-end gaming GPUs cut off).
And btw, there is no way you can train a language model on a CPU, even with DDR5, unless you're willing to wait a whole week for a single training cycle. Give it a go! I know I did; it's an order of magnitude away from being feasible.
> Yeah, it's just a semantic pet peeve.
I'm not sure. Microsoft calls Phi-4 a small language model, so the distinction is considered meaningful to some people working in the space. My own view is that the term "LLM" implies something about the capabilities of the model in 2026. Maybe there's not a hard definition of the term, but whatever the definition is, the model in the article wouldn't make it.
Calling anything "large" in computing is problematic since hardware keeps improving. GPT-1 was an LLM in 2018 and had 117M parameters; when did it stop being large?
GPT would have been a better term than LLM, but unfortunately became too associated with OpenAI. And then, what about non-transformer LLMs? And multimodal LLMs?
Maybe we should just give up, shrug and call it "AI".
> Yeah, it's just a semantic pet peeve. Let me ask you this: what is a "Language Model", if this is a "Large Language Model"? Inversely, if a 1.5B model is "Large", then what are the recent 1T-param models? "Superlarge"?
Sure, we could do it like we did radio frequencies! Most of what we use are "High Frequency" and above... Very High Frequency, Ultra High Frequency, Super High Frequency, Extremely High Frequency.
> In my own very humble opinion, it becomes "Large" when it outgrows non-specialized hardware. So currently, a model which requires more than 32GB of VRAM is large (as that's roughly where the high-end gaming GPUs cut off).
So the definition shifts over time based on the market availability of RAM? And can also go backwards? I can't really see anyone bothering to look up the state of the GPU market in order to determine correct terminology whenever they want to talk about this stuff (or interpret old comments, or...).
That also decouples the terminology from the actual capabilities, which is what people are generally more interested in. GPT-3 counts as a "large" language model at the present time, yet the seemingly much more capable Gemma 4 would have been a large language model back when GPT-3 was in use but isn't one right now.
I kinda question the arbitrary line drawn here too--32GB VRAM? Where I am that's a ~$5-6k problem. I'm not sure I'd call that a "consumer" product any more than the $20k data center cards regardless of the OEM intent, but we could argue semantics on that one too.
Fundamentally, defining it this way just seems kind of... useless? It's borderline a meaningless modifier already. This just defines it in a way that's so complex to use or interpret that it's just meaningless in a different way.
For what it's worth, I'd vote to use "large" to mean "big enough to be general purpose", more differentiating from the small, specialized models that came before.
> And btw, there is no way you can train a language model on a CPU, even with DDR5, unless you're willing to wait a whole week for a single training cycle. Give it a go! I know I did; it's an order of magnitude away from being feasible.
Yeah, was mostly being silly--tried to allude to that with the "intergenerational project" comment toward the end there.
Though I _did_ try doing some inference on CPU, which is how I found out that these Xeons I have don't implement AVX512. Surprisingly Gemma 4 (2B) was able to spit out a solid 13-14 tok/s! Was expecting more like... 0.13.
Then rewrite the title and call it "learn how to build a non-usable LLM from scratch".
Opus 4.7 is non-usable for the tasks I have — but it’s considered an LLM.
And no one is stopping anyone from tweaking a few parameters in this repo to go above 10M parameters.
What tasks is it non-usable for?
The documentation is helpful enough to get started.
If someone is interested, I am giving short courses with walkthroughs on how to train your LLM from scratch via AI Study Camp.
This looks great for a first introduction to training LLMs, and it looks simple enough to try this locally. Great job!
Nice. What scale does this realistically reach on a single machine?
Model: 36L/36H/576D, 144.2M params. It runs on a Blackwell 6000 Max-Q using 86GB of VRAM; training supposedly takes 3h40m.
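For anyone wondering where a number like 144.2M comes from, here's a quick sanity check (assuming a standard GPT block with roughly 4·d² attention weights plus 8·d² MLP weights per layer, and ignoring embeddings, biases, and layer norms):

    # Rough non-embedding parameter count for a 36-layer, 576-dim GPT-style model.
    L, d = 36, 576
    per_layer = 4 * d * d + 8 * d * d   # attention projections + MLP
    total = L * per_layer
    print(f"{total / 1e6:.1f}M")        # ~143.3M, close to the quoted 144.2M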
I would start with linear algebra, some calculus and statistics, and understand how a neural network - which really is just one type of ML model - works, then learn the basics of CNNs and RNNs, then learn transformers and LLMs.
But that is just me. I think it is more useful to understand the hows and whys before training an LLM.
This is a really interesting direction. Thanks for sharing!
Can anyone suggest or come up with viable "use cases" of a custom LLM like this? I wouldn't mind giving it a try but ideally I'm looking for something that is not just a toy.
That's interesting; the UI is good.
I know it's a bit of a joke, but "I Built a Neural Network from Scratch in SCRATCH" gave me, a complete outsider, a lot of insight into how neural networks work.
That’s actually super interesting