Ask HN: Relatively SoTA LLM Agents from Scratch?

3 points by solsane 2 days ago · 3 comments

As we know, OpenAI is not so open.

In 2023 I was playing with transformers and RNNs, and I understood how they worked from top to bottom (e.g. I made my own Keras, could whiteboard small nets), and I can throw things together in Keras or TF pretty quickly.

I got a job and never touched that again. Data and compute notwithstanding, how hard would it be to make a pet-project foundation model using the latest techniques? I've heard about MoE and things like that, and I figure we're not just throwing a bunch of layers and dropout in Keras anymore.
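To make that concrete: my rough mental model of what an MoE block looks like these days is something like the toy PyTorch sketch below, with made-up sizes and names, not code from any real model.

    # Toy sketch of a top-2 mixture-of-experts (MoE) feed-forward block.
    # All sizes and names are made up for illustration.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MoEFeedForward(nn.Module):
        def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
            super().__init__()
            self.top_k = top_k
            # A small gating network scores each token against every expert.
            self.gate = nn.Linear(d_model, n_experts, bias=False)
            # Each expert is an ordinary two-layer MLP.
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
                for _ in range(n_experts)
            )

        def forward(self, x):                          # x: (batch, seq, d_model)
            scores = self.gate(x)                      # (batch, seq, n_experts)
            weights, idx = scores.topk(self.top_k, dim=-1)
            weights = F.softmax(weights, dim=-1)       # normalize over the chosen experts
            out = torch.zeros_like(x)
            # Naive routing loop: send each token to its top-k experts and mix the outputs.
            for k in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[..., k] == e            # tokens routed to expert e at rank k
                    if mask.any():
                        out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
            return out

    x = torch.randn(2, 16, 512)
    print(MoEFeedForward()(x).shape)                   # torch.Size([2, 16, 512])

Only a couple of experts run per token, so you get more parameters without proportionally more compute; real implementations batch the routing instead of looping like this.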

huevosabio 2 days ago

Olmo is AFAIK the only SOTA-ish model with fully open source code and data. Their report is fantastic: https://www.datocms-assets.com/64837/1763662397-1763646865-o...

It should give you an idea of how hard it is to do a SOTA model from scratch!

If you relax the SOTA aspect, Karpathy's nanochat has you covered: https://github.com/karpathy/nanochat

walpurginacht a day ago

I'd suggest you take a read of HuggingFace's writeup from when they trained smolLM3:

https://huggingface.co/spaces/HuggingFaceTB/smol-training-pl...

Rare, detailed insight into the entire process.

bjourne a day ago

Read this article: https://dl.acm.org/doi/10.1145/3712285.3759827 The training algorithms are relatively simple (base training, fine-tuning, RL), but the scale, i.e. the engineering infrastructure, is what's critical. The authors recommend a 128-GPU cluster at minimum and many petabytes of training data.
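To give a sense of "relatively simple": the base-training step is essentially next-token prediction with cross-entropy. A toy sketch (hypothetical `model` and `optimizer`; the data pipeline and multi-GPU plumbing, which is where the real work is, are omitted):

    # Toy sketch of one base-training step: next-token prediction with cross-entropy.
    # `model` and `optimizer` are placeholders; the data pipeline and distribution
    # across GPUs (the genuinely hard parts at scale) are left out.
    import torch.nn.functional as F

    def train_step(model, optimizer, tokens):       # tokens: (batch, seq_len) int64
        inputs, targets = tokens[:, :-1], tokens[:, 1:]
        logits = model(inputs)                      # (batch, seq_len - 1, vocab)
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),    # flatten to (N, vocab)
            targets.reshape(-1),                    # flatten to (N,)
        )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Fine-tuning is the same loop on curated data, and RL adds a reward signal on top; the difficulty is running any of this over hundreds of GPUs and petabytes of text.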
