danielhanchen
- Karma: 2,087
- Created: 4 years ago
About
Unsloth github.com/unslothai/unsloth - finetune Llama 2x faster + use 70% less VRAM
1. Used to work at NVIDIA RAPIDS cuML
2. Discord: https://discord.gg/unsloth
3. Github: https://github.com/danielhanchen
4. Twitter / X: x.com/danielhanchen
5. Email: my handle @ gmail.com
6. Bug fixes for Gemma: https://news.ycombinator.com/item?id=39671146
7. Bug fixes for Gradient Accumulation: https://x.com/danielhanchen/status/1846235913443262891?lang=en
Recent Submissions
1. Kimi K2 Thinking: How to Run Locally (docs.unsloth.ai)
2. LoRA Without Regret (thinkingmachines.ai)
3. Long context GPT-OSS fine-tuning (unsloth.ai)