funfunfunction
- Karma: 356
- Created: 9 years ago
About
80eafd64915ed77b50fd68efed69bfa51407b135a5f7b093006e175e4c23296a

Recent Submissions
- 1. ▲ Show HN: Project AELLA – Open LLMs for structuring 100M research papers (aella.inference.net)
- 2. ▲ Hybrid-Attention models are the future for SLMs (inference.net)
- 3. ▲ Show HN: Using LLMs and >1k 4090s to visualize 100k scientific research articles (twitter.com)
- 4. ▲ Viral GPT wrappers are now training their own LLMs (twitter.com)
- 5. ▲ UWU – generate CLI commands without leaving the terminal (github.com)
- 6. ▲ Show HN: UwU – Generate CLI commands inline with GPT-5 (github.com)
- 7. ▲ How much energy does it take to produce an LLM token? (energy.inference.net)
- 8. ▲ When to use model distillation in production (inference.net)
- 9. ▲ Show HN: Batch inference for large-scale synthetic data generation (inference.net)
- 10. ▲ Show HN: Costco for LLM Tokens (inference.net)