Ask HN: Do we understand how neat LLMs are?
In about an hour on Saturday morning, I had Claude Code:
- Write, train, and test a Bayesian classifier
- Write, train, and test a 2-layer neural network from scratch in Rust
- Write, train, test, then optimize a from-scratch conv-net in Zig
- Run detailed benchmarks across several scenarios comparing Rust and Zig performance
Then this morning, I had it build me my own Zig-based static site generator in about 20 minutes. It works perfectly.
This tech can be maddening when you're writing production code, but it's pretty incredible for exploration.
Do we fully grasp the possibilities? That last point pretty much nails it. Starting from zero? AI can get you from nothing to something at blistering speed. Working on a real product, in production, at scale? Tread carefully.