I’ve been surprised by, and have enjoyed, one aspect of using large language models more than any other.
They often put into words things I have long understood, but could not write down clearly. When that happens, it feels less like learning something new and more like recognition. A kind of “ok, yeah” moment.
I have not seen this effect discussed much. I also think it has improved how I think.
Much of what we know is tacit
Take my own field. As programmers, we build up a lot of understanding that never quite becomes explicit.
You know when a design is wrong before you can say why. You sense a bug before you can reproduce it. You recognize a bad abstraction instantly, even if it takes an hour to explain the problem to someone else.
This is not a failure. It is how expertise works. The brain compresses experience into patterns that are efficient for action, not for speech. Those patterns are real, but they are not stored in sentences.
The problem is that reflection, planning, and teaching all require language. If you cannot express an idea, you cannot easily examine it or share it.
LLMs are good at the opposite problem
Large language models are built to do exactly this – turn vague structure into words.
When I ask a question about something semi-obvious to me, something I believe is true but am not sure why, the model responds with a formulation. It steps through each reason the thing might be true. Each point is largely independent of the others, which lets me weigh, swap, and re-order the arguments it makes.
Putting things into words changes the thought
Once the LLM writes down an idea, I can then play with it in my mind.
Vague intuitions turn into named distinctions and my implicit assumptions become visible. At that point I can test them, discard them, or refine them.
Of course, this is not new. Writing has always done this for me. What is different is the speed. I can explore half-formed thoughts, discard bad descriptions, and try again. That encourages a kind of thinking I might have otherwise skipped.
The feedback loop matters
Over time I’ve noticed that I now do this even without an LLM to hand. Can I “phrase, in precise language, what I am thinking, feeling, or believing right now, and why?”
In that sense, the model is not improving my thinking directly. It is improving how I use language, improving the effectiveness of my internal monologue. And since reasoning depends heavily on what one can represent explicitly, that improvement can feel like a real increase in clarity.
The more I do this, the better I get at noticing what I actually think.