Eric S. Raymond: why is there such a huge variance in results from using LLMs?
He thinks they ought to converge. What does he think they ought to converge upon? How will he know that thing when he sees it? If he will know it when he sees it, why does he need help making it?
The answer to all of these, of course, is that convergence is not expected and correctness is not a priority. The use of an LLM is a boasting point, full stop. It is a performance. It is "look, Ma, no coders!". And it is only relevant, or possible, because although the LLM code is not right, the pre-LLM code wasn't right either. The right answer is not part of the bargain. The customer doesn't care whether the numbers are right. They care how the technology portfolio looks. Is it currently fashionable? Are the auditors happy with it? The auditors don't care whether the numbers are right: what they care about is whether their people have to go to any -- any -- trouble to access or interpret the numbers.
The variance mostly comes down to prompt craft and context management.
People who get consistently good results have usually internalized a few things: (1) being explicit about constraints and output format, (2) providing relevant context without noise, (3) matching the model to the task (reasoning-heavy vs creative vs code), and (4) iterating on the prompt when something fails rather than assuming the model is broken.
I've seen the same person get wildly different results depending on whether they ask "write code to do X" vs "I need a function that takes A, returns B, handles edge case C, and should be optimized for D. Here's the existing code it needs to integrate with: [context]."
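To make that contrast concrete, here is a minimal sketch in Python. The helper names (vague_prompt, structured_prompt) and the exact prompt layout are hypothetical; it only assembles prompt strings and calls no model API. The point is just that the second style makes inputs, outputs, edge cases, optimization target, and surrounding code explicit instead of implied.

    def vague_prompt(task: str) -> str:
        """The "write code to do X" style: no constraints, no context."""
        return f"Write code to {task}."

    def structured_prompt(
        task: str,
        inputs: str,
        returns: str,
        edge_cases: list[str],
        optimize_for: str,
        existing_code: str,
    ) -> str:
        """The explicit style: spell out the contract and paste in the
        code the result has to integrate with."""
        lines = [
            f"I need a function that {task}.",
            "",
            f"It takes {inputs} and returns {returns}.",
            "",
            "Edge cases it must handle:",
            *(f"- {c}" for c in edge_cases),
            "",
            f"It should be optimized for {optimize_for}.",
            "",
            "Here is the existing code it needs to integrate with:",
            existing_code,
            "",
            "Return only the function definition, with type hints and a docstring.",
        ]
        return "\n".join(lines)

    if __name__ == "__main__":
        print(vague_prompt("deduplicate user records"))
        print("----")
        print(structured_prompt(
            task="deduplicates a list of user records",
            inputs="a list of dicts keyed by 'email' and 'signup_date'",
            returns="the list with duplicate emails removed, keeping the earliest signup",
            edge_cases=["emails differing only by case", "records missing 'signup_date'"],
            optimize_for="readability over raw speed",
            existing_code="def load_users(path: str) -> list[dict]: ...",
        ))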
The gap between those two approaches can be a 10x difference in usefulness. Most of the "LLMs are useless" crowd and the "LLMs are magic" crowd are just working with very different prompt habits.
What are your exact prompts (including project context) and generated code?
And for those who are struggling with LLMs, what are their prompts and code?