Ask HN: Should LLMs be all-knowing experts or rapid-learning minds?
Should we build LLMs as if they're all-knowing experts in everything from programming languages to neuroscience?
Is that really the right way to look at them? Or should they instead be built to be exceptionally good at thinking, planning, and reasoning, with the capacity to learn and adapt incredibly fast, much like a fresh human brain?

What makes you think you have a choice? That's like asking whether binary heaps should be more ambitious or more polite. Most of what you're describing is either faked by something outside the LLM, or, if it gets invented later, will involve some completely new approach that goes by a different name. If you're thinking of prompts like "be friendly", that's not imbuing a behavior into the LLM itself; it's adding a prefix to the document in the hope that it will have a desirable effect on what gets appended.

I think my question wasn't properly understood. I was asking whether LLMs should be thought of as a "fresh new" smart brain that can learn anything, just like a smart human kid, or as a brain that comes with preloaded knowledge (like how it is today).

I dislike both options, since "learn" and "knowledge" are overstating things, but of the two: preloaded. That first metaphorical human child would be suffering from devastating anterograde amnesia and would periodically be eliminated and replaced with a fresh clone from the Training Vats.

I say "overstating" because a lot of what we humans perceive from LLMs is a kind of self-trickery. Other humans use the LLM to generate a document in which fictional characters interact, then "act out" one of the characters so that we confuse the character with the author; our own intelligence and knowledge are then (automatically, instinctively) engaged in "filling in" the character being described.
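The "prefix to the document" point above can be made concrete with a toy sketch. This is not a real model API; `complete` is a hypothetical stand-in for an autoregressive LLM, included only to show that a system prompt and the model's behavior live in the same flat document:

```python
def complete(document: str) -> str:
    # Stand-in for an autoregressive LLM: all it can ever do is
    # append text to the document it was given. There is no separate
    # "behavior" channel. (Toy function, not a real model.)
    return document + " Sure, happy to help!"

# A "be friendly" instruction is just a prefix on that document,
# in the hope it steers what gets appended after it.
prefix = "You are a friendly assistant.\n"
user_turn = "User: How do binary heaps work?\nAssistant:"

output = complete(prefix + user_turn)
print(output.startswith(prefix))  # the instruction is part of the same text
```

The point of the sketch is that nothing about the model itself changed; only the text it continues did.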