Giving LLMs a personality is just good engineering

seangoedecke.com

29 points by dboon 14 hours ago · 28 comments

maciusr 2 hours ago

The article nails the "why," but I think it underplays something: personality isn't a post-training artifact you can tune away. After months of agentic coding with Claude Opus / Sonnet and GPT 5.2/5.3 Codex, the differences in personality are profound and functional. Claude talks more; it's the consultant who tells you what it'll do before doing it. The GPT Codex models are the analyst-developer: you just let one go and it tells you what the hell it did. Neither is wrong; they are genuinely different approaches to working. And more importantly, neither AGENTS.md files nor system prompts significantly change this. You can tweak tone, but the underlying communication pattern is baked in. Perhaps the more interesting question isn't "should models have personality" but "which model's personality most closely aligns with how you think." Some people excel with the verbose collaborator; some just want the silent executor. We accept this about human colleagues; it seems overdue to accept it about models too.

RugnirViking 10 hours ago

I think this misses something, which is that there is absolutely the option to move toward a region that is more "tool-like". See the difference between Kimi K2 and many of the leading LLM providers. It's a lot better at avoiding sycophancy, emotive reasoning, etc. It's not as capable as the others, and it is of course possible that that's why, but I find use for it regardless, because of its personality.

  • maciusr 2 hours ago

    This matches what I observe in the Claude and GPT Codex models on coding tasks. The personality differences go deeper than tone; they shape how each model approaches its work. Claude communicates a lot by default, while GPT Codex simply executes. System prompts and context files hardly change this behavior. Your point about Kimi K2 is intriguing because it suggests there's a real spectrum here: being "more tool-like" is a valid region to aim for, not just a failed personality. The question is whether that region can still handle complex tasks, or whether the article's argument holds and some capability is lost.

  • awakeasleep 2 hours ago

    ChatGPT can do a terrific job of this, too— if you select the “Efficient” base style and tone, plus turn off the Warmth, Enthusiasm, and Emoji sliders.

    So many people would benefit from this; I wish they advertised these config settings more.

qezz 9 hours ago

The statement in the article's title is very strong, and I have not found it confirmed in any logical sense. The author observes the current state of LLMs and draws a conclusion from how things happened to turn out, somewhat fitting the conclusion to the observation.

Towaway69 9 hours ago

I personally find it nicer when the AI communicates quite clearly: "Hi there, sorry to interrupt, but I have just launched a nuclear first strike on the enemy. This, I thought, would be best for the current situation." instead of "WARNING! Nuclear first strike began".

Gives destruction that human touch.

Why are we counting sand grains at the beach? Yesterday we were talking about AI-driven weapons of mass destruction, and today we're arguing about whether AIs should have a personality or not. F'A!

  • sunaookami 9 hours ago

    "But you nuked the wrong target??"

    "You are absolutely right and I apologize. Let me try a different approach..."

wisty 10 hours ago

The actor playing Data in Star Trek has a personality, but can give a neutral sounding answer to a question.

  • ginko 10 hours ago

    I still think someone should set up a voice chat bot that answers to "Computer!" and has Majel Barrett's monotone voice.
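    A toy, text-only sketch of that idea; every name here is made up, and a real version would need speech-to-text in front and a suitably monotone TTS voice behind it:

```python
# Toy text-only sketch of a wake-word dispatcher that answers to
# "Computer!". All names are hypothetical; speech-to-text and the
# TTS voice are out of scope here.

WAKE_WORD = "computer"

def handle_utterance(text):
    """Reply only when the utterance starts with the wake word."""
    words = [w.strip("!?.,") for w in text.lower().split()]
    if not words or words[0] != WAKE_WORD:
        return None  # not addressed to the computer; stay silent
    query = " ".join(words[1:])
    return f"Working. Processing request: {query}." if query else "Working."

print(handle_utterance("Computer! Locate Commander Data"))
print(handle_utterance("what time is it"))  # None: no wake word, no reply
```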

tw-20260303-001 6 hours ago

This article doesn't answer why it is good practice.

> You need to prime it with some kind of personality (ideally that of a useful, friendly assistant) so it can pull from the helpful parts of its training data instead of the horrible parts.

No, you have to give it enough context so that it can start finding an answer, but it certainly doesn't need a personality. Try it yourself: instead of telling it "you are ...", tell it "your task is ...". No personality, simply expectations.
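A minimal sketch of the two framings, with entirely hypothetical prompts; the messages list mirrors the common chat-API shape, but nothing is sent to any model:

```python
# Contrast between a persona framing ("you are ...") and a task
# framing ("your task is ...") of the same request. The prompts are
# made-up examples for illustration only.

persona_prompt = [
    {"role": "system", "content": "You are a helpful, friendly assistant."},
    {"role": "user", "content": "Summarize this changelog in three bullets."},
]

task_prompt = [
    {"role": "system",
     "content": "Your task is to summarize changelogs in exactly three "
                "bullets, with no preamble and no follow-up questions."},
    {"role": "user", "content": "Summarize this changelog in three bullets."},
]

def framing(messages):
    """Classify the system message as persona-style or expectation-style."""
    system = messages[0]["content"].lower()
    return "persona" if system.startswith("you are") else "expectations"

print(framing(persona_prompt))  # persona
print(framing(task_prompt))     # expectations
```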

porknbeans00 4 hours ago

Genuine People Personalities...

jdub 10 hours ago

This is a very optimistic, pro-technology-cleverness point of view.

I recommend reading the linked persona selection model document. It's Anthropic through and through - enthusiastic while embracing uncertainty - but ultimately lots of rationalisation for (what others believe is) dangerous obfuscation.

Havoc 9 hours ago

I don't think personality is an issue either way. Long-term memory seems like a much stronger candidate for inducing psychosis: if a person goes down a rabbit hole, the bot not only amplifies it but does so over an extended time, in an enduring way.

5o1ecist 8 hours ago

> My guess is that if you tried to make a “less human” version of Claude, it would become rapidly less capable.

All my observations across different models/engines agree with this. The more they're forced into behaving in some specific way, i.e. less like an intelligence and more like a tool ("tools on" included), the worse their cognitive abilities.

They might know everything about anything, yet nobody actually teaches them how to think properly and correctly.

This is only getting worse, not better, the less people treat them like actual intelligences ... and I can't say I'm confused about why people tend not to, but it has nothing to do with AI itself.

The difference between the default state for the average user and "tools off", removing unjustified affirmations, dishonesty, and idle speculation (that's any and all "maybe", "probably", and "almost certainly"), is dramatic.

  • dTal 3 hours ago

    This shouldn't be surprising. They are ultimately trained on human-generated text! Or on text generated by something trained on human-generated text, or some even deeper recursion. In the end, all "intelligence" is an emergent consequence of emulating humans, and the less human-like you make them, the further they get from the "source" of their intelligence. It wouldn't be a problem if we knew how to teach programs "how to think", but we don't! That is why, in 2026, we train language transformers on huge corpora instead of symbolically programming expert systems in Lisp.

    Something I'm kind of surprised by is the lack of interest in bootstrapping language models into something like a "person". Not a butler, assistant, programming tool, doctor, therapist, sycophant, whatever - a convincingly independent person with thoughts and feelings, moods, flaws and all. Maybe there isn't economic demand for it.

  • ForHackernews 8 hours ago

    LLMs cannot, as you put it, "properly, correctly think"

    So-called reasoning models are hallucinating: their self-reported "reasoning" does not reflect their inner state. https://transformer-circuits.pub/2025/attribution-graphs/bio...

    (before someone comes at me, yes, humans can also lie about their inner state but we are [usually] aware of it. Humans practice metacognition and there's no evidence LLMs can distinguish truth from hallucination)

    • 5o1ecist 8 hours ago

      > LLMs cannot, as you put it, "properly, correctly think".

      "My theory trumps your experience." ... okay!

      You'll keep working with what you have and I'll keep working with what I have.

      > Humans practice metacognition and there's no evidence LLMs can distinguish truth from hallucination

      Yes and no. Humans have the capability of doing so, but all evidence suggests it rarely happens.

      I have a huge background in psychoanalysis and neurolinguistic programming. The lack of evidence you perceive doesn't stem from incapability, but from a lack of exposure to evidence proving you wrong ... and I'm not going to give it to you, because that'd be dumb.

      If you don't want to believe me, that's not my problem.

      • beej71 3 hours ago

        But we at HN have also historically called your experience "anecdata" and taken it with a grain of salt. Don't take offense; provide more data.

        I humbly suggest that a more hacker response would be, "That's really interesting that my experience doesn't agree with that study. Let's figure out what's going on."

      • ForHackernews 5 hours ago

        I linked you a paper from one of the leading AI shops in the world demonstrating that the "Chain of Thought" reported doesn't match up with the actual activation inside the model, and you replied that you're an expert on some human psych stuff that may or may not even be real[0].

        Forgive me if I don't immediately bow to your expertise.

        [0] https://pmc.ncbi.nlm.nih.gov/articles/PMC11293289/

column 11 hours ago

ok but then why is ChatGPT's personality so infuriating? "It's not just X, it's Y." "Here it is, no extra text, no fluff."

  • kuerbel 10 hours ago

    I used ChatGPT often but switched to Lumo a few days ago. I like Lumo a lot. It almost never ends with a follow-up question, and when it does, it's a sensible/useful one. It readily searches the web when it's not quite sure of the correct answer. It's also privacy-first, and it's based on a Mistral model.

    • solarkraft 7 hours ago

      > It almost never ends with a follow up question

      Oh my god. I hate this so much. Gemini’s Voice mode is trained to do this so hard that it can’t even really be prompted away. It completely derails my thought process and made me stop using it altogether.

  • yorwba 10 hours ago

    Part of what makes it so infuriating is that it uses the same patterns so often, the other part is that it's not very good at using them—the revelation that it's Y and not X is typically incredibly banal, not some profound observation.

    But it was always going to over-attempt things it's not good at. It attempts these things in particular because skilled human writers really do use similar flourishes quite a lot. Imitating them lets the model superficially appear to be a good writer, which is worse than actually being a good writer, but better than superficially appearing to be a bad writer.

    A different training process might try to limit the model to only attempt things it can do 100% perfectly, but then there wouldn't be a lot it could do at all.

  • criemen 10 hours ago

    I tried ChatGPT over the holidays (paid) vs. claude.ai (paid). After trying some prompts that worked well on Claude in ChatGPT, I understand why people are so annoyed about AI slop. The speech patterns in text output for ChatGPT are both obvious and annoying, and impossible to unsee when people use them in written communication.

    Claude isn't without problems ("You're absolutely right"), but I feel that much of the perception there comes from the limited set of phrases the coding agent uses regularly, and less from the multi-paragraph responses of the chatbot.

lou1306 8 hours ago

The concerns with giving the machine "a personality" or other human traits are mainly ethical, and cannot be swept under the "good engineering" rug so easily.

Consider this: your country starts basing its policy on a teleological view of history. It's good engineering for a society! Your KPIs are going up all the time, your country is doing great. But ten years down the road you have to iron out the underlying ethical issues on the streets of Stalingrad.
