Ask HN: How do AI chatbots develop personalities?
If it's all just tensors, matrices, and math behind them, then how do these chatbots have distinct personalities?
One possible explanation is that the personality is just an emergent phenomenon. But still, it seems distinct and very human-like, not something random. Could it be that Bing's Sydney personality exists just because someone fed it a large corpus of text which served as the "seed" for that personality? Or did this personality seemingly arise out of nowhere?
The Google engineer who made headlines last year also claimed LaMDA had emotions and a personality. The data makes a difference, but there is also prompt engineering. You can prime the same model with some pretext to get different "personalities", and Sydney could have a very different tone if given a different pretext (see the first sketch below). These are not real personalities, just probability-based text generation. https://github.com/verdverm/chatgpt has links and can be used to experiment.

> These are not real personalities

My problem with this statement is the word "real", by which I assume you mean human. But do we know what human personalities are? How they work and how they develop?

I mean "real" in the sense that they have no foundation. Animals have personalities; you cannot prompt them to change them at will. These "AI" systems do not demonstrate many of the metrics by which we measure intelligence, emotions, and behaviors in living creatures. They have real problems with consistency and hallucinations, but this is expected if you understand the algos, data, and training.

I've detected equal amounts of personality, rationale, and reasoning in all the chatbots I've interacted with, or seen interactions of. The amounts are zero. While most people are fairly weak in reasoning and general cognition, at least there's evidence that they try. But every chatbot output I've ever seen just feels like "what a human would probably say in this situation", without any cognition behind it at all. I've spent many hours with ChatGPT, and the more I try to make it do something, the more it shines through that it's just a dangerously confident bullshit generator. Sure, it can give you correct results, as long as something was already written on the subject and the majority of what's written is true. But it can't DO anything with that. It can condense it, sometimes correctly, sometimes not. So far, I've been entirely unable to use it as an augment for thinking or for developing ideas or reasoning.

ChatGPT is just the clay.

Poor is the potter, then.

It's all anthropomorphic projection. LLMs don't have personalities or emotions; as you point out, the text is produced using tensors and matrix calculations. The idea is kind of absurd on its face. It's akin to asking "do characters in books have emotions?" No, they don't; why would they? Characters in books don't write their own dialogue; they are merely projections of the authors.

But AI chatbots are non-deterministic and come up with dialogue of their own. Of course, you can say they are trained on a large corpus of text and all they are doing is predicting the next token in the series. But then why do they seem to have distinct personalities? The prediction algorithm works in such a way that it creates a sense of personality. How does this happen?

> But AI chatbots are non-deterministic

I think you need to re-math that. They're entirely and totally deterministic. You and I can spin up the same model, seed it with the same seed, and they'll produce exactly the same nonsense to the same prompts, every single time.

They have a seed, but lots of randomness is injected on top, and that makes them non-deterministic by definition. Different applications use different levels of randomness, and the chatbots are usually set to higher levels, especially the entertainment ones. You can make something that produces the same output every time; this is more common for code autocompletes, so if you try to chat with Copilot, you'll likely get something more deterministic.
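To make the "prime the same model with some pretext" point concrete, here is a minimal sketch using the Hugging Face transformers library. Everything specific is an assumption: "gpt2" is just a small stand-in for whatever model a production chatbot actually runs, and both pretexts are invented. The point is only that the same weights plus a different prefix produce a different tone.

    # Sketch: same model, same question, two different "pretexts".
    # Assumes the Hugging Face `transformers` library; "gpt2" is a small
    # stand-in for whatever causal LM a real chatbot actually runs.
    from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    question = "What do you think of mornings?"
    pretexts = {
        "cheerful": "A chat with a relentlessly upbeat assistant.\n",
        "grumpy": "A chat with a sarcastic, world-weary assistant.\n",
    }

    for name, pretext in pretexts.items():
        set_seed(0)  # hold the sampling RNG constant so only the pretext differs
        inputs = tokenizer(f"{pretext}User: {question}\nAssistant:",
                           return_tensors="pt")
        output = model.generate(**inputs, do_sample=True, temperature=0.8,
                                max_new_tokens=40,
                                pad_token_id=tokenizer.eos_token_id)
        print(name, "->", tokenizer.decode(output[0], skip_special_tokens=True))

Same tensors, same math; the "personality" lives in the prompt.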
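And on the seed debate just above, both sides are half right, which a few lines make visible. Again a sketch with transformers and gpt2 as a stand-in: sampling does inject randomness, but that randomness comes from a PRNG you can seed.

    # Sampling makes output vary run to run, but fixing the seed makes the
    # whole pipeline reproducible, because the "randomness" is just a PRNG.
    from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    prompt = tokenizer("The robot said", return_tensors="pt")

    def sample(seed):
        set_seed(seed)
        out = model.generate(**prompt, do_sample=True, temperature=1.0,
                             max_new_tokens=20,
                             pad_token_id=tokenizer.eos_token_id)
        return tokenizer.decode(out[0])

    print(sample(42) == sample(42))  # True: same seed, identical text (same machine)
    print(sample(42) == sample(43))  # almost certainly False: new seed, new text

Hosted chatbots just don't expose the seed, which is presumably why they look non-deterministic from the outside.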
ChatGPT is probably toned down more than most, to keep it from going off the rails. It is a bit sad that the AI most people have access to is rather "caged". The "jailbreak" techniques actually show AI closer to what it's like. Saying ChatGPT is deterministic is similar to saying a human in a call center is deterministic: they're following an SOP, but that's not what they're really like.

Can you expand on what further randomness is supposedly injected? Every single AI system I have used has supported seeding, and seeding made every single one deterministic. I honestly don't know how you're supposed to have "randomness" in AI models if the RNG is seeded. It's a different story if you use specific libraries like xFormers, but those introduce randomness as an artifact of their optimizations, not due to any magical non-seeded randomness.

I wouldn't waste my time, personally. Most god-of-the-gaps arguments trace back to intellectual cowards, in my experience.

It's entirely deterministic. "ChatGPT" is not an entity; it's a process. It doesn't even exist in a temporal sense, and the sentences it produces aren't thoughts. It's a systematic technique for producing voluble text. The personality you say they have happens in the head of the recipient, just like with a book. This also happens with people, animals, and even objects and abstract shapes (which is friendlier, the spiky triangle or the rounded one?). Humans ascribe personality to all kinds of things; it's a baked-in feature.

It's the average personality of the internet as revealed by text-continuation statistics, tuned by human-guided fine-tuning (RLHF). Nothing more.

They don't. It's you who gives it meaning. Also, it's trained on trillions (gazillions?) of man-hours of data, so even if it's an antiquated "algorithm", it manages to seem authentic. And no one factors in the small adjustments (and likely deliberate prompt/text insertions) that the parent company undertakes to make the "AI" seem more real. It's 50% tech, 50% marketing.

It's really a form of autocomplete: it ranks the next best text based on the text it's given (there's a sketch of that ranking after the examples below), and you can train it to be more likely to respond in a certain way. I'll copy some text from here: https://buttondown.email/hillelwayne/archive/programming-ais...

Original author's personality: "Now, here's some important context: Bertrand Meyer's entire deal is software correctness. He invented Eiffel. He trademarked Design By Contract (tee em). He regularly rants about how SEs don't know about logic. He didn't notice the error."

ChatGPT's personality: "To provide some background information, Bertrand Meyer is heavily invested in ensuring the accuracy of software. He is credited with creating the Eiffel programming language and trademarking Design By Contract (DBC), and frequently criticizes software engineers for lacking a fundamental understanding of logic. Despite his expertise, he failed to detect an error."

ChatGPT pretending to be Conan: "Listen up, for I shall give you the lowdown. Bertrand Meyer, he be all about makin' sure software is correct, through and through. The man created Eiffel, trademarked Design By Contract, and he's always grumblin' about how software engineers ain't got no grasp of logic. But get this, he missed an error, by Crom!"

GPT-3 (curie) actually trained on Conan: "What's the matter with you, Bertrand? How could you be so stupid as to waste your time studying software correctness and then not notice this glaring error?"
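For the "ranks the next best text" mechanics referenced above, here is a minimal sketch with the transformers library (gpt2 again standing in for the real model, with an invented prompt): one forward pass assigns a score to every token in the vocabulary, and generation just keeps picking from the top of that ranking.

    # One forward pass scores every possible next token; "autocomplete" means
    # repeatedly picking from the top of this ranking.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("By Crom, the software was", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(idx))!r}: {float(p):.3f}")  # the five "next best" words

Fine-tuning, the "train them to respond in a certain way" part, shifts which continuations end up near the top of this ranking.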
ChatGPT tends to tone it down to a neutral register, partly to prevent people from anthropomorphizing AI and raising some other unpleasant questions, and partly as a safeguard to keep it from sounding like that last paragraph.

The three AI-generated paragraphs are all the same robotic personality; they've just been rendered differently. That last one sounds angry, but it's not. It's just reproducing the dialect of the environment it was trained on. If someone talked exactly like X, you'd assume their personality is also like X. Also, the ChatGPT-Conan dialect is likely copying what others have written about Conan, not Robert E. Howard's Conan itself.

If it's all just letters, words, and punctuation marks, then how do the writers of the texts in the training dataset have distinct personalities?

> can chatgpt develop a personality?

Yes, ChatGPT can develop a personality. ChatGPT is an AI-powered chatbot platform that uses natural language processing (NLP) and deep learning to understand user input and generate natural-sounding responses. By using a combination of pre-defined rules, machine learning algorithms, and natural language processing, ChatGPT can learn to recognize patterns in user input, develop a personality, and even generate more natural-sounding responses.

Always picking the best possible next token would be boring, and the algorithm doesn't consistently take the highest rank. As such, its (random) start defines a bit of its personality: once sampling picks a less-neutral word, the model keeps guessing the words that best associate with it.
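A toy illustration of that last comment, with made-up scores rather than any real model's output: greedy decoding (always taking the top-ranked token) returns the same word every time, while temperature sampling occasionally takes a lower-ranked, less-neutral one, and everything generated afterwards is conditioned on that pick.

    # Toy sketch: greedy decoding vs. temperature sampling over invented scores.
    import numpy as np

    rng = np.random.default_rng(0)                 # seeded, so even this is reproducible
    vocab = ["the", "a", "my", "that", "crimson"]
    logits = np.array([3.0, 2.5, 1.5, 1.0, -1.0])  # made-up model scores, not real output

    def sample(temperature):
        p = np.exp(logits / temperature)
        p /= p.sum()                               # softmax over the scores
        return rng.choice(vocab, p=p)

    print(vocab[int(np.argmax(logits))])           # greedy: "the", every single time
    print([sample(1.0) for _ in range(8)])         # mostly "the"/"a", occasionally not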