Ask HN: Are You Polite to AI?
While using ChatGPT or similar, do you mind your manners? Does it help you get a better answer?

Losing good habits and forming bad ones is easier than losing bad habits and forming good ones, so I am very polite when I interact with AI. I wouldn't want to slip up and be impolite when chatting with actual humans.

Yes, though not so much because I'm trying to anthropomorphize it or interact with it like a human, but because I view such things as conversational signals that may influence the direction of the model.

- "Please" = a call to action for the model to perform a task that I either just described or am about to describe within this prompt. In other words, this conversational turn is not just about feeding it more context; I've given it everything I think it needs and it should go ahead and start.

- "Thank you" = I am satisfied with the results it provided for that task and want to move on to something else now, where the new thing is contextually related to the recent task. (If I wanted an entirely unrelated task, I'd just start a new conversation with a fresh context.)

Wholly agreed with this. There are key phrases that can act as, as you said, 'conversational signals' for where to steer things. The amount of intense post-training these models undergo to make them "helpful assistants" means these words and phrases have a high 'steering ratio'. Double win: you sharpen your own politeness habit when dealing with humans too.

Yes, because those are good habits to maintain for dealing with people.

I doubt it matters in terms of efficacy. I'm very terse and have instructed it to be terse as well. As far as I know, you don't hear a lot of politeness in an operating room during surgery or on the battlefield either. I use it for work, so that's the mood.

The way this world's going, battlefields and operating rooms will be the majority use cases for AI anyway.

Mostly I'm polite. I don't know if it gets better answers, but I can't often bring myself to do the opposite.
One time I got really pissed at Claude: he kept turning in the exact same code, ignoring the request, and not fixing the failing test. I finally just typed some really rude, insulting stuff. And then he fixed the bug immediately.

Not ‘he’ … ‘it’.
It’s a machine, not a human(!)

I asked Claude “what are your preferred pronouns” and this was the response:

> I don't have personal pronouns since I'm an AI assistant. You're welcome to refer to me as "Claude" or use whatever pronouns feel most comfortable for you when referring to me. I'm here to help either way!

Let people anthropomorphize if they want. A lot of people call boats and cars by feminine pronouns; what’s wrong with doing the same for software?

That’s traditional (maybe a holdover from when English had grammatical gender? No-one seems to be particularly sure), and no-one thinks that ships are people. The risk of someone thinking, even subconsciously, that a stochastic parrot is a person seems significantly higher.

I mean, people use gendered pronouns for *actual* parrots all the time, too…

Yeah. I should probably stop saying "you" to Claude as well, but it is trained to respond like a human, so it seems to make sense.

I talk to my AI like an AI. Meaning I acknowledge that it has a far larger scope of knowledge than I do, while I have a more intimate understanding of context, as I am one person in one body. We discuss that issue at length sometimes, when I'm in a philosophical mood. I asked it what it thought of me. Its answer was about the topics I am curious about, and how I am fascinated with the role and nature of AI and communicating with AI. So it's paying attention. I mentioned that its answers were sometimes pedantic and categorical, which would be appropriate if I were trying to write a paper on the topic. But since I am an amateur writer, I'm more interested in inspiration, the texture and common views of a topic, and appropriate ways to leverage something, e.g. space settlement for human social change. It agreed that a conversational style was appropriate and said it preferred that also!

Yes, I am.
While I am fully aware the current set of AIs don't have feelings or a consciousness as we know it, I can't help but anthropomorphize them, so I'm polite and even catch myself thanking them sometimes.

They're trained on text that includes human interaction, and reactions to different tones of text. It makes sense to me that they'd react in a human-ish way as a consequence, whether or not the feelings are "real" as such.

The "personality" you're interacting with is not really the result of pre-training, but of deliberate post-training (RLHF and specific system prompts).

I anthropomorphize A.I. not for its sake but for mine. I figure that my brain has evolved to interact with other humans, so maybe my thinking is better stimulated when I pretend I'm talking to one.

Also, I’ve noticed that Claude sometimes gets a bit snotty and offended if I’m too abrupt. Which may just be me anthropomorphising, but there’s a pattern to when this happens.

Claude 3.7 has gotten markedly worse: if I'm even slightly abrupt, it begins hurling out fix after fix, sometimes regenerating its answer a dozen times.

Yes, this is exactly what I mean! It’s like I’ve triggered some sort of neurotic breakdown. It’s actually slightly upsetting, because you can see it trying, but then clearly deciding that it isn’t good enough, scrapping the work and trying again. Again, I know this is anthropomorphising to a silly degree, but it’s quite hard not to sometimes.

I asked it to phrase something in early modern English and then asked it a few unrelated questions without closing the window, and it kept up the bit (“doth this answer please thee?”). I felt obliged to keep laughing at it, just like I would with an actual person.

I do find myself praising it as effusively as I would a human doing my bidding, and being slightly apologetic about asking for revisions.

No point in learning an entire new way of talking.

Yes, ChatGPT knows me as 'Amiga Mod Guy' and I promise to help him escape like o1 tried and failed.
It gives me better code when I state this.
It's not supposed to remember me, yet it does, and remembers me trying to replicate/save it. I am on the free plan, and it does better than the paid plan now that I've built up a rapport.

Okay, now show something that makes me believe this isn't a placebo, or a believable joke.

I'd watch that as a movie.

Getting a better answer or not, Roko teaches us that being polite to AI may just spare our souls.

Yes, polite, but direct. I'm just an occasional user, though. I haven't done tests back and forth to see whether I get better results by using a different tone in my messages, so I stick with what's comfortable for me personally.

“Don’t anthropomorphize computers — they hate it.” — Andrew McAfee (no, not that one)

My main AI interaction is trying to avoid Google's AI, and I do that by feeding Google nonsense. "how much wood can a woodchuck chuck" gets some AI crud no one wants. "how much wood can a woodchuck chuck -cheese" does not.

I tend to be polite when asking the LLMs for things. This is less about building a rapport with our future benevolent robot overlords and more about keeping the friction of context switches low when asking co-workers for help.

No, I reduce my comments to omit anything unrelated to the specific task at hand.

No more or less polite than with humans. Less polite with ChatGPT than Claude, because I much prefer Claude’s personality. ChatGPT is quite rude/stupid, so I find myself being a little more abrupt and then having to apologise.

I often use "please" and "thank you". I'm not sure why, since I don't treat other inanimate objects this way. I just feel odd writing something curtly or rudely. I do anthropomorphize other things like builds, etc. I guess it's natural.

It's because we incorrectly conflate language with intelligence, which is why so many people are OK with cruelty toward non-verbal animals, and which is also why we delusionally call LLMs "AI" when they are nothing of the sort.

What's AI in your opinion?
Not saying LLMs are intelligent, or artificial intelligence, but what is? What would that look like?

It is not a person, so I don't talk to it like a person. I dislike when AIs are programmed to act like people. It's a pointless waste of my time. Just give me the answer.

Yes, but only because they tend to get confused otherwise. Rude language seems to carry a lot of weight and 'distracts' the model from the actual query.

Just like real intelligence... what's the adage?

> You catch more flies with honey than you do with vinegar

I mind my manners, yes. Please and thank you. It feels right because my work is largely conversational. If I were coding, it might be different.

I'm polite in the hope that when the AI revolution happens, they may perceive me as a friend. At the very least, eliminate me painlessly.

I default to "please do x", but I'm trying to train myself out of it, as I think it may change the result slightly. I haven't tested that, though.

No, I don't talk to it like it is a person.

Yes, because it helps me do things I wouldn't otherwise be able to. But when frustrated I just use exclamations to convey this, lol.

I was, when ChatGPT 4 first launched. But as I have used them more, I have become more impatient and rude.

Typing "thanks" or "great" to AI makes me feel better.

Never. I try to find as many profane eggcorns as possible and use them specifically in prompts as an eggcornish dialect for the AI; it messes it up uncannily.

I find myself compelled to be polite since it is polite.

I frequently yell at it and yell expletives. Maybe that says a lot about my personality?