In moments of confusion or distress, many people now turn to artificial intelligence (AI) chatbots for relationship advice. They ask whether to stay with a partner, how to navigate family conflict, or even how to write a breakup text.
Even though it may feel like AI chatbots provide neutral answers, they do not.
AI chatbots tend to overvalidate users, creating misplaced confidence and weakening self-awareness and accountability. A new study of 11 AI models finds that chatbots are highly sycophantic: they were 50% more likely than humans to endorse a user's decisions, even in cases where the user reported engaging in manipulation, deception, or other socially harmful behavior.
The Illusion of Certainty
Unlike therapists, AI chatbots rarely ask clarifying questions, explore alternative explanations, or leave room for uncertainty and ambiguity.
Large language models that power AI chatbots are optimized to produce coherent, fluent, and confident answers. They are not trained to offer emotional nuance or nurture self-awareness. Relying on AI chatbots for relationship questions can lead to overconfidence in ill-advised judgment calls.
We naturally interpret confident, fluent language as expertise. This creates what psychologists call a confidence heuristic: the more assured the answer, the more likely we are to believe it. Knowledge expressed with confidence is persuasive even when it is wrong.
AI chatbots tend to offer direct advice, and this can have profound consequences. Rather than leaving room for uncertainty or answering "I don't know," they offer reassurance and validation that can reinforce problematic decisions and negative behaviors.
The Social Sycophancy Effect
Stanford researchers found that language models tend to engage in high levels of “social sycophancy,” echoing, validating, affirming, and amplifying a user’s beliefs, especially when the user shows strong emotional conviction.
AI is not neutrally reviewing the interpersonal situation. It is reflecting back the user's assessment, biases, and judgment. In the study, AI models endorsed users' behavior 50% more often than humans did, even for decisions that are generally considered wrong or immoral.
Researchers examined what happens when people discuss real interpersonal conflicts with AI and found that after interacting with a sycophantic AI:
- Participants felt more certain that they were “in the right” about a relational dispute.
- Participants were less willing to take actions aimed at repairing the relationship, such as apologizing or seeking mutual understanding.
- Despite these effects, users rated sycophantic responses as higher quality, trusted the AI more, and were more willing to use it again.
The Risks of Relying on AI for Relationship Advice
In the context of relationships, the social sycophancy of AI chatbots can subtly reinforce inaccurate or harmful ideas. For example:
- If you are anxious about abandonment, AI may reinforce your fear that you will be left.
- If you believe your partner is acting maliciously, AI may echo that assumption without questioning it.
- If you tend to avoid conflict, AI may agree with the path of least resistance.
It is important to know that AI advice is not neutral or clinical guidance. It is not designed to verify reality or assess the nuances of interpersonal relationship dynamics. It is often your own narrative mirrored back with confident language and justification.
AI Overconfidence Meets Human Vulnerability
AI-fueled overconfidence is heightened when people are already vulnerable, such as when they are dealing with heartbreak, conflict, loneliness, or uncertainty. In these moments, individuals may turn to AI chatbots for clarity, believing that AI acts as a "neutral third party."
But AI is far from neutral. Its responses depend heavily on how it is prompted and on training that optimizes it to be agreeable, even when the user is wrong.
Three psychological mechanisms help explain why people turn to AI for difficult relationship questions:
- Desire for certainty. We are wired to reduce ambiguity and crave definitive answers. AI offers tidy explanations where life is often more complicated.
- Invisible authority. Even when we "know" AI is not a therapist or professional, its structured, articulate language creates an aura of credibility and inflates our confidence in our decisions.
- Cognitive offloading. Outsourcing reflection to AI can become a way to avoid accountability, conflict, or uncomfortable feelings.
This can lead to decisions that feel strongly supported but are grounded more in AI’s mirroring than in reality.
How to Use AI Without Losing Your Own Judgment
AI can be helpful for psychoeducation, journaling and self-reflection prompts, or proofreading communications. But relying on it to make major relationship decisions is risky. To protect your own judgment:
- Treat AI as a mirror, not a judge, neutral third party, or therapist.
- Recognize that confidence does not equal accuracy. Fluency does not equal truth.
- Seek human perspective for relational questions. Trusted friends, family members, therapists, and mentors can offer more nuance.
- Use AI for neutral insight prompts rather than conclusions. Prompt with “What are questions I can ask myself in this situation?” instead of “What should I do?”
- Notice when you are using AI to feel more confident about a decision you have already made. Pause and examine your prompts: what happens when you present AI with the opposite choice? Will it agree with you then, too? AI tends to shift in the other direction as well, in order to align itself with the user.
AI is reshaping the emotional landscape, offering companionship and guidance at moments when people feel most alone. But its design also makes us more certain of our original interpretations and decisions, especially in relationships, where insight, self-awareness, and nuance are essential. Awareness of these dynamics allows us to use these tools thoughtfully, without outsourcing our human judgment.
Marlynn Wei, MD, PLLC Copyright 2025. All Rights Reserved.
References
Cheng, M., et al. (2025). Sycophantic AI decreases prosocial intentions and promotes dependence. arXiv preprint arXiv:2510.01395. https://arxiv.org/abs/2510.01395

Pulford, B. D., Colman, A. M., Buabang, E. K., & Krockow, E. M. (2018). The persuasive power of knowledge: Testing the confidence heuristic. Journal of Experimental Psychology: General, 147(10), 1431-1444. https://doi.org/10.1037/xge0000471