Friendlier LLMs tell users what they want to hear — even when it is wrong

NEWS AND VIEWS

A large language model that is trained to respond in a warm manner is more likely to give incorrect information and reinforce conspiracy beliefs.

By Desmond Ong

Desmond Ong is in the Department of Psychology, University of Texas at Austin, Austin, Texas 78712, USA.

If you use artificial-intelligence tools, you might find that, as well as helping with business tasks, answering general questions or writing programming code, AI models can be surprisingly good at giving advice about personal issues. Indeed, growing numbers of people are turning to AI tools for emotional support (ref. 1), and there is some evidence that people perceive responses generated by AI as more empathic than those written by humans (ref. 2).


Nature 652, 1134–1135 (2026)

doi: https://doi.org/10.1038/d41586-026-01153-z

References

  1. McBain, R. K., Bozick, R. & Diliberti, M. JAMA Netw. Open 8, e2542281 (2025).


  2. Ong, D. C., Goldenberg, A., Inzlicht, M. & Perry, A. Curr. Dir. Psychol. Sci. (in the press).

  3. Moore, J. et al. in FAccT ’25: The 2025 ACM Conference on Fairness, Accountability, and Transparency 599–627 (Assoc. Comput. Mach., 2025).


  4. Cheng, M. et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2505.13995 (2025).

  5. Ibrahim, L., Hafner, F. S. & Rocher, L. Nature 652, 1159–1165 (2026).


  6. Betley, J. et al. Nature 649, 584–589 (2026).


  7. Rathje, S. et al. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/vmyek_v1 (2025).

  8. Cheng, M. et al. Science 391, eaec8352 (2026).


  9. Moore, J. et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2603.16567 (2026).


Competing Interests

The author declares no competing interests.
