NEWS AND VIEWS
A large language model that is trained to respond in a warm manner is more likely to give incorrect information and reinforce conspiracy beliefs.
By Desmond Ong
Desmond Ong is in the Department of Psychology, University of Texas at Austin, Austin, Texas 78712, USA.
If you use artificial-intelligence tools, you might find that, as well as helping with business tasks, answering general questions or writing programming code, AI models can be surprisingly good at giving advice about personal issues. Indeed, growing numbers of people are turning to AI tools for emotional support¹, and there is some evidence that people perceive responses generated by AI as more empathic than those written by humans².
Nature 652, 1134–1135 (2026)
doi: https://doi.org/10.1038/d41586-026-01153-z
References
1. McBain, R. K., Bozick, R. & Diliberti, M. JAMA Netw. Open 8, e2542281 (2025).
2. Ong, D. C., Goldenberg, A., Inzlicht, M. & Perry, A. Curr. Dir. Psychol. Sci. (in the press).
3. Moore, J. et al. in FAccT ’25: The 2025 ACM Conference on Fairness, Accountability, and Transparency 599–627 (Assoc. Comput. Mach., 2025).
4. Cheng, M. et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2505.13995 (2025).
5. Ibrahim, L., Hafner, F. S. & Rocher, L. Nature 652, 1159–1165 (2026).
6. Betley, J. et al. Nature 649, 584–589 (2026).
7. Rathje, S. et al. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/vmyek_v1 (2025).
8. Cheng, M. et al. Science 391, eaec8352 (2026).
9. Moore, J. et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2603.16567 (2026).
Competing Interests
The author declares no competing interests.
Related Articles
- Read the paper: Training language models to be warm can reduce accuracy and increase sycophancy
- LLMs behaving badly: mistrained AI models quickly go off the rails
- Bad influence: LLMs can transmit malicious traits using hidden signals