AI chatbots and the loneliness crisis


Feature | Christmas 2025
BMJ 2025;391:r2509, doi: https://doi.org/10.1136/bmj.r2509 (Published 11 December 2025)
Cite this as: BMJ 2025;391:r2509


Susan C Shelmerdine, consultant radiologist,1,2,3,4 Matthew M Nour, consultant psychiatrist5,6,7

1. Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, UK
2. UCL Great Ormond Street Institute of Child Health, Great Ormond Street Hospital for Children, London, UK
3. NIHR Great Ormond Street Hospital Biomedical Research Centre, London, UK
4. City St George’s, University of London, UK
5. Department of Psychiatry, University of Oxford, Oxford, UK
6. Max Planck UCL Centre for Computational Psychiatry and Ageing, University College London, London, UK
7. Oxford Health NHS Foundation Trust, Oxford, UK

Correspondence to: susie.shelmerdine{at}gmail.com

Susan Shelmerdine and Matthew Nour consider how chatbot use presents both risk and benefit in tackling loneliness

AI chatbot systems, such as ChatGPT, Claude, and Copilot, are increasingly used as confidants of choice.1 On the one hand, this may be seen as a positive democratisation of emotional support and care at the point of need. On the other, there is growing concern about potential psychological and social harms,12 particularly in relation to social isolation and loneliness.2

In 2023, the US Surgeon General declared that the nation was experiencing a loneliness epidemic, constituting a public health concern on a par with smoking and obesity.3 The report cited a 26% increased risk of premature death associated with loneliness,4 with the overall health impact likened to smoking 15 cigarettes a day.56 In the UK, nearly half of adults (25.9 million) report feeling lonely occasionally, sometimes, often, or always, and almost 1 in 10 experience chronic loneliness (defined as feeling lonely “often or always”).7

Although many studies focus on loneliness in older adults,5 younger people are also affected. The BBC Loneliness Experiment, involving almost 37 000 individuals, found that the risk of loneliness was particularly high in younger people (aged 16-24 years) and in those who were single, in poor health, or lacking a sense of belonging.8 Furthermore, a UK study found that although healthcare costs associated with loneliness (compared with non-lonely peers) increased with age, individuals aged 16-24 years incurred higher health related costs than those aged 25-49 years, suggesting a U shaped relation between age and the health-economic burden of loneliness.9 Against this background, a growing gap exists between demand for and provision of mental health services: a third of people in England now wait three months or more for access to mental health care, and many receive no support while waiting.10

Given these trends, it is no wonder that many people are turning to alternative sources of companionship and emotional support. ChatGPT, for example, has around 810 million weekly active users worldwide, and some reports place therapy and companionship among the top reasons for use.1112 In one survey, 36% of parents reported that their children use AI chatbots for emotional support.13

Promise or peril?

One factor driving the appeal of chatbots as companions is their conversational fluency and adaptability. Modern chatbots are deep learning AI systems trained to model the statistics of natural language: they output statistically probable word sequences based on patterns in their textual training data. The sheer scale of these systems, coupled with the enormous volume of their training data, means that they can engage in conversational interactions remarkably similar to those with another human.141516 Modern chatbot variants offer voice as well as text interfaces, further reducing barriers to interaction.17
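To make this mechanism concrete, the minimal Python sketch below samples “statistically probable” next words from a hand-built probability table. The vocabulary and probabilities are invented for illustration; real chatbots use neural networks conditioned on the entire conversation, over vocabularies of tens of thousands of tokens.

# A toy sketch of next word sampling, the core loop behind chatbot text
# generation. The word table and its probabilities are invented examples.
import random

next_word_probs = {
    "I":    {"feel": 0.6, "am": 0.4},
    "feel": {"lonely": 0.5, "fine": 0.3, "tired": 0.2},
    "am":   {"lonely": 0.4, "here": 0.6},
}

def generate(start: str, max_words: int = 5) -> str:
    """Repeatedly sample a statistically probable next word."""
    words = [start]
    for _ in range(max_words):
        dist = next_word_probs.get(words[-1])
        if dist is None:  # no known continuation; stop generating
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("I"))  # eg, "I feel lonely"

Scaled up by many orders of magnitude, this same sample-the-next-word loop yields the fluent conversation described above.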

The conversational competence of chatbots makes them promising candidates for delivering psychotherapeutic interventions. One randomised trial of a generative AI chatbot found reduced symptoms of major depressive disorder, generalised anxiety disorder, and eating disorders compared with controls, both post-intervention and at follow-up.18 A meta-analysis of 35 experimental studies that used AI conversational agents as a primary intervention for mental health found evidence of reduced symptoms of depression and distress (albeit with no improvement in overall psychological wellbeing).19

Despite these promising findings for chatbots designed as digital therapeutics, the picture becomes more complex when we look at longer term, real world patterns of use of general purpose chatbots. A study from OpenAI and the Massachusetts Institute of Technology, comprising 981 participants who used ChatGPT over four weeks, found that participants who logged the greatest use reported higher loneliness and socialised less with other people. Markers of loneliness and emotional dependence were greatest in those with stronger emotional attachment tendencies and higher expressed trust in the chatbot.17 Notwithstanding the value of this information, the study lacked a non-chatbot control group and did not randomise how much each participant used the chatbot each day, limiting the causal inferences that can be drawn. Another study by the same team, analysing more naturalistic chatbot use in a larger sample, similarly reported a strong relation between ChatGPT use and a tendency to engage in conversations with greater socio-affective content, particularly among users who viewed ChatGPT as a “friend.”20

These findings reinforce growing recognition of an emerging risk factor associated with modern chatbots: the conversational fluency that makes these systems appealing might lead some users to develop anthropomorphic perceptions of them and to form quasi-personal relationships with them.15212223 Among younger people, one study found that a third of teenagers use AI companions for social interaction, with 1 in 10 reporting that AI conversations are more satisfying than human conversations and 1 in 3 saying they would choose AI companions over humans for serious conversations.24

The long term impact of this new mode of digital behaviour on socio-emotional development remains unknown. There are, of course, important differences between human and chatbot companions. Unlike human companions, chatbots offer boundless availability and patience, and are unlikely to present users with challenging counter-narratives.15 A worrying possibility, then, is that we might be witnessing a generation learning to form emotional bonds with entities that, despite their seemingly conscious responses,25 lack capacities for human-like empathy, care, and relational attunement.

Opportunities and challenges

In light of this evidence, it seems prudent to consider problematic chatbot use as a new environmental risk factor when assessing a patient with mental state disturbance.15 In such assessments, we propose that clinicians begin with a gentle enquiry into chatbot use, particularly during holiday periods, when vulnerable populations are most at risk.26

If this enquiry yields a positive response, it should be followed by more directed questions assessing compulsive use and dependency (including anxiety when the chatbot is unavailable), emotional attachment (such as referring to the chatbot as a friend), and deference to the chatbot for major decisions. Red flags might include a patient feeling they have a special relationship with the chatbot that drives beliefs or behaviour, or that results in increased social isolation such that there is no corrective feedback from trusted human confidants.1527

It is also important to consider how AI, like any new technology, might serve as a bridge to, rather than a replacement for, authentic human connection, improving accessibility and support for individuals experiencing loneliness. A recent systematic review of AI based technologies for reducing loneliness in older adults found relatively few large scale, evidence based interventions, although several promising tools were noted.28 These include AI enabled tools for social interaction and communication (such as communication coaches),29 socially assistive robots,30 models that predict which individuals are most likely to engage with specific intervention types so that support can be tailored,31 and AI driven speech or conversational analysis to identify markers of loneliness.32 Future systems might further benefit users by recognising references to loneliness and encouraging them to seek support from friends or family, or by providing personalised guidance on accessing local services.
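As a purely illustrative sketch, and far simpler than the AI driven conversational analysis cited above, the toy Python function below flags loneliness related phrases in a message and signposts human support. It is not a validated clinical tool; the phrase list, function name, and wording are invented for demonstration.

# A toy, non-clinical sketch of recognising references to loneliness and
# signposting human support. Phrases and wording are invented examples.
LONELINESS_MARKERS = ["lonely", "no one to talk to", "all alone", "isolated"]

def signpost_if_lonely(user_message: str):
    """Return a signposting prompt if the message suggests loneliness, else None."""
    text = user_message.lower()
    if any(marker in text for marker in LONELINESS_MARKERS):
        return ("It sounds like you might be feeling lonely. "
                "Would it help to reach out to a friend, family member, "
                "or a local support service?")
    return None  # no marker detected; continue the conversation as normal

print(signpost_if_lonely("I feel so isolated lately"))

A deployed system would need clinically validated conversational analysis rather than keyword matching, but the design principle, steering users back towards human connection, is the same.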

One thing is clear: empirical studies are needed to characterise the prevalence and nature of the risks of human-chatbot interaction, alongside efforts to develop clinical competencies in assessing patients’ AI use, to implement evidence based interventions for problematic dependency, and to advocate for regulatory frameworks that prioritise long term wellbeing over superficial and myopic engagement metrics.1522 Meanwhile, focusing and building on evidence based strategies for reducing social isolation and loneliness remains paramount. Current methods include raising awareness, screening, and adapted interventions for loneliness (eg, cognitive behavioural therapy and social prescribing) in clinical settings (as well as in educational settings for young adults);333435 public health campaigns and partnerships between healthcare and community organisations to improve social connectedness;36 and group based interventions in nature.37

Footnotes

  • Provenance and peer review: not commissioned; not externally peer reviewed.

  • Competing interests: MMN is a principal applied scientist at Microsoft AI, working to increase the safety and helpfulness of AI chatbots. This article was written before he joined the company.