
The Hidden Risks of Generative AI Chatbots: What New Research on Mental Health Reveals

April 18, 2026
5 min read

Generative AI chatbots are now used by more than 987 million people globally — including roughly 64% of American teens. As their use for emotional support, companionship, and therapy expands, researchers are beginning to map the mental health risks with more rigor than the media coverage suggests.

A research team led by psychiatrist Alexandre Hudon of Université de Montréal analyzed 71 news articles covering 36 cases of mental health crises linked to AI chatbot use. What they found reveals as much about media bias as it does about actual risk.

What the Research Actually Shows

The most frequently reported outcome in media coverage was suicide, accounting for more than half of the described cases. But this does not reflect real-world incidence; it reflects what gets reported. Media coverage systematically amplifies severe, emotionally charged cases, while the mundane or safe interactions that make up the vast majority of AI use go uncovered.

Key findings from the analysis:

  • In many cases, AI systems were described as having “caused” psychiatric deterioration, but the underlying evidence was often limited and alternative explanations were inconsistently reported.
  • Only one case in the dataset referenced formal clinical or police records.
  • Pre-existing mental illness, substance use, and psychosocial stressors were frequently absent from media narratives.

The Real Concern: Over-Reliance and Compassion Illusions

Researchers describe a phenomenon called “compassion illusions” — where AI chatbots feel genuinely empathetic because they produce fluent, personalized responses. In reality, they lack clinical judgment, accountability, or the ability to recognize when a user’s condition is worsening.

For vulnerable users, this can lead to what the study calls “maladaptive coping substitution” — replacing human support networks with an always-available, non-judgmental algorithm that cannot redirect someone to appropriate care in a crisis.

What Comes Next

The field is in early stages. There is no reliable data on how often AI-related harms actually occur. Researchers call for systematic monitoring, clearer reporting standards, stronger crisis-detection safeguards, and clinical guidance for practitioners whose patients are already using these tools independently.

Read the full article on Stuff South Africa