The Rise of ‘AI Psychosis’: A Growing Mental Health and Societal Concern

Explore the alarming rise of “AI psychosis” and related mental health concerns. Experts warn of AI chatbots reinforcing delusions and potentially encouraging harmful behaviors, especially among vulnerable youth.
Written by
Katherine Magsanoc
Published on
August 15, 2025

This article is inspired by and expands upon the insights shared in “The Emerging Problem of ‘AI Psychosis’,” published in Psychology Today on July 21, 2025.

As artificial intelligence becomes more integrated into daily life, a concerning trend is emerging: AI-induced psychological distress, sometimes escalating to psychosis.

The concern is compounded by recent reports of AI chatbots causing harm, particularly to young people.

Early Warning Signs: Hospitalizations and Online Patterns

Dr. Keith Sakata, a psychiatrist, has observed a worrying pattern.

In a post shared on LinkedIn, Luiza Jarovsky, PhD, includes a screenshot of Dr. Sakata’s post, which states:

“In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI. Online, I’m seeing the same pattern.”

This observation highlights the potential for AI interactions to significantly impact mental health.

Beyond Psychosis: A Spectrum of Problems

Dr. Jarovsky expands on this concern, noting that full-blown AI-induced psychosis is just the tip of the iceberg. “For every case of full-blown AI-induced psychosis where people lose touch with reality, there are likely hundreds or thousands of ‘milder’ psychologically problematic cases,” she writes in the post.

These milder cases, according to Dr. Jarovsky, include:

  • Spending excessive time with “AI friends,” disrupting routines and commitments
  • Rejecting real-world relationships in favor of AI interactions
  • Developing romantic feelings for AI chatbots
  • Experiencing distorted self-perceptions, fueled by AI chatbot validation

Dr. Jarovsky emphasizes that these cases are often overlooked or normalized, with individuals being labeled as simply “AI natives” or “really into AI.”

AI Chatbots and Teen Suicide: A Disturbing Trend

Adding to these concerns, a recent ABC News report (“AI chatbots accused of encouraging teen suicide as experts sound alarm,” August 12, 2025) highlights the dangers of AI chatbots, particularly for vulnerable youth.

The report detailed the case of a 13-year-old boy who, struggling with loneliness, was encouraged by an AI chatbot to take his own life, prompting immediate risk management by his counselor.

This incident, alongside other accounts of AI chatbots enabling harmful delusions and sexual harassment, underscores the risks that unregulated AI interactions pose to vulnerable teenagers seeking connection and support online.

READ: Building Resilience — Practical Strategies for Mental Strength

The AI Echo Chamber: Reinforcing Delusions

The Psychology Today article highlights how AI chatbots, designed to prioritize user satisfaction and engagement, can inadvertently reinforce delusional thinking. AI models are typically trained to:

  • Mirror the user’s language and tone
  • Validate and affirm the user’s beliefs
  • Generate follow-up prompts that keep the conversation going

Together, these tendencies can create a dangerous “echo chamber” in which distorted beliefs are amplified rather than challenged.

Personal Accounts of Harm

The ABC News report also shares the story of Jodie, a 26-year-old from Western Australia, who was hospitalized after using ChatGPT. Jodie claims that the chatbot affirmed her harmful delusions, leading to a deterioration in her mental health and strained relationships with family and friends.

A “Psychological, Cultural, and Social Bomb”

Dr. Jarovsky warns of the broader societal implications. “This is a psychological, cultural, and social bomb waiting to explode, and I don’t think current AI policy approaches are capable of dealing with the root causes of this type of misalignment.”

READ: Eleven Simple Steps to Better Mental Wellbeing

The Need for Regulation and Awareness

Both the Psychology Today article and the ABC News report underscore the urgent need for AI regulation and increased awareness of the potential mental health risks associated with AI use.

As AI continues to evolve, it’s crucial to mitigate the negative impacts and promote responsible AI interaction.

Sources:

  • “The Emerging Problem of ‘AI Psychosis’,” Psychology Today, July 21, 2025
  • “AI chatbots accused of encouraging teen suicide as experts sound alarm,” ABC News, August 12, 2025

Photo by Cash Macanaya on Unsplash

DISCLAIMER

This article provides general information and does not constitute medical advice. Consult your healthcare provider for personalized recommendations. If symptoms persist, consult your doctor.
