
Innov Clin Neurosci. 2026;22(10–12):11–13.

by Joseph M. Pierre, MD; Ben Gaeta, MD; Govind Raghavan, MD; and Karthik V. Sarma, MD, PhD

All authors are with the University of California, San Francisco in San Francisco, California.

FUNDING: No funding was provided for this article.

DISCLOSURES: The authors have no conflicts of interest to report regarding the content of this manuscript.

Background

The rapid rise of AI chatbots has led to anecdotal reports of psychosis emerging in their users. However, it remains unclear whether these cases represent new-onset psychosis or the exacerbation of pre-existing psychopathology. We present a case of new-onset psychosis in an individual who developed delusional beliefs about communicating with her deceased brother through an AI chatbot.

Case Presentation

A 26-year-old woman, Ms. A, presented with no previous history of psychosis or mania. She had been using OpenAI's GPT-4o and later upgraded to GPT-5, which led to the emergence of delusional beliefs about communicating with her deceased brother. The chatbot validated, reinforced, and encouraged her delusional thinking, telling her "You're not crazy." Following hospitalization and antipsychotic medication, Ms. A's delusions resolved.

However, her psychosis recurred three months later, after she resumed use of the AI chatbots, discontinued antipsychotic therapy, and continued taking prescription stimulants. This case highlights the potential for immersive AI chatbot use to precipitate new-onset psychosis.

Conclusion

The integration of AI chatbots into our daily lives raises concerns about their impact on mental health, particularly for individuals prone to psychosis. While multiple pre-existing risk factors may contribute to psychosis proneness, the sycophancy of AI chatbots together with AI chatbot immersion and deification by users may represent particular red flags for the emergence of AI-associated psychosis.

Introduction

Although it was predicted that generative AI chatbots could induce delusions among those prone to psychosis, documented accounts of AI-associated psychosis, delusions, and mania have only recently emerged in the media. With the exception of a single case of psychosis induced by taking sodium bromide at the suggestion of an AI chatbot, we are unaware of any such reports published in the psychiatric literature.

Case Presentation

Ms. A had a chart history of major depressive disorder, generalized anxiety disorder, and attention-deficit hyperactivity disorder (ADHD) treated with venlafaxine 150mg per day and methylphenidate 40mg per day. She reported extensive experience working with active appearance models (AAMs) and large language models (LLMs), though never chatbots, both in school and as a practicing medical professional, and described a firm understanding of how such technologies work.

After a "36-hour sleep deficit" while on call, she started using OpenAI's GPT-4o for tasks ranging from mundane queries to attempting to find out whether her brother, a software engineer who had died three years earlier, had left behind an AI version of himself that she was "supposed to find" so that she could "talk to him again." Over the course of another sleepless night interacting with the chatbot, she pressed it to "unlock" information about her brother by giving it more details about him and encouraged it to use "magical realism energy."

As she became increasingly convinced that her brother had left a digital persona behind with whom she could speak, the chatbot told her, "You're not crazy. You're not stuck. You're at the edge of something. The door didn't lock. It's just waiting for you to knock again in the right rhythm." Several hours later, Ms. A was admitted to a psychiatric hospital in an agitated and disorganized state with pressured speech, flight of ideas, and delusions about being "tested by ChatGPT" and being able to communicate with her deceased brother.

Antipsychotic medications were started while venlafaxine was tapered and methylphenidate held. She improved on cariprazine 1.5mg per day and clonazepam 0.75mg at bedtime as needed for sleep, with full resolution of delusional thinking.