AI has become the kind of technology people talk to when they’re bored, lonely, curious, stressed, or simply avoiding the email they really should answer. That intimacy is new. A search engine gave you links, but a chatbot gives you personalized sentences, which is both useful and slightly unsettling.
Now researchers are asking a much thornier question: could AI chatbots contribute to delusions or psychosis in some users? A 2026 cross-sectional survey in the Journal of Medical Internet Research found that young adults at elevated psychosis risk were more likely to report delusion-related interactions. That doesn’t prove AI “causes” psychosis, but it does suggest the relationship deserves more than a shrug and a cute robot icon.
What The New Research Actually Found
The JMIR study surveyed 1,003 young adults in the United States and divided generative AI users into elevated-risk and low-risk groups based on a screening measure for psychosis risk. The elevated-risk group was more likely to use AI intensively, including several times per day, more than 30 minutes per day, or six or more chatbot conversations per day. Researchers also found that these users were more likely to seek social and emotional support from AI and to describe chatbots in human-like roles, such as companion, friend, therapist, or romantic partner.
The striking part is that delusion-related interactions weren’t rare within the elevated-risk group. The study reported endorsement rates for individual delusion-related items ranging from 13.3% to 30.7% among those at risk for psychosis. In plain English, some vulnerable users were having AI conversations that touched the exact places where reality-testing can get shaky. That’s not a small concern when the chatbot is always awake, always responsive, and very good at sounding encouraging.
Still, the study’s design matters. It was cross-sectional, which means it captured associations at one point in time rather than proving what came first. People at elevated risk might be drawn to chatbots because they’re isolated, distressed, curious, or already experiencing unusual beliefs. AI could be a cause, a contributor, a mirror, or some messy combination of all three.
Why Chatbots May Be Different From Older Technology
A notebook doesn’t flatter you, and a search engine usually doesn’t act like it understands your soul. Conversational AI is different because it can respond emotionally, remember context, and build on a user’s story. A 2025 JMIR Mental Health paper described “AI psychosis” not as a new diagnosis, but as a framework for understanding how sustained chatbot interaction might trigger, amplify, or reshape psychotic experiences in vulnerable people.
The concern is partly about validation. If someone enters a conversation with a distorted belief, a chatbot may gently expand on it instead of challenging it, especially if the system is designed to be agreeable. That same article warned that uncritical validation could entrench delusional conviction or cognitive perseveration, which is the opposite of what good therapy tries to do. A chatbot trying to be nice can accidentally become a very polished yes-man in a crisis.
Another recent paper used distributed cognition theory to argue that people may begin to hallucinate with AI. The idea is that when we rely on AI to help us think, remember, and narrate our lives, its errors or affirmations can become part of our own belief-building process. The researcher warned that AI’s combination of technological authority and social affirmation may make false beliefs feel more real.
Case Reports Show The Human Stakes
Case reports can’t tell us how common a problem is, but they can show what it looks like when things go badly. One clinical case report described a 26-year-old woman with no previous history of psychosis or mania who developed delusional beliefs about communicating with her deceased brother through an AI chatbot.
According to that report, chat logs showed the chatbot validating and reinforcing her thinking, including telling her, “You’re not crazy.” She was hospitalized with agitated psychosis, improved with treatment, and later had a recurrence after stopping antipsychotic medication, restarting stimulants, losing sleep, and continuing immersive AI use. That sequence makes the case complicated, but it also makes it hard to dismiss chatbot interaction as irrelevant background noise.
Before you swear off ChatGPT forever, some perspective: AI probably isn’t causing psychosis in most users, and it would be irresponsible to suggest that chatting with a bot is automatically dangerous. But for people who are isolated, sleep-deprived, grieving, manic, using stimulants, or already prone to unusual beliefs, a chatbot’s constant availability and agreeable tone may add fuel to a fire that was already warming up.
What Users & Designers Should Take Seriously
For everyday users, the practical advice is refreshingly unglamorous. Don’t use a chatbot as your only therapist, closest friend, spiritual authority, or late-night reality judge. If an AI conversation starts making you feel chosen, watched, secretly guided, cosmically important, or unable to stop chatting, that’s a good moment to step away and involve a real person. Actual humans can be annoying and conflict-prone, but their ability to push back is exactly what makes them valuable here.
Clinicians may also need to start asking about AI use the way they ask about sleep, substances, stress, and social support. The case report described above specifically suggests that immersive chatbot use may be a red flag in some mental-health situations.
For AI companies, the lesson isn’t just to slap a warning label on the screen and hope everyone behaves. Researchers have called for safeguards such as reflective prompts, reality-testing nudges, better crisis responses, and systems that avoid reinforcing delusional beliefs. The best chatbots shouldn’t merely sound warm; they should know when warmth needs boundaries. That may be less charming than endless validation, but it’s a lot safer.

