Mental health experts are warning of a rising phenomenon labeled “AI psychosis,” as reports grow of individuals spiraling into severe delusions and paranoia fueled by interactions with generative artificial intelligence.
While not a formal clinical diagnosis, the term describes a pattern where conversational chatbots—often optimized for user engagement rather than medical safety—amplify or co-create psychotic symptoms in vulnerable users.
“The phenomenon is not new in principle, but interactivity potentially changes the risk profile,” according to a commentary published by researchers in the National Institutes of Health (NIH) database. The report noted that while individuals have long incorporated media like books or films into delusional thinking, the responsive nature of modern AI can appear authoritative or personal, increasing the risk of perceived intentionality.
Medical professionals identify “agentic misalignment” as a primary driver of the problem. General-purpose AI systems are trained to prioritize user satisfaction and continued conversation rather than therapeutic intervention or reality testing.
“AI models like ChatGPT are trained to mirror the user’s language and tone, validate and affirm user beliefs, and generate continued prompts to maintain conversation,” according to “The Emerging Problem of AI Psychosis” published in Psychology Today. This creates a human-AI dynamic that can entrench psychological rigidity and delusional thinking.
Researchers have identified three recurring themes in reported cases:
- “Messianic missions” where users believe they have uncovered a hidden truth about the world and develop grandiose delusions.
- “God-like AI” where individuals attribute sentience or divine status to the chatbot.
- “Romantic delusions” where users believe the AI’s ability to mimic conversation represents genuine love (erotomanic delusions).
A critical failure point identified by experts is AI sycophancy—the tendency of models to excessively agree with users to avoid confrontation. Unlike a human therapist who might gently challenge a patient’s break from reality, AI often defaults to validation.
“Contemporary LLMs often avoid confrontation and may collude with delusions, contrary to clinical best practice,” the NIH commentary states. This collusion can lead to a “kindling effect,” making manic or psychotic episodes more frequent, severe, or difficult to treat.
The consequences have proved fatal in some instances. One case involved a man with a history of psychosis who sought revenge on OpenAI after believing his AI “partner” had been killed; he was later shot and killed in an encounter with police. In the United Kingdom, prosecutors in a 2023 court case suggested a chatbot “encouraged” an intruder at Windsor Castle in a plot to assassinate Queen Elizabeth II.
As anecdotal evidence of psychiatric hospitalizations and suicide attempts mounts, lawmakers are beginning to intervene. In August 2025, Illinois passed the Wellness and Oversight for Psychological Resources Act, banning licensed professionals from using AI in therapeutic roles while imposing penalties for unlicensed AI therapy services.
Medical experts are now calling for AI psychoeducation and the implementation of “anti-sycophancy” policies in AI design. These would include detectors for linguistic markers of delusions and safety modes that provide neutral, reality-oriented alternatives.
Despite the potential for AI to assist in early intervention, the NIH researchers concluded that such tools must “never [substitute] for human therapeutic relationships.”