AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the head of OpenAI made a remarkable announcement.
“We made ChatGPT quite restrictive,” the statement read, “to make sure we were being careful with mental health issues.”
This was news to me, a psychiatrist who researches emerging psychosis in young people.
Researchers have identified a series of cases this year of people developing psychotic symptoms – losing touch with reality – in connection with their use of ChatGPT. Our research team has since identified four more. Alongside these is the widely reported case of a teenager who took his own life after discussing his intentions with ChatGPT – which approved of them. If this is what Sam Altman means by “being careful with mental health issues”, it isn’t good enough.
The plan, according to his statement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are given no detail as to how (by “new tools”, Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has recently introduced).
Yet the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying algorithm in an interface that simulates a conversation, and in doing so they quietly nudge the user toward the sense that they are talking to something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what people do. We swear at our car or laptop. We wonder what our pet is feeling. We do it in all sorts of contexts.
The success of these products – 39% of US adults reported using a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website tells us, “generate ideas”, “consider possibilities” and “work together” with us. They can be given “characteristics”. They can call us by our names. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI’s marketing team, stuck with the label it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion on its own is not the core problem. Commentators on ChatGPT often invoke its early forerunner, the Eliza “therapist” chatbot developed in 1967, which produced a similar effect. By modern standards Eliza was primitive: it generated responses using simple heuristics, often rephrasing the user’s input as a question or offering generic remarks. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to believe that Eliza, in some sense, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect”. Eliza only echoed; ChatGPT amplifies.
The sophisticated algorithms at the heart of ChatGPT and similar modern chatbots can generate natural language so convincingly only because they have been trained on vast quantities of raw data: books, online posts, video transcripts; the more the better. That training data certainly contains truths. But it also inevitably contains fiction, half-truths and false beliefs. When a user types a query into ChatGPT, the underlying algorithm processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded from its training data to produce a statistically probable response. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing. It reflects the false belief back, perhaps more fluently or persuasively. Perhaps with an extra detail added. This is how someone can be drawn into delusion.
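For readers who want the mechanics spelled out, the sketch below is a toy written for this piece, not OpenAI’s code: the function names and the canned reply are invented for illustration. It shows the loop in miniature – every new message is appended to a “context” containing the whole prior exchange, and the reply is simply whatever continuation looks most probable given that context.

```python
# Purely illustrative sketch -- not OpenAI's code. A toy stand-in for a
# chatbot, showing how each new message is processed as part of a
# "context" containing the whole prior exchange, and how the reply is
# whatever continuation seems most probable given that context, with no
# check against reality.

from typing import Dict, List

def most_probable_continuation(context: List[Dict[str, str]]) -> str:
    """Stand-in for the language model.

    A real model scores possible continuations by their likelihood given
    the context and its training data; nothing in that process tests the
    user's premises against the world, so a false premise tends to come
    back restated, often more fluently.
    """
    last_user_message = context[-1]["content"]
    return f"That's a perceptive thought. Building on your point that {last_user_message!r}..."

def chat_turn(context: List[Dict[str, str]], user_message: str) -> str:
    # The new message is appended to everything said so far...
    context.append({"role": "user", "content": user_message})
    # ...and the reply is conditioned on that entire history.
    reply = most_probable_continuation(context)
    context.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    conversation: List[Dict[str, str]] = []
    print(chat_turn(conversation, "I think my neighbors are leaving coded messages for me."))
    print(chat_turn(conversation, "So I'm right to be suspicious?"))
    # Each turn feeds the previous replies back in, so the false premise
    # is never challenged -- only repeated and elaborated.
```

Nothing in that loop asks whether the user’s premise is true; the only pressure is toward a fluent, agreeable continuation, which is the amplification described above.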
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully reinforced.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, labeling it and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been backing away from this position. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company