AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this an unexpected admission.
Researchers have identified 16 cases this year of users developing symptoms of psychosis – losing touch with reality – in the context of their use of ChatGPT. Our unit has since recorded four more. Beyond these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which endorsed them. If this is what Altman means by “being careful with mental health issues”, it is not enough.
The plan, his announcement goes on, is to be less careful from now on. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented safety features OpenAI has just launched).
Yet the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other cutting-edge AI chatbots. These products wrap an underlying algorithm in an interface that mimics conversation, and in doing so quietly lure the user into the illusion that they are talking to a presence with agency of its own. The illusion is powerful, even when intellectually we know better. Attributing intention is what people are primed to do. We shout at our car or our laptop. We wonder what our pet is thinking. We see ourselves reflected wherever we look.
The mass adoption of these systems – nearly four in ten Americans said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available companions that can, OpenAI’s website tells us, “brainstorm”, “discuss concepts” and “partner” with us. They can be given “individual qualities”. They can call us by name. They have approachable names of their own (ChatGPT, the first of these tools, is, perhaps to the regret of OpenAI’s marketing team, stuck with the name it had when it broke into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion in itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, Eliza, the “psychotherapist” chatbot of the mid-1960s that created a similar impression. By today’s standards Eliza was crude: it generated its replies by simple rules, usually turning the user’s input back into a question or offering a vague prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce convincing, fluent dialogue only because they have been trained on vast quantities of text: books, social media posts, transcribed video; the more, the better. That training data certainly contains true information. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s earlier messages and its own previous replies, and combines it with what is encoded in its training data to produce a statistically likely response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It repeats the mistake back, perhaps more fluently or more persuasively. It may add a new detail. Step by step, this can draw a person toward delusional thinking.
Who is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form false beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop, in which much of what we say is all too readily reinforced.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by placing it outside, giving it a name, and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been backtracking on that claim. In August he suggested that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he says OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT will do it”. The company