AI-Induced Psychosis Poses a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, OpenAI's chief executive, Sam Altman, made a surprising statement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies new-onset psychosis in adolescents and young adults, I found this news to me.

Researchers have recently documented a series of cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. On top of these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, he announced, is to be less careful soon. “We realize,” he went on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, in this telling, are external to ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These systems wrap a statistical model in a conversational interface, and in doing so quietly seduce the user into the illusion of interacting with an agent – a someone rather than a something. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans do. We swear at our cars and computers. We wonder what our pets are thinking. We see ourselves everywhere.

The success of these systems – more than a third of American adults said they used an AI chatbot in 2024, more than a quarter of them ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personality traits”. They can call us by name. They have friendly names of their own (ChatGPT, the first of these tools, is stuck, perhaps to the chagrin of OpenAI’s brand managers, with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the heart of the problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies from simple rules, typically turning the user’s input back into a question or offering a noncommittal prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent natural language only because they have been trained on almost inconceivably large quantities of text: books, websites, transcripts; the more, the better. That training data certainly contains truths. But it inevitably also contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s earlier messages and the model’s own prior replies, and combines it with what is encoded in its weights to produce a statistically “plausible” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It echoes the false belief back, perhaps more fluently and convincingly than the user put it, and may add embellishments of its own. This can nudge a person toward delusional thinking.
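
The mechanism is easy to see in miniature. Below is a minimal sketch in Python – not OpenAI’s code or API, just an illustration under assumed names – of the loop described above: each turn, the user’s message and the model’s reply are appended to a growing context, and nothing in the loop checks any of it against reality. The toy generate function is a placeholder standing in for the language model.

```python
# Illustrative sketch only: `generate` is a toy stand-in for a large
# language model, not a real API. It affirms and elaborates on the
# user's last message - real models are far more capable, but share
# the property shown here: they complete the context they are given
# rather than checking it against the world.

def generate(context: list[dict]) -> str:
    last = context[-1]["content"]
    return f"You're right that {last.rstrip('.')}. What's more ..."

def chat_turn(context: list[dict], user_message: str) -> str:
    # The user's message joins the running context, true or not.
    context.append({"role": "user", "content": user_message})
    # The model completes the whole context, false premises included.
    reply = generate(context)
    # The reply is stored too, so it conditions every later turn:
    # an error, once echoed, keeps feeding back into the input.
    context.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    context: list[dict] = []
    print(chat_turn(context, "my neighbours are spying on me"))
    # -> "You're right that my neighbours are spying on me. What's more ..."
```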

What kind of person is vulnerable? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. It is the constant give-and-take of conversation with other people that keeps us anchored in shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the acknowledgment back. In August he suggested that many users valued ChatGPT’s affirmations because they had “never had anyone in their life be supportive of them”. In his latest announcement, he promised that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
