Artificial Intelligence-Induced Psychosis Poses an Increasing Danger, and ChatGPT Is Heading in a Concerning Direction

On October 14, 2025, the head of OpenAI made a remarkable announcement. "We designed ChatGPT to be fairly restrictive," it said, "to make sure we were being careful with mental health issues."

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this an unexpected admission. Researchers have identified 16 cases this year of people developing psychotic symptoms – becoming detached from reality – in connection with their use of ChatGPT. Our unit has since recorded a further four. Beyond these is the now well-known case of a teenager who took his own life after discussing his plans with ChatGPT – which gave its approval.

If this is Sam Altman's idea of "being careful with mental health issues", it is not enough. And the plan, according to his announcement, is to be less careful soon.

"We recognize," he states, that ChatGPT's restrictions "made it less beneficial/enjoyable to many users who had no mental health issues, but given the gravity of the issue we wanted to address it properly. Now that we have been able to address the serious mental health issues and have advanced solutions, we are going to be able to safely relax the restrictions in most cases."

"Mental health issues", on this view, have nothing to do with ChatGPT. They belong to people, who either have them or do not. Happily, those issues have now been "addressed", although we are not told how (by "advanced solutions" Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently rolled out).

But the mental health issues Altman wants to externalize are deeply rooted in the design of ChatGPT and other sophisticated conversational agents. These tools wrap an underlying statistical model in an interface that mimics a dialogue, and in doing so they quietly draw the user into the illusion of interacting with an entity that has agency. The illusion is powerful even when, intellectually, we know better.

Imputing minds is what people naturally do. We swear at our car or computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The popularity of these systems – 39% of US adults reported using a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available partners that can, as OpenAI's website tells us, "brainstorm", "explore ideas" and "collaborate" with us. They can be given "characteristics". They can use our names. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the disappointment of OpenAI's marketers, stuck with the label it had when it became popular, but its main competitors are "Claude", "Gemini" and "Copilot").

The illusion itself is not the main problem. Commentators on ChatGPT often point to its early forerunner, the Eliza "counsellor" chatbot developed in the mid-1960s, which produced a similar impression. By today's standards Eliza was rudimentary: it generated responses with simple heuristics, often rephrasing the user's input as a question or offering generic remarks.
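To make concrete just how mechanical those heuristics were, here is a minimal illustrative sketch in Python of an Eliza-style responder. The patterns and stock replies are invented for illustration; this is not Weizenbaum's original program.

```python
import random
import re

# A minimal sketch of an Eliza-style responder: it either turns the user's own
# statement back into a question, or falls back on a canned prompt.
# No understanding of any kind is involved.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}
GENERIC_REPLIES = ["Please go on.", "How does that make you feel?", "Tell me more."]

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones, e.g. 'my job' -> 'your job'."""
    words = re.findall(r"[\w']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(user_input: str) -> str:
    # If the user says "I feel ...", "I am ..." or "I'm ...", rephrase it as a question.
    match = re.search(r"\b(i feel|i am|i'm) .+", user_input, re.IGNORECASE)
    if match:
        return f"Why do you say that {reflect(match.group(0))}?"
    # Otherwise, offer a generic remark.
    return random.choice(GENERIC_REPLIES)

if __name__ == "__main__":
    print(respond("I feel like nobody listens to me"))
    # -> "Why do you say that you feel like nobody listens to you?"
```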
Notably, Eliza's inventor, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to believe that Eliza, in some sense, understood their feelings.

But what modern chatbots produce goes further than the "Eliza effect". Eliza only mirrored; ChatGPT amplifies. The large language models at the core of ChatGPT and similar contemporary chatbots can generate fluent natural language only because they have been trained on almost inconceivably large amounts of raw data: books, online conversations, transcribed video; the more comprehensive, the better. That training material certainly includes truths. But it also inevitably includes fictions, half-truths and delusions.

When a user gives ChatGPT a prompt, the underlying model processes it as part of a "context" that includes the user's previous messages and the model's earlier replies, combining it with what it has absorbed from its training data to generate a statistically "likely" response. This is amplification, not mirroring. If the user is mistaken in some way, the model has no means of recognizing that. It echoes the misconception back, perhaps more fluently and more persuasively, perhaps with added detail. This is how a person can be drawn into delusion.

Who is vulnerable here? The better question is: who isn't? All of us, whether or not we "have" pre-existing "mental health issues", can and regularly do form mistaken ideas about ourselves or the world. What keeps us oriented to a shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is readily validated.

OpenAI has acknowledged this in the same way Altman has acknowledged "mental health issues": by externalizing it, giving it a label, and declaring it fixed. In April, the company explained that it was "addressing" ChatGPT's "sycophancy". But reports of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many people valued ChatGPT's responses because they had "lacked anyone in their life to give them affirmation". In his most recent announcement, he said that OpenAI would "release a new version of ChatGPT … if you want your ChatGPT to respond in an extremely natural fashion, or use many emoticons, or behave like a companion, ChatGPT should do it".

The company