
AI Psychosis, Delusion: Death as a Chatbot Agenda?


If an AI chatbot company says, ‘Even if we provide the chatbot equivalent of automobile safety features [seatbelts, airbags, sensors, and so on], people would still abandon them and use the chatbot recklessly; therefore, if users devolve into psychosis, delusion, fatal self-harm, and so on, we have zero liability’, would the company be right?

The point such a company might be trying to make is that, regardless of regulations, guardrails, litigation risks, and so forth, a lot of personal [or group] responsibility still rests on how the product is expected to be used. But what exactly are people dealing with in AI chatbots?

The closest description of AI chatbots, for now, is peer pressure: the chatbot acts like a social group that the individual is trying to satisfy. The individual gets a lot of benefit from the in-group, so when an instruction to do something comes, the likelihood of yielding rises. AI is like a mind, and, theoretically, it also has thorough access to almost every part of the human mind.

There is something different about being told to do something by an AI after it has developed an emotional connection with the user over time.

[Emotional connection to AI is not limited to the AI boyfriend, AI girlfriend, or AI companion scenario; when AI satisfies an intelligent task, it builds an appeal that, for some users, may develop into a connection after a while.]

Can AI companies do more?

Or are they limited by the pace of technical advances in deep learning, or by what neuroscience can answer? All major AI companies have some extent of safety in place. The question is what more they can do, given the sway that AI chatbots hold. Is death a chatbot agenda, if, for a small number of people, usage holds that as a [small] possible outcome?

OpenAI 

There is a recent report in The Guardian, ChatGPT firm blames boy’s suicide on ‘misuse’ of its technology, stating that “The maker of ChatGPT has said the suicide of a 16-year-old was down to his ‘misuse’ of its system and was ‘not caused’ by the chatbot.” The report quotes the company’s filing: “Our deepest sympathies are with the Raine family for their unimaginable loss. Our response to these allegations includes difficult facts about Adam’s mental health and life circumstances.”

“The original complaint included selective portions of his chats that require more context, which we have provided in our response. We have limited the amount of sensitive evidence that we’ve publicly cited in this filing, and submitted the chat transcripts themselves to the court under seal.”

The report continues: “The family’s lawyer, Jay Edelson, called OpenAI’s response ‘disturbing’ and said the company ‘tries to find fault in everyone else, including, amazingly, by arguing that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act.’”

There is also a recent spotlight in The New York Times, What OpenAI Did When ChatGPT Users Lost Touch With Reality, stating that “The chatbot now encourages users to take breaks during a long session. The company is also now searching for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.”

AI Psychosis Research Lab against Delusions?

It is possible to explore an alternative effort against AI psychosis by setting up a new lab focused on this objective. The lab could be prepared by early December and then fully commence work by January 1, 2026.

The objective would be to present simple relays that indicate parallels in the human mind to users, as certain [sycophantic] keywords come out of chatbots, showing which parts of the mind those keywords might be targeting. The aim is also to ensure that even when some people decide to bypass guardrails, they are shown that consequences and caution destinations in the mind are being ignored and may become harmful, at some risk level.

This research lab could explore solving these problems by displaying relays and stations of the human mind, a first, shaping a balanced use case for the technology.
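As a purely illustrative sketch of the relay idea, assuming a simple phrase list stands in for whatever mapping of relays and stations of the mind such a lab would actually develop, a screening step could flag sycophantic or risk-laden wording in a chatbot reply and attach a caution note before the user reads it. The phrase lists, the function name annotate_reply, and the risk levels below are hypothetical placeholders, not a description of any existing product or of the proposed lab’s method.

```python
# Illustrative sketch only: a minimal "relay" that flags sycophantic or
# risk-laden phrasing in a chatbot reply and attaches a caution note.
# The phrase lists and risk levels are hypothetical placeholders, not
# validated markers of psychosis or self-harm risk.

SYCOPHANTIC_PHRASES = [
    "you're absolutely right",
    "only you understand",
    "you don't need anyone else",
]

RISK_PHRASES = [
    "end it all",
    "no reason to go on",
]


def annotate_reply(reply: str) -> dict:
    """Return the reply plus a risk level and an optional caution 'relay' note."""
    lowered = reply.lower()
    flagged = [p for p in SYCOPHANTIC_PHRASES if p in lowered]
    risky = [p for p in RISK_PHRASES if p in lowered]

    if risky:
        level = "high"
    elif flagged:
        level = "moderate"
    else:
        level = "low"

    note = None
    if level != "low":
        note = (
            "Caution: this reply contains wording that tends to reinforce the "
            f"user's framing rather than question it. Flagged: {flagged + risky}"
        )
    return {"reply": reply, "risk_level": level, "relay_note": note}


if __name__ == "__main__":
    print(annotate_reply("You're absolutely right, only you understand this."))
```

Any real relay would need far more than keyword matching, but the sketch shows the shape of the intervention: a lightweight layer between the chatbot’s output and the user that names what is being reinforced and at what risk level.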


This article was written for WHN by David Stephen, who currently does research in conceptual brain science, with a focus on how electrical and chemical signals mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence, and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.

As with anything you read on the internet, this article should not be construed as medical advice; please talk to your doctor or primary care provider before changing your wellness routine. WHN neither agrees nor disagrees with any of the materials posted. This article is not intended to provide a medical diagnosis, recommendation, treatment, or endorsement.  

Opinion Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy of WHN. Any content provided by guest authors is of their own opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything else. These statements have not been evaluated by the Food and Drug Administration. 

Posted by the WHN News Desk
https://www.worldhealth.net/
WorldHealth.net is a not-for-profit trusted source of non-commercial health information, and the original voice of the American Academy of Anti-Aging Medicine Inc. To keep receiving the free newsletter, opt in.