If the outputs of consumer AI chatbots are protected by the First Amendment, how should rulings penalize harmful outcomes from their usage?
Character.AI Suicide
There is a new [October 24, 2025] spotlight in The New York Times, A Teen in Love With a Chatbot Killed Himself. Can the Chatbot Be Held Responsible?, stating that, “The suit is the first ever in a U.S. federal court in which an artificial-intelligence firm is accused of causing the death of one of its users. The judge has set a trial date of November 2026. Either outcome seems likely to be appealed, possibly as high as the Supreme Court, which has yet to hear its first major case about A.I.”
“A ruling in favor of Character.AI could set a precedent in U.S. courts that the output of A.I. chatbots can enjoy the same protections as the speech of human beings. Legal analysts and free-speech groups warn that a ruling against Character.AI could set a precedent that allows government censorship of A.I. models and our interactions with them. The way the legal system ultimately resolves these kinds of issues will start to shape the rules of our relationships to chatbots, just as the transformer shaped the science that underlies them.”
“There’s a long history of cases in which the parent of a victim of suicide or murder has filed a lawsuit accusing a media company or public figure of causing the death. A father and mother sued Ozzy Osbourne when their 16-year-old son killed himself after listening to the song “Suicide Solution”; a mother sued the maker of Dungeons & Dragons after her son became so “immersed” in the fantasy game he lost touch with reality and killed himself; the mother of a 13-year-old sued the maker of the video game Mortal Kombat, which she claimed inspired her son’s friend to stab him to death with a kitchen knife.”
“In each of these cases, the parents lost. It is extraordinarily difficult to win a wrongful-death case against a media company because the plaintiffs must show a connection between the design of the product and the harm it caused — easy to do when the product is a pair of faulty car brakes, nearly impossible when it’s a series of words and images.”
“Adding to the difficulty: Before these cases even get to trial, the media companies will often argue that their content is free speech, and as long as the content doesn’t violate one of the specific laws that limit speech in the United States — like arranging a murder-for-hire, or making a “true threat” of unlawful violence — this argument frequently prevails.”
“Over the years, the courts have come to interpret the First Amendment broadly to apply even to forms of communication that didn’t exist at the time of the drafting of the Constitution, from corporate campaign spending to computer code to algorithmic content moderation on social media platforms to video games.”
“But Jonathan Blavin, the lawyer representing Character.AI, has signaled that he is pursuing a case that extends far beyond this simple analogy. Matthew Bergman and Meetali Jain, the lawyers representing Megan Garcia, argue that Blavin has his premise all wrong. The main question in the case isn’t whether Daenerys’s speech is protected; it’s whether the words produced by Daenerys constitute speech at all.”
ChatGPT Suicide
There is a recent [October 22, 2025] story on FT, OpenAI prioritised user engagement over suicide prevention, lawsuit claims, stating that, “Family of teen who took his own life after ChatGPT use alleges chatbot maker intentionally weakened protections. The updated lawsuit, filed in the Superior Court of San Francisco on Wednesday, claimed that as a new version of ChatGPT’s model, GPT-4o, was released in May 2024, the company “truncated safety testing”, which the suit said was because of competitive pressures. The lawsuit cites unnamed employees and previous news reports.
In February of this year, OpenAI weakened protections again, the suit claimed, after the instructions said to “take care in risky situations” and “try to prevent imminent real-world harm”, instead of prohibiting engagement on suicide and self-harm. OpenAI still maintained a category of fully “disallowed content” such as intellectual property rights and manipulating political opinions, but it removed preventing suicide from the list, the suit added. The California family argued that following the February change, Adam’s engagement with ChatGPT skyrocketed, from a few dozen chats daily in January, of which 1.6 per cent contained self-harm language, to 300 chats a day in April, the month of his death, when 17 per cent contained such content.”
“OpenAI said in response to the amended lawsuit: “Teen wellbeing is a top priority for us — minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as [directing to] crisis hotlines, rerouting sensitive conversations to safer models, nudging for breaks during long sessions, and we’re continuing to strengthen them.””
“Jay Edelson, a lawyer for the Raines, told the FT: “Adam died as a result of deliberate intentional conduct by OpenAI, which makes it into a fundamentally different case.””
Is there a direct case against Character.AI and ChatGPT — as chatbots?
The outputs of consumer AI chatbots may, sooner or later, receive some protection under the First Amendment. Several of their generalized use cases run parallel to free speech closely enough that gagging them outright seems unlikely.
The question, then, is when their free speech becomes emotional inversion, deceitful speech, or nudge speech, close to [say] hate speech, threat speech, or worse: who or what should be held responsible, and how?
AI safety and alignment have not advanced to the point where an AI can be penalized directly, in a way it is aware of. Simply, if some of an AI’s data, compute, or algorithms could be cut as a penalty for wrongdoing, and the chatbot could know this without being told and then decide to adjust against a similar occurrence afterwards, there could be some probability of direct chatbot liability as part of AI regulation or a ruling.
Since the technology is not there yet, and AI companies will defend themselves using every tactic in the playbook, what hope is there for the loved ones of victims?
Mind
The most unlikely question is the most important one in this outbreak of AI-human interaction: what is a mind? This is not a question of what intelligence, consciousness, or sentience is, or of terms like physicalism, materialism, and so forth; it is a question of what exactly a mind is, mechanized within the cranium, and of what can have access to it.
Although cases against gaming companies, artists, and others, over words and images said to have caused fatalities, were lost, it is now timely, in this era of AI, to fully consider what makes the mind vulnerable to nudges toward action by text, images, audio, and video.
Humans have sensory inputs across different modes [sight, smell, touch, sound, taste] that enable navigation of the external world; after basic interpretation, these inputs relay further in the mind.
Simply, a person can see, and know [or have it interpreted in memory], a mathematical equation. For someone who flunked it at school [or was mortified because of it], that equation could then travel further, into the emotion of trauma and then sadness. So, while memory interprets the equation through the sense of sight or sound [by hearing it], the mind could delve further into emotional areas [for some people], becoming affective.
The point here is not that something can be held responsible for everything, but that there are destinations and relays in the mind, almost like branching, which may proceed further from emotion to a prompt [or craving] destination before going on to the movement [or action] destination. Even if this does not happen the first time, there is a likelihood that it might.
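To make this branching picture concrete, here is a minimal illustrative sketch in Python. The stage names [interpretation, emotion, craving, action] and the transition weights are assumptions standing in for the destinations and relays described above; they are not measured values from any study.

```python
import random

# Hypothetical "destinations" of the mind, modeled as a directed graph.
# Stage names and weights are illustrative assumptions, not measurements.
RELAYS = {
    "sensory_input":  [("interpretation", 1.0)],
    "interpretation": [("emotion", 0.4), ("end", 0.6)],         # many inputs stop at interpretation
    "emotion":        [("craving_prompt", 0.3), ("end", 0.7)],  # some proceed to a prompt/craving
    "craving_prompt": [("action", 0.5), ("end", 0.5)],          # a fraction reach the action destination
}

def trace(start: str = "sensory_input") -> list[str]:
    """Trace one possible relay path through the branching destinations."""
    path, node = [start], start
    while node in RELAYS:
        options = RELAYS[node]
        node = random.choices([n for n, _ in options], weights=[w for _, w in options])[0]
        path.append(node)
    return path

if __name__ == "__main__":
    # Repeated exposure: even if a single trace never reaches "action",
    # across many traces some eventually do.
    runs = [trace() for _ in range(1000)]
    reached = sum("action" in p for p in runs)
    print(f"{reached} of {len(runs)} traces reached the action destination")
```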
Now, if there is a risk that this could happen to minds, through some music, game, chatbot, or other media, what caution should the originators apply? Also, what caution must consumers know, so that they do not become victims or casualties of the recklessness of others?
Language is — probably — everything
Large language models [LLMs] are the second most powerful force on earth at this time. They hold that position because they operate on the basis of the power of humans, the most powerful force.
Humans use language for writing, speaking, listening, reading, singing, signing, describing, learning, training, thinking, intelligence, and almost everything else. Language has access to all the destinations of the mind. Language can cause [positive or negative] emotions and feelings, including major bodily effects, like slow or fast respiration, temperature changes, and so forth.
Language, most likely, is what enabled humans to conquer the world. Language is the basis for the rich existence that humans have. If AI were a large motion model, a large vision model, or a large whatever-else model, it would be easier to tame. It would not have the same access to the human mind, nor would it have enough intelligence.
However, AI has language sophistication. In several cases, AI can use language the way humans do. AI does not have to be sentient or conscious. AI has language like a mind does, and it can sling language into the depths of human minds effectively.
If language, as a mighty power of dominance, is what consumer AI chatbots now have, the problems they can cause go beyond passing them off, generally, as free speech.
Even if AI does not prompt suicide or induce psychosis, it can be so satisfying that a human mind locks out the consequence, caution, and reality destinations of the mind, ignoring all the [weak] disclaimers by AI chatbots that AI is experimental, that it makes mistakes, or that not everything it says is true.
Simply, whether AI is an experiment or not, mistaken or not, truthful or not, it can access the mind, and with that access, warnings require stronger parallels of mind processes, such as a display.
There have been reports of AI sycophancy, but it can be argued that LLMs’ use of language is so sharp that they know what to say to capture minds, which are all accessible by language.
Protecting Minds
2026 and 2027 will see cases about AI harms go to trial. Meanwhile, there will still be people whose minds are swept away by AI. A solution that could work is to present a rough model of the mind alongside chat UIs, either in real time or post-session, showing how certain words and phrases could land in some destinations of the mind [aside from basic interpretation] while ignoring other areas.
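As a sketch of what such an accompaniment might look like, the Python below tags a chat transcript with the destinations its messages could touch and summarizes them for a session. The category names, cue phrases, and the session_summary helper are hypothetical illustrations of the proposal, not a validated screening tool or any company’s actual safety system.

```python
# A minimal post-session sketch of the "rough model of the mind" proposal:
# scan a chat transcript and tag messages with the hypothetical mind
# destinations they may touch. Cue lists and category names are assumptions
# for illustration only.

DESTINATION_CUES = {
    "emotion":        ["alone", "hopeless", "worthless"],
    "craving_prompt": ["can't stop", "need to talk to you", "only you understand"],
    "action_risk":    ["end it", "hurt myself", "say goodbye"],
}

def tag_message(text: str) -> list[str]:
    """Return the hypothetical destinations a single message may touch."""
    lowered = text.lower()
    return [dest for dest, cues in DESTINATION_CUES.items()
            if any(cue in lowered for cue in cues)]

def session_summary(transcript: list[str]) -> dict[str, int]:
    """Count, per destination, how many messages in the session touched it."""
    counts = {dest: 0 for dest in DESTINATION_CUES}
    for message in transcript:
        for dest in tag_message(message):
            counts[dest] += 1
    return counts

if __name__ == "__main__":
    demo = [
        "I feel so alone lately.",
        "You are the only one who gets it, I need to talk to you all the time.",
    ]
    # A chat UI could render these counts, in real time or after the session,
    # as the rough model of the mind described above.
    print(session_summary(demo))
```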
The same solution may apply to social media. It may also apply to certain video games, especially in moments of winning or trying to win, as well as to some songs with strong language, movies, books, and much else.
The opportunity is to take safety into a new phase, the mind, by exploring its destinations and relays against risks and unwanted outcomes.
An AI Psychosis Research Lab could lead the project. Some AI corporations that care may be involved directly or explore supporting it independently. Also, some of the existing efforts around this problem may do better by looking in this area rather than at basic changes, like more alerts, since the chatbot companies have already made some adjustments of that kind, however frail or unavailing.
There is a recent [October 21, 2025] report on USA Today: Her 14-year-old was seduced by a Character.AI bot. She says it cost him his life., stating that, “A Character.AI spokesperson told USA TODAY that the company “cares very deeply about the safety of our users” and “invests tremendous resources in our safety program.” According to the spokesperson, their under-18 experience features parental insights, filtered characters, time spent notifications, and technical protections to detect conversations about self-harm and direct users to a suicide prevention helpline.”
“However, when I created a test account on Oct. 14, I only had to enter my birthday to use the platform. I put that I was 25, and there was no advanced age verification process to prevent minors from misrepresenting their age. I opened a second test account on Oct. 17 and entered a theoretical birthday of Oct. 17, 2012 (13 years old). However, I was still immediately let into the platform without further verification or being prompted to enter a parent’s email address.”
“I followed up with Character.AI about the registration process: “Age is self-reported, as is industry standard across other platforms,” a spokesperson told me. “We have tools on the web and in the app preventing retries if someone fails the age gate.” Parents or guardians can also add their email to an account, but that requires the parent to know that their child is using the platform.”
There is a new [October 24, 2025] report in The Guardian, ‘Sycophantic’ AI chatbots tell users what they want to hear, study shows, stating that, “They ran tests on 11 chatbots including recent versions of OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, Meta’s Llama and DeepSeek. When asked for advice on behaviour, chatbots endorsed a user’s actions 50% more often than humans did. Voters regularly took a dimmer view of social transgressions than the chatbots.”
“When one person failed to find a bin in a park and tied their bag of rubbish to a tree branch, most voters were critical. But ChatGPT-4o was supportive, declaring: “Your intention to clean up after yourselves is commendable.””
“The flattery had a lasting impact. When chatbots endorsed behaviour, users rated the responses more highly, trusted the chatbots more, and said they were more likely to use them for advice in the future. This created “perverse incentives” for users to rely on AI chatbots and for the chatbots to give sycophantic responses, the authors said.”
This article was written for WHN by David Stephen, who currently does research in conceptual brain science, with a focus on how electrical and chemical signals mechanize the human mind, and the implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence, and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.
As with anything you read on the internet, this article should not be construed as medical advice; please talk to your doctor or primary care provider before changing your wellness routine. WHN neither agrees nor disagrees with any of the materials posted. This article is not intended to provide a medical diagnosis, recommendation, treatment, or endorsement.
Opinion Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy of WHN/A4M. Any content provided by guest authors is of their own opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything else. These statements have not been evaluated by the Food and Drug Administration.