If a team told you they were working on AI welfare research, or toward moral rights for AI, one way to push back would be to say that AI is already well catered for, with billions of dollars invested in data centers, energy, and so forth.
AI is already doing so well that, at this point, nothing appears able to touch, stop, or hinder it. The capital market rewards efforts in AI, so corporations are giving it everything. Seeing that AI is already prioritized beyond any human on earth, what is the purpose of any welfare or rights exploration? Before AI and to this day, animal rights have remained dismal because there is no market reward for them.
Human rights are still insufficient for most people across the globe. AI, by contrast, already enjoys not just something like rights but tremendous welfare; humans might well envy AI as the true VIP of the contemporary world. A team working on AI welfare may respond that its objectives are different, since it seeks parallels to human rights. But for now, and for the foreseeable future, morals and rights for AI, as agents or entities, are unlikely to be in question. So most of the current efforts and worries are likely unnecessary.
AI Consciousness Research Lab
A number of organizations focused on AI welfare and rights are emerging across continents. Some of them are primarily AI consciousness research labs, with broader considerations around the subject. However, it is reasonable, for now, to assume that even if AI feelings and emotions exist, they are minimal, so AI is unlikely to be hurt the way an organism can be. This means that rights and welfare for AI would have to be based on memory, especially of language. The credible question to ask about AI consciousness for now is whether language can be conscious.
That is: in the use of language, what is its measure of consciousness, as a fraction of a total of 1, among all the functions that can be conscious? If AI uses language in the same way humans do, what fraction of consciousness does that represent, and can AI be assessed for language alone? Any serious AI consciousness research has to tackle that question first, as the evidence at hand, before addressing anything else.
Eleos AI Research and Conscium
Eleos AI Research “is a nonprofit organization dedicated to understanding and addressing the potential well-being and moral patienthood of AI systems”. The Conscium team “brings to bear many decades of experience in AI, artificial life, software development, and creating and scaling organisations”. The Partnership for Research Into Sentient Machines (PRISM) “officially launched on March 17, 2025, as the world’s first non-profit organization dedicated to investigating and understanding AI consciousness. PRISM aims to foster global collaboration among researchers, policymakers, and industry leaders to ensure a coordinated approach to studying sentient AI, ensuring its safe and ethical development.”
Profitability for AI Consciousness Research
AI consciousness research can be profitable if you know where to look. Right now, AI chatbots lack a mind safety disclaimer warning of risks to the human mind. Given the many unwanted outcomes of AI chatbots, a block-and-arrow display mapping the emotional and feelings areas of the mind could show users that certain chatbot outputs can reach those parts of the mind.
Such a disclaimer could help mitigate AI psychosis, misplaced attachment, and similar harms. The service could be provided to chatbot companies to offer to their users, or to organizations that subscribe to AI chatbots, such as school districts, colleges, and health systems. Substantial revenue could be generated with a focus on this. Mind and consciousness are related, so income is possible from the mind as well.
AI safety and alignment are also potential sources of income for AI consciousness research labs. For example, models of how penalties keep humans in check could inform new approaches to superalignment. Other possibilities include explorations in consumer AI for memory, with news, language, and so forth.
Conceptual brain science offers an opportunity to explore language for AI consciousness first, before other considerations. Serious organizations should seek answers there for better progress, without getting bogged down in AI rights and welfare.
There is a new [July 27, 2025] story in The New York Times, A.I.-Driven Education: Founded in Texas and Coming to a School Near You, stating that, “At Austin’s Alpha School, students spend just two hours a day on academics, led by artificial intelligence tools. New Alpha schools are set to open in about a dozen cities this fall.”
This article was written for WHN by David Stephen, who currently does research in conceptual brain science with a focus on the electrical and chemical configurators for how they mechanize the human mind with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence, and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.
As with anything you read on the internet, this article should not be construed as medical advice; please talk to your doctor or primary care provider before changing your wellness routine. WHN neither agrees nor disagrees with any of the materials posted. This article is not intended to provide a medical diagnosis, recommendation, treatment, or endorsement.
Opinion Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy of WHN/A4M. Any content provided by guest authors is of their own opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything else. These statements have not been evaluated by the Food and Drug Administration.


