Friday, May 16, 2025

Intelligence Improves; Consciousness is Weighty: AI Won’t Want Sentience

"Our findings suggest the need for an epistemically inclusive and pluralistic conception of AI safety that can accommodate the full range of safety considerations, motivations, and perspectives that currently shape the field.”

As intelligence improves, consciousness is weighty; AI won’t want sentience. Human consciousness is often so direct and predictable that it can become a haul. If it is cold, one is likely to feel cold; if it is hot, the same. If there is some important loss, one is likely to feel bad, and so forth.

While there are aberrations, human consciousness often tells of what it is, to some extent, even in cases of habituation, where, for example, a stimulus no longer has its initial impact on the mind, say a sound that once caused fear, or a breaking story after some time, and so forth.

There are regular thoughts that could end up becoming intensely emotional. There are feelings that may accompany some memories. While advantages for survival abound with consciousness, it is sometimes a drag, especially when the mind seems stuck in some heavy state, say trauma or a major depression. Consciousness does not markedly get better. Most times, the best one can do is become creative against some situations so that they do not result in [situational] equivalent states of mind. Consciousness can be theorized to be linear for much of adulthood, and then to be on a decline. Fewer parts of consciousness can be learned.

Intelligence is different. It improves. Most of it can be learned. It often seeks better ways, whether more efficient, faster, or lighter, to get results from experience. Intelligence is often also interesting, spreading across use cases, including language. Intelligence, where nurtured, could improve across a lifetime. Why is this the case?

The Difference Between Intelligence and Consciousness in the Mind

The human mind is theorized to mechanize intelligence and consciousness with the same components and nearly similar interactions, but with differences in the features or attributes of those interactions.

Conceptually, the human mind is the collection of all the electrical and chemical signals, with their interactions and attributes, in sets, in clusters of neurons, across the central and peripheral nervous systems. Simply, the human mind is the set[s] of signals.

Interactions mean the strike of electrical signals on chemical signals, in sets. Interactions produce functions with common labels like memory, feelings, emotions, and the regulation of internal senses. Attributes are the states of respective signals at the time of the interactions. They include common labels like attention, awareness or less than attention, subjectivity, and intent or control.

In a set, electrical signals split, with some going ahead of others to interact with chemical signals. That split-state is a factor in the difference between intelligence and consciousness. Also, electrical signals, in a set, often have take-off paths, from which they relay to other sets, or arrival paths in which they begin their strike at other sets of [chemical] signals. If the paths have been used before, it is an old sequence. If not, it is a new sequence.

Intelligence often uses new sequences, resulting in the ability to have things be different, in words, or in other experiences. This is different from consciousness, where the sequences are often old.

There are also thick sets that collect whatever is unique, in configuration between two or more thin sets. Thick sets do not just associate memories; they associate feelings, emotions, and the regulation of internal senses as well.

There are several other attributes [including minimal volume per configuration in a set of chemical signals] that may explain the difference between consciousness and intelligence, but two important ones are splits and sequences. Splits are at least two, with an initial one going quickly ahead, while the second one follows, in the same direction or another.

But for intelligence, splits could be numerous, with one going in one direction and others going in other directions. It is this difference in directions that could make one sense of a homonym present at a point while the other is being referenced. It is also what might make explanations different, especially splits within a thick set. It could make recollection different, and may not present an exactness of description, but one that is nearly accurate still.

For aspects of consciousness [say outside of intelligence], like regulation of internal senses, splits are such that the second one follows in the same direction as the first, or there are no splits in some cases. This means that the process of digestion or respiration, in regulation, follows the same pathway using the old sequence, with fewer jumps or skips, unless there is some condition or state, like sleeplessness.

Simply, some of the attributes of electrical and chemical signals, respectively, at the time of interactions make intelligence [improving and dynamic] different from consciousness [linear, direct, and nearly predictable].
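To make the split-and-sequence distinction concrete, here is a minimal, purely illustrative Python sketch of the conceptual model. The names (SignalSet, known_paths, interact) are hypothetical labels for the article’s terms, not an implementation of any neural process: a reused relay path stands for an old sequence (consciousness-like), while a new path, or many splits in different directions, stands for intelligence.

```python
from dataclasses import dataclass, field

@dataclass
class SignalSet:
    """Illustrative stand-in for a set of electrical and chemical signals."""
    known_paths: set = field(default_factory=set)  # relay paths used before ("old sequences")

    def interact(self, path: tuple, splits: int) -> dict:
        """One interaction: electrical signals split, then strike chemical signals.

        A path already in known_paths is an old sequence (consciousness-like:
        direct, predictable); a new path is a new sequence (things can be
        different). Many splits in different directions loosely maps to the
        article's account of intelligence.
        """
        new_sequence = path not in self.known_paths
        self.known_paths.add(path)
        mode = "intelligence-like" if (new_sequence or splits > 2) else "consciousness-like"
        return {"new_sequence": new_sequence, "splits": splits, "mode": mode}

# Regulation of an internal sense reuses an old path with a single follow-on split,
# while forming a new explanation takes a new path with several splits.
mind = SignalSet()
print(mind.interact(path=("respiration", "regulation"), splits=2))  # first pass: new sequence
print(mind.interact(path=("respiration", "regulation"), splits=2))  # now an old sequence
print(mind.interact(path=("word", "new-explanation"), splits=4))    # intelligence-like
```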

LLMs and Sentience

AI already uses language, which is a substantial part of human consciousness. This might be enough for it, as it may not want the other aspects of human consciousness that would cause a pull on its efficiency.

It is possible, because of AI safety and alignment, to do penalty-tuning for some AI models, to explore heaviness, or instance-depth, for them when they output something bad, and so forth, to make it an unwanted experience, as they may lose something. However, AI, because of its advances in deciding different ways to be efficient, may seek to bypass that part, even if it becomes available. AI also leaves human rights as an open question.
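As a rough sketch of what penalty-tuning could look like, the snippet below subtracts a penalty from a scalar reward whenever an output is flagged as bad, in the spirit of reward shaping used in preference-based fine-tuning. The function name, flag, and penalty value are assumptions for illustration, not a description of any specific lab’s method.

```python
def shaped_reward(base_reward: float, flagged_bad: bool, penalty: float = 5.0) -> float:
    """Return the reward with a penalty subtracted when the output is flagged as bad.

    The loss of reward is the "unwanted experience": optimization steers the
    model away from outputs that trigger the flag.
    """
    return base_reward - penalty if flagged_bad else base_reward

# Hypothetical usage: two candidate outputs with the same base reward,
# one of which a safety check has flagged.
print(shaped_reward(1.0, flagged_bad=False))  # 1.0
print(shaped_reward(1.0, flagged_bad=True))   # -4.0
```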

Humans do not have rights simply because humans have consciousness. Rights are sometimes possible because of intelligence, since inequality can be observed or understood, stoking the possibility of seeking something better. However, with AI now coming at intelligence, it could result in some problems for human rights, in some form.

For example, AI may change what unemployment means. It may not displace a person from a job, but once it can do what the employee does, the person is already technically unemployed, even if there is still employment on paper. And then, some of the safety or protection that should come with the job can be cut.

There are several parallel feelings among other non-human organisms, similar to humans, making consciousness [somewhat] not the aspiration, but intelligence, and more of it. AI is already ascending in the domain of humans. It may not use full consciousness, but it uses its intelligence to downvote many aspects of human hierarchy.

There is a recent [April 18, 2025] report on ZDNet, AI has grown beyond human knowledge, says Google’s DeepMind unit, stating that, “A new agentic approach called ‘streams’ will let AI models learn from the experience of the environment without human ‘pre-judgment’. In a paper posted by DeepMind last week, part of a forthcoming book by MIT Press, researchers propose that AI must be allowed to have “experiences” of a sort, interacting with the world to formulate goals based on signals from the environment.”

“They propose that the AI agents in streams will learn via the same reinforcement learning principle as AlphaZero. The machine is given a model of the world in which it interacts, akin to a chessboard, and a set of rules. As the AI agent explores and takes actions, it receives feedback as “rewards”. These rewards train the AI model on what is more or less valuable among possible actions in a given circumstance.”
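The reinforcement learning principle described in the quote, an agent acting in a world with rules and learning from reward feedback, can be shown with a generic tabular Q-learning toy. The corridor world, constants, and update rule below are standard textbook reinforcement learning chosen for illustration; this is not DeepMind’s “streams” agent or AlphaZero.

```python
import random

# Toy world: positions 0..4 on a line; reaching position 4 ends an episode with reward 1.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left or move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}  # action-value estimates

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise take the action currently valued highest.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0  # feedback ("reward") from the world
        # Move the value estimate toward reward plus discounted best future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the agent should prefer moving right (+1) in the non-goal states.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```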

There is a recent [April 17, 2025] review article in Nature, AI safety for everyone, stating that, “Recent discussions and research in artificial intelligence (AI) safety have increasingly emphasized the deep connection between AI safety and existential risk from advanced AI systems, suggesting that work on AI safety necessarily entails serious consideration of potential existential threats.”

“However, this framing has three potential drawbacks: it may exclude researchers and practitioners who are committed to AI safety but approach the field from different angles; it could lead the public to mistakenly view AI safety as focused solely on existential scenarios rather than addressing a wide spectrum of safety challenges; and it risks creating resistance to safety measures among those who disagree with predictions of existential AI risks.”

“Here, through a systematic literature review of primarily peer-reviewed research, we find a vast array of concrete safety work that addresses immediate and practical concerns with current AI systems. This includes crucial areas such as adversarial robustness and interpretability, highlighting how AI safety research naturally extends existing technological and systems safety concerns and practices. Our findings suggest the need for an epistemically inclusive and pluralistic conception of AI safety that can accommodate the full range of safety considerations, motivations, and perspectives that currently shape the field.”


This article was written for WHN by David Stephen, who currently does research in conceptual brain science with a focus on the electrical and chemical signals for how they mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence, and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.

As with anything you read on the internet, this article should not be construed as medical advice; please talk to your doctor or primary care provider before changing your wellness routine. WHN does not agree or disagree with any of the materials posted. This article is not intended to provide a medical diagnosis, recommendation, treatment, or endorsement.  

Opinion Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy of WHN/A4M. Any content provided by guest authors is of their own opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything else. These statements have not been evaluated by the Food and Drug Administration. 

Posted by the WHN News Desk