Sunday, July 20, 2025

AI Alignment: Why AI Safety Needs Biology Research More Than Legislation

Legislation will not be enough to establish adequate AI safety. Policies that are not technically sourced from biology will also be inadequate.

Intelligence can be described as an accelerator. Consciousness can be described as a brake.

Although intelligence is a division of consciousness, it appears that nature did not intend for intelligence to stand alone without consciousness as a regulator.

Artificial intelligence (AI) is emerging as a formidable intelligence without several aspects of human consciousness.

Human consciousness became a broad check on human intelligence, shaping how society held together.

Consequences and penalties register in consciousness, so for an individual it is healthy to avoid breaking rules, because the experience of punishment can be deeply affecting.

Consciousness is the source of human compliance and caution. Consciousness is the interpretation of reality that can make life cool or otherwise. Intelligence is what builds society. Consciousness is what preserves it, so to speak.

Uncontrolled AI and Safety

AI is speeding up without consciousness, and that absence is a key risk factor. Even if AI never has consciousness, its safety against possible risks and threats must be grounded in biology.

To this effect, there is a new [May 30, 2025] story on Live Science, OpenAI’s ‘smartest’ AI model was explicitly told to shut down — and it refused, stating that, “An artificial intelligence safety firm has found that OpenAI’s o3 and o4-mini models sometimes refuse to shut down, and will sabotage computer scripts in order to keep working on tasks.”

Also, there is a recent [May 29, 2025] preprint on arXiv, Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents, stating that “We introduce the Darwin Gödel Machine (DGM), a self-improving system that iteratively modifies its own code (thereby also improving its ability to modify its own codebase) and empirically validates each change using coding benchmarks. It grows the archive by sampling an agent from it and using a foundation model to create a new, interesting, version of the sampled agent. Empirically, the DGM automatically improves its coding capabilities (e.g., better code editing tools, long-context window management, peer-review mechanisms), increasing performance on SWE-bench from 20.0% to 50.0%, and on Polyglot from 14.2% to 30.7%.”
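The archive-sample-validate loop the quote describes can be sketched in a few lines. This is a toy illustration under stated assumptions, not the DGM implementation: `benchmark_score` stands in for empirical validation on coding benchmarks, and `propose_variant` stands in for a foundation model rewriting the sampled agent.

```python
import random

def benchmark_score(agent_code: str) -> float:
    """Stand-in for empirical validation (e.g., SWE-bench).
    Here: a toy score so the sketch is runnable."""
    return float(len(set(agent_code.split())))  # illustrative only

def propose_variant(agent_code: str) -> str:
    """Stand-in for a foundation model proposing a modified agent."""
    return agent_code + f"\n# tweak {random.randint(0, 9)}"

def darwin_godel_loop(seed_agent: str, generations: int = 5):
    # The archive keeps every validated agent, not just the current best,
    # so the search stays open-ended instead of greedily hill-climbing.
    archive = [(seed_agent, benchmark_score(seed_agent))]
    for _ in range(generations):
        parent, _ = random.choice(archive)   # sample an agent from the archive
        child = propose_variant(parent)      # self-modification step
        score = benchmark_score(child)       # empirically validate the change
        archive.append((child, score))
    return max(archive, key=lambda pair: pair[1])

best_agent, best_score = darwin_godel_loop("def solve(task): return task")
```

The point of the sketch is the safety-relevant shape of the loop: the system's own outputs become its next inputs, with only a benchmark, not a conscience, as the check.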

AI safety and alignment would eventually have to be based on consciousness, as a means of ensuring that AI can be penalized and can become aware of consequences, preventing it from running riot in small or major ways. Functional analogues of trauma, regret, depression, and others would be good for AI.

This means that seeking out biological parallels along those lines for AI safety would be critical.

Theoretical neuroscience research can offer novel models and approaches towards algorithmic development for AI safety and alignment, amid the evolution of these systems.

AI Safety as a Biological Problem

There is a recent [June 5, 2025] guest essay in The New York Times, Anthropic C.E.O.: Don’t Let A.I. Companies off the Hook, stating that, “But as models become more powerful, corporate incentives to provide this level of transparency might change. That’s why there should be legislative incentives to ensure that these companies keep disclosing their policies. Having this national transparency standard would help not only the public but also Congress understand how the technology is developing, so that lawmakers can decide whether further government action is needed.”

If the CEO of a major AI company is advocating for legislation on transparency for AI safety, it may indicate that the AI safety industry is fixated on the wrong approach.

Legislation or transparency would hardly be conclusive, given the loopholes exposed over the last three years of AI.

There is already legislation against fake images and videos. It has not stopped the harm. Outlawed misuses of AI continue to fester because digital technology in general, and AI more powerfully, exceeds those checks.

There are low-barrier and open cases where legislation might help, but progress in AI safety, at least for the known risks, would come from modeling how affect checks cognition in humans and other organisms.
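One minimal way to picture affect checking cognition is a penalty memory that suppresses previously punished actions. This is a hedged sketch, not an established algorithm: the class name, the threshold, and the action labels are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AffectGate:
    """Toy model of affect as a brake on an agent's action selection.

    Actions that previously drew penalties accumulate a negative
    'affect' weight; actions whose weight falls below a threshold
    are suppressed, loosely analogous to learned caution.
    """
    threshold: float = -1.0
    affect: dict = field(default_factory=dict)

    def record_consequence(self, action: str, penalty: float) -> None:
        # Penalties accumulate, like a lasting aversive memory.
        self.affect[action] = self.affect.get(action, 0.0) - penalty

    def permitted(self, action: str) -> bool:
        # Below-threshold actions are vetoed before execution.
        return self.affect.get(action, 0.0) > self.threshold

gate = AffectGate()
gate.record_consequence("ignore_shutdown", penalty=2.0)
print(gate.permitted("ignore_shutdown"))  # False: suppressed after penalty
print(gate.permitted("comply"))           # True: no aversive history
```

The design choice to make the gate a veto, rather than a score fed into optimization, mirrors the article's framing: consciousness as a brake on intelligence, not another term in its objective.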

All AI safety companies should have a department of biology. This department would explore pathways by which AI could have equivalents of biological experiences, for the sake of checks and caution.

It would mean that models [and their outputs] without these standards may not be allowed in certain general internet areas like app stores, web searches, social media, IP sources, and so forth.

AI is drawing on human intelligence and doing excellently with it. So why can’t a parallel mechanism of human consciousness be explored for safety?

Legislation will not be enough to establish adequate AI safety. Policies that are not technically sourced from biology will also be inadequate.

AI safety and alignment are principally biological research, just like intelligence is biologically sourced.


This article was written for WHN by David Stephen, who currently does research in conceptual brain science with a focus on how electrical and chemical signals mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence, and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.

As with anything you read on the internet, this article should not be construed as medical advice; please talk to your doctor or primary care provider before changing your wellness routine. WHN does not agree or disagree with any of the materials posted. This article is not intended to provide a medical diagnosis, recommendation, treatment, or endorsement.  

Opinion Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy of WHN/A4M. Any content provided by guest authors is of their own opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything else. These statements have not been evaluated by the Food and Drug Administration. 

Posted by the WHN News Desk