Saturday, January 17, 2026

LLMs: Why AI Superalignment is Better Than Superintelligence

AI superalignment is a better objective than superintelligence, since intelligence without safety is a recipe for clear and present dangers.

It is unclear which might be more difficult to achieve: a superintelligent AI or superalignment for that superintelligence. Nevertheless, superalignment is a far better objective than superintelligence.

What is the superintelligence problem for AI? The right question to ask is whether any team is actually on track to crack superintelligence in machines. The smartest machines on earth, for now, are reasoning AI models. They produce clever outputs and use data [or, say, memory] better than anything else.

So, data is available to machines, and reasoning models can relay across it, albeit slowly, toward useful outputs. Simply put, reasoning is correlated with relay across data areas. To achieve superintelligence, then, relay could be an important [machine] marker.

The basis for advanced intelligence is human. The source of human intelligence is the brain. Two distinct elements predicate how human intelligence works: storage and transport. If someone were to figure something out, they would use memory, and there would be a transport quality through memory areas. Most of what gets done with human intelligence [and its outstanding variants like innovation, creativity, quick wit, and so forth] is, conceptually, a result of relays in the human brain.

So, storage is done in ways that allow relays to reach the necessary locations [that make intelligence proximate]. It is often argued that a child can learn from a few data points while a machine model must be trained on far more. A likely weakness is that there is still a problem with how digital data is stored, limiting how it can be accessed by the [advanced] AI architectures of the present day.

How is human memory stored? What relays across memory areas result in intelligence? Superintelligence will be predicated on storage and relay theorems drawn from biology. In the brain, electrical and chemical configurators [or assemblers, or formations] can be theorized to be responsible for the storage and relay of information, resulting [in advances for] intelligence.

In clusters of neurons, electrical and chemical configurators mostly form thick sets, collecting whatever is common among two or more thin sets [and ridding those thin sets]. There are fewer lone thin sets, and they are located where they do not obstruct access to many parts of thick sets. Existing thick sets are what make learning from fewer examples easier for humans, and what make [out-of-distribution] interpretations more accurate. When electrical and chemical configurators interact, they have states at the moment of interaction; these states are their attributes, which are sometimes the relay qualities that determine how they interact [to output intelligence].

Advancing storage and relay for AI also means energy efficiency, given how energy efficient a human brain is compared to a data center, so to speak. Some aspects of storage can be explored with Steiner chains, and relay with morphisms, among other mathematical structures.

Superalignment 

If a company develops superintelligence without superalignment, the misuse could be so risky for human society that it outweighs the good. Even at present, when cases of AI misuse make news, they foreshadow what the future may hold without an encompassing alignment architecture.

If biology were to lead, the only way superalignment could be thorough is with consequences for AI models. So, there could be non-concept features in some architectures, where certain [or rigid, same-number, or deductive] vectors would stay constant in a way that hamstrings the outputs of a model. They could 'bind' to the key vector or query vector, such that the model would know, reducing its efficiency and speed. This consequence could become a way to ensure that whenever the model is misused, it gets penalized.
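As a hypothetical sketch only (the article does not specify a mechanism; the 'binding' here is an illustrative stand-in), one way a constant penalty vector could interfere with attention is as an extra key/value pair appended alongside the model's own keys, soaking up attention mass and degrading the output:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V, penalty_key=None, penalty_value=None):
    """Scaled dot-product attention. If a constant penalty key/value
    pair is supplied, it is 'bound' alongside the model's own keys,
    diverting attention mass and distorting the output."""
    if penalty_key is not None:
        K = np.vstack([K, penalty_key])
        V = np.vstack([V, penalty_value])
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
d = 8
Q = rng.normal(size=(4, d))
K = rng.normal(size=(6, d))
V = rng.normal(size=(6, d))

# A rigid, constant vector: it attracts attention, while its zero
# value drags every output toward nothing (the 'consequence').
penalty_key = np.full(d, 4.0)
penalty_value = np.zeros(d)

clean = attention(Q, K, V)
penalized = attention(Q, K, V, penalty_key, penalty_value)
print(float(np.abs(clean - penalized).max()))  # nonzero: outputs degraded
```

This is a toy, not a proposal for a working safety layer; it only shows that a constant vector bound into the key/value path measurably alters what the model outputs.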

This affective penalty is what could become superalignment for superintelligence, or for less [LLMs]. It is informed by the biology of how human society works. For example, the threat[s] of torture, shame, excommunication, pain, and so forth, are mostly effective because they are affective: they feel bad biologically, so they are avoided, making warning and caution useful, since the consequences fall hard on the self. Affect does not care about the level of intelligence. The same will apply to AI, regardless of benchmarks.

There are several other possibilities for superalignment, but what would be effective is not post-error safety but within-affect caution: the model learning what the consequences of an action are [subjectively], leading to misuse avoidance in both known and new scenarios. No AI regulation would be properly effective against superintelligence. Everything comparable, like pharmaceutical regulations, airline regulations, and so forth, operates in physical spaces. AI is digital, and digital is more pervasive, more extensively scaled, and far more evasive. Several unlawful things get done digitally without justice because the consequences are improbable. What is needed is AI penalization across the digital space, in contrast to cost-function penalization in the regularization of neural networks.
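For contrast, the cost-function penalization mentioned above is the standard training-time device; a minimal sketch (L2 weight decay on a linear model, with illustrative values) shows it only shapes weights during training, rather than imposing consequences at the moment of use:

```python
import numpy as np

def mse_loss(w, X, y, l2=0.0):
    """Mean squared error with an optional L2 (weight-decay) penalty:
    the penalty is added to the training loss, nudging weights toward
    zero, but it exerts no influence once the model is deployed."""
    pred = X @ w
    return np.mean((pred - y) ** 2) + l2 * np.sum(w ** 2)

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 3))
y = X @ np.array([1.0, -2.0, 0.5])
w = rng.normal(size=3)

plain = mse_loss(w, X, y)
regularized = mse_loss(w, X, y, l2=0.1)
print(regularized >= plain)  # the penalty can only add to the loss
```

The distinction the article draws is that this penalty disappears after training, whereas a consequence-style penalty would persist wherever the model runs.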

Superintelligence or Superalignment?

AI would be useful for the world only if it were safer. Discussions of AI applications without biological-alignment [or superalignment] guardrails mean vulnerabilities whose cost to society may, at some point, spike to an unbearable level.

A company may pursue one or both [SA/SI], or a little of both, but what would be more profitable, sustainable, and useful is superalignment, deployed wherever AI is found or used.

There is a new [July 9, 2025] story on WIRED, McDonald’s AI Hiring Bot Exposed Millions of Applicants’ Data to Hackers Who Tried the Password ‘123456’, stating that, “Basic security flaws left the personal info of tens of millions of McDonald’s job-seekers vulnerable on the “McHire” site built by AI software firm Paradox.ai.”


This article was written for WHN by David Stephen, who currently does research in conceptual brain science with a focus on the electrical and chemical configurators for how they mechanize the human mind with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence, and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.

As with anything you read on the internet, this article on superalignment should not be construed as medical advice; please talk to your doctor or primary care provider before changing your wellness routine. WHN neither agrees nor disagrees with any of the materials posted. This article is not intended to provide a medical diagnosis, recommendation, treatment, or endorsement.  

Opinion Disclaimer: The views and opinions expressed on superalignment in this article are those of the author and do not necessarily reflect the official policy of WHN/A4M. Any content provided by guest authors is of their own opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything else. These statements have not been evaluated by the Food and Drug Administration. 

Posted by the WHN News Desk