Quantum Computing: AI World Models, Spatial Intelligence Need a 2026 Classical Memory

The trajectory of artificial intelligence towards artificial superintelligence may stall without a new classical memory architecture for storage, similar to the human brain.

Why does AI answer questions the same way every time? Why can’t AI at least understand the connections between certain questions and answers, then get better or become more diversified in how it helps to solve problems?

AI is a kind of intelligence. However, organisms, even when they do the same things, do them differently, showing that biological intelligence explores improvement, even in small cases, or at least avoids exact repetition, even when the outcomes are the same.

One possible reason memory is stored collectively is that the focus is not specificity but commonality. So, instead of every fan being a separate memory, a fan is a collection, with everything common among fans gathered within it. Also, because memory is a collection, relays can start from different sides or spots [of the collective storage], so the same thing often plays out slightly differently. Memory is also often seeking new collections, easing how relays improve processes [including intelligence].
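To make this conceptual model concrete, here is a minimal Python sketch, under the article's assumptions (the class, feature names, and scoring rule are all hypothetical illustrations, not an established model): a concept is stored as what is common across instances, an input is interpreted by overlap with that shared core, and a relay can begin from different stored spots, so the same recognition can play out differently.

```python
import random

# Hypothetical sketch: a concept is stored as a collection of shared
# features, not as one separate memory per encountered instance.
class Collection:
    def __init__(self, name):
        self.name = name
        self.common = set()   # features common to all absorbed instances
        self.members = []     # possible starting spots for a relay

    def absorb(self, instance_features):
        # Keep only what is common; specificity is not the focus.
        if not self.common:
            self.common = set(instance_features)
        else:
            self.common &= set(instance_features)
        self.members.append(sorted(instance_features))

    def interpret(self, observed):
        # An input is recognized by its overlap with the collection,
        # not by matching any single stored instance.
        return len(self.common & set(observed)) / max(len(self.common), 1)

    def relay_start(self):
        # Different starting spots make the same recognition
        # play out slightly differently each time.
        return random.choice(self.members)

fans = Collection("fan")
fans.absorb({"blades", "rotation", "airflow", "ceiling-mounted"})
fans.absorb({"blades", "rotation", "airflow", "handheld"})

print(fans.common)                            # the shared core among fans
print(fans.interpret({"blades", "airflow"}))  # partial overlap still recognized
print(fans.relay_start())                     # varies run to run
```

Here "ceiling-mounted" and "handheld" drop out of the shared core, mirroring the claim that specifics are discarded in favor of commonality.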

Human Memory Architecture 

What are the structural foundations of human intelligence in the brain? Simply, if intelligence is the use of memory, what is the architecture of human memory that makes intelligence, as an outcome, exceptional?

If AI were to at least match human creativity and innovation, at the measure of extraordinary advancement, it may require more than just scale [of compute and data], which large language models [LLMs] currently have.

Humans do not have complex intelligence because they keep a unique memory of every sensation. No. Human memory, conceptually, is mostly a collection of many similar things, such that the interpretation of anything is done with the collection, not with a specific instance, for the most part.

If an individual sees a door or hears the sound of a vehicle, it is almost immediately interpreted, so that the relay [for what to do with it or not] proceeds without intricate visits to respective [unique] storages.

This fast-interpretation objective makes it possible to decide quickly on a number of things using a general mode, so that when they are operated on or improved, it is not always with intricacies that delay efficiency.

Also, the interpretation comes from the collective storage of doors, or of vehicle sounds. This does not mean there is no specific knowing of things; there is, but such memories are generally fewer, aside from language, and exist separately from the pack. Still, what gets used [say, in language] may come from collections.

An example of this is speaking, where, even though words are specific, what comes out may not be exactly what was intended, but something else from within the collection.

However, language is still easier because it is learned early. How so? Several memories exist separately from early on but tend to collect because of conceptual similarities. Yet language stays mostly that way, even though there are collections joining images, sounds, scents, and other similarities of the same thing.

A disadvantage of collection is that learning as an adult [say, a language, or advanced physics for a non-physicist] has to join existing collections, not just exist alone. That process is slower than early in life, resulting in delays. Specificity, on the other hand, also makes it tough for an adult to learn many faces easily, and so forth.

Collection

Now, because the group is used for interpretation, it is generally easier to make decisions faster and to have relays [or transport] within the brain get around with little barrier toward whatever results are sought.

Also, most collective storage systems have overlays, where it is not just the collection but where one collection overlaps with another. Simply, aside from a collection of doors, there is an overlay of part of it with wood, or with safety, and so forth.
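A hedged way to picture overlays is as the regions where feature sets intersect; the following Python sketch uses the article's own door, wood, and safety examples (the specific features are invented for illustration):

```python
# Hypothetical sketch: collections as feature sets, and an "overlay"
# as the region where one collection overlaps another.
doors = {"hinge", "handle", "wood", "lock", "frame"}
wooden_things = {"wood", "grain", "carving", "frame"}
safety = {"lock", "alarm", "bolt"}

# An overlay is not a new collection, just a shared region of two.
doors_wood = doors & wooden_things    # where doors overlap wood
doors_safety = doors & safety         # where doors overlap safety

print(doors_wood)     # {'wood', 'frame'}
print(doors_safety)   # {'lock'}
```

A relay passing through `doors_wood` reaches material associations, while one passing through `doors_safety` reaches security associations, from the same door collection.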

Human Intelligence

If the goal of an individual is to improve something, say an art, by some creative action, it is generally easier to have lots of relays across collective storages and their overlays. Simply, storages in the mind are structures that allow what is vital to be picked out and re-combined.

Some overlays may not even be obvious, but storages might set them, so that by the time relays get there, it is possible to find something new. Some overlays are not fixed: there might be several options they can connect to, so they rotate from one to another from time to time.

This is a reason that even when people do the same thing often, they still do it in slightly different ways.

Aside from storage, relays are also excellent, shaping how reaches are made across different dimensions toward goals of improvement or operational intelligence.

Simply, storage is a major factor in what makes human intelligence excellent.

AI Superintelligence, Spatial Intelligence, and World Models

It is possible that as computing and algorithms get better, AI would improve. However, classical storage, or how the data that AI uses is stored, would need to mirror the brain for much better results.

This means groups, and overlays of groups, for what is similar. This could be done at the hardware level, say with collective magnetic orientations or electrical charges of memory cells, or with new memory protocols. Either way, data must be organized like the brain, into collectives and overlays.
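As a software-level sketch of what such a protocol might look like (the class and method names are hypothetical, and real hardware implementations would differ), records could be indexed by shared attributes so that one lookup retrieves a whole collective, and two attributes together retrieve an overlay:

```python
from collections import defaultdict

# Hypothetical sketch of "collective" storage: records are indexed by
# shared attributes, so a lookup reaches the whole group (the forest),
# not one record at a time (singular trees).
class CollectiveStore:
    def __init__(self):
        self.index = defaultdict(set)  # attribute -> ids sharing it
        self.records = {}

    def put(self, record_id, attributes):
        self.records[record_id] = set(attributes)
        for attr in attributes:
            self.index[attr].add(record_id)

    def collective(self, attribute):
        # All records sharing one attribute come back together.
        return self.index[attribute]

    def overlay(self, attr_a, attr_b):
        # Records sitting in the overlap of two collectives.
        return self.index[attr_a] & self.index[attr_b]

store = CollectiveStore()
store.put("oak", {"tree", "wood", "deciduous"})
store.put("pine", {"tree", "wood", "evergreen"})
store.put("door", {"wood", "hinged"})

print(store.collective("tree"))       # the forest: oak and pine together
print(store.overlay("tree", "wood"))  # trees that are also wood sources
```

The contrast with conventional address-by-address storage is that retrieval here is organized around what records have in common, which is the property the article argues AI storage currently lacks.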

Already, deep learning architectures are so excellent that they act as pervasive relays over data. However, the present storage structure of digital data is too specific, limiting how it can collect groups, like the trees of a forest rather than singular trees.

Innovation towards superintelligence, beyond neurosymbolic AI, neuromorphic computing, and world models, would require a new memory architecture, without which superintelligence may be tougher to achieve.

It is possible to accelerate this concept in a research design to be ready before June 30, 2026, while also laying the groundwork for new modalities in quantum computing towards 2030.

There is a recent paper in Nature, Ferroelectric transistors for low-power NAND flash memory, stating that:

“NAND flash memory is essential in modern storage technology, amid growing demands for low-power operation fuelled by data-centric computing and artificial intelligence. Its unique ‘string’ architecture, where multiple cells are connected in series, requires high-voltage pass operation that causes a large amount of undesired power consumption. Lowering the pass voltage, however, poses a challenge: it leads to an associated reduction in the memory window, restricting the multi-level operation capability.”

“Here, with a gate stack composed of zirconium-doped hafnia and an oxide semiconductor channel, we report ultralow-power ferroelectric field-effect transistors (FeFETs) that resolve this dilemma. Our FeFETs secure up to 5-bit per cell multi-level capability, which is on par with or even exceeds current NAND technology, while showing nearly zero pass voltage, saving up to 96% power in string-level operations over conventional counterparts.”

“Three-dimensional integration of FeFET stacks into vertical structures with a 25-nm short channel preserves robust electrical properties and highlights low-pass-voltage string operation in scaled dimensions. Our work paves the way for next-generation storage memory with enhanced capacity, power efficiency, and reliability.”


This article was written for WHN by David Stephen, who currently does research in conceptual brain science with a focus on the electrical and chemical signals for how they mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence, and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.

As with anything you read on the internet, this article should not be construed as medical advice; please talk to your doctor or primary care provider before changing your wellness routine. WHN neither agrees nor disagrees with any of the materials posted. This article is not intended to provide a medical diagnosis, recommendation, treatment, or endorsement.  

Opinion Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy of WHN. Any content provided by guest authors is of their own opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything else. These statements have not been evaluated by the Food and Drug Administration. 

Posted by the WHN News Desk
WorldHealth.net is a not-for-profit trusted source of non-commercial health information, and the original voice of the American Academy of Anti-Aging Medicine Inc.