How does the human brain store language, process speech, and handle memory in general?
When an individual says anything, what is going on in the brain at that instant? It can be assumed that what is said is prepared for speech, the output. Speech comes out in an accent and with a certain cadence. So, what is the role of language in thought, and how does human memory really work?
The brain stores information in ways that make it easy to access and use, almost simultaneously, in several different areas. So it is not just the possibility of relaying information, but the [pervasive] freedom to do so across sources. In computing, there are already advances in storage, processing, and so forth. There are also major advances in neural network architectures.
However, how the brain stores and processes information differs widely from how computers do. To advance AI toward superintelligence, one approach could be storage, informed by how the human brain stores information.
Conceptual Brain Science
The first postulate is that in the brain, storage and transport of information are done by electrical and chemical configurators, in sets, in clusters of neurons. Simply, what are termed electrical and chemical signals are actually configurators [or assemblers] of information, doing so in sets [or in loops], wherever there are clusters of neurons. Conceptually, signals are not for communication between neurons [or transmission].
So, information is specified electrochemically. Any memory is a specific [electrical and chemical] configuration, in a set. Transport works in summaries, which only electrical configurators or chemical configurators carry. Electrical configurators interact with chemical configurators to mechanize functions, including memory.
Chemical configurators are almost at a ready configuration, so it is when they are struck by electrical configurators that the configuration is completed. Electrical configurators have states at the instant of the interaction [like chemical configurators]. These states determine the extent to which they interact. Electrical configurators transport summaries of sets across, mostly terminating where they are able to interact completely [or, say, fit]. So, a large percentage of how functions and attributes are mechanized happens inside the sets.
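For illustration only, here is a minimal sketch, in code, of the postulate above: chemical configurators sit near a ready configuration, and an arriving electrical configurator, carrying a summary and a state, completes a configuration only where it fits well enough. Every name, number, and threshold in it is an assumption made for the example, not an established model.

```python
# Purely illustrative sketch of the configurator postulate described above.
# All names, values, and the threshold are invented for illustration; this is
# not an established neuroscience model.

from dataclasses import dataclass

@dataclass
class ChemicalConfigurator:
    label: str
    readiness: float  # how close the configuration already is to complete (0..1)

@dataclass
class ElectricalConfigurator:
    summary: str      # a "summary" of a set, carried during transport
    state: float      # the state at the instant of interaction (0..1)

def interact(electrical, chemical):
    """An electrical configurator 'strikes' a near-ready chemical configurator;
    the configuration counts as completed only where the fit is good enough."""
    fit = electrical.state * chemical.readiness
    completed = fit > 0.5  # arbitrary threshold, just to make the idea concrete
    return completed, fit

# A cluster of neurons is modelled here as a set of chemical configurators.
cluster = [ChemicalConfigurator("round-shape", 0.9),
           ChemicalConfigurator("round-word", 0.4)]

incoming = ElectricalConfigurator(summary="round", state=0.8)

for chem in cluster:
    done, fit = interact(incoming, chem)
    print(f"{chem.label}: fit={fit:.2f}, completed={done}")
```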
Human Memory and Computer Memory
There are at least two obvious ways that human memory storage outmatches computers: one is the totality of sets [of configurators]. The next is what is called a thick set. Processors have cores for parallel computation, but cores run processes in the moment; they do not store information. For GPUs, VRAM, a neighbor of the cores, is responsible for storage. This has worked well for video games and AI, but it remains limited with respect to achieving superintelligence, as well as better energy efficiency.
The second is called thick sets, where any configuration that is common to two or more thin sets is collected into a thick set. This ensures efficiency of storage and a kind of access that grants intelligence, since a relay into that set for one configuration may output something similar but different, which can be novel or interesting. So, the brain avoids storing repetitions, ensuring better relay access. There are still some thin sets with the most unique information about anything, but most sets [of configurators] in the brain are thick sets.
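As an analogy in code, a minimal sketch of thin sets and a thick set might look like the following, where configurations common to two or more thin sets are factored out and shared rather than stored again. The sets, items, and the sharing threshold are assumptions for illustration.

```python
# A minimal sketch of "thin sets" versus a shared "thick set", as described above.
# This is only an analogy: configurations common to two or more thin sets are
# factored out into one thick set and referenced, instead of being stored again.

from collections import Counter

thin_sets = {
    "ball":   {"round", "bounces", "rubber"},
    "orange": {"round", "citrus", "edible"},
    "wheel":  {"round", "spins", "metal"},
}

# Any configuration shared by two or more thin sets goes into the thick set.
counts = Counter(c for s in thin_sets.values() for c in s)
thick_set = {c for c, n in counts.items() if n >= 2}          # {"round"}

# Thin sets then keep only what is unique to them; the rest is shared.
deduplicated = {name: s - thick_set for name, s in thin_sets.items()}

print("thick set:", thick_set)
print("thin sets:", deduplicated)
# A relay into the thick set for "round" can then reach ball, orange, and
# wheel alike, which is the kind of cross-access attributed to thick sets here.
```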
Computers do not have this, at least in general, though neural networks have features with related concepts around them: a feature like a city, for example, with the [host] country, neighboring cities, the continent, and some others as neighboring features.
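A rough sketch of that neighboring-features idea, assuming made-up embedding vectors rather than ones learned by a real model, could be:

```python
# Toy illustration of "neighboring features": in a learned embedding space, a
# city vector tends to sit near vectors for its country, continent, and nearby
# cities. The vectors below are invented; a real model would learn them.

import math

embeddings = {
    "Tarragona": [0.90, 0.80, 0.10],
    "Barcelona": [0.88, 0.82, 0.12],
    "Spain":     [0.85, 0.75, 0.20],
    "Europe":    [0.70, 0.70, 0.30],
    "banana":    [0.10, 0.20, 0.90],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = embeddings["Tarragona"]
neighbors = sorted(((cosine(query, v), name)
                    for name, v in embeddings.items() if name != "Tarragona"),
                   reverse=True)
print(neighbors)  # the neighboring city, country, and continent rank above "banana"
```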
Already, access to data by neural network architectures is deep, but access in a large pool with too many memory repetitions limits the reach of machine intelligence. Research and innovation toward superintelligence could be determined by how a large pool of data is stored, with similarities to the brain.
Human Brain: Thoughts, Language, and Speech
In the brain, there is a thick set for ‘round’, conceptually. There is a thick set for motion. There is a thick set for tall. There is a thick set for speed. And so forth. In whatever language, there is a word for round. There is a pronunciation for round. There is a spelling for round. There is a signing for round. There is a movement for round, and so forth. What belongs to the thick set [of round] and what does not?
Language can be described as a thick set, mostly. However, several sets have segments that send off summaries for outputs into a language. So, language is not only present as a thick set of its own; several thick sets have aspects with language bases.
There is also a thick set for accent, by which outputs are structured. This thick set takes shape from infancy. There is a thick set for movement as well, with minor segments in several other thick sets, where outputs are coordinated. So, it is possible to write with the hand or the feet, to sign, or to describe by motion the shape of a letter, an alphabet, or a word.
Simply, the brain is full of thick sets, collecting all similarities. However, some thick sets have segments [as specialized highways and configuration customization for other thick sets, like those of language, accent, and movement]. Sometimes, the reason some people have an accent in some languages is that their accent thick set is modified for one language, making other languages filter through it.
Language is a dominant set, both in its own reach and in how the language segment within many thick sets comes to dominate them. While there are several other ways to understand what round is without the word round, language sometimes dominates the set, so that some of the sets become language-totalized, not just language-compliant. This means that because round is used so frequently with language, the thick set often makes its language segment the first landing area, and then the takeoff point, for anything to do with the set. This makes it seem that thought is based on language, which is not necessarily the case.
There are lots of relays across sets in any conversation, involving the language area, the speech area, and the thick sets for memory. The language thick set also tries to retain some information from several memory thick sets, so that it may be independent, to some extent, for some interpretation and output tasks. When an individual hears a word in a language the person does not understand, it often goes through the language thick set, which sends matches to memory thick sets, with both firing blank.
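A toy sketch of that relay, with the language thick set and memory thick sets reduced to simple lookups and all the entries invented for the example, might read:

```python
# Toy sketch of the relay described above: a heard word passes through a
# "language thick set" (known word forms), which sends matches to "memory
# thick sets" (associations). For a word in an unknown language, the lookups
# return nothing, i.e. fire blank. Every entry here is an invented illustration.

language_thick_set = {"round": "round", "redondo": "round"}   # word form -> lemma
memory_thick_sets = {"round": {"shape", "ball", "circle"}}    # lemma -> associations

def relay(heard_word):
    lemma = language_thick_set.get(heard_word)        # language thick set lookup
    if lemma is None:
        return "language set fires blank"
    memories = memory_thick_sets.get(lemma)           # memory thick set lookup
    if not memories:
        return "memory sets fire blank"
    return memories

print(relay("redondo"))   # a known form routes to the same memories as "round"
print(relay("yuvarlak"))  # an unknown form: fires blank
```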
Within thick sets, there are also often several useful paths for transport, allowing quick access to information and quick distribution as well.
AI Superintelligence
Storage architectures, informed by conceptual brain science, may tip advances over into superintelligent AI. The answer may not simply be better ML architectures, given how transformers and several optimization algorithms are already driving major AI benchmark results. This means that it may not just be the machine learning from data that is the problem, but how data is stored. This may also determine energy efficiency.
There is a new [July 10, 2025] story on RCR Wireless News, Geopolitical pressures are impacting AI server growth – but shipments rising, stating that, “AI momentum – Global AI server shipments are projected to rise 24.3% in 2025, slightly below forecasts due to U.S. export restrictions and geopolitics. Cloud strategies – AWS, Google, Microsoft, Meta, and Oracle are expanding AI infra with varying mixes of Nvidia GPUs and in-house chips. OEM shifts – Server vendors are revising H2 2025 strategies, while sovereign cloud and regional AI projects are boosting demand in EMEA, Asia.”
This article was written for WHN by David Stephen, who currently does research in conceptual brain science with a focus on how electrical and chemical signals mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence, and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.