Memory, Loop Geometry, and the Shape of Conscious Thought

Figure 1 — Temporal Sculpting of Attractors: Recursive loops reshape the cortical probability landscape over time. Through repeated cycles of prediction and reinforcement, shallow probability fields evolve into deep, stable attractor basins — forming the building blocks of long-term memory and conscious recall.

I. Introduction: Memory Isn’t a File — It’s a Shape in Time
What if memories aren’t stored, but sculpted? Not filed away like a photograph, but carved into the dynamic folds of brain activity — shapes in a probability landscape that emerge, deform, and reappear with each act of remembering.
In traditional neuroscience, memory has long been associated with the elusive concept of the engram — a physical trace of learning somewhere in the brain. But as our understanding deepens, the idea of a static trace becomes less satisfying. In its place, new models — like the Probability Clock theory — suggest that memory is not a place, but a pattern.
And not just any pattern. An engram, in this view, is a constellation of attractors, each shaped by recursive loops of brain activity flowing through the Synaptic Probability Field of the Cortex (SPFC).
II. What Is an Engram, Really?
The term engram was first proposed by Richard Semon in the early 20th century, describing a hypothetical physical change in the brain that encodes memory. Later, researchers like Karl Lashley and Wilder Penfield searched for it — unsuccessfully, in the form of discrete “memory centers.”
Today, we understand that memories are distributed. They don’t reside in one place, but in networks of neurons that fire together when an experience is recalled. Optogenetics has shown that activating certain neural assemblies can evoke learned behaviors in mice. That’s as close as we’ve gotten to “seeing” an engram.
But even this modern view misses something. If memories can shift, update, fade, and return altered — how can they be fixed entities?
III. Attractors: The Brain’s Hidden Geometry
In complex systems like the brain, an attractor is a stable configuration that neural activity tends to fall into — like a groove in the brain’s activity landscape. But these grooves aren’t fixed. They can deepen with reinforcement or fade with disuse, and they aren’t purely spatial — they’re spatiotemporal, shaped through recursive loops.

Figure 2 — Engram as a Pattern of Attractors: In the SPFC, attractor basins represent neural configurations that are recursively reinforced. A memory, or engram, is composed of multiple attractors forming a stable geometric pattern in probability space.

IV. The Synaptic Probability Field of the Cortex (SPFC)
The SPFC is a conceptual model: a dynamic probability landscape representing the readiness of neurons to fire together. Recursive reinforcement from loops like emotion, attention, and context modulates this field.
In Probability Clock (PC) theory, reinforcement is not static — it’s temporal sculpting. Time isn’t a backdrop. It’s the very medium that gives attractors their structure and durability.

Figure 3 — Synaptic Probability Field of the Cortex: This stylized “bubble wrap” landscape shows how some synaptic sites are more likely to activate than others. Depressions in the surface represent attractors forming under recursive reinforcement. Over time, this probability field evolves to stabilize patterns of memory and cognition.

V. Loop Geometry: How Memory Takes Shape
Here’s the key: recursive loops sculpt the shape of the SPFC over time.
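This sculpting claim can be pictured with a deliberately minimal toy model. The sketch below is not part of PC theory itself: it reduces the SPFC to a flat list of firing probabilities and shows how a loop that repeatedly revisits the same sites deepens their basins while the rest of the field stays at baseline. The site count and the LEARN/DECAY rates are arbitrary assumptions chosen only to make the effect visible.

```python
# Toy sketch (illustrative assumptions, not parameters of the theory):
# the "SPFC" is a list of firing probabilities, and one recursive loop
# repeatedly revisits four sites.
N_SITES = 20
field = [0.5] * N_SITES          # flat landscape: no memories yet
loop_pattern = {2, 5, 11, 17}    # sites visited by one recursive loop

LEARN, DECAY = 0.10, 0.02        # invented reinforcement/relaxation rates

def run_loop(field, pattern, cycles):
    """Each cycle reinforces looped sites; the rest relax to baseline."""
    for _ in range(cycles):
        for i in range(len(field)):
            if i in pattern:
                field[i] += LEARN * (1.0 - field[i])   # deepen the basin
            else:
                field[i] += DECAY * (0.5 - field[i])   # drift to baseline
    return field

run_loop(field, loop_pattern, cycles=50)
in_loop = sum(field[i] for i in loop_pattern) / len(loop_pattern)
others = [field[i] for i in range(N_SITES) if i not in loop_pattern]
out_loop = sum(others) / len(others)
print(round(in_loop, 3), round(out_loop, 3))   # → 0.997 0.5
```

After fifty cycles the looped sites sit near probability 1.0 while the unvisited sites remain at baseline — a crude but concrete picture of a basin being carved by repetition rather than written once.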
TAPP, MAPP, and recursive attractor evolutions (rAEs) don’t merely route data — they reshape the terrain.

Figure 4 — Recursive Loop Pathways: Recursive cycles deepen attractor basins by reinforcing activation patterns.
If attractors have shape, then engrams are topologies — they are structures, not snapshots. Each engram reflects not just spatial distribution, but recursive time evolution.

Figure 5 — Stable Engram in the SPFC: Multiple attractors distributed across the SPFC form the core of a stable memory trace.

Sidebar — Deepening the Basin: How Emotion Shapes Memory
Emotionally intense moments reinforce attractors. Neuromodulators such as dopamine and norepinephrine deepen synaptic grooves. These emotionally weighted loops are replayed more frequently — during sleep, reflection, or trauma — embedding them deeper in the SPFC.
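Continuing the toy-model framing, one way to picture emotional weighting is as a gain: a hypothetical "salience" multiplier that scales both the step size of each reinforcement and how often a loop is replayed. The function, its name, and every number below are invented for illustration; nothing here is a measured quantity.

```python
# Hypothetical illustration: "salience" amplifies both replay count and
# reinforcement step size. All values are arbitrary.
def deepen(prob, salience, replays, base_rate=0.05):
    """Replay a loop; emotional salience amplifies each reinforcement."""
    for _ in range(int(replays * salience)):   # salient loops replay more
        prob += base_rate * salience * (1.0 - prob)
    return prob

neutral = deepen(0.5, salience=1.0, replays=20)
emotional = deepen(0.5, salience=3.0, replays=20)
print(round(neutral, 3), round(emotional, 3))  # → 0.821 1.0
```

The emotionally weighted loop saturates its basin while the neutral one is only partway there — the sidebar’s point in miniature.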
VII. Implications: Memory as Momentum
We often think of memory as something static — like a file we “open” when needed. But in the Probability Clock (PC) model, memory is more like momentum moving through time. It’s not just stored information; it’s a pattern of neural activity that continues to loop, evolve, and shape future thought.
Each memory is a trajectory, not a location. It’s built through recursive loops that revisit and reinforce certain attractors — regions in the brain’s probability landscape where patterns of activity tend to settle. These attractors become more stable the more often they’re used.
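The "trajectory, not a location" idea has a classical toy counterpart in the Hopfield network — a standard textbook attractor model, borrowed here as an analogy rather than anything the article itself proposes. A stored pattern becomes a basin, and a corrupted cue falls back into it:

```python
# Minimal Hopfield-style sketch: one stored pattern becomes an attractor,
# and a noisy cue "falls into" it. A textbook model used only as an
# analogy for the attractor language in the text.
pattern = [1, -1, 1, 1, -1, -1, 1, -1]        # the "memory"
n = len(pattern)

# Hebbian weights: neurons that fire together wire together.
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

def recall(state, steps=5):
    """Repeatedly update each neuron; the state slides into the basin."""
    for _ in range(steps):
        for i in range(n):
            h = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if h >= 0 else -1
    return state

cue = pattern[:]
cue[0], cue[3] = -cue[0], -cue[3]             # corrupt two bits
print(recall(cue) == pattern)                  # prints True
```

Recall here is literally a trajectory: the corrupted cue is not looked up, it converges — step by step — back into the stored configuration.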
Trauma: Hyper-Stabilized Attractors. Trauma doesn’t just “get stored” in the brain — it gets looped into. It becomes a set of deep attractor basins that are revisited repeatedly, sometimes involuntarily. These attractors are emotionally weighted and so stable that even small cues can pull the brain back into that pattern — like falling into a groove that’s been worn too deep.
Therapy: Perturbing the Loop. Therapeutic interventions work not by erasing memories, but by perturbing those deep attractors — introducing new emotional context, new attention patterns, or alternate interpretations. This weakens the old loop and allows new ones to form. Therapy doesn’t “fix” memory — it changes its momentum. It redirects the flow of recursive activity toward more adaptive paths.
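As a rough illustration of "perturbing the loop" — a toy model with invented rates, not a claim about how real therapy works — the old loop stops being replayed, an overlapping adaptive loop is reinforced instead, and the old-only sites undergo slow extinction toward baseline:

```python
# Sketch of "perturbing the loop": the hyper-stable pattern is not
# erased; an overlapping new pattern is trained while old-only sites
# relax. The 0..1 probability scale and all rates are assumptions.
old_loop = {0, 1, 2}
new_loop = {2, 3, 4}                  # overlaps the old loop at site 2
field = {i: (0.95 if i in old_loop else 0.5) for i in range(6)}

RELEARN, EXTINCTION = 0.08, 0.05      # invented rates

for _ in range(60):                   # sixty "sessions" of re-training
    for i in field:
        if i in new_loop:             # reinforce the adaptive pattern
            field[i] += RELEARN * (1.0 - field[i])
        elif i in old_loop:           # old-only sites: gradual extinction
            field[i] += EXTINCTION * (0.5 - field[i])

print({i: round(p, 2) for i, p in field.items()})
```

The old basin shallows toward baseline while the new one deepens — the momentum of recursive activity is redirected rather than deleted, which is the point of the paragraph above.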
AI: What It’s Missing. Current artificial intelligence systems don’t operate with recursive momentum. They store information as static parameters — not as attractor patterns that loop, stabilize, and evolve. That’s why AI can “remember” facts but doesn’t truly “relive” them. If future AI were built with recursive loop structures like those found in the brain — dynamic attractors in time — it could begin to develop experiential memory, where past events shape future thinking through active re-entry and emotional weighting.
Why Memory Feels Like Something. In this model, memory isn’t accessed — it’s relived. You don’t just “look up” the past. Your brain flows back into a familiar shape. That recursive loop is what gives memory its qualia — the feel of remembering.
VIII. Conclusion: The Shape of Remembering
To truly understand memory — and maybe even consciousness — we must think topologically. You don’t just recall a moment. You revisit a loop. You reshape the basin. You carve your mind forward in time. Consciousness, in this view, is recursive attractor geometry in motion.
Sunday, April 19, 2026
Chris Reynolds, MD, "Engram as a Pattern of Cortical Attractors"