About

How the search for better AI memory led to a proof of why it's impossible.

The starting point: memory for AI agents

I'm not a neuroscientist. I'm a software developer and physicist — and in 2026 I simply wanted to build AI agents with better memory. The question seemed technically solvable: how do you extract the meaning of a text from a language model in order to store it efficiently and retrieve it later?

The first problem: meaning has no fixed location

What I found was frustrating: the final hidden layer of a transformer contains no clean, extractable representation of meaning. Instead, every meaning is distributed in superposition — a simultaneous overlay of many directions — across the entire activation space. You can't cut a single meaning out without disturbing everything else.
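A minimal sketch of what that superposition problem looks like in practice (a generic illustration, not taken from any specific transformer): several "meanings" live as overlapping, non-orthogonal directions in one activation vector, and subtracting the projection onto one direction shifts the readout of all the others.

```python
import numpy as np

# Illustration: store several "meanings" as random, non-orthogonal
# directions superposed in a single activation vector.
rng = np.random.default_rng(0)
d, n_features = 64, 10
features = rng.normal(size=(n_features, d))
features /= np.linalg.norm(features, axis=1, keepdims=True)

activation = features.sum(axis=0)  # all meanings overlaid in one vector

# Try to "cut out" feature 0 by subtracting its projection.
proj = (activation @ features[0]) * features[0]
cleaned = activation - proj

# Because the directions overlap, the other meanings' readouts move too.
before = features[1:] @ activation
after = features[1:] @ cleaned
damage = np.abs(before - after).max()
print(damage)  # nonzero: removing one feature perturbs every other readout
```

In high dimension the overlaps are small but never zero, which is exactly why there is no clean cut.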

The detour: computational neuroscience

To understand why, I went deeper — into computational neuroscience, graph theory, wave physics on networks. How does the brain solve this problem? It turns out: it doesn't. The brain doesn't store meaning locally either. Meaning *emerges* as a global interference pattern — not in one place, but as a state of the whole system. This led to the AHT equations: a formal model of meaning, memory, and experience as wave dynamics on the connectome.

The proof: what I originally wanted to solve is unsolvable

The strange thing happened at the end: by understanding why meaning in the brain is not locally stored but emerges as a global eigenmode pattern through wave interference, I could apply the same mechanism to transformers — and prove that incremental learning in dense weight matrices is geometrically impossible. Not difficult. Not inefficient. Structurally excluded. That's exactly the question I started with. The detour was the answer.
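The interference phenomenon behind that claim can be illustrated in a few lines (an illustration of the mechanism, not the paper's proof): in a dense weight matrix, a gradient step for one new input–output pair is a rank-one update shared by every row, so it necessarily moves the outputs for previously learned inputs as well.

```python
import numpy as np

# Illustration: one SGD step on a new pair drifts the output for an old input,
# because the dense update np.outer(err, x_new) touches the whole matrix.
rng = np.random.default_rng(1)
d = 8
W = rng.normal(size=(d, d))

x_old = rng.normal(size=d)
y_old_before = W @ x_old          # behaviour we would like to preserve

x_new = rng.normal(size=d)
y_new = rng.normal(size=d)
lr = 0.1
err = W @ x_new - y_new
W -= lr * np.outer(err, x_new)    # squared-error gradient step, rank-one

y_old_after = W @ x_old
drift = np.linalg.norm(y_old_after - y_old_before)
print(drift)  # nonzero unless x_old happens to be orthogonal to x_new
```

The drift equals `lr * |x_new · x_old| * |err|`, so it vanishes only for exactly orthogonal inputs — which generic inputs in a dense representation never are.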

What now

The theory is developed across several papers, and the mathematical proofs have been machine-checked in two independent proof assistants (Lean 4 and Isabelle/HOL). Whether the brain model is correct will have to be shown by experiments. But the impossibility theorem stands — regardless of what the brain actually does.

Feedback

I welcome any feedback — especially errors in my arguments or proofs. Anyone who finds a mistake helps the theory more than someone who agrees. You can reach me at andreasa.bean@beanbox.at.