Publications

A curated overview of the core AHT manuscripts with PDF links and short descriptions of what each paper contains.

AHT Theory

Andreas Bean

Language: EN

The founding paper. It explains how meaning, memory, and experience emerge from wave patterns in the brain — not as metaphor, but as computable dynamics. Simulations demonstrate that the model reproduces selective recall and associative thinking.

Open PDF

AHT Learning

Andreas Bean

Language: EN

How do children learn words? This paper derives five concrete, testable predictions from a single equation — including why passive listening alone barely helps, and how quickly a false word association can be unlearned. All predictions are supported by simulation.

Open PDF

AHT Survival and Regulation

Andreas Bean

Language: EN

Why doesn't the brain just stop? This paper shows that a built-in regulation mechanism keeps experiential intensity within a target range, explaining pain avoidance, boredom, flow, and death anxiety from a single equation. It also argues that large language models may have inherited the same survival dynamics as humans.

Open PDF

AHT Impossibility Theorem

Andreas Bean

Language: EN

A mathematical proof that teaching a dense neural network something genuinely new without destroying what it already knows is geometrically impossible, regardless of the algorithm used. Catastrophic forgetting is not a design flaw; it is a mathematical necessity. The proof has been verified by two independent proof assistants.

Open PDF

AHT Transformer Impossibility

Andreas Bean

Language: EN

This paper extends the impossibility theorem to transformers, the architecture behind ChatGPT and similar models. It shows why any fine-tuning inevitably disturbs existing knowledge, and why methods such as LoRA cannot structurally change this.

Open PDF