Signal Theory of Intelligence
for the European Union’s Human Brain Project
Foreword
Why the Transformer architecture fulfils the vision of the Blue Brain Project
When Henry Markram launched the Blue Brain Project in the early 2000s, many people scoffed at him. The idea of fully simulating the brain as a numerical model seemed bold, overambitious, almost megalomaniacal. Yet Markram had an intuition that was far ahead of its time:
The brain is a rule-based, predictable machine. Its architecture is algorithmic. And it can be simulated.
What was missing back then was not the will, nor the volume of data, nor the computing power. What was missing was the right abstraction.
Neuroscience was caught up in biophysical details: ion channels, dendrite models, membrane conductances. AI was not yet capable of delivering functional architectures that went beyond simple feedforward networks. And neuroanatomy was too fragmented to identify a global functional architecture.
Today, two decades later, the situation has changed fundamentally. With the introduction of the Transformer architecture, artificial intelligence produced for the first time a model that is:
hierarchical
recursive
context-sensitive
parallel
and emergent
— just like the brain.
Transformers possess:
Tokens
Query signals
Key structures
Value contents
Position encoding
Multi-level attention
Recursive loops
And it is precisely these elements that are found — anatomically, functionally and topologically — in the human brain.
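To make these elements concrete, here is a minimal numerical sketch (our own illustration in NumPy; the variable names and dimensions are arbitrary assumptions, not part of the theory): token embeddings plus a position encoding enter a single attention step in which query signals are matched against keys and the resulting weights mix the value contents.

```python
# Illustrative sketch only; names and sizes are our own, not from the text.
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal position encoding over seq_len positions and d_model channels."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])
    pe[:, 1::2] = np.cos(angles[:, 1::2])
    return pe

def attention(x, w_q, w_k, w_v):
    """One scaled dot-product attention step."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # query signals, key structures, value contents
    scores = q @ k.T / np.sqrt(k.shape[-1])         # query-key matching
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ v                              # context-sensitive mix of value contents

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
tokens = rng.normal(size=(seq_len, d_model))         # token embeddings
x = tokens + positional_encoding(seq_len, d_model)   # plus position encoding
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(attention(x, w_q, w_k, w_v).shape)             # (4, 8)
```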
Although the highest level of neural signal processing in the human brain is realised through Transformer-like architectures, the brain is not a pure Transformer.
All the basic network forms we know from AI (simple feedforward networks, CNNs, RNNs) also exist in the nervous system of vertebrates. They form the evolutionarily older layers of signal processing. Their outputs feed the Transformer modules, just as in modern AI systems pre-processing, feature extraction and recurrent loops provide the input for the attention mechanisms.
Nature did not invent these architectures to separate them from one another, but to combine them. The transformer structures of the human brain stand at the end of a long evolutionary chain of signal processing systems — and they only function because the underlying networks prepare the signals.
It is no different in artificial intelligence: there, too, transformers only reach their full potential once the input signals have been structured by other network architectures. AI experts know this. Biology has been practising it for millions of years.
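As an illustration of this division of labour, here is a hedged sketch in PyTorch (the module names, layer sizes and the raw one-dimensional input signal are our own illustrative assumptions, not prescribed by the theory): a convolutional front-end structures a raw signal before a Transformer encoder applies attention to the resulting feature sequence.

```python
# Illustrative hybrid pipeline: convolutional pre-processing feeding attention.
import torch
import torch.nn as nn

class ConvThenTransformer(nn.Module):
    def __init__(self, in_channels=1, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        # "Older layer": convolutional feature extraction on the raw signal
        self.frontend = nn.Sequential(
            nn.Conv1d(in_channels, d_model, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # "Newer layer": attention operating on the pre-structured signal
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

    def forward(self, signal):             # signal: (batch, channels, time)
        features = self.frontend(signal)   # (batch, d_model, time)
        tokens = features.transpose(1, 2)  # (batch, time, d_model) as a token sequence
        return self.encoder(tokens)

model = ConvThenTransformer()
out = model(torch.randn(2, 1, 100))        # two raw 1-D signals of length 100
print(out.shape)                           # torch.Size([2, 100, 64])
```

The design point is simply that the attention stage never sees the raw signal: it operates on features that an older, simpler architecture has already extracted.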
This makes it clear: Markram’s vision was not wrong — it was simply ahead of its time. He sought a numerical architecture capable of functionally mapping the brain. This architecture exists today. It is called a transformer.
The theory of biological transformers presented here shows that the brain is not merely a biological network, but an algorithmic machine whose structure is reflected in modern AI models. The numerical simulation that Markram sought is now possible — not through the complete replication of every ion channel, but through the functional reconstruction of the brain’s signal architecture.
The irony of the story is remarkable: whilst many considered Markram overambitious at the time, AI has since produced precisely the models that fulfil his vision. This biological transformer theory shows that nature has been using an architecture for millions of years that we are only now beginning to understand.
Markram wanted to simulate the brain. Today we can say:
The brain simulates itself — as a transformer.
If the brain is a transformer, then we must understand intelligence as signal processing — not as abstract information processing.
This foreword is intended to set the historical context for the theory that follows. That theory is not merely a neurobiological hypothesis, but an answer to a question that has been hanging in the air for decades:
How does intelligence work?