for the European Union's Human Brain Project
Monograph by Dr. rer. nat. Andreas Heinrich Malczan
Why the Transformer architecture fulfills the vision of the Blue
Brain Project
When Henry Markram initiated the Blue Brain Project in 2005, it was
met with widespread skepticism. The idea of fully simulating the brain as a
numerical model seemed bold, overly ambitious, almost reckless. Yet
Markram had an intuition that was ahead of its time:
The brain is a rule-based, computable machine.
Its architecture is algorithmic. And it can be
simulated.
What was missing back then was not the will, the data volume, or
the computational power. It was the correct abstraction.
Neuroscience was trapped in biophysical details: ion channels,
dendrite models, membrane conductances. Artificial intelligence had not yet
developed functional architectures beyond simple feedforward networks. And
neuroanatomy was too fragmented to reveal a global functional
architecture.
Today, two decades later, the situation has fundamentally changed.
With the introduction of the Transformer architecture in
artificial intelligence, a model was created that, just like the brain, is
rule-based, algorithmic, and computable.
Transformers possess attention mechanisms, layered processing stages,
residual signal pathways, and positional encoding. And these elements are
reflected, anatomically, functionally, and topologically, in the human
brain.
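Since the foreword itself gives no formulas, a minimal sketch may help make the first of these elements concrete: scaled dot-product self-attention, the defining transformer operation. The NumPy code below is illustrative only; the array sizes and variable names are assumptions, not taken from the monograph.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to every key; the output is a
    similarity-weighted mixture of the value vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # blended value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # four toy feature vectors, dim 8
out = scaled_dot_product_attention(X, X, X)  # self-attention: X attends to itself
print(out.shape)                             # (4, 8)
```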
Although the highest level of neural signal processing in the human
brain is realized through transformer-like architectures, the brain is not
purely a transformer.
All fundamental network types known from AI, namely simple feedforward
ANNs, CNNs, and RNNs, also exist within the vertebrate nervous system. They
form the evolutionarily older layers of signal processing. Their outputs
feed into the transformer modules, just as preprocessing, feature
extraction, and recurrent loops provide the input for attention mechanisms
in modern AI systems.
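This division of labor can be sketched in code. The following PyTorch example is a hypothetical toy pipeline, not a model of any actual neural circuit: a convolutional stage (the CNN) and a recurrent stage (the RNN loop) prepare a raw one-dimensional signal, and only the prepared features enter a transformer encoder. All class names, dimensions, and hyperparameters are invented for illustration.

```python
import torch
import torch.nn as nn

class HybridPipeline(nn.Module):
    """Hypothetical sketch: older network types (CNN, RNN) preprocess
    a raw signal before a transformer applies attention to it."""
    def __init__(self, in_channels=1, d_model=32, nhead=4):
        super().__init__()
        # Evolutionarily "older" stages: local feature extraction (CNN) ...
        self.cnn = nn.Conv1d(in_channels, d_model, kernel_size=5, padding=2)
        # ... and a recurrent loop that integrates the signal over time (RNN).
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        # Highest stage: a transformer encoder over the prepared features.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):              # x: (batch, channels, time)
        h = torch.relu(self.cnn(x))    # (batch, d_model, time)
        h = h.transpose(1, 2)          # (batch, time, d_model) for RNN/attention
        h, _ = self.rnn(h)             # recurrent preprocessing
        return self.transformer(h)     # attention over the prepared signal

signal = torch.randn(2, 1, 100)        # two raw 1-D signals, 100 time steps
print(HybridPipeline()(signal).shape)  # torch.Size([2, 100, 32])
```

The point of the sketch is only the ordering: attention operates last, on features that other network types have already structured.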
Nature did not invent these architectures to keep them separate but
to combine them. The transformer structures of the human brain are the
result of a long evolutionary chain of signal processing systems—and they
only work because the underlying networks prepare the signals.
In artificial intelligence, it is no different: transformers only
reach their full potential when input signals are structured by other
network architectures. AI experts understand this. Biology has practiced it
for millions of years.
This makes it clear: Markram's vision was not wrong; it was simply
ahead of its time. He sought a numerical architecture capable of
functionally modeling the brain. That architecture exists today. It is
called the Transformer.
The theory of biological transformers presented here shows that the
brain is not only a biological network but also an algorithmic
machine whose structure is reflected in modern AI models. The
numerical simulation Markram aimed for is now possible, not by fully
replicating every ion channel, but through the functional
reconstruction of the brain's signal architecture.
The irony of history is remarkable: while many considered Markram
overly ambitious at the time, AI has now produced exactly the models that
fulfill his vision. This biological transformer theory demonstrates that
nature has, for millions of years, been using an architecture that we are
only now beginning to understand.
Markram wanted to simulate the brain. Today, we can say:
The brain simulates itself—as a transformer.
If the brain is a transformer, then intelligence must be understood
as signal processing—not as abstract information processing.
This foreword aims to set the historical context for the theory that
follows. That theory is not only a neurobiological hypothesis but also an
answer to a question that has persisted for decades:
How does intelligence work?
The answer: as signal processing, and as a transformer.