Signal Theory of Intelligence
for the European Union’s Human Brain Project
17 Outlook: The next stage – Transformer principles in the vertebrate brain
The mechanisms presented on this website regarding ANNs (artificial neural networks), CNNs and RNNs in the vertebrate brain demonstrate that biological neural systems utilise a remarkable variety of computational principles that recur in modern artificial intelligence. Feedforward processing, convolution, recurrent loops, population coding and temporal memory mechanisms together form a flexible, robust and energy-efficient foundation for complex information processing.
In recent years, however, another architectural form has fundamentally transformed AI research: transformers. Today, they dominate language models, image processing, multimodal systems and, increasingly, scientific applications. Their success is based on a combination of:
- attention mechanisms (self-attention)
- context-sensitive weighting
- residual signals
- layer normalisation
- modular processing paths
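To make these ingredients concrete, the sketch below combines three of them – self-attention, a residual signal and layer normalisation – in a minimal single-head attention block. This is an illustrative toy implementation using NumPy, not a model of any biological circuit; all names, weights and dimensions are chosen purely for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row maximum before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    # Normalise each token vector to zero mean and unit variance.
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def self_attention_block(x, w_q, w_k, w_v):
    """One pre-norm self-attention block: layer norm, attention, residual add."""
    h = layer_norm(x)
    q, k, v = h @ w_q, h @ w_k, h @ w_v
    d_k = q.shape[-1]
    # Context-sensitive weighting: every token attends to every token,
    # with weights in each row summing to 1.
    attn = softmax(q @ k.T / np.sqrt(d_k))
    # Residual signal: the block's output is added back onto its input.
    return x + attn @ v

# Toy usage with 5 "tokens" of dimension 8.
rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(5, d))
w_q, w_k, w_v = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
y = self_attention_block(x, w_q, w_k, w_v)
print(y.shape)  # (5, 8): same shape as the input, as the residual path requires
```

The pre-norm ordering (normalise, attend, then add the residual) is one common arrangement in modern transformers; the point here is only that each listed mechanism is a small, well-defined mathematical operation.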
At first glance, these principles seem far removed from classical neural circuits. On closer inspection, however, it becomes apparent that the vertebrate brain possesses structures capable of performing functionally similar tasks. In particular, the interaction between cortical and subcortical systems, the role of population-coded signals, and the dynamic weighting of parallel information pathways point to mechanisms that are conceptually reminiscent of transformer principles.
17.1.1 State of research and outlook
As things stand today (2 April 2026), it can be established that all the substructures required for a complete transformer circuit are present in the vertebrate brain. What is more, these structures utilise precisely those mathematical operations and signal processing principles in their function that also underpin artificial transformers.
This fundamentally shifts the focus of the inquiry. The question is no longer whether transformer-like mechanisms could exist in the brain, but how their presence can be demonstrated.
Artificial intelligence and biological intelligence – having evolved independently of one another – make use of the same mathematical framework and corresponding signal processing algorithms. This remarkable convergence suggests that transformer principles are not merely a technical artefact, but a fundamental organisational principle of efficient information processing.
17.1.2 Reference to further work
A comprehensive presentation of a possible biological transformer circuit – including its functional architecture, its signal pathways and its systems-theoretical embedding – is provided in a separate research article. That article is currently being prepared for submission to a prestigious journal, and an accompanying preprint will be published shortly to facilitate scientific discussion.
This document thus bridges the gap between classical neural network architectures in the brain and the modern principles of AI – whilst simultaneously opening the door to the next, more comprehensive step: a mechanistic link between biological and artificial intelligence at the level of transformer architecture.