Signal Theory of Intelligence
for the European Union’s Human Brain Project
Introduction
The recent successes of artificial intelligence constitute a remarkable development – whilst simultaneously revealing its limitations. The extreme energy requirements of modern systems, their dependence on gigantic training datasets, the demands of intellectual creators for remuneration, and the phenomenon of ‘hallucinations’ make it clear that the functioning of these systems increasingly appears as a black box.
Yet neuroscience, too, is reaching its limits. The European Human Brain Project has come nowhere near its ambitious goal of a numerical simulation of the brain. The focus on the cortex and the microscopic view of synapses has pushed the question of how the brain actually works into the background. Although the American Human Connectome Project provided significant insights into neural connections, this data alone does not reveal how intelligence arises. Added to this are the competition for leadership within the disciplines and the increasing specialisation that drives neuroscience and AI further and further apart.
It is therefore time to put this scientific divergence on hold, at least temporarily. The signal theory of intelligence sets itself precisely this goal: it aims to bridge the gap between neuroscience and AI by understanding intelligence as a signal phenomenon – thereby creating a common frame of reference that opens up new perspectives for both disciplines.
In this monograph, we demonstrate that the algorithms governing the brain and artificial intelligence systems are, in principle, identical. We therefore do not merely pose the question of how the brain works or how AI functions; we answer it, step by step. We also bear in mind that neuroscientists on the one side and AI experts on the other understand too little of each other's specialist field. Each chapter therefore presents the necessary neuroscientific and AI-related prerequisites in a form that the other side can readily follow.
Signal processing in the brain and in artificial intelligence follows the same fundamental principles. Hebbian learning, combined with lateral inhibition, decomposes inputs into principal or independent components (PCA/ICA), thereby generating orthogonal or statistically independent directions. Without inhibition, feedforward networks (ANNs) develop that recognise statistically significant patterns; with local receptive fields and weight sharing, CNNs emerge for spatial patterns. Extending the receptive fields and incorporating temporal echoes yields RNNs, which process sequences and memory. With full feedback, in which outputs act on inputs, transformers emerge that model global dependencies via self-attention. The brain and AI thus share, in principle, the same system architecture, whereby the various network types can be spatially localised in the brain and together form a coherent signal theory of intelligence.
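The first step of this chain – Hebbian learning extracting a principal component – can be illustrated with a minimal sketch. The sketch assumes Oja's rule as one concrete formulation of Hebbian learning, in which the subtractive normalising term plays the stabilising role attributed above to inhibition; the data, learning rate, and variable names are purely illustrative, not part of the theory itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative zero-mean 2-D data with one dominant direction of variance
# (standard deviation 3 along the first axis, 0.5 along the second).
X = rng.normal(size=(5000, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
X -= X.mean(axis=0)

# A single linear neuron with random initial weights.
w = rng.normal(size=2)
eta = 0.01  # learning rate (illustrative value)

for x in X:
    y = w @ x                    # neuron output: projection of input onto w
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian term y*x minus a
                                 # normalising decay y^2 * w

# Compare the learned weight vector with the leading eigenvector of the
# data covariance matrix, i.e. the first principal component.
cov = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue
alignment = abs(w @ pc1) / np.linalg.norm(w)
```

After training, `alignment` is close to 1: the neuron's weight vector has converged onto the first principal component, and its norm has self-normalised towards unit length. Extracting further components requires exactly the lateral interaction described above; in Sanger's generalised Hebbian algorithm, each neuron subtracts the contributions of its predecessors, which mirrors lateral inhibition between neighbouring units.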
In my "Brain Theory of Vertebrates", I wrote the following sentence: "We do not know too little, but too much. The abundance of facts obscures the connections that need to be recognised."
Given the abundance of facts already known, it is becoming increasingly difficult for anyone to maintain an overall view. Furthermore, it has become fashionable to develop into an expert who knows more and more about less and less. We must escape this trap.
A monograph by Dr. rer. nat. Andreas Heinrich Malczan