The map is not the territory
We often hear that the human brain works like a neural network, that it computes, that it manipulates vectors, or that it averages values. These statements are appealing, convenient, and sometimes pedagogically useful. But taken literally, they raise a fundamental problem.
The map is not the territory.
The brain does not behave like an artificial neural network.
It does not compute vector averages, nor does it optimize a cost function.
The direction of inference is exactly the opposite: we observe a complex, biological, material reality, and then attempt to propose a model that makes certain phenomena usable, simulable, and reproducible.
Artificial neural networks are not explanations of the brain; they are mathematical approximations, constructed after the fact, to make certain phenomena manipulable. Confusing the two means mistaking the tool for the object, the representation for the thing represented.
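To make the contrast concrete, here is a minimal sketch of what the formal framework actually does: adjust a parameter to minimize a cost function by gradient descent. Every name and value is illustrative (a one-parameter linear model on made-up data), chosen only to show the kind of mathematical operation the model performs, which the author's point is precisely that the brain does not literally perform.

```python
# Illustrative only: a single-parameter "network" trained by gradient
# descent. This is the formal artifact, not a description of the brain.

def cost(w, xs, ys):
    # Mean squared error of the linear map x -> w * x.
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w, xs, ys):
    # Analytic derivative of the cost with respect to w.
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # data generated by the rule y = 2x

w = 0.0
for _ in range(200):
    w -= 0.1 * grad(w, xs, ys)  # one gradient-descent step

print(round(w, 3))  # converges toward 2.0
```

The point of the sketch is its transparency: "learning" here is nothing but iterated arithmetic on a cost function, a procedure we constructed after the fact to make a phenomenon manipulable.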
Doing the reverse (inferring properties of reality from the model) leads to a form of reification: we attribute to the world intentions, mechanisms, or capacities that in fact belong to our formal framework. This is how anthropomorphic discourse about computation, artificial intelligence, or cognition arises: we project onto reality what is merely an artifact of modeling, sometimes even drifting toward a quasi-Platonic dualism in which numbers are endowed with intention or consciousness, and computation becomes the equivalent of a "vital force".
A model is, by nature, false.
It is simplifying, incomplete, and biased by its assumptions. Its value does not lie in its truth, but in its temporary explanatory power. A good model is meant to be challenged, refined, or replaced by a less false one, never by reality itself.
To recall that the map is not the territory is not to weaken science or computer science.
On the contrary, it is to give them their full rigor: that of disciplines aware of their limits, and therefore capable of progress.