What AI is Not
Discussions of artificial intelligence often suffer from a persistent confusion: that between reality, the model, and the simulacrum.
This confusion fuels inflated discourse (sometimes enthusiastic, sometimes anxious) that ultimately says more about our relationship to models than about the systems themselves.
An artificial intelligence is not an intelligence in the human sense.
It is neither a mind, nor a consciousness, nor a subject; it is not an entity that understands what it manipulates. It is above all a model (Aristotle would have called it an artifact, not a substance).
A formal, algorithmic model, built from observations of reality and designed to produce certain behaviors deemed relevant within a given framework. What we then observe (recognition, classification, generation, or decision) belongs to the realm of the simulacrum: a functional imitation of behaviors that we qualify, in humans, as intelligent.
Let us be clear: I do not deny that a model can be described as “intelligent”, provided we specify what we mean by that term.
If by intelligence we mean the ability to produce behaviors adapted to a given environment, to satisfy constraints, or to generalize from examples, then the term can have an operational meaning. In that sense, speaking of intelligence is acceptable, so long as we do not give it more scope than it deserves.
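This functional sense can be made concrete with a deliberately minimal sketch: a 1-nearest-neighbor classifier that generalizes from labeled examples to unseen inputs while manipulating nothing but numbers. The function names and labels here are illustrative, not drawn from any particular library.

```python
def classify(examples, point):
    """Return the label of the training example closest to `point`."""
    def sq_dist(a, b):
        # Squared Euclidean distance between two coordinate tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(examples, key=lambda ex: sq_dist(ex[0], point))
    return nearest[1]

# "Observations of reality": points paired with labels.
examples = [((0.0, 0.0), "low"), ((0.1, 0.2), "low"),
            ((1.0, 1.0), "high"), ((0.9, 0.8), "high")]

# The model produces behavior adapted to inputs it has never seen...
print(classify(examples, (0.05, 0.1)))  # → low
print(classify(examples, (0.95, 0.9)))  # → high
# ...yet it has no concept of "low" or "high", no intention, no understanding.
```

The behavior is adapted and generalizing, hence "intelligent" in the operational sense, while remaining a pure mapping from inputs to outputs.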
The difficulty arises when we shift from this functional definition to an ontological one: when we attribute to the model intentions, understanding, will, or subjectivity. At that point, we leave the scientific domain and enter the realm of anthropomorphic projection.
A model is not the reality it describes.
A simulacrum is not the entity it imitates.