Human intelligence has never existed!
Pure human intelligence has never existed. From the first flint tool to artificial neural networks, we have always been cognitive cyborgs in perpetual reconfiguration.
The dominant view of human cognitive evolution rests on a seductive but fragile assumption: that humanity has gradually declined from a golden age (supposedly the Paleolithic) into a technological dependence that increasingly alienates us. This interpretation conceals a much more complex reality: human intelligence has never been a fixed essence but a dynamic process of co-constitution with its environments. We have always been hybrid assemblages, distributing our cognitive capacities across our surroundings.
In fact, technical externalization is not a loss but a constitutive feature of human evolution. The flint tool was already a cognitive extension, just like cuneiform writing or a machine learning algorithm.
Writing, often presented as the first major cognitive rupture, does not simply replace memory; it fundamentally reconfigures our relationship to time, thought, and power. It reprograms our consciousness. The alphabet is not neutral: it carries within it a particular organization of the world. This fundamental dimension of cognitive externalization intensifies with digital technologies. The “Google effect” and the atrophy of spatial memory linked to GPS navigation are only the visible surface of a deeper transformation.
Faced with these challenges, and with the democratization of AI, we need to imagine a new model of co-evolution with the machine. The two intelligences, human and artificial, could evolve together in productive tension. The point is not to preserve a totemized notion of human intelligence but to draw the best from AI while maintaining our capacity for action and critique throughout this ongoing reconfiguration.
How to do this? How do we design AIs that enhance our collective agency rather than cognitively proletarianize us? To start with simple practices, I propose a conscious approach to our uses: becoming aware of the cognitive processes at play each time we use AI; valuing effort as a sign of growth when we choose to do without it; accepting the cognitive slowness of exclusively organic reflection, since that slowness signals work and thus strengthening of our capacities; and taking responsibility for preserving our cognitive abilities (especially those of our children). This does not mean giving up AI, far from it. It means being aware of its potential effects and using it as a support for cognitive growth rather than decline.
Here’s a personal example: rather than asking the machine to reason, write, or generate content directly, I first produce a manual draft and then ask the AI to challenge it (find weaknesses, counter-argue, identify errors…). In short, I make it a cognitive sparring partner rather than a crutch.
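For readers who work with chat-based models programmatically, this sparring-partner pattern can be sketched as a small prompt-building helper. This is a minimal illustration, not the author's actual workflow: the function name and prompt wording are my own invention, and the resulting string would be sent to whatever model one uses.

```python
# Illustrative sketch of the "cognitive sparring partner" pattern:
# wrap a human-written draft in instructions that ask the model to
# critique rather than rewrite. Function name and prompt wording are
# hypothetical, not drawn from the essay.

def make_sparring_prompt(draft: str) -> str:
    """Build a prompt asking an AI to challenge a draft, not improve it."""
    instructions = (
        "You are a cognitive sparring partner. Do NOT rewrite or complete "
        "the draft below. Instead: (1) find its weakest arguments, "
        "(2) counter-argue its main thesis, and (3) identify factual or "
        "logical errors. Reply only with your critique."
    )
    return f"{instructions}\n\n--- DRAFT ---\n{draft}\n--- END DRAFT ---"

# The draft always comes from the human writer first; the machine only
# pushes back on it.
prompt = make_sparring_prompt("Pure human intelligence has never existed.")
print(prompt)
```

The key design choice is that the human does the generative work (the draft) while the machine is confined to adversarial critique, which is what keeps it a sparring partner rather than a crutch.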
The intelligence of the future will be neither human nor artificial but will emerge from their perpetual confrontation. Our responsibility is not to preserve a mythologized past but to invent the conditions for an emancipatory rather than alienating hybridization. Because if we are condemned to become cyborgs, we might as well collectively choose the kind of cyborgs we want to be.
