Adoption and uses of generative artificial intelligence
Context note
This post is based on answers originally written for an interview with Les Échos, conducted on September 25, 2025.
The responses have been slightly reorganized to form a coherent article, while preserving the substance and tone of the original exchange.
1. How do you observe the evolution of the adoption of generative AI (GAI) in people’s private lives over recent years?
The speed of adoption has been striking. Just three or four years ago, these technologies were perceived as laboratory tools, reserved for a knowledgeable audience. Today, they have entered everyday life, sometimes without us even noticing: writing an email, generating an image, summarizing a document, preparing a presentation.
I remain very cautious about the term artificial intelligence. It is fascinating but misleading. It works a bit like the term undead in fiction: everyone thinks they know what it means, yet the word itself does little to clarify what is really at stake. It attracts attention and impresses, but it also creates confusion.
On one side, we hear "this is not real intelligence", as a way of protecting human uniqueness. On the other, "AI does everything instead of humans", as if human effort were erased. These extreme views are largely fantasy and mainly reflect a lack of understanding of how these systems actually work.
2. What motivates people to use these tools outside any professional context?
Ease of use is often mentioned in professional or academic settings: saving time at work or reducing workload at home. In fact, a "good" assignment today should be designed with the assumption that students may use a generative AI.
Outside professional contexts, motivations are different. Curiosity plays a central role: experimenting with a tool that intrigues, asking for a rephrasing, or exploring a topic rather than directly searching for information.
There is also creativity: I personally cannot draw, but a generative AI allows me to give visual form to ideas I could never represent myself.
Finally, there is a psychological dimension: overcoming the blank page, daring to start a creative project, and being reassured by a tool that does not judge.
3. What concrete and immediate benefits do GAIs bring to everyday activities?
I am very optimistic on this point. Like any tool, generative AI has flaws, but its potential is immense. It helps structure ideas more quickly, stimulate creativity, and provide assistance with everyday tasks.
I am particularly sensitive to its inclusive potential. These tools can support people with cognitive or language-related disorders, help people with visual or hearing impairments, and assist older adults dealing with memory issues.
More simply, they democratize practices such as translation: a text becomes accessible to someone who does not know the language.
Another major benefit lies in popularization. One can ask a generative AI to explain a complex concept in accessible terms: a difficult medical diagnosis, a specialized research article, or an obscure technical notion. It is a powerful aid to understanding and transmission.
4. What personal vulnerabilities emerge when people integrate GAIs massively into their daily lives?
The first vulnerability concerns data. Users do not always realize that what they write into a generative AI may be stored and reused.
The second is cognitive dependence: excessive delegation can lead to the erosion of intellectual effort habits.
There is also a more social vulnerability: loneliness. Some studies indicate that certain users develop emotional dependence on conversational agents. This may create an illusion of human connection, but it risks further isolating people who are already vulnerable.
5. How do you assess the impact of GAIs on people’s ability to distinguish truth from falsehood?
I do not see a radical break. As with the arrival of the Internet or Wikipedia, there is a before and an after. What really changes is our relationship to effort. Perhaps the main difference with generative AI is our own laziness: we tend to trust because it is easy, because it "sounds right."
These tools produce convincing answers, even when they are wrong. That is where the danger lies. Yet, as with previous technologies, they can also strengthen critical thinking, provided users learn how to question them, compare sources, and cross-check information. Everything depends on education and training.
6. Do GAIs change our relationship to cognitive effort and personal creativity?
Yes, they do, but they transform it rather than eliminate it. We no longer "produce" directly; we "have things produced." This is a subtle but important shift.
In programming, for example, one can code much faster with a generative AI, but only if one understands how the code works. The tool amplifies existing skills; it does not replace them.
As a teacher, I also observe that this forces us to rethink communication: how can we be better understood? Generative AI is fundamentally very simple in its operation. To use it effectively, one must think clearly, structure requests precisely, and develop a form of pedagogical rigor.
7. Do you observe disparities in usage or mastery across different populations?
Yes, these disparities already exist and are significant. Younger people (especially students) adopt these tools quickly and often with great ease. By contrast, older generations or individuals less comfortable with digital technologies may struggle more.
Educational background also plays a key role: the more accustomed one is to reading, writing, and structuring ideas, the more benefit one derives from these tools. Access to hardware and connectivity remains decisive. For people with disabilities, proper support is essential so that AI becomes a lever for inclusion rather than an additional barrier.
8. Do you see a risk of an increased digital divide due to the rapid spread of GAIs?
Not necessarily. I observe this with people who have long struggled with digital tools: generative AI sometimes allows them to "break through" the computer barrier. The intimidating keyboard/screen relationship gives way to something more natural: conversation.
That does not mean human–machine interaction is equivalent to human interaction. It is not, and that is precisely why I remain cautious about the term artificial intelligence.
However, the impression of a benevolent and reassuring presence is real and facilitates adoption. While new divides will undoubtedly emerge (linked to tool quality or access), I believe these technologies can also reduce certain barriers and open digital spaces to people who were previously excluded.