Training Engineers in the Age of AI: Uses, Limits, and Responsibilities

This article is based on an interview conducted for Usine Nouvelle about how engineers are being trained to face the rise of generative artificial intelligence. What follows is a personal reformulation and reflection on the ideas discussed during that exchange.

AI: a term that fascinates (and misleads)

"Artificial intelligence" is a deeply polysemous term. It fascinates, it worries, but ultimately explains very little about what it truly refers to. Historically, what was called AI in the 1950s consisted of symbolic and logical systems. Later came expert systems, then statistical approaches. Today, large language models (LLMs) dominate the public conversation.

At every stage, the same claim resurfaced: "this is not real AI." Much like magic, AI seems to designate whatever we do not yet fully understand. Once the mechanisms become explicit, the label fades.

That is why, in both my teaching and outreach activities, I emphasize a simple idea: AI is not an autonomous entity with intentions, but a collection of powerful statistical tools whose effects entirely depend on how we use them.

Teaching prompt engineering: focusing on usage

At 3iL, we made the deliberate choice to introduce prompt engineering very early in the engineering curriculum, during the first year. The goal is not to train students to design models, but to teach them how to use existing tools intelligently.

The course is structured around several key components:

We also work on concrete examples of prompt injection attacks, hallucinations, and real-world professional misuse of AI tools.

Costs, ecology, and ethics at the core of the discussion

Teaching AI without addressing its impacts would be a serious mistake. Current models have real energy costs, raise significant ecological concerns, and often rely on invisible human labor under precarious conditions in economically vulnerable regions.

Beyond that lies a crucial issue: responsibility. Generative AI makes it possible to create convincing fake texts, images, and videos at scale. Learning to use these tools also means learning when not to use them.

Hands-on workshops: experimenting, not delegating thought

Students take part in several practical workshops: AI-assisted revision, exam preparation, critical document analysis, and interactions with agents designed to resist prohibited requests (such as extracting a secret password).

The goal is never to outsource thinking to the machine, but rather to make its limits explicit. AI is a tool. Like any powerful tool, it requires mastery, otherwise, it becomes misleading.

AI does not replace engineers, it reshapes their role

It is often claimed that AI will replace developers and engineers. Experience suggests this view is overly simplistic. Code generation is only a small fraction of engineering work. Understanding requirements, communicating with clients, managing constraints, and assuming legal and ethical responsibility remain deeply human tasks.

Industrial history offers a useful parallel: automated looms did not eliminate textiles; they transformed the required skills. AI follows the same pattern. It accelerates certain tasks, but makes global understanding even more essential.

A head start... btu only under certain conditions

Students who already understand what they are trying to achieve gain immense benefits from AI tools. Those who do not grasp the underlying problem produce mediocre results, with or without AI. In this sense, AI acts as both an accelerator and a revealer of inequalities.

It allows people to go further and faster, provided they have solid foundations. That is precisely why future engineers must be trained today, within a structured and critical framework.

Documentation, explainability, and accessibility

Beyond code, AI is particularly valuable for tasks often neglected: documentation, reformulation, vulgarization, and translation. These uses intersect with issues I deeply care about, especially explainability and accessibility.

In contexts involving disability, cognitive difficulties, or language barriers, conversational agents can become powerful assistive tools; provided they are not granted unwarranted authority.

Training teachers as well

Integrating AI into curricula also requires teachers themselves to adopt these tools. I use AI daily to improve my courses, design exams, verify clarity, and propose more ambitious projects.

This allows me to raise expectations during presentations and evaluations: students must explain, justify, and argue. The tools evolve, but intellectual rigor remains essential.

An open question

AI is here to stay. The real question is not whether it will disappear, but how we will learn to live and work with it, without abandoning critical thinking, responsibility, or human intelligence.

That, ultimately, is the core challenge of engineering education today.