Historically, the term “digital twin” refers to the idea of building a digital representation of a system, feeding it with data, developing it, and using it to understand, predict, and optimize (Digital twins: what are we really talking about?). In industry, the concept is valued but comes at a price, as it requires modeling, tools, data flows, and, above all, a decision-making discipline.
Since then, generative AI has arrived, and with it the temptation to add a layer of language and automation, then rename the whole thing a “twin”. The result is ambiguous: AI can indeed make some twins more useful, but it can also turn the concept into a catch-all term that ends up meaning very little.
In short:
- The digital twin, historically designed as a faithful representation of a system powered by data, aims to understand, simulate, and optimize, relying on rigorous modeling, appropriate tools, and data governance.
- Generative AI enriches twins by facilitating the use of unstructured data, improving certain predictions, and making interaction more accessible, but it can also dilute the rigor of the initial concept by promoting overly broad or vague uses.
- Despite its contributions, AI does not eliminate data quality requirements, the need to define a clear model, or decision-making rules, which it sometimes even tends to make less transparent.
- The term “cognitive twin” refers to systems that integrate learning and adaptation functions, but it can be overused when any interactive interface is equated with a twin, at the risk of confusing rigorous modeling with approximate social simulation.
- It is crucial to distinguish between digital twins and AI agents: the former inform decision-making by representing a system, while the latter act with relative autonomy toward a given goal, raising important issues of responsibility, control, and security.
What AI changes: learning, completing, recommending
First and foremost, let’s recognize what AI brings when used wisely.
It facilitates the ingestion and formatting of heterogeneous knowledge. Where a traditional twin relies on structured data, physical models, and explicit rules, AI makes it easier to exploit less clean sources: reports, tickets, procedures, field observations, and feedback. In contexts where there is no perfectly established model, it can help to generate hypotheses, suggest correlations, propose explanations, and thus accelerate the transition from “what we think we know” to “what the data tells us”.
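To make this concrete, here is a minimal, hypothetical sketch of that ingestion step: a free-text maintenance note is turned into a structured observation a twin can consume. In practice a language model would do the extraction; a regex stands in for it to keep the sketch self-contained, and the report text, asset tag, and field names are all invented for illustration.

```python
import re

# Stand-in for an LLM extraction step: in a real pipeline a language model
# would parse free-text maintenance notes. The report and field names below
# are hypothetical.
REPORT = "2024-03-02 shift B: pump P-101 vibration rising, bearing temp 92C"

def extract_observation(text: str) -> dict:
    """Turn a free-text field report into a structured observation."""
    obs = {}
    if m := re.search(r"\b([A-Z]-\d+)\b", text):
        obs["asset"] = m.group(1)                   # an asset tag like P-101
    if m := re.search(r"temp\s+(\d+)\s*C", text, re.IGNORECASE):
        obs["bearing_temp_c"] = float(m.group(1))   # numeric, twin-ready
    if "vibration" in text.lower():
        obs["symptom"] = "vibration"
    return obs

print(extract_observation(REPORT))
```

The point of the sketch is the boundary it draws: the messy source stays messy, but what enters the twin is a typed observation that can be linked to a system state.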
It also improves certain predictive capabilities, particularly when the behavior of the system is difficult to capture using equations alone. This is a topic that has been widely discussed in recent literature: the integration of learning methods into twins is progressing, but remains highly dependent on the context, the available data, and the operational objectives targeted (What is a Digital Twin anyway? Deriving the definition for the built environment from over 15,000 scientific publications and A review of digital twins in smart industries: Concepts, milestones, trends, applications, opportunities and challenges).
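One common pattern behind this kind of hybrid prediction, sketched below under toy assumptions, is residual learning: a first-principles model provides the baseline, and a data-driven layer learns only the gap between that baseline and what sensors actually report. The bearing-temperature model, coefficients, and noise levels here are entirely made up for illustration.

```python
import numpy as np

# Toy first-principles model: predicted bearing temperature as a linear
# function of load. In a real twin this would be a validated physical model.
def physics_model(load):
    return 20.0 + 0.5 * load  # ambient + heating term (illustrative)

rng = np.random.default_rng(0)
load = rng.uniform(10, 100, 200)
# "Observed" temperatures include an effect the physical model misses
# (say, degraded lubrication adding a quadratic term) plus sensor noise.
observed = physics_model(load) + 0.002 * load**2 + rng.normal(0, 0.3, 200)

# Learn only the residual between observation and physics: the data-driven
# layer corrects the model instead of replacing it.
residual = observed - physics_model(load)
coeffs = np.polyfit(load, residual, deg=2)

def hybrid_model(x):
    return physics_model(x) + np.polyval(coeffs, x)

test_load = 80.0
print(physics_model(test_load))   # physics-only prediction
print(hybrid_model(test_load))    # physics + learned correction
```

The design choice matters: because the learned part only corrects a known model, the twin keeps an explicit, inspectable core rather than becoming a black box end to end.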
Finally, it can transform the use of the twin by offering more direct interaction. A well-designed twin becomes not only an object of simulation, but also a system that makes recommendations and provides diagnostics to users who would never have opened a modeling tool. The gain is both cognitive and operational: it broadens the circle of those who can use the twin without being specialists.
What AI does not change: data, model, decision rules
However, we must remain cautious and avoid getting carried away, because AI is not a magic wand that will make all constraints and prerequisites disappear.
A digital twin remains dependent on the quality of the data and its suitability for what we are trying to represent. In the historical vision of the concept, particularly at NASA, which pioneered it, the twin is explicitly based on the integration of simulation, maintenance history, and all available data on the life cycle (The Digital Twin Paradigm for Future NASA and U.S. Air Force Vehicles). In other words, even when AI is useful, it does not replace data governance or the ability to correctly link an observation to a system state.
Second point: AI does not replace the design of the model, even if it shifts it. A digital twin in the original sense of the term is a representation of an entity or system (Definition of a Digital Twin). But representation is not enough, because we need to decide what matters, at what level of detail, with what assumptions, and for what purpose. We can, of course, add learning, knowledge graphs, or hybrid reasoning, but that does not eliminate the primary question: what are we modeling, and why?
Third point: AI does not eliminate decision-making rules, and sometimes even makes them less visible. In a twin, the decision is always somewhere; if we delegate part of these mechanisms to an AI model, we are not eliminating the rule but making it harder to audit. And a decision that is harder to explain is rarely a step forward for the people who operate the system.
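To illustrate the auditability point, here is a deliberately simple sketch of an explicit decision rule attached to a twin's state. The asset, field names, and thresholds are hypothetical; what matters is that every condition is visible and traceable, which is precisely what is lost when the same decision is delegated to an opaque score.

```python
from dataclasses import dataclass

# Hypothetical twin state: names and thresholds are illustrative,
# not taken from any specific product or standard.
@dataclass
class PumpState:
    vibration_mm_s: float
    bearing_temp_c: float

# An explicit decision rule: every threshold is written down, versionable,
# and auditable. Anyone can trace exactly WHY an alert fired.
def maintenance_alert(state: PumpState) -> tuple[bool, str]:
    if state.vibration_mm_s > 7.1:
        return True, "vibration above 7.1 mm/s"
    if state.bearing_temp_c > 85.0:
        return True, "bearing temperature above 85 C"
    return False, "within limits"

alert, reason = maintenance_alert(PumpState(vibration_mm_s=8.2, bearing_temp_c=60.0))
print(alert, reason)
```

Replacing `maintenance_alert` with a learned anomaly score does not remove the rule; it moves it inside model weights, where the operator can no longer read it.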
Cognitive twins: useful or a misapplication of the concept?
This is where the term “cognitive digital twins” comes in, since nowadays the word “cognitive” gets attached to everything, even though its relevance to AI is debatable. In any case, the term does exist in research and is defined: it describes twins enriched with perception, reasoning, learning, and self-adaptation capabilities (Cognitive Digital Twin frameworks in manufacturing—A critical survey, evaluation criteria, and future directions, Cognitive Digital Twin for Manufacturing Systems, and Cognitive Digital Twins in the Process Industries). In other words, we are not talking about a chatbot bolted onto a system, but about an architectural evolution that seeks to better manage complexity.
The problem is therefore not the expression itself, but the ease with which it can be appropriated or even misused. As soon as we can communicate with a model in natural language, we are tempted to say that we have a “cognitive twin” of the organization, the customer, the team, or the manager. However, the further we move away from a system fed by continuous measurements and linked to observable states, the more we move away from the realm of twins and into that of social simulation, contextual assistants, and even decision-making tools. These are interesting objects, but they do not have the same requirements or the same risks.
The word “twin” then becomes the promise of being able to represent complex human behaviors as one represents a machine, a whim that, as you know, has been close to my heart for a long time (The open space is not a factory but sometimes you should look at it that way). At this stage, the debate becomes less technical and more philosophical, because it is no longer just about sensors and models, but about intentions, motivations, and collective dynamics: in other words, what we believe we can predict in organizations that consider themselves deterministic, even though they are permeated by fundamentally non-deterministic human behavior.
Digital twin vs. AI agent
It is therefore useful to distinguish between the twin and the agent.
A digital twin is, first and foremost, a representation linked to an entity, fed by data flows, and used to understand, simulate, and optimize. Even in the most open industrial definitions, the central idea remains that of a software object that reflects an entity or a system.
An AI agent, on the other hand, is defined by its ability to act to achieve a goal, with limited supervision, using tools, APIs, and software environments. Definitions vary, but there is a common foundation: planning, executing, learning, and interacting with external systems. An agent can rely on a twin, query it, and trigger actions by referring to it, but that is not the same as being a twin.
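The distinction can be sketched in a few lines of code, with entirely hypothetical class and method names: the twin mirrors state and answers “what if?” questions without side effects, while the agent decides and acts, possibly consulting the twin first.

```python
class TankTwin:
    """Represents a tank: mirrors state, simulates, never acts."""
    def __init__(self, level: float):
        self.level = level  # synced from sensors in a real system

    def observe(self, measured_level: float) -> None:
        self.level = measured_level

    def simulate_drain(self, rate: float, minutes: float) -> float:
        # Answers "what if?": a predicted level, with no side effect.
        return max(self.level - rate * minutes, 0.0)


class DrainAgent:
    """Acts toward a goal, consulting the twin to inform its decision."""
    def __init__(self, twin: TankTwin):
        self.twin = twin
        self.actions_taken: list[str] = []

    def keep_below(self, threshold: float) -> None:
        # The decision is informed by the twin's representation...
        if self.twin.level > threshold:
            # ...but the ACTION targets the real system (stubbed here).
            self.actions_taken.append("open_drain_valve")


twin = TankTwin(level=9.0)
print(twin.simulate_drain(rate=0.5, minutes=10))  # prediction only: 4.0

agent = DrainAgent(twin)
agent.keep_below(threshold=8.0)
print(agent.actions_taken)  # the agent, not the twin, intervenes
```

Note that `simulate_drain` leaves `twin.level` untouched: the representation never crosses the line into intervention, which is exactly where the responsibility question changes hands.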
We are not playing with words or getting lost in academic considerations here: we are talking about how we conceive of responsibility. A twin serves to inform a decision, while an agent begins to make and implement it. If we follow this line of reasoning, we quickly come back to issues of control, security, and possible abuse, which are already very concrete in enterprise agent environments (Experts warn Microsoft Copilot Studio agents are being hijacked to steal OAuth tokens and AI Risk Management Framework).
On this point, I would emphasize that a system that chains together actions is not necessarily a system that understands what it is doing, nor is it a system that can be controlled with confidence when connected to operational tools, insofar as the decision chain, the scope of actions, and the control mechanisms remain largely opaque.
Is the Dassault Systèmes/NVIDIA partnership a game changer?
The announcement of the recent partnership between Dassault Systèmes and NVIDIA may give the impression of a reversal of perspective: AI, now coupled with digital twins, is on the verge of surpassing the limits usually attributed to these devices. The discourse evokes “industrial world models,” agents capable of assisting decision-making, and an unprecedented acceleration of engineering cycles.
Upon closer inspection, however, this partnership says nothing more than what we have already discussed here. AI is never presented as a substitute for models, but as a means of exploiting more quickly and more widely representations that are already formalized, scientifically validated, and rooted in known disciplines. The agents announced do not act in a cognitive vacuum but rely on twins built from explicit rules, constraints, and assumptions.
What this announcement illustrates is therefore not a magical fusion between AI and digital twins, but confirmation of the principle that without a model of the system to be controlled, AI makes only statistical guesses, whereas with a robust model it becomes an accelerator of exploration and decision support. The core of the twin remains unchanged; the contribution lies in the accelerated exploration of scenarios and the analysis of their effects within a given set of constraints.
Bottom line
AI can make twins more efficient and adaptive, but it also reinforces the temptation to pass off as modeling what is sometimes an interpretation, and to pass off as representation what is already an implicit theory of behavior.
The more we claim to model cognitive activities, human decisions, or social dynamics, the more the object becomes laden with assumptions about what matters, what causes what, and what is normal or abnormal. And these assumptions, whether we like it or not, are choices, and therefore a form of management ideology encapsulated in a system, which brings us back to some of my favorite topics (Taking back control of enterprise design: intention before tools and If your business isn’t designed for AI, it will end up being designed by AI).
In this article, I don’t want to settle a debate over vocabulary, but rather to show that the real issue is not whether we should call an AI-powered system a twin, but rather what we are trying to model, with what level of ambition, and with what limitations and responsibilities when we claim to represent people’s work as we represent a machine.
To answer your questions…
What is a digital twin?
A digital twin is a digital representation of a real system, powered by data and designed to understand, simulate, and optimize its operation. In its historical definition, particularly in industry, it is based on explicit models, controlled data flows, and clear decision-making rules. Its value comes less from the technology than from the rigor with which data, model, and usage are aligned. Without this modeling effort and decision-making discipline, the twin loses its ability to truly inform action.
What does AI bring to digital twins?
AI facilitates the use of heterogeneous and unstructured data, such as field feedback or documents, and can improve certain predictive capabilities when physical models are insufficient. It also speeds up access to the twin through more natural interfaces and recommendations. Used correctly, it broadens the use of the twin without changing its fundamental nature.
What does AI not change?
AI does not replace data quality, model design, or decision rules. It does not eliminate choices about what is modeled, why, and with what assumptions. It can even make certain decisions less visible and more difficult to audit, which poses a control issue.
What is a cognitive twin?
The concept exists in research and refers to twins enriched with learning and reasoning abilities. The risk comes from its misuse: as soon as a system communicates in natural language, it is labeled a cognitive twin, even without a strong link to measurable states. We then change the subject without saying so.
What is the difference between a digital twin and an AI agent?
A twin is used to represent and inform decision-making. An AI agent acts to achieve a goal, performing a series of actions with partial autonomy. An agent can use a twin, but it is not the same thing. The difference engages responsibility, control, and risks.
Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)