Digital twins are successful and attract attention because they give the impression that a complex system can finally be understood, represented, and controlled without approximation. This promise has proven itself in industry, where modeling a physical asset makes it possible to improve its performance without exposing it unnecessarily (Digital twins: what are we really talking about?). But this success has also given rise to a kind of dream, if not a fantasy: if it works for machines, why not for work itself?
For thirty years, however, the service sector has been pursuing a goal that has remained largely out of reach: understanding, formalizing, and improving knowledge work with the same rigor as industrial work. Knowledge management, processes, collaborative tools, and successive platforms have all, in their own way, carried this ambition forward, and digital twins are now reviving this old project, without always measuring its difficulty. In fact, looking back, I must admit that this has been my guiding principle for at least 20 years, even if an unconscious one (The open space is not a factory but sometimes you should look at it that way and Just because work is invisible, it doesn’t mean that it can’t be improved).
Note that, despite the apparent difficulty of the exercise, I have noticed that concepts such as the theory of constraints explain very well, with a few adaptations, things we observe in collaboration and productivity among knowledge workers (Local optimum vs. global optimum and the theory of constraints: why your productivity gains sometimes serve no purpose and The Goal, by Eliyahu Goldratt: another perspective on productivity) and that the principles of the great Deming apply well beyond factories (“Every system is perfectly designed to achieve the results it achieves” – W. Edwards Deming).
But to return to the subject of this article, the question is not whether cognitive work can be modeled, but what we are willing to lose or distort when we try to fit it into a model.
In short:
- Digital twins have proven their usefulness in industry by modeling measurable physical systems, but their transposition to cognitive work raises more complex questions of representation.
- For thirty years, the service sector has sought to model knowledge work with the same rigor as industrial work, without fully succeeding despite successive tools and approaches.
- Cognitive work is difficult to model because it relies on trade-offs, human interactions, and invisible efforts that escape the traces recorded by digital tools.
- Organizational digital twins simplify cognitive work to make it simulatable, deliberately avoiding modeling individual mental processes, which limits their scope for understanding real work.
- Representing knowledge work involves non-neutral choices about what to make visible and measurable, thus guiding how we design, govern, or transform the organization.
Nothing introduces the subject better than this excerpt from an article in The New Yorker that I never tire of quoting:
“Peter Drucker noted that during the twentieth century, the productivity of manual workers in the manufacturing sector increased by a factor of fifty as we got smarter about the best way to build products. He argued that the knowledge sector, by contrast, had hardly begun a similar process of self-examination and improvement, existing at the end of the twentieth century where manufacturing had been a hundred years earlier.” (Slack Is the Right Tool for the Wrong Way to Work)
And one last semantic clarification before getting to the heart of the matter. Historically, we have used the term “knowledge work” to refer to the economic shift towards activities based on information and expertise, but today, when we ask ourselves what digital twins are trying to represent, it is cognitive work that we are talking about: decisions, trade-offs, attention, and coordination under constraints. But I admit that the boundary is still blurred, at least for me, and that I am allowing myself to use both terms for the time being, even if it means drawing the wrath of experts.
Why digital twins first became established in the physical world
Digital twins developed in areas where representation posed few conceptual questions. A machine, production line, or piece of industrial equipment produces observable, measurable signals that are relatively consistent over time. The variables are identifiable, the states comparable, and the deviations interpretable, which means that even when the complexity is high, it remains technical in nature.
In this context, the digital twin has established itself as a logical extension of simulation and data-driven control. A system is represented as it is supposed to function, its deviations are observed, and then adjustments are made. Improvements are made through more sensors, data, and computing power without calling into question the scope of what is being modeled.
This trajectory has established a reassuring and even appealing vision of the digital twin: a control tool based on a relatively direct correspondence between the model and the observed system.
Things get complicated when the object becomes the organization
In recent years, the vocabulary has evolved. We now talk about the “digital twin of the organization” or “organizational digital twin”. The goal is no longer just to represent a physical asset, but to help understand and test decisions in an organizational system.
An often-cited article on the subject is that of Parmar, Leiponen, and Thomas, which lays the conceptual foundations for this extension (Building an organizational digital twin).
It explains that the digital twin of an organization is primarily used to test management decisions before implementing them. It allows changes in structure, rules, or coordination methods to be simulated in order to anticipate their effects.
Cognitive work is not modeled directly. It is taken into account indirectly, through “average” behaviors and supposedly rational decisions. The authors do not seek to represent attention, everyday trade-offs, or mental load. This is not a shortcoming, but a methodological choice that makes the model usable, while clearly setting its limits. Conversely, if we tried to describe precisely how people think, prioritize, and make decisions (that is, cognition itself), the model would become too unstable and too dependent on situations to remain usable as a decision-making tool.
Taking all these elements into account is not impossible, but it is not effective for the objective pursued. On the other hand, and we will come back to this in the future, it can be relevant at other levels, such as exploring specific expertise, roles, or activities, whereas it is useless for simulating a reorganization.
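To make this modeling stance concrete, here is a deliberately minimal sketch (every name, rate, and number is hypothetical, invented purely for illustration): workers are collapsed into a single average throughput, and a reorganization is “tested” by changing one aggregate parameter. That is roughly the level of abstraction such twins operate at; nothing in it describes how anyone actually thinks.

```python
def simulate(workers, avg_tasks_per_worker, coordination_overhead, days):
    """Toy organizational model: individual cognition is collapsed into a
    single average rate, and structure appears only as an aggregate
    coordination cost. This mirrors the simplification described above."""
    done = 0
    for _ in range(days):
        # Each day the unit completes its average output minus a fixed
        # coordination cost; no attention, trade-offs, or mental load here.
        done += max(0, workers * avg_tasks_per_worker - coordination_overhead)
    return done

# "Test" a hypothetical reorganization before implementing it: the new
# structure adds a handoff, modeled simply as a higher coordination overhead.
current = simulate(workers=10, avg_tasks_per_worker=3, coordination_overhead=4, days=20)
reorganized = simulate(workers=10, avg_tasks_per_worker=3, coordination_overhead=9, days=20)
print(current, reorganized)  # 520 420
```

The model answers the aggregate question (which structure delivers more?) precisely because it refuses to answer any question about individuals.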
All of this is fundamental because as soon as the object of the twin is no longer a machine but an organization, the question is no longer just technical but becomes a question of representation. What do we decide to model, and what do we leave out of the picture?
Knowledge work cannot be reduced to its traces
Unlike physical systems, knowledge work is neither linear nor stable. It consists of interruptions, changes in priorities, decisions made under pressure, informal coordination, and implicit compromises.
Digital tools record traces: messages, meetings, documents, tickets, notifications (What data do we need to understand how people work?). These traces are useful, but they do not describe the cognitive effort, the attention mobilized, the logic of trade-offs, and even less so the complexity of interpersonal relationships. They show what has materialized in a system, not the intellectual path that led to it, and even less so the impact of human relationships in the process.
This is a recurring blind spot in many approaches. We think we are observing the work, when in fact we are mainly observing the traces it leaves in the tools, whereas in cognitive professions, a significant part of the value is determined precisely by what does not appear in the tools: the reasoning and trade-offs that precede the decision or action, and the impact of personalities in collaborative work.
Modeling prescribed work is not enough to understand the work actually performed
When attempting to represent work, there is a strong temptation to describe what it should be. Processes, roles, steps, validations. This description is necessary for coordination, but it never provides an accurate picture of daily practice.
The gap between work as it is described and work as it is performed is not a flaw in the system, but is an integral part of cognitive work. Individuals are constantly adapting to conflicting constraints, incomplete information, and changing priorities, not to mention imperfect tools and other processes (Work about work: when the reality of work consists of making things that don’t work work).
A digital twin applied to knowledge work therefore lies in an uncomfortable zone between seeking to understand how work is done and reinforcing a normative model under the guise of control.
As long as we do not choose a side, the risk is to produce a highly elaborate twin of theoretical work that exists mainly in procedures but not in reality.
A holy grail pursued for thirty years
Behind the current interest in digital twins lies a long-standing ambition. Since the 1990s, numerous approaches have sought to model, formalize, and capitalize on knowledge in cognitive professions, with the hope of replicating in the service sector the gains achieved in industry.
Knowledge management, expert systems, ontologies, workflow engines, BPM, collaborative platforms, and more recently, process mining. At each stage, we believed we were getting closer to the goal: making knowledge work more structured, more explainable, and therefore more improvable.
What the digital twin adds to this story is the promise of a closer link between representation, simulation, and control, but it does not erase the fundamental difficulty of representing work that derives much of its value from what cannot be formalized.
What research says about cognitive twins
You have probably gathered that I am fascinated by this subject, so I have spent quite some time researching what has already been said and attempted.
There is indeed academic literature on “cognitive digital twins”, but it most often aims to enrich industrial twins with reasoning capabilities, rather than to model cognitive work in the organizational sense.
These “cognitive digital twins” aim to provide classic digital twins with reasoning capabilities by combining data, physical models, and representations of knowledge. The goal is to improve the understanding, simulation, and adaptation of complex systems, mainly industrial ones, but human cognition is treated as a capacity to model or assist, rather than as a description of cognitive work as it occurs in organizations (The emergence of cognitive digital twin: vision, challenges and opportunities). The cognitive is there to improve the industrial, not to improve the cognitive.
Technically speaking, in these digital twins, the AI used is primarily predictive and optimizing, and generative AI, when present, plays an interface and mediation role, not a system control role (The digital twin in the age of AI: progress or illusion of progress?).
There is also work on the use of ontologies and knowledge graphs in digital twins, which clearly shows the challenges of formalizing vocabulary and categories that are vital for business applications, but I will not go further on this subject, which will be covered in more detail in a future article (Ontologies in Digital Twins: A Systematic Literature Review).
But all this research converges on one point: as soon as we seek to capture knowledge, we must make explicit assumptions, concepts, and boundaries. In other words, we never model work as it is, but as we agree to describe it.
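A toy illustration of that convergence (the vocabulary is entirely invented for the example): even the smallest knowledge graph only “knows” the concepts and relations someone explicitly agreed to declare, and anything left undeclared, such as attention, simply does not exist for the model.

```python
# A miniature "ontology" of work, as a set of (subject, relation, object)
# triples. Every concept and every relation had to be chosen and named up front.
triples = {
    ("Review", "is_a", "Task"),
    ("Task", "performed_by", "Role"),
    ("Review", "produces", "Decision"),
}

def related(subject, relation):
    """Return the objects linked to `subject` by `relation`.
    The model can only answer with what was explicitly declared."""
    return {o for s, r, o in triples if s == subject and r == relation}

print(related("Review", "produces"))               # {'Decision'}
# Attention was never modeled, so this question has no answer at all:
print(related("Review", "requires_attention_of"))  # set()
```

The empty answer is the point: the boundary of the model was drawn the moment its vocabulary was agreed upon.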
Put another way, the twin rests on deterministic assumptions (the business likes to think of itself in those terms), whereas human work largely escapes this logic.
Cognitive twins at a dead end?
You may think it was not worth going into such detail only to conclude that we are, if not at a dead end, at least obliged to recognize that the potential, while real, is not unlimited.
But it was a necessary step before moving on to the next part of my reasoning and changing my perspective, looking not at machines, but at people, expertise, and organizations.
The rest of the series will therefore not start from a technological promise but from a question of organizational design. Under what conditions can a twin applied to knowledge work become a tool for understanding and improvement rather than a flawless model that mainly documents what we would like to see?
Bottom line
Digital twins do not fail when they venture outside the physical world, but they do change in nature. As soon as the object is no longer a machine but a cognitive activity, the challenge is no longer to refine the model but to recognize what cannot be fully formalized.
Seeking to represent knowledge work means making choices about what is considered relevant, measurable, and improvable. These choices are never neutral and influence how work is understood and governed.
The next articles will therefore start from this observation to examine the conditions under which twins applied to people, expertise, and organizations can help to design more coherent work and better mobilize expertise instead of trying to control work or impose an imperfect model.
To answer your questions…
In industry, digital twins rely on physical systems that are observable, measurable, and relatively stable. Variables are identifiable and deviations interpretable. Knowledge work, on the other hand, relies on decisions, attention, trade-offs, and informal coordination, which are not easily measured. Digital traces capture only part of reality. The problem is therefore not technological but conceptual: it is not possible to establish a direct and reliable correspondence between a model and the complexity of cognitive work.
It is possible to model cognitive work, but never without simplification. Any attempt requires choosing what to represent and what to ignore. However, much of the value of knowledge work lies in elements that are difficult to formalize, such as reasoning, changing priorities, or implicit compromises. Models are therefore inherently partial. They can be useful for shedding light on certain phenomena, but they should not be confused with an accurate description of actual work.
Digital twins of organizations are primarily used to test management decisions before they are implemented. They can be used to simulate changes in structure, rules, or coordination methods in order to anticipate their overall effects. Individual cognitive work is not described in detail, but is indirectly integrated through average behaviors and simplifying assumptions. This choice makes the models usable, while clearly setting their limits for understanding everyday work.
Digital tools capture traces such as messages, documents, or meetings, but not the cognitive effort that precedes them. They show what is produced, not how or why a decision was made. However, in knowledge work, the essential aspects often lie in reasoning, trade-offs, and informal human interactions. Limiting oneself to traces therefore provides an incomplete and sometimes misleading view of the actual work.
Cognitive twins are not a dead end, but they change the nature of the problem. Research shows that they mainly serve to improve industrial systems through reasoning capabilities, rather than to model organizational cognitive work. Their main value is to make representation assumptions explicit. The challenge is no longer to model everything, but to design models that are useful for better understanding and improving the organization of work, without claiming to control it entirely.
Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)