Is AI the new Lean?


We often need to believe that a new idea will fix what previous ones failed to correct. This is a constant in the life of organizations: as soon as one concept runs out of steam, another replaces it, supposedly delivering on the promises the last one broke. This is even truer when the promise rests on technologies that follow a hype-cycle pattern, alternating between phases of excitement and disappointment before settling into a certain normality. Artificial intelligence now arrives as a technological savior: a booster of efficiency, a catalyst for transformation. It succeeds Lean, which carried the same momentum twenty years ago. And even if one is more of an approach and a philosophy while the other is a technology, the pattern repeats itself, with the same slogans and the same hopes, often ending in the same dead ends.

This is no coincidence but, on the contrary, a sign that in business, one prefers to reinvent tools, whether methodological or technological, rather than question how they are used.

In short:

  • Organizations regularly adopt new concepts or technologies such as AI, hoping that they will correct the failures of previous approaches, but often repeat the same mistakes due to a lack of questioning of practices.
  • The parallel between AI and Lean shows a repetition of the same promises of transformation (efficiency, time savings), followed by disillusionment due to implementations that are disconnected from the field and focused on control rather than trust.
  • The failures of AI projects are not so much due to the technology as to the inability of organizations to experiment, iterate, and involve the right people, replicating the abuses observed with Lean (cost reduction instead of people development).
  • Employee disengagement is often a rational response to the gap between managerial rhetoric and reality, where digital tools, supposed to help, become instruments of surveillance and standardization of work.
  • For AI to become a real driver of progress, it must be inspired by the founding principles of Lean: observation in the field, continuous learning, respect for people, and an explicit engagement not to link productivity gains to job cuts.

Same play, new set

Organizations show remarkable consistency when it comes to repeating the same mistakes while believing they are reinventing the world. The words, consultants, and software may change, but the story remains the same: a promise of transformation and collective enthusiasm followed by slow disillusionment. Lean was once that promise. Today, artificial intelligence embodies it in a new guise, and already we can see the same drift: the tool becomes a totem, its spirit forgotten along the way.

This parallel is hardly surprising. In both cases, the rhetoric is identical: improve efficiency, eliminate tasks with no added value, free up time for people. And in both cases, the implementation betrays the promise: systems designed far from the field are imposed, measurements are taken without understanding, and controls are applied without support. The result: mistrust, fatigue, and sometimes even rejection.

The mirage of the new miracle

The mistake does not lie in the tools or concepts, but in how they are used. Time and again, the scenario is the same: announcement of a breakthrough, frenzied mobilization, then disappointment. A much-discussed MIT report notes that 95% of AI initiatives in business have had no measurable impact on the P&L (State of AI in business 2025). Five percent are successful. The rest collect pilots, use cases, and dashboards, without any real transformation.

I have already had the opportunity to point out the limitations of this study, limitations that are not those highlighted by its detractors. Indeed, it notes the lack of impact of AI on revenue, which should come as no surprise since the promise of AI is not revenue but productivity (Technologies sell productivity, but businesses want revenue). Moving from one to the other is another issue, one that has more to do with the transformation of organizations and value chains than the adoption of technologies (Who is handling your artificial intelligence projects? Probably not the right people.).

Mark Graban explains another aspect of the problem. For him, it is not the technology that fails, but the organization: its inability to experiment, iterate, and learn. It is therefore a failure of the method and not of the technology (95% of Enterprise AI Pilots “Fail”–Just Like Lean? Not So Fast).

This observation could be copied and pasted about Lean twenty years earlier: continuous improvement was confused with cost cutting, and collective engagement with forced transformation. Each time, the same reflex: the tool becomes a symbol expected to carry itself forward, and it ends up replacing reflection.

The poorly digested legacy of Lean

Returning to Lean means talking about a project that sought to reconcile productivity with respect for work. Most Western businesses, however, imported the tool without the philosophy: Lean is not about reducing headcount but about developing people (Lean Isn’t About Cutting Heads–It’s About Growing People and Cultures of Improvement). Moreover, in many successful experiments it has been deployed with the formal promise that improvement would not lead to any layoffs (Lean Without Layoffs: The Commitment That Makes Continuous Improvement Work).

Conversely, linking Lean to job cuts destroys the trust that makes any improvement possible.

What Lean experienced in the 2000s is now being replicated by AI: behind the talk of growth, many perceive a desire to let employees go. When management announces that AI will save time, everyone instinctively understands that this time saved will be used to reduce the workforce. This is not paranoia but organizational memory.

Disengagement, a rational symptom

Employees do not reject innovation out of conservatism, but because they have learned to decode weak signals. According to Gallup, only 21% of employees worldwide say they are engaged (Engagement Recedes for the First Time in Four Years), resulting in a global productivity loss of $438 billion.

This disengagement stems largely from a persistent misalignment between promise and reality. Yves Caseau analyzes this gap as follows: we celebrate smart agents and the future of work, but we ignore everyday irritants.

We talk about digital transformation when teams are just waiting for us to fix absurd processes, eliminate double entries, and give them back the time they need to do their jobs properly. Here again, the spirit of genchi genbutsu (going to see for yourself) is forgotten ([FR] Future of Work, Intelligent Agents, and Genchi Genbutsu).

From assistance to control

The more a business claims to want to make work easier, the more it introduces tools to monitor it. AI often manifests itself less in the form of robots than in background systems: assistants that suggest the “right” customer response, platforms that time tasks, software that assesses the compliance of a file. These tools are supposed to help, but they gradually dictate how things should be done.

The original Lean approach did exactly the opposite because it was based on trust in the intelligence of those on the ground. It was based on the principle that the people who do the work know better than anyone else how to improve it. When deployed without feedback from the field, AI freezes experience instead of feeding on it, transforming practice into procedure and knowledge into constraint.

Let’s respect the spirit of Lean to restore confidence in AI

If AI is to become the new Lean in the positive rather than the negative sense of the term, building trust with its users instead of being misused, it must adopt Lean’s founding principles rather than its caricatures: start from the reality on the ground, learn as you go, respect people.

Starting from reality means designing use cases with those who do the work, not in their place. It means going into the field, observing and listing tasks, understanding flows, irritants, and gray areas. Lean called this genchi genbutsu, and AI, too, should start by observing work as it is done before seeking to transform it. Let’s not forget that only 4% of problems are visible from the top of the organization… (Learn about Yoshida’s iceberg of ignorance, or what management refuses to see.).

Learning by doing means integrating feedback loops and incremental improvement. MIT pointed this out in its study on the causes of AI project failure: most have no feedback mechanism, which is why they are ineffective (MIT: Why 95% of Enterprise AI Investments Fail to Deliver).

And respecting people means making a clear commitment: productivity gains will not mean job losses. Graban has made this a quasi-moral rule: “no layoffs due to Lean”. Without this promise, no initiative can survive initial doubts, let alone the first crisis.

AI is not the new Lean, but rather a reflection of its misuse

Lean has failed every time it has been used to serve a financial objective. AI will suffer the same fate if it is used against those it is supposed to serve. In a way, the organizations that will succeed are not those that automate the most, but those that learn the best.

Work is not changed by adding a layer of technology, but by changing the relationship between those who decide and those who do. Lean failed to teach this, and AI is now forgetting it in turn.

But instead of seeing it as just another fad, we can see it as a test of managerial maturity. Businesses that know how to use AI as a tool for collective learning rather than unilateral rationalization will restore meaning to the technology, while others will continue to add to the pile of stillborn projects.

Bottom Line

AI is not the new Lean, but rather a reflection of what businesses have done with Lean: a mechanized tool, an idea stripped of its meaning, and a technical project dressed up in human rhetoric.

But this is not inevitable. We simply need to answer a simple question that makes executives uncomfortable: do we want to transform work, or just reduce costs?

Lean failed when it ceased to be a philosophy of collective progress. AI will fail in the same way if it remains an optimization strategy.

And let’s not forget that “technology is a word that describes something that doesn’t work yet” (Douglas Adams).

To answer your questions…

Why is artificial intelligence compared to Lean?

Both promised the same things: efficiency, simplification, and more time for people. But like Lean, AI is often applied without understanding its spirit. The article emphasizes that the problem lies less with the tools than with their use: imposed deployment, failure to listen to those on the ground, and indicators that are disconnected from reality.

Why do 95% of AI projects fail?

MIT shows that most fail due to a lack of operational anchoring. Businesses launch pilot projects without learning or feedback. AI promises productivity, not revenue, but it is rarely integrated into a real transformation of practices and value chains.

What have we misunderstood about Lean?

Lean was intended to develop people, not reduce costs. By associating it with job cuts, businesses destroyed trust. The article reminds us that the promise of “no job losses related to gains” is essential, otherwise AI will suffer the same rejection.

Why are employees wary of AI?

After years of broken promises, employees associate innovation with job losses. Their disengagement stems from a disconnect between talk of transformation and the reality on the ground. Above all, they want their daily lives to be simplified before new tools are added.

How can AI be made credible in the workplace?

Three principles: start from the ground up, learn as you go, and respect people. This requires co-construction, continuous feedback, and clear engagement on employment. AI will succeed if it serves collective learning rather than optimization alone.

Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)

Bertrand DUPERRIN – https://www.duperrin.com/english
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler