The problems that arise when AI scales up

There is one thing that most failed AI implementations in business have in common. They fail not because they don’t work, or even because they don’t add value, but because they cease to be tolerable within the existing framework. Scaling up acts less as an accelerator than as a revealer.

What was acceptable as long as use remained low-profile can become fragile once it is more exposed and changes scale. What was tolerable as long as it remained reversible, confined to a small scope with controlled costs, and therefore subject to only a relative ROI requirement, becomes problematic as soon as it commits the organization beyond an experimental phase.

In short:

  • AI applications often fail not because they are ineffective, but because they become incompatible with existing organizational frameworks, particularly when scaling up, which reveals latent tensions.
  • At the individual level, AI is well accepted because it remains discreet, reversible, and non-binding. But as soon as its use becomes collective, it raises questions of conditions of use, impact, and governance that undermine its sustainability.
  • The increased exposure of AI within organizations raises questions of responsibility, decision attribution, and side effects, to which businesses are not always prepared to provide structured answers.
  • The rise of AI requires coordination and formalization of rules, which exposes internal misalignments and calls into question uses based on implicit autonomy.
  • Scaling up implies a change in budgetary and strategic logic: the issue is no longer just one of local productivity, but of the overall impact on revenue and organizational dependence, making certain uses unsustainable.

Use becomes collective

As long as AI is used on an individual scale, it fits in without any noticeable friction. It helps, lightens the load, and optimizes on an ad hoc basis, but above all, it does not impose anything on others (What works today with AI without any particular effort). It works almost on its own, hence the success of shadow IT, and even if it brings only limited gains, we can call it a good start (Collective appropriation of AI: the only condition for tangible impact).

As soon as a use needs to be extended, its nature changes. It is no longer just a question of whether it works, but of determining who can use it, under what conditions, and with what effects on others. This is enough to undermine systems that worked precisely because they avoided these questions.

This is when many uses disappear. Not because they are ineffective, but because they were viable only as long as they bypassed the organization, and no longer once they had to be integrated into it.

AI becomes visible

Another turning point occurs when AI ceases to be a subject of discovery and becomes a subject of discussion, particularly when questions arise about costs, impacts, and other side effects. What was accepted in a context of discovery and familiarization must then be justified.

Here too, this raises questions that had been avoided until then. Why here and not elsewhere? Based on what criteria? With what side effects? At what point does the tool influence a decision? Many uses do not stand up to this scrutiny, not because they are bad, but because they raise questions that the business is unwilling or unable to answer.

AI influences decision-making

As long as AI prepares, suggests, or assists, it remains widely tolerated because the final decision can still be attributed to an identifiable human. Once AI crosses that threshold and weighs directly on the decision itself, it’s a different story.

When AI directly influences a binding decision, the central question is no longer one of performance but of attribution, and therefore of responsibility. Who really decides? Who takes responsibility when the result is contested? Who can deviate from the recommendation, and under what conditions? As long as these issues remain in a gray area, without some form of governance, the organization finds itself on unstable ground.

Many systems disappear at this point through gradual withdrawal, reduction in scope, or a return to practices that, although more modest, are more politically secure.

Coordination becomes inevitable

Local practices also work because they do not force coordination. They slip through the cracks, relying on unspoken rules and implicit compromises, but at a larger scale those compromises must be formalized and become rules.

AI then reveals misalignments that have persisted for years. What disappears in these cases are practices that were based on non-negotiated autonomy. But as soon as this autonomy has to be discussed, it clashes with the logic of power and turf wars.

The failure is not technical but organizational, and it reveals nothing new: it simply exposes what already existed and what everyone had come to terms with.

The I of ROI places demands on the R

The nature of a discovery or experimentation phase is that it has a limited scope, a defined budget, and, to some extent, an acceptance that it may not work and that the investment will be lost. But when the experiment is more or less conclusive and the question of scaling up arises, the budget discussion changes in nature.

Yes, it works “more or less” on a small scale, but if we want to scale up with an investment 100 or 1,000 times greater, will the ROI grow proportionally or better than proportionally? And if it merely grows proportionally, wouldn’t that money be better invested elsewhere?

This is somewhat what the MIT study that everyone talked about last year tells us. Pilot programs were abandoned en masse, not because they didn’t work, but because they had no clear and direct impact on revenue (Technologies sell productivity, but businesses want revenue).

In other words, a pilot program will attempt to increase productivity in a more or less empirical way, but scaling up will shift the focus to the P&L.

The practice ceases to be reversible

A final threshold is that of reversibility. As long as a practice can be withdrawn without major consequences, it is tolerated because it remains experimental. Once it becomes difficult to remove without disrupting an activity, it forces the organization to acknowledge a dependency, and it runs into opposition from the less convinced, or even from outright opponents, who want to keep a way out.

Some uses disappear at this point because they would require admitting that a change has taken place without ever having been explicitly decided, and that the organization must now deal with a situation it did not seek. Withdrawal then becomes a way of regaining control, at the cost of a step backward that is rarely acknowledged as such and that will be attributed to other causes.

Bottom line

The practices that disappear when AI has to change scale paint a fairly consistent picture of the ceilings that organizations come up against: ceilings of visibility, responsibility, coordination, long-term commitment, and, of course, funding.

These disappearances do not mean that AI does not work; they show that as soon as it stops slipping through the cracks, it forces the business to look at how it operates, and it is often at this point that the movement stops.

To answer your questions…

Why do AI applications often fail when scaling up?

AI applications rarely disappear because they don’t work, but because they cease to be compatible with the existing organization. On a small scale, they remain discreet, inexpensive, and reversible. As they scale up, they highlight issues of governance, accountability, coordination, and funding that the business is not ready to address, leading to their abandonment.

Why does individual use of AI work better than collective use?

On an individual level, AI does not impose anything on others and circumvents organizational constraints. It provides local benefits without creating collective dependency. As soon as a use becomes collective, rules, rights, and shared impacts must be defined. These trade-offs weaken systems that worked precisely because they avoided any formalization.

How does the visibility of AI become an obstacle?

As long as AI remains experimental, it benefits from implicit tolerance. When it becomes visible, its costs, effects, and deployment criteria must be justified. This exposure forces the business to answer questions it had previously avoided. Many uses fail at this stage, not because of a lack of value, but because of a lack of organizational maturity.

Why is AI problematic when it influences decisions?

When AI assists, responsibility remains human and clear. When it directly influences a binding decision, the question of attribution becomes central. Without explicit rules on who decides and who takes responsibility, the organization enters a gray area. To avoid this risk, usage is often reduced or abandoned.

What role do ROI and reversibility play in the abandonment of AI applications?

In pilot projects, an approximate ROI and possible failure are acceptable. On a large scale, the investment must have a clear impact on the P&L. If this link cannot be demonstrated, AI is called into question. Furthermore, the loss of reversibility creates resistance, with withdrawal becoming a means of regaining control.

Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)

Bertrand DUPERRIN
https://www.duperrin.com/english
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler