Businesses are deterministic, generative AI is not, and that’s a real problem.


The vast majority of businesses approach generative AI as a new performance tool. They want faster answers, automated summaries, and higher-quality content that is accurate and produced quickly. In short, they are looking to boost their efficiency. But what they often discover to their cost is that this technology is not so easy to tame. Not because of its technical complexity, but because it obeys a fundamentally different logic from their own. While businesses think in terms of processes, predictability, and compliance, generative AI operates in a probabilistic, iterative mode, which is often confusing.

These differences in approach are not without consequence and largely explain why the promises of productivity are slow to materialize.

In brief:

  • Businesses approach generative AI as an efficiency tool, but discover that it works according to a probabilistic rather than deterministic logic, which is at odds with their culture of predictability.
  • Traditional organizations rely on stable, repeatable processes inherited from Taylorism, whereas generative AI produces variable, non-standardized results.
  • The use of AI in business, such as in HR or marketing, reveals limitations linked to this variability: inconsistencies, errors, the need for human validation, and loss of trust.
  • Two main reactions are emerging: either strict supervision that limits potential, or blind delegation that exposes the organization to errors, revealing a lack of understanding of the tool’s specificities.
  • To integrate generative AI, businesses must adapt their operations: create spaces for exploratory use, train employees in discernment, value iteration, and review their approach to error.

Business has a deterministic DNA by nature

Contemporary organizations are the product of a long history of optimization. They were designed to ensure reliability, control, and repetition. I often regret that many businesses are driven by a desire to repeat perfection ad infinitum, a legacy of the Taylorist era that is no longer relevant in today’s world.

From this perspective, work is divided, roles are defined, processes are documented, and expectations in terms of production and deliverables are very precise. Each task is part of a chain whose goal is to produce a predictable and consistent result. Management measures deviations from the norm, punishes them, and corrects anomalies.

This model has proven its worth in industry and structured services and is based on the idea that the environment can be stabilized, that rules make it possible to anticipate, and that performance depends on proper execution. It is a system designed to function in a relatively stable world where the same causes and the same actions must always lead to the same results.

Generative AI: probabilistic and exploratory functioning

Generative AI does not work this way. A model such as GPT does not reason, calculate truth, or follow a protocol. Instead, it anticipates the most probable continuation of a statement based on context and history. 

For each query, it generates a response from among billions of possibilities, according to a probability distribution that varies depending on the temperature, the prompt, the context, and training biases.

Here, the same cause never produces the same result; the same prompt tried ten times will give you ten different results, sometimes of uneven quality.

In other words, generative AI is not designed to provide consistent, deterministic results. It is designed to produce plausible responses that are often useful but almost never identical. Its power lies in its capacity for variation, not its repeatability.
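To make this concrete, here is a minimal toy sketch (not a real LLM, just an illustration of the sampling principle) showing how temperature-scaled sampling over candidate continuations produces different outputs from identical inputs. The `logits` values and the function name are invented for the example.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw scores (logits) using temperature scaling.

    Temperature below 1 sharpens the distribution (more repeatable choices);
    temperature above 1 flattens it (more varied, more surprising choices).
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exponentiating, for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting probability distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# A toy "vocabulary" of four possible continuations with raw scores.
logits = [2.0, 1.5, 0.5, 0.1]

# The same input, sampled ten times, rarely yields the same sequence of choices.
draws = [sample_next_token(logits, temperature=1.0) for _ in range(10)]
print(draws)
```

Running this twice will usually print two different lists from the exact same input, which is the point: variability is not a bug of the mechanism but its design. Pushing the temperature toward zero makes the model collapse onto the single most probable choice, which is the closest it gets to deterministic behavior.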

Organizations are uncomfortable with uncertainty

This way of working clashes head-on with business expectations. I had identified this as a barrier to the deployment of generative AI in business: we are talking about a context where compliance is paramount and there is no room for error (Why enterprise AI can’t keep up with consumer AI: beyond ChatGPT, a more complex reality). Decision-makers want reliable, auditable tools that can be integrated into business processes. What they get instead are answers that are sometimes relevant, sometimes absurd, and in all cases difficult to trace and stabilize.

Faced with this, two reactions are possible:

  • The first is to restrict use: we regulate, filter, and control, to the point of stifling the model’s exploratory potential.
  • The second is to delegate without understanding, hoping that the technology will “magically” do better, at the risk of undetected errors or amplified biases.

Two very concrete cases illustrate what we are talking about.

Let’s start with an HR use case.

Several businesses have deployed HR assistants based on LLMs to automate responses to frequently asked questions from employees (vacation, training, mobility, etc.). The goal was simple: to relieve the burden on HR teams and improve responsiveness.

But the limitations of the model quickly became apparent:

  • The assistant sometimes gave inconsistent or incomplete answers, generated from obsolete documents.
  • The wording of prompts by employees varied greatly, introducing a source of confusion.
  • HR found themselves having to manually review and validate responses, turning a productivity gain into an additional workload.

Above all, management was concerned about the non-compliance of responses with legislation or company agreements.

The result: use was suspended or restricted to very simple cases, subject to strict safeguards. We tried to fit randomness into a deterministic mold and, unsurprisingly, we failed.

Now let’s look at another example in customer relations.

Marketing teams used generative models to produce sales emails or responses to prospects on the fly. On paper, this saved time, but in reality, the results were less than impressive:

  • The tone varied from one message to another.
  • Arguments were sometimes factually wrong.
  • Errors crept into product names, offers, and terms and conditions.

In a B2B software company, some salespeople even started copying and pasting AI responses without proofreading, convinced that they were saving time. The results were not long in coming: tensions with customers, inconsistencies in communication, and a loss of confidence in the tool.

Once again, it is not AI that is at fault, but rather the organizational framework that projects expectations of stability onto a technology that produces variations by its very nature.

Transform the organization to integrate generative AI

Rather than trying to bend AI to the rules of the business, ask what the organization can do to evolve and embrace this technology with all its unique characteristics.

This requires several structural changes:

  • Accept that some uses must remain exploratory, such as idea generation, content prototyping, or reformulation.
  • Define non-critical areas of use, where inaccuracy is not a risk but an opportunity.
  • Establish learning loops, where users refine, correct, and iterate with the model instead of waiting for a perfect answer.
  • Train people to use their judgment rather than tools: the real performance lever is not AI itself, but the ability to interpret, filter, and intelligently reuse its suggestions.

This also implies a cultural shift: stop viewing errors as malfunctions, recognize them as learning material, and admit that in some areas there is no perfect answer.

Integrating AI is, in a way, learning to manage uncertainty and therefore rethinking many of our reflexes.

Bottom line

Generative AI will never fit perfectly into an organization designed for perfection and repeatability. It is neither as reliable as a database nor as stable as an industrial process, but that is precisely what makes it so interesting.

To take full advantage of it, we must accept a change in mindset: moving from control-based management to meaning-based management, from rule-based steering to steering based on room for maneuver. This is not a technological revolution but a cognitive revolution, which, logically, begins with discomfort.

Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)

Bertrand DUPERRIN (https://www.duperrin.com/english)
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler