AI adoption: methods put to the test


Today, we talk about adopting artificial intelligence in the same way that we used to talk about digital transformation, with the conviction that all we need is a framework to master this new technology. But AI cannot be adopted; it must be learned. Before it can transform, it requires an understanding of how we work, how we make decisions, and how we learn collectively. The mixed results obtained so far are proof of this (Adoption of AI in the workplace: current situation).

Over the past year, frameworks, guides, and maturity models have multiplied. Microsoft, AWS, BCG, McKinsey, the OECD, NIST… all promising to organize the transition from experimentation to production. But what are these methods worth when applied to organizations that have not yet thought about their vision of work in a post-AI world?

My aim here is not to pit one approach against another, but to examine them for what they are: often incomplete, “oriented” insofar as they serve their author’s philosophy more than the client’s needs, yet still valuable because they provide a basis for getting started and then learning from one’s mistakes, provided one wants to go beyond simply deploying the technology and make this moment an opportunity for self-learning and progress (95% of Enterprise AI Pilots “Fail” – Just Like Lean? Not So Fast).

But above all, it is important to bear in mind that adopting AI is not just about following a method, but about confronting the reality of how you operate and questioning it before doing anything.

In short:

  • AI is prompting businesses to question their identity and future vision.
  • The most mature organizations see it as an opportunity for learning and transformation.
  • Others rely too heavily on their tools and frameworks, mistakenly believing that these alone will ensure the successful adoption of AI.
  • Methods and best practices are useful but remain secondary to the question of meaning and purpose.
  • Adopting AI involves a fundamental rethinking of work before integrating technological solutions.

AI adoption methodologies

Industrial frameworks: fast integration but assumed dependency

The first to position themselves were, of course, the suppliers themselves. Microsoft, AWS, Google, and IBM have produced comprehensive, structured adoption frameworks with the aim of reassuring customers and accelerating the transition to action.

At Microsoft, the Cloud Adoption Framework for AI describes a path with almost military rigor: strategic alignment, assessment of existing resources (data, security, governance), choice of cloud services, implementation of data governance, and then gradual deployment of use cases with monitoring (AI adoption). The logic is clear: it’s about giving Microsoft customers a turnkey method for moving from experimentation to industrialization, within a consistent foundation (Azure, Copilot, Fabric). The advantage is obvious in terms of clarity, consistency, and immediate tooling. But the limitation is just as clear: AI is seen as a natural extension of the Microsoft stack, and adoption is framed by technical possibilities, rarely by any reflection on the transformation of work.

AWS, for its part, offers Prescriptive Guidance for enterprise-ready generative AI, which is also remarkably precise (Best practices for enterprise generative AI adoption and scaling). It includes the “hub and spoke” logic, maturity assessment, the establishment of a center of excellence, the standardization of patterns, and a four-layer architecture (infrastructure, models, security/governance, applications). It is primarily an engineering methodology: comprehensive, documented, and perfectly suited to those who see adoption as a technical project. But it is also a closed system: everything is designed for the AWS ecosystem, and the question of work is completely overlooked.

These frameworks are excellent for IT teams because they structure the “how” but remain silent on the “why”, on how businesses appropriate technology, and on the culture that must be built around it.

Academic/institutional frameworks: a systemic vision but difficult to implement

Alongside “off-the-shelf” methods, there are academic approaches that adopt the angle of organizational capability.

The AI-CAM – AI Capability Assessment Model proposes five levels of maturity across seven dimensions: business, data, technology, organization, skills, risks, and ethics (Towards a Capability Assessment Model for the Comprehension and Adoption of AI in Organizations). The question is not whether the business “does AI”, but whether it is capable of doing something with it: data culture, governance, responsibility. This is useful for positioning oneself, but the model is so complex that it is difficult to use as a real lever.

Public maturity models, such as that of the US General Services Administration (Introduction to the AI Guide for Government) or the OECD grid (Policies, data and analysis for trustworthy artificial intelligence), take an even broader view. The OECD, in particular, highlights the slow spread of AI, sectoral disparities, and structural barriers: skills, data, culture, governance. These models show that adoption is a socio-technical issue, not merely an infrastructure problem, but they do not say how to move from understanding to action.

NIST, with its AI Risk Management Framework, takes a different stance: it proposes neither a maturity model nor an adoption trajectory, but a method for developing a clear-headed relationship with AI (AI Risk Management Framework). The framework is structured around four functions (Govern, Map, Measure, Manage) that force organizations to understand their systems before transforming them, to measure risks as well as performance, and to continuously adjust both models and processes. NIST emphasizes a point that is often overlooked in large-scale AI initiatives: the performance of an AI system can never be judged independently of its context, data, use, and effects on people and operations. It is a demanding framework that reminds us that AI adoption is less a question of technology than one of responsibility and understanding.

Hybrid frameworks: consultancies’ integrated guides

Large consultancy firms have developed their own “hybrid” frameworks, which combine strategy, organization, and technology in a logic of industrialization. Their goal is not only to describe how to launch AI, but to show how to transform a series of experiments into a sustainable operational capability.

BCG now distinguishes between businesses that are truly “AI-enabled” and those that manage to reduce what they call the “AI impact gap”, i.e., the often abysmal gap between ambition and actual value captured (From Potential to Profit: Closing the AI Impact Gap and Where’s the Value in AI?). Their analyses show that organizations capable of scaling up are those that have built a unified platform, standardized their workflows, clarified their governance, and organized skills development around a capacity-based approach rather than isolated projects.

McKinsey, for its part, refers to “AI high performers”: a group of businesses that deliver superior results because they have industrialized more quickly, aligned their use cases with the business, professionalized their governance, and reconciled strategy, technology, and human capital (The state of AI in early 2024: Gen AI adoption spikes and starts to generate value). This report details what sets these organizations apart, namely the integration of AI into several business lines at once, operational discipline, rigorous risk management, and the ability to rewrite entire sections of operations.

These frameworks have the merit of articulating platform, organization, and value, and are based on careful observation of leading businesses, but they have an inherent limitation: they are prescriptive approaches that describe an idealized trajectory and assume a level of internal consistency that few organizations actually possess. They tell us how to “do well” in a stable environment, but are less accommodating of the chaotic, political, and cultural dynamics that characterize real life in business.

The core of best practices

Looking at leaders such as Microsoft, AWS, BCG, OpenAI, the OECD, and MIT, a consensus seems to be emerging.

Businesses that are making progress:

– Explicitly align their AI initiatives with a few specific business objectives, with an executive sponsor and clear value metrics.

– Start with an honest assessment of capabilities: data, technical debt, governance, skills, and culture of experimentation.

– Create a unifying AI Center of Excellence, with business relays, common standards, and open documentation.

– Establish consistent model governance, aligned with NIST frameworks and AI Act principles, with business-side product managers.

– Conduct systematic evaluations before deployment, as recommended by OpenAI (OpenAI Evals).

– Integrate AI into existing tools (CRM, ERP, software factory, support) rather than leaving it on the periphery.

– Invest heavily in training and adoption: internal programs, communities, power users, incentives to explore.

– And, above all, treat MLOps and ModelOps as a strategic issue, not a technical one: monitor and maintain models over time (drift, updates, security, compliance, traceability) to prevent them from degrading or becoming uncontrollable.
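The drift monitoring mentioned in the last point can be made concrete. As a purely illustrative sketch (not drawn from any of the frameworks cited here), one common lightweight technique is the population stability index (PSI): compare the distribution of a model’s scores at training time against its scores in production, and alert when the two diverge. The data, thresholds (0.1 and 0.25 are conventional rules of thumb), and bin count below are all assumptions for the example.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples.
    Rule of thumb: < 0.1 stable, 0.1-0.25 worth watching, > 0.25 significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))

    def bin_fractions(values):
        counts = [0] * bins
        for v in values:
            # clamp so the maximum value falls in the last bin
            idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            counts[idx] += 1
        # floor each fraction at a tiny value to avoid log(0) on empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Synthetic example: production scores have drifted from the baseline
random.seed(0)
baseline = [random.gauss(0.50, 0.10) for _ in range(5000)]  # training-time scores
current = [random.gauss(0.57, 0.13) for _ in range(5000)]   # shifted production scores

print(f"PSI vs itself:  {psi(baseline, baseline):.4f}")  # identical samples: 0
print(f"PSI vs current: {psi(baseline, current):.4f}")   # elevated: drift flagged
```

Whether such a check runs as a scheduled job or inside the serving pipeline is an architectural choice; the point is that the detection itself is cheap. The expensive part is organizational, namely deciding who owns the response when the alert fires, which is exactly why the article frames MLOps as a strategic rather than a technical issue.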

Where AI works, it is not because the business has chosen the right technology, but because it has developed a collective capacity for continuous learning.

The blind spots of current adoption approaches

However, none of these approaches is without flaws and blind spots. Here are the ones I consider most notable.

Linearity first: frameworks describe a “perfect” path (strategy, diagnosis, use cases, governance, scale), when in reality the opposite is true: field studies show discontinuous trajectories made up of setbacks, pivots, workarounds, and successive consolidations. These frameworks offer a useful interpretive grid, but they capture neither the real dynamics of organizations nor the internal tensions that accompany any technological adoption. They model an ideal of progressive advancement, whereas in practice adoption looks more like a succession of adjustments, compromises, and reorientations (Maturity models for the use of artificial intelligence in enterprises: a literature review).

Next is techno-centrism: many frameworks remain obsessed with data, models, and security. A supremacist technological discourse dominates AI, assuming that technology can automatically solve complex problems, which is precisely one of the biases of techno-centrism (Tracing the Techno-Supremacy Doctrine: A Critical Discourse Analysis of the AI Executive Elite).

The third blind spot is biased measurement of value. Studies show real but very localized gains. There is indeed an improvement in productivity on well-structured tasks, but also a shift in the workload toward supervision and verification, sometimes with a temporary decline in collective performance (How generative AI can boost highly skilled workers’ productivity). As things stand, there is a consensus that individual gains coexist with invisible costs: control, rewriting, increased coordination (Generative AI at work).

This brings us back to BCG’s concept of the “impact gap”: we often confuse time saved on a micro-task with value actually created at the system level (From Potential to Profit: Closing the AI Impact Gap and Local optimum vs. global optimum and the theory of constraints: why your productivity gains sometimes serve no purpose).

A few remarks

My criticism of these methodologies is not their content, which is often relevant and technically accurate, but their place in the dynamics of change. They come too early and too high up, as if AI could be deployed by following a recipe, when in fact they only make sense once the intention, the design of the work, and the organizational trade-offs have already been clarified. In other words, we are mistaking execution tools for compasses.

The smoke screen of ROI

ROI is a measure, not an end in itself; it says nothing about the relevance of a transformation, only about its apparent profitability. Executives who promise “AI business cases” often repeat the same mistake they made with digital: confusing adoption with performance and productivity (The great illusion of technological productivity gains (including AI)).

Enterprise design must come before technology

The method is only valuable in a system that is already designed to take advantage of technology. AI frameworks often arrive too late: they model the “right way” to deploy technology in an organization that has never defined its own relationship to work. In doing so, we make two mistakes. The first is to automate a dysfunctional organization, which will then dysfunction faster and on a larger scale; the second is to let the philosophy of the tool dictate a certain vision of operations and work because we have not first established our own intention (Taking back control of enterprise design: intention before tools).

AI is not deployed in a vacuum: it requires intention, a vision of business design and work.

Adoption as appropriation of work

I have always defended the idea that adoption is not use but appropriation. It is perfectly possible to use a tool without anything really changing in the way we work, in the trade-offs we make, or in the way we organize collective activity. Most frameworks confuse usage and adoption, as if simply using a tool or feature were enough to produce value.

For me, adoption is something else entirely: it is a new ability to exercise judgment, to decide, to conceive of one’s work differently, and I often refer to this as productive appropriation. It is the transformation of actions, decisions, flows, and responsibilities, not the addition of yet another tool to an already long list. Productive appropriation assumes that AI is an opportunity to rethink the way we view work, which is a prerequisite for transforming the way we do it.

AI, testing enterprise design

Ultimately, the adoption of AI is a revealing factor. If business design is clear in terms of intentions, relationship to work, work design, and consistency with the company’s identity and DNA, AI adoption methodologies naturally find their place; if not, they become a smokescreen.

The correct order would be:

  1. intention
  2. business design
  3. choice of methodology
  4. tools

And under no circumstances the other way around.

Bottom Line

AI holds up a mirror to businesses. The most mature ones see it as an opportunity to learn anew, starting with learning what they are in order to think about what they want to be. The others see themselves through their frameworks and believe that they will be able to adopt AI because their framework seems complete and serious. Methodologies are necessary, best practices are useful, and diagnostics are essential, but all of this is secondary to the central question: What do we want to become and how do we want to operate?

If you don’t know the answer to this question, then no method or platform will be able to guide you.

Adopting AI is not an end in itself, but rather a discipline that involves rethinking work before automating or “augmenting” it with technology.

To answer your questions…

How does AI help businesses clarify their identity?

AI forces organizations to re-examine who they are and what they want to become. It highlights their strengths, limitations, and operational inconsistencies. The most mature businesses use this opportunity to relearn, clarify their priorities, and align their evolution with a clear intention. This introspection precedes any technological approach, because AI only adds value when the strategic direction has been defined. For decision-makers, this clarification determines the relevance of future initiatives.

Why can’t methods and frameworks alone guide the adoption of AI?

Frameworks structure and secure an approach, but they remain secondary until the business has defined what it wants to become. Without a vision, these tools produce mechanical or decorative approaches that transform nothing. The article emphasizes that methodologies are useful but insufficient if the objective is not clear. They are no substitute for strategic intent or reflection on how to operate. For managers, the priority is to establish meaning before selecting a method.

What are the risks of adopting AI without a specific intention?

Adoption without intention leads to scattered projects, unnecessary complexity, and unproductive investments. The business risks aligning itself with solutions because they seem comprehensive, rather than because they meet a real need. The result is a superficial, sometimes costly transformation that does not change the way work is done. The article emphasizes that no platform or method can compensate for a lack of vision. For decision-makers, clarifying their ambition reduces these pitfalls.

How is adopting AI a discipline rather than a goal?

AI is not an end goal but rather an approach that involves rethinking work before automating it. This discipline requires examining processes, understanding the value created, and adjusting operating methods before any technological intervention. It requires observation, learning, and continuous reassessment. AI thus becomes a means of transforming work rather than an end in itself. For leaders, adopting this approach ensures a more coherent and sustainable evolution.

How do the most mature organizations approach AI integration?

The most mature businesses see AI as an opportunity to relearn and better understand how they operate. They do not rely on frameworks to legitimize their choices, but use AI to refine their strategy and strengthen their way of operating. This stance sets them apart: they link technology to a clear intention and a transformation of work. In this way, they avoid superficial approaches. For decision-makers, this maturity promotes more effective adoption.

Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)

Bertrand DUPERRIN
https://www.duperrin.com/english
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler