For two years now, everyone has been talking about artificial intelligence in business, which seems logical given the promise of this technology. But in reality, companies are still feeling their way, and the results are far from what was hoped for, which is not surprising either. Indeed, every technological wave generates its share of enthusiasm and tinkering, and then eventually produces its effects after a slow digestion period.
And since we read or hear just about everything and its opposite on the subject, I did some research to take stock of the situation as objectively as possible. No opinions or analysis this time (those will come in due course), just facts and figures.
In short:
- The adoption of AI in the workplace is widespread but often superficial, relying more on statements than on real process transformation or structured governance.
- Three levels of maturity coexist: experimentation, industrialization, and organizational transformation, the last being rare but essential for creating sustainable value.
- The effectiveness of AI depends on its integration into the logic of work rather than as a simple technological addition, involving a redesign of roles, processes, and responsibilities.
- Several debates structure the governance of AI: augmentation or substitution of human work, centralization or autonomy, internal development or external solutions, and the need to measure real impacts.
- The transformation linked to AI is above all managerial, revealing the corporate culture and the way in which it articulates control, responsibility, human competence, and long-term vision.
Widespread adoption, but mainly declarative
The figures couldn’t be clearer. According to McKinsey, 72% of businesses say they use AI in at least one function, and 65% are testing or exploiting generative uses. The majority cite marketing, customer relations, IT, or support functions (The State of AI in 2024).
In other words, AI is everywhere on the slides. But in processes, much less so.
The same report indicates that few businesses have redesigned their processes to integrate AI rather than grafting it onto existing ones, even though this is precisely where the difference between a gadget and a transformation lies.
On another level, data from the Microsoft Work Trend Index 2024 reveals that three-quarters of knowledge workers already use generative AI tools. But in 78% of cases, this use has not been framed or steered by the business (AI at Work Is Here. Now Comes the Hard Part). In other words, adoption is happening “from the bottom up,” often outside of any governance framework.
This phenomenon of BYOAI (Bring Your Own AI) is reminiscent of the early days of “shadow IT”: employees experiment to get things done faster, but at the same time create a patchwork of practices, risks, and inconsistencies.
The problem is not individual initiative, which is often beneficial, but the lack of a collective framework from businesses that are struggling to keep pace with their employees (Start measuring your AI Velocity Gap before your market measures it for you). Without integration or measurement, it is impossible to know whether AI is really improving performance or degrading how organizations function.
Contrasting practices
We often talk about “AI adoption” as if it were a homogeneous block. In reality, we need to distinguish between three levels of maturity:
- Opportunistic experimentation, focused on the tool: testing ChatGPT, Copilot, or an agent on customer support.
- Functional industrialization, where AI is integrated into a stable process (for example, to automate report writing or email filing).
- Organizational transformation, where work, roles, and decisions are redefined by integrating AI into the very logic of the business.
It is this third stage that creates value and remains, in 2025, very much in the minority.
Businesses that successfully make this transition share commonalities that come up consistently across the studies.
First, they treat AI as a matter of organizational design, not technology (How management let systems do the thinking for them and To manage is to design). Next, they start from the reality of the processes, not the promises of suppliers. They also have clear governance: an executive sponsor, a common data platform, legal safeguards, and explicit coordination between IT, legal, compliance, and HR.
Finally, and most importantly, they measure. One of the most interesting lessons from a McKinsey book on the subject is the direct correlation between economic impact and measurement discipline: simply tracking the performance of use cases doubles the chances of obtaining a concrete return (Rewired to Capture Value).
A neglected approach: treating AI as a work issue, not a technology issue
Productive AI (the kind that really changes things) is not a software layer but a reconfiguration of work. When a model writes, summarizes, or makes decisions, it is not performing an isolated task but redefining the way humans collaborate, control, learn, and arbitrate.
However, most businesses are content to simply “plug” AI into existing systems without questioning it. They add a chatbot to an already saturated customer service process, a text generator to an already pressured marketing department, or a code assistant without reviewing how teams design, test, and maintain systems.
The result: marginal gains, sometimes offset by a loss of understanding or skill.
The most advanced organizations have understood that AI is not an adjustment variable but an opportunity to redesign themselves (A poorly designed enterprise is illegible and incomprehensible to employees and customers).
They are rethinking value chains, how decisions are made, how data flows, and how humans remain in the loop.
It is at this level that we can talk about true adoption: when AI becomes a component of the organization and no longer a gimmick.
Fundamental debates
The widespread adoption of AI has not been accompanied by consensus, and there are still many debates surrounding governance issues.
First debate: augmentation vs. substitution.
Senior management talks about “complementarity”, but business plans expect immediate productivity gains. This discrepancy creates moral and operational tension, leading employees to feel that talk of augmenting the workforce sometimes masks a cost-cutting strategy (Technologies sell productivity, but businesses want revenue).
Yet trust is a prerequisite for adoption. AI that is perceived as a cost-cutting tool rather than a means of progress will be sabotaged by the rank and file before it is even mastered (Lean Without Layoffs: The Commitment That Makes Continuous Improvement Work).
Second debate: centralization vs. democratization.
Should access to AI tools be locked behind strict governance, or should everyone be allowed to experiment?
The issue is somewhat technical and has a lot to do with data privacy and security, but it is also political because it involves the power to decide how work is done.
Mature businesses are seeking a balance: a common platform for security and compliance, with supervised autonomy for experimentation.
Third debate: build vs. buy.
Build your own models or rely on the cloud giants?
Proprietary models guarantee control over data but are expensive, require rare skills, and expose you to the complexity of the European AI Act.
External solutions offer speed but increase dependence and raise the question of data sovereignty.
Fourth debate: measurement vs. belief.
The promised gains are rarely demonstrated.
The few robust studies show real improvements in productivity, but also side effects: excessive standardization, decreased vigilance, and cognitive dependence (Research: AI Boosts Worker Productivity, but There’s a Catch).
Without an evaluation mechanism, AI becomes more of a corporate religion than a performance lever.
The gray areas: what we still don’t know
In 2025, several questions remain unanswered, and they do not concern technology but the very meaning of work.
How can we measure the real impact?
We know how to calculate the time saved when writing an email, but much less so the effects on the quality of a judgment or the consistency of a decision.
AI can reduce the time spent on a task without improving the overall result, or even while degrading quality (Local optimum vs. global optimum and the theory of constraints: why your productivity gains sometimes serve no purpose).
How can we maintain human skills?
When young employees immediately rely on AI for basic tasks, they no longer acquire the reflexes of the profession.
The risk is not only job loss but also the loss of know-how, a form of obsolescence of judgment (Will AI replace juniors? The false debate that’s only the tip of the iceberg).
How can delegation be governed?
Agents capable of performing a series of autonomous actions in business systems pose a new problem: that of shared responsibility.
Who signs off on a hybrid decision? Who is responsible for a chain of automatic errors?
European guidelines on models with systemic risks outline some avenues, but their operational implementation is still in its infancy (AI models with systemic risks given pointers on how to comply with EU AI rules and AI Risk Management Framework).
How can fragmentation be avoided?
Between Copilot in the office suite, a recommendation engine in CRM, and a report generator in the HR tool, the business becomes an algorithmic patchwork.
Each of these tools acts without coordination, and we have organizations where several AIs coexist without communicating with each other (Digital workplace, AI, and interoperability: a problem that remains unresolved).
How can speed and responsibility be reconciled?
Senior management is under pressure to move quickly so as not to miss the boat, but regulation, compliance, security, and ethics cannot be reduced to a list of boxes to tick.
It is important to know when to say no, or at least not yet.
Above all, a managerial transformation
Despite its technological appearance, AI brings the business back to an eminently human subject: management. The way AI is adopted says a lot about the way the business is governed.
Rigid hierarchical organizations see it as a tool for automation and control, while learning organizations see it as a way to free up time and broaden the scope of discernment.
AI therefore becomes a culture indicator: is it used to monitor or to empower? To cut back or to invest?
As such, AI transformation is above all a test of managerial consistency. Leaders must choose whether they want AI for control or AI for trust.
The former reduces costs but destroys skills; the latter takes time but builds sustainable performance.
Bottom Line
There is something both inevitable and absurd about the race for AI.
Inevitable, because artificial intelligence will effectively reshape production, decision-making, communication, and customer relations, but absurd, because many businesses still believe they can master it without transforming themselves.
AI is not just “another technology” but a mirror that forces businesses to look at what they know how to do, what they claim to know how to do, what they delegate because they no longer know how to do it, and above all, how they do it and how they would like to do it.
The organizations that will come out on top will not be those that have automated everything, but those that have been able to choose what they refuse to automate. They will have accepted that speed does not replace vision, that productivity is only valuable if it serves quality or innovation, and that a data-driven business without judgment can become mechanically stupid.
Maturity in AI will not be measured by the number of models deployed, but by how we draw the line between what is done by machines and what is done by humans.
To answer your questions…
Many businesses use AI without rethinking their processes. They add tools to existing systems without any real transformation. The gains are therefore limited. The most advanced businesses treat AI as a lever for organizational design, rethinking roles, decisions, and information flow to derive real value from it.
BYOAI (Bring Your Own AI) refers to the spontaneous use of AI tools by employees without an established framework. This stimulates innovation but creates risks in terms of security, inconsistency, and loss of control. Without clear governance, the business cannot measure or supervise these practices.
We can distinguish between experimentation (testing tools such as ChatGPT), industrialization (integration into a stable process), and organizational transformation (redesigning work and roles). Only the last creates lasting value, but it remains in the minority.
The main debates focus on increasing or replacing labor, centralized or open governance, choosing between building or purchasing models, and the actual measurement of gains. These choices determine the culture and managerial consistency.
AI reveals the culture of governance. Some businesses use it to control, others to empower. Controlling AI destroys long-term competence, while trusted AI builds sustainable performance and a more learning-oriented organization.
Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)