As artificial intelligence finds its place on business agendas, debates on the subject are multiplying. Certain topics come up almost systematically, sometimes addressed from a technical angle, sometimes from a quasi-ideological angle, while others, though no less important, remain strangely absent from conversations.
That is why I think it is worth taking stock of the subject: knowing what we are talking about, from what angle, and what we avoid talking about explains to a large extent why certain transformations take hold or get bogged down.
In short:
- The debate between strategy and technology in the adoption of AI tends to ignore the concrete transformation of practices, roles, and decision-making processes within businesses.
- The question of centralization or autonomy of AI initiatives is often reduced to an organizational choice, without addressing the operational integration of AI and the responsibility for algorithmic decisions.
- Discussions on the transparency and control of AI systems often remain theoretical, as they focus on technical aspects rather than the concrete conditions of use and supervision.
- The opposition between speed of experimentation and operational discipline masks the main issue: the lack of mechanisms to transform exploration into stable practices.
- Fundamental topics such as organizational design, incentives for adoption, and the redefinition of human roles are rarely addressed, even though they determine the success of AI projects.
Strategy or technology
One of the most frequent debates is between a strategic approach to AI and a technological approach. Should we start with use cases or invest first in the technical foundations? Should we define a vision before building capabilities, or vice versa? This debate is recurrent, but it is often misguided because it assumes that there is an ideal sequence, whereas in reality, strategy and technology evolve in an interdependent manner within organizations.
This type of discussion has the advantage of structuring responsibilities and budgets, because it maps onto the way organizations are structured and make decisions. But it remains superficial: while it allows for trade-offs, it does not address how the business operates on a daily basis. By focusing on the order of priorities, we forget to ask how workflows, roles, and decision-making mechanisms will be transformed. This kind of debate makes us miss the essential point, and we are often caught off guard when the time comes to address it, because one day we will have to.
Centralization or autonomy
Another recurring debate concerns the degree of centralization of AI-related initiatives. Should a dedicated entity be created, skills be concentrated, and all projects be managed from a central location, or should business teams be allowed to experiment freely? This is a legitimate question that comes up every time we talk about adopting and disseminating a new technology and identifying its use cases, because it touches on governance, consistency of approach, and ultimately effectiveness. But it is too often approached as a binary choice, when in fact it covers a more complex reality.
The debate over centralization versus team autonomy focuses on a choice of structure, when this choice, in itself, does not address the issue of the operational use of AI. It does not say who decides whether or not to follow an algorithmic recommendation, who bears the consequences, or how the model can be improved over time, through experience and as the context evolves. By focusing on the organizational dimension, we forget to address the mechanisms by which AI is or is not integrated into operations.
Transparency and control
Issues of transparency, explainability, and control are also major topics in debates surrounding AI. They are often addressed from the perspective of risk, compliance, or acceptability. These topics are necessary and legitimate, but by focusing on the limitations of models, we forget to talk about their strengths and how they are used and integrated into decisions.
When transparency is treated as a purely technical issue, discussions become disconnected from actual uses. The key question is not whether a model is perfectly explainable, but in what situations it is used, with what level of supervision, and for what decisions. Without a clear framework on this point, the debate on control remains theoretical and does not help in designing organizations capable of using AI responsibly.
Speed versus discipline
Another divide is between fast experimentation and operational rigor. Some argue for maximum acceleration because they believe that learning comes through trial and error, while others insist on the need to secure, document, and stabilize before deploying. This opposition is often seen as a cultural issue, when in fact it primarily reflects a difficulty in balancing exploration and exploitation.
In practice, businesses that limit themselves to this debate alternate between phases of intense experimentation and periods of stagnation, without managing to transform trials into operating procedures. The problem is not speed, but the lack of mechanisms to move from exploration to industrialization, and until this issue is addressed, the discussion about pace will lead nowhere.
Missing debates
Conversely, certain topics that I consider essential are rarely addressed, or at least not explicitly. The issue of organizational design, for example, is often relegated to the background. Few businesses are asking themselves how AI will change the way teams cooperate, how responsibilities are distributed, or how decisions will be made in the future organization of work, even though these choices are crucial to the success of initiatives.
The issue of incentives is also rarely addressed head-on. Debates focus on tools, architectures, or skills, but much less on what really determines the adoption of AI in everyday life. Integrating AI into operations means accepting recommendations that may contradict established habits, sharing some control, and accepting results produced by a “black box.” As long as the mechanisms of accountability, evaluation, and arbitration remain aligned with the “old” way of organizing work, AI will remain peripheral even when it works. This largely explains why many initiatives remain confined to pilot projects that fail to take root in the daily lives of teams.
Finally, the place of humans is often discussed in terms of replacement or job loss, when the real issue lies in redefining roles, responsibilities, and how to preserve careers even when jobs and positions disappear (Is AI the new Lean?). This lack of debate on the subject contributes to misunderstandings and resistance, and undermines confidence in AI projects.
Bottom Line
Discussions about AI and so-called “AI First” businesses are rarely useless, but often insufficient. They allow obvious issues to be addressed, reassure people about certain risks, and structure short-term decisions, but too often they avoid the topics that deeply engage the organization. As long as businesses continue to debate technology, structure, or speed without questioning the design of their operations, their ambitions will remain fragile. The debates that are missing are precisely those that connect AI to the way the business operates, and this is where the difference between stated ambition and effective transformation lies.
To answer your questions…
This opposition assumes that a logical order must be chosen between strategic vision and technological construction. However, in businesses, these two dimensions evolve together. Focusing on this dilemma allows budgets or responsibilities to be decided, but above all avoids addressing the real impact of AI on daily work. By overlooking workflows, roles, and decisions, organizations are delaying true transformation, which weakens their AI initiatives.
The debate focuses on organizational structure rather than the practical use of AI. Centralizing or allowing autonomy says nothing about who follows an algorithmic recommendation, who bears responsibility for it, or how the system progresses with experience. By remaining at this level, businesses are missing out on the operational mechanisms that determine the actual integration of AI into daily activities.
Transparency and explainability are often addressed as technical or compliance issues. However, the main challenge is not whether a model is fully explainable, but rather understanding the contexts in which it is used, the level of supervision involved, and the decisions it is used to make. Without a clear link to actual usage, these debates remain theoretical and do not contribute to the responsible and effective use of AI.
This divide mainly reflects the difficulty of moving from experimentation to industrialization. Businesses alternate between fast testing phases and periods of stagnation, without transforming tests into stable practices. The problem is not going too fast or too slow, but the lack of mechanisms to convert learning into sustainable operating procedures.
Discussions often avoid organizational design, incentive systems, and the redefinition of human roles. As long as responsibilities, evaluation, and careers remain aligned with the previous organization, AI will remain marginal. The challenge is not only technological, but deeply human and organizational, conditioning the real adoption of AI solutions.
Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)