Now that the question of adopting AI in the workplace is on the table, the same debate arises as it historically has for every technology: should adoption start at the top, with strategic impetus, a framework, and a visible commitment from leadership, or should uses be allowed to emerge from the bottom up, trusting teams to find the right use cases on their own?
Framing the question this way has the merit of simplifying the problem: it suggests that all you need to do is pick a starting point and the rest will follow. Experience shows, however, that framed this way the question almost always leads to dead ends. Not because either approach is wrong, but because both avoid the heart of the matter: how uses do, or do not, become collective choices that bring value (Collective appropriation of AI: the only condition for tangible impact).
In short:
- Pitting a top-down approach against a bottom-up approach to AI adoption masks the real issue: how to transform local uses into collective practices that deliver value.
- The bottom-up approach stays grounded in the reality of the work, but without coordination or governance, the gains remain limited and scattered.
- The top-down approach makes it possible to set a course, but often produces projects that are disconnected from the real world, motivated by external pressure rather than value creation.
- The real challenge lies in collective ownership, through an intermediate zone where experimental uses become structural choices for the organization.
- The exemplary nature of leaders is not measured by their personal use of AI, but by their ability to arbitrate, to frame its use, and to make the strategic decisions its deployment requires.
The merits of a bottom-up approach to AI deployment
The idea of starting at the bottom rests on a widely shared intuition: adoption cannot be decreed and must start from the reality on the ground. Teams know where AI can save them time, improve the quality of what they produce, or reduce certain frictions. By letting uses emerge, the hope is to avoid deployments disconnected from reality and to anchor the technology in day-to-day operations.
In practice, this approach does generate uses and can even produce visible productivity gains at the individual or team level, but it quickly reaches its limits. Uses multiply without coordination, gains remain local, and the organization derives no measurable benefit at its own level. The time saved is absorbed, redistributed informally, or lost in the workings of the system (Without governance, the gains from AI are virtual).
Starting from the bottom up enables usage but is not always enough to produce productive adoption (AI adoption does not replace productive appropriation).
Finally, while this bottom-up approach avoids the trap of the “solution in search of a problem”, it can create the exact opposite situation, which is hardly more desirable: wanting to solve everything with AI. As the saying goes, when all you have is a hammer, every problem looks like a nail.
And practice confirms it. The consultants I have discussed this with, who specialize in supporting their clients’ AI projects, unanimously report that, in their experience, more than 50% of the use cases raised in the field are in fact problems that can (and even should) be solved either without AI or with tools already present in the organization.
Advantages of a top-down approach to AI
Conversely, starting at the top is supposed to avoid this dispersion. It means setting an intention, defining priorities, and giving initiatives a common meaning; in this logic, adoption is treated as a management issue, with clear objectives, indicators, and rules.
Again, the intention is commendable, but experience shows that this approach is far from perfect either. Frameworks are defined without a clear understanding of the actual work (Learn about Yoshida’s iceberg of ignorance, or what management refuses to see), and teams use the tools because they have been asked to, without this fundamentally changing the way they work.
Starting at the top provides direction but does not create ownership. The organization displays ambition without always giving itself the means to translate it into everyday work practices.
This approach can also spread panic and chaos where the intent was merely to create a sense of urgency. We now have enough feedback to say that many executives push AI projects “for the sake of AI”, under pressure from investors who understand the technology no better than they do. The result is a certain amount of disorder and poorly thought-out projects rushed into action, whose inevitable failure only feeds the skepticism surrounding the technology.
Several studies converge on this point: internal tensions, misalignment between CEO, CIO, and CISO, a lack of clear policies, and low ROI, even as external pressure (investors, competition, the narrative that “everyone is doing AI”) pushes businesses to multiply showcase projects. This is exactly the scenario of businesses that “do anything as long as it involves AI”. Dataiku, for example, estimates that 35% of AI initiatives are “AI washing”, i.e., projects designed to send a signal of innovation rather than to create business value (Avoid AI Agent Washing: 4 CEO Priorities for Signal Over Noise).
Meanwhile, 83% of CEOs believe that investors now factor AI strategy and execution into their decisions, which drives a proliferation of visible but superficial projects (Why AI Demands Have 74% Of CEOs Fearing For Their Jobs).
A false dilemma
Pitting the top against the bottom actually masks what is lacking in most businesses: the middle ground where practices should become established (Stabilize to move forward: why experimentation alone does not produce results with AI).
Neither the bottom-up nor the top-down approach addresses this issue properly. The former sidesteps it, hoping that things will work themselves out; the latter glosses over it, assuming that the framework will suffice. In both cases, adoption remains incomplete and the value fails to materialize.
The exemplary nature of leaders is often overestimated
This is where the question of leaders setting an example often arises. Should leaders themselves use AI to lead the way, publicly displaying their own use of it to carry the rest of the organization along?
Framed this way, the impact of setting an example is rather overestimated. A leader can use AI daily without it changing anything in the work of their teams; conversely, teams can adopt tools their leaders never touch. Leading by example through use is, at best, symbolic and, at worst, a publicity stunt. After all, the fact that managers once had their emails printed out and typed up by their secretaries did not prevent email from being deployed.
So that’s not the issue.
Where exemplary leadership really matters is not in usage but in arbitration. A leader is exemplary when they agree to deal with the consequences of emerging uses, when they decide what to do with the gains obtained, or when they accept that not everything that is possible will be explored or perpetuated.
Setting an example is not about showing how to use AI, but about showing how to decide when AI changes the rules of the game, and how to keep control of the technology rather than letting it dictate the game (If your business isn’t designed for AI, it will end up being designed by AI).
You might object that I said exactly the opposite when I wrote at length about collaborative tools, but the subject is not the same at all. Those tools require uniformity within a team: if your boss does not use social networks or messaging and asks you to duplicate everything by email, you will quickly run out of steam. AI adoption, by contrast, begins with individual use; value creation is a collective phenomenon, but not necessarily within a team with a hierarchical dimension. Rather, it occurs along a workflow with a more operational and often cross-functional dimension.
So yes, psychological safety still plays a role, but it comes more from local management than from senior executives. Of course, this safety ultimately trickles down from the top, but in the specific case of AI, the role of senior executives should not be overestimated, as the many cases of Shadow AI demonstrate.
Adoption, productivity, and accountability
The question of whether to start at the top or the bottom is too often a way of avoiding the issue of accountability. Productive adoption requires someone to take responsibility for the moment when usage ceases to be an experiment and becomes a structural choice.
Otherwise, productivity may increase locally without ever producing a measurable effect at the business level, not because the potential value does not exist, but because no decision is made on how to move from potential to reality.
Bottom line
The adoption of AI does not begin at the top or at the bottom, but when the organization manages to make the connection between what is happening on the ground and what is being decided at the top. Until this connection is made, uses will multiply without producing collective value.
The exemplary nature of leaders is not determined by their ability to use tools, but by their ability to arbitrate and influence governance.
The question is not whether to start at the top or the bottom, but what to do to bring the two approaches together in a constructive way.
To answer your questions…
Pitting top-down against bottom-up is a false dilemma. Bottom-up adoption leads to concrete but scattered uses, with no collective impact. Top-down impetus provides direction but often remains disconnected from the actual work. In both cases, value is not fully realized. The real challenge is to link practical uses in the field to strategic decisions in order to transform isolated initiatives into collective practices that create value.
The bottom-up approach allows teams to find useful applications for AI and generate local gains. However, without coordination or governance, these gains remain invisible at the business level. Uses multiply without consistency, and a significant proportion of the cases identified do not actually require AI. The usage exists, but it does not translate into productive adoption or lasting organizational transformation.
A top-down strategy allows priorities to be set and clear ambitions to be stated, but it is often designed without detailed knowledge of the actual work involved. Teams then use AI out of obligation, without making any fundamental changes to their practices. Under external pressure, this can lead to poorly thought-out showcase projects, generating chaos, low ROI, and internal skepticism rather than tangible value creation.
Leading by example is greatly overrated. The fact that a leader uses AI does not automatically mean that the organization will follow suit. True leadership lies in the ability to arbitrate: deciding which uses should be perpetuated, how to exploit the gains obtained, and what limits to set. It is these decisions, rather than personal use of the tools, that shape the effective adoption of AI.
Productive adoption depends on the ability to transform experimental uses into conscious collective choices. This implies clear responsibility when AI ceases to be a test and becomes structural. Until this transition is decided, gains remain local and diffuse. Value emerges when the organization makes the link between the field and governance, and takes responsibility for the resulting decisions.
Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)