At the start of this year, two reports were published that offer seemingly contrasting views of the state of artificial intelligence in organizations. What makes them notable, after my analysis of the market from the perspective of major consulting firms (Adoption and impact of AI: lessons (and limitations) from the latest McKinsey and BCG studies), is that they come from two major vendors in the market, namely Anthropic and OpenAI.
So I will repeat the warning I gave in my previous article: we are dealing with players who have an interest in presenting things in a positive light, and even if we are talking about a snapshot at a specific moment in time, the perspectives expressed are more about prediction (and therefore marketing) than about (rational) forecasting (AGI, employment, productivity: the great bluff of AI predictions). More generally, and this applies to all such reports: before reading one, ask yourself what its author is selling, and you will usually have a clear idea of the content without even opening it.
On the one hand, we have Anthropic’s (Claude’s vendor) The 2026 State of AI Agents Report, which describes the quick and already productive adoption of AI agents in businesses, with uses in production, measured returns on investment, and a gradual extension to cross-functional processes.
On the other hand, we have OpenAI’s Ending the Capability Overhang, which points to a growing gap between what AI systems can do and how they are actually used, both by individuals and by organizations.
Taken in isolation, these two diagnoses may seem contradictory, but together they offer a coherent interpretation, provided we understand that they are not talking about the same subject and do not have the same level of analysis.
In short:
- The reports from Anthropic and OpenAI offer complementary perspectives on the integration of AI in business, with one focusing on successful operational uses and the other on the gap between technical capabilities and actual practices.
- Anthropic’s report shows growing and productive adoption of AI agents in engaged organizations, particularly in software development, with measurable returns on investment already being seen.
- The OpenAI report highlights a “capability overhang,” emphasizing that users make little use of advanced AI capabilities, despite widespread access to tools.
- Both analyses agree that the barriers are primarily organizational and that the value of AI depends on its integration into structured, multi-step processes.
- The divergence between the two reports can be explained by their methodologies, their focus (engaged businesses vs. mass usage), and their position in the ecosystem, reflecting two angles of the same structural issue.
For Anthropic, agentic AI is an operational reality
Anthropic’s report is clearly based on an organizational and operational approach. It draws on a survey conducted at the end of 2025 among more than 500 technical decision-makers and documents concrete uses of AI agents in production.
The first finding is that the experimental stage is over. Agents are no longer confined to conversational assistants or simple automations: more than half of the organizations surveyed already report using them for multi-step workflows, with a significant proportion extending them to cross-functional processes spanning several business functions (Collective appropriation of AI: the only condition for tangible impact and AI adoption does not replace productive appropriation).
The most mature use remains software development, where agents are involved in the entire production cycle, from ideation to testing and documentation. But the report emphasizes the quick expansion into other areas: data analysis, reporting, internal process automation, customer service, finance, supply chain, and even HR functions.
Another interesting point is the question of return on investment. Contrary to much of the still speculative discourse on AI, Anthropic points out that the majority of organizations surveyed already report measurable gains, whether in terms of productivity gains, cost reductions, or improved quality of deliverables.
Finally, the report does not deny the difficulties. The main obstacles identified are structural: integration with existing systems, data quality and accessibility, change management, and team skills development. Agentic AI appears less as a product to be deployed than as a capability to be integrated into an existing work system.
For OpenAI, a growing gap between capabilities and uses
The OpenAI report takes a radically different perspective. It does not focus primarily on pioneering organizations, but on all uses observed on a large scale, based on aggregated data from hundreds of millions of users around the world.
Its central concept is that of the “capability overhang,” defined as the gap between what AI systems can do and what users actually do with them. This gap continues to widen as capabilities advance faster than their adoption.
The report shows that the most advanced users exploit several times more cognitive capabilities than the median user, and that comparable gaps exist between countries. Productivity gains are strongly correlated with the use of advanced features, such as reasoning, data analysis, or multi-step workflows, which remain underutilized by a large proportion of users, including in professional contexts.
An important point in the report is the distinction between access and agency. Access to tools is now widespread, but the impact depends on the ability to integrate them into real, repeated, and economically useful work practices (Prepare the business and work before integrating AI). Without this agency, AI remains a tool for consultation or occasional assistance, far from its productive potential.
The bottom line is simple: without a deliberate effort to train users, design uses, and integrate AI into the organization, it risks amplifying existing disparities rather than producing broad and sustainable gains.
Do the two reports talk about the same thing?
Yes and no.
Yes, in that they both describe the same phenomenon, namely the difficulty of transforming technological capabilities into operational value.
No, because they observe it at radically different levels. The Anthropic report focuses on organizations already engaged in a deliberate process of AI integration, while the OpenAI report takes a much broader perspective, including opportunistic, unstructured, and sometimes unprofessional uses.
These are not opposing diagnoses, but different slices of the same reality.
The same underlying diagnosis
Despite their differences, the two reports converge on several key points.
First, neither identifies technology as the main limiting factor. The obstacles identified are organizational, cultural, and operational, whether in terms of process design, governance, skills, or the ability to drive change (The great illusion of technological productivity gains (including AI)).
Second, both show that value emerges when AI is used beyond simple conversational applications, in structured, multi-step workflows that are integrated with existing systems.
Finally, both highlight a risk of polarization. Those who manage to exploit AI in depth capture a disproportionate share of the value, while others remain confined to superficial uses, with a gap that tends to widen over time.
Different levels of analysis and responsibility
The differences mainly relate to the scale and role of each player.
Anthropic takes a supportive stance toward businesses and highlights success stories that are already visible. In its framing, progress is incremental, organizational, and controllable.
OpenAI takes a more systemic approach. Progress is technological and exponential, but remains at the potential stage. The report implicitly points to broader responsibilities: training, public policy, and the design of educational and organizational systems.
This difference in focus explains why one describes a dynamic of success while the other emphasizes a worrying delay.
Why such a discrepancy?
The difference in approach can be explained first and foremost by the position of each player in the ecosystem. Anthropic observes intentional, thoughtful, and controlled uses (AI from productivity to P&L: nothing happens by chance), while OpenAI observes spontaneous, large-scale, and heterogeneous uses.
It can also be explained by the nature of the data used. A survey of engaged technical decision-makers does not tell the same story as behavioral data from hundreds of millions of users. And I would add that these decision-makers, for obvious reasons, are unlikely to say that it isn’t working.
Finally, it can be explained by the responsibility highlighted in each report. One shows what works when AI is designed as a component of the work system, and the other shows what happens when this design is lacking.
Bottom line
Taken together, these two reports describe less a contradiction than a structural problem. AI creates value when it is integrated into a work context designed for it. In the absence of such a context, its capabilities remain largely untapped.
AI is not lacking in power and is widely accessible, but in most organizations, it still lacks a work design and uses that are commensurate with what it now makes possible (Taking back control of enterprise design: intention before tools and Enterprise design before architecture: putting the company back the right way up).
What do you think? Is this a realistic picture of the situation, or a prophecy that these financially precarious players hope will come true (Generative AI: a bubble, a crash, or a turning point?)?
To answer your questions…
No, they analyze the situation at different levels. Anthropic observes organizations already engaged in the operational integration of AI, while OpenAI takes a global view of actual large-scale uses. One shows what works when AI is intentionally integrated, while the other highlights a massive underuse of available capabilities. Together, they describe the same challenge: transforming technological potential into concrete value.
Anthropic’s report describes agentic AI already in production in many organizations. Agents are used in multi-step workflows, particularly in software development, but also in data, finance, and customer service. The majority of businesses surveyed report measurable gains. The obstacles identified are mainly organizational: integration with systems, data, and change management.
Capability overhang refers to the gap between what AI systems are capable of doing and what users actually use them for. OpenAI shows that advanced features remain largely underutilized, even though they are the ones that generate the most productivity gains. This gap tends to widen as capabilities advance faster than their adoption.
Access to tools is not enough without the ability to integrate them into actual work practices. Without training, usage design, and integration into processes, AI remains a one-off tool. Both reports converge on this point: the limitations are cultural and organizational rather than technological.
Both reports show that AI creates value when it is integrated into a work system designed for it. Otherwise, its capabilities remain largely untapped. The real challenge is therefore not access to technology, but the design of organizations, processes, and skills that enable it to have a lasting impact.
Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)