Generative AI: a bubble, a crash, or a turning point?


Last March, I wondered whether generative AI was a lasting revolution or a speculative bubble (AI heading for an economic dead end?). I expressed cautious optimism, though not blind confidence. I could see the gray areas and the first signs of abuse, and I wondered above all whether market players would be able to demonstrate enough value to pass their costs on to their customers and become profitable. But I was fairly confident that, over time, we would find solid uses and build viable business models.

But a few months later, the landscape has changed. Not much, but enough for me to take a fresh look at the subject.

No, generative AI has not failed. It is even progressing: models are improving and the field of applications is expanding. But something has broken in the narrative, and doubt is setting in.

Doubts about the promises, the real impact, and the sector’s ability to deliver on its commitments, particularly technical and economic ones. And it is no longer just a matter of natural caution in the face of a still young technology, but of recognizing that a certain threshold of unsustainability may already have been crossed.

What was presented to us as a linear growth trajectory now looks closer to overheating, fueled less by delivered results than by the expectation of future ones. We are investing heavily in a technology without ensuring that it meets the minimum conditions for sustainability and, in the absence of a clear model, belief has become its main fuel (AGI, employment, productivity: the great bluff of AI predictions).

Today, the gap between what generative AI costs, what it promises, and what it actually delivers is widening, and the entire ecosystem keeps moving forward, as has always happened in similar circumstances, hoping that someone else will find the answer before the reality principle catches up.

This is not a doom-and-gloom prediction, because deep down I refuse to believe that it won’t work. It is rather a rational reading of the weak signals which, taken together, should teach us to temper our expectations: even if, as I hope, the sector does not collapse, it will not avoid a reconfiguration.

In short:

  • Generative AI is advancing technically but faces structural economic limitations: high training and usage costs, difficulty monetizing, low user retention, and dependence on tech giants for distribution.
  • The economic model relies more on expectations and hype than on actual value captured, creating a speculative dynamic similar to the internet bubble of the 2000s.
  • Established players (Microsoft, Google, Amazon) are integrating AI into their existing ecosystems without a clear business model, while pure players such as OpenAI and Anthropic are struggling to balance their finances.
  • A gradual realignment is underway: projects are being streamlined, budgets reduced, and focus shifted to targeted and industrial uses, to the detriment of global transformation ambitions.
  • Generative AI is entering a phase of commoditization and integration, becoming a business optimization tool rather than a driver of economic or technological disruption.

First and foremost, I would like to clarify once again that we are talking about generative AI. There are many types of AI that work very well (AI for dummies who want to see a little more clearly), with proven ROI, that are profitable for the entire value chain, some of which have been used for years without anyone even realizing it. The only question they raise is whether we risk throwing the baby out with the bathwater.

Foundations more fragile than we wanted to believe

Despite enthusiastic projections that are not based on any solid methodology (see above) and breathtaking demonstrations that publishers have mastered, it is becoming increasingly clear that the economic model of generative AI is flawed. Not only because it is young and immature, but because it is based on assumptions where costs, value, and revenues do not align. This imbalance, which is unfortunately structural, is being pointed out by more and more observers.

Let’s start with the costs. Training a model like GPT-4 is said to have cost more than $100 million (The Extreme Cost Of Training AI Models), but it is mainly inference, i.e., each user request, that remains expensive, ranging from $0.01 to $0.10 (How much does GPT-4 cost?) depending on its complexity. Unlike SaaS models or services financed by online advertising, the more the service is used, the more it costs the provider, with no automatic economies of scale.

On the other hand, the ability to charge users and therefore pass on the costs to them is very limited. The general public is limited to offers at $20 per month such as ChatGPT Plus, and businesses, for their part, find it difficult to justify high costs for gains that are often difficult to measure. According to an IBM study, only 25% of AI projects currently achieve profitability targets (Will genAI businesses crash and burn?). Most of the gains, when there are any, come from cost reductions and highly targeted automation, which also explains why low-margin sectors cannot afford to invest heavily in AI (The disconnect between AI spend and potential).
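A back-of-the-envelope sketch, using the rough figures above ($20 per month on the revenue side, $0.01 to $0.10 per query on the cost side), shows how quickly heavy usage inverts the margin. The numbers are illustrative only, not vendor-confirmed:

```python
# Back-of-the-envelope unit economics for a flat $20/month subscription,
# using the per-query inference cost range cited above ($0.01-$0.10).
# Working in integer cents avoids floating-point rounding surprises.

PRICE_CENTS = 2000  # $20/month subscription, in cents

def breakeven_queries(cost_cents_per_query: int) -> int:
    """Queries per month at which inference cost consumes the whole subscription."""
    return PRICE_CENTS // cost_cents_per_query

for cost in (1, 5, 10):  # $0.01, $0.05, $0.10 per query
    print(f"at {cost}c/query, the provider loses money beyond "
          f"{breakeven_queries(cost)} queries per month")
```

A power user sending a few hundred complex queries a month can already cost more than they pay, which is the inverse of SaaS economics, where the marginal user is nearly free.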

Added to this is deflationary pressure from the open source world, with models such as Mistral, LLaMA, and Phi that can be deployed locally at low cost and with very competitive performance. This movement may still be in the minority, but it is driving prices down. Businesses that can internalize an open source model have no reason to pay a high price for a variable-cost API. The result is predictable: the unit price of a token falls, without the infrastructure costs of private players following the same trend.

The balance of power is therefore shifting. While generative AI was thought to be capable of disintermediating the tech giants, it is now being absorbed into platforms that control its distribution. Google is integrating Gemini into Search, Gmail, and Android, Microsoft is imposing Copilot in Windows and Office 365, and Amazon is including its AI building blocks in AWS. Faced with this, new players have neither the interface, nor the platform, nor the installed base, and depend on those who control the entry points.

This is clearly evident with OpenAI, which, despite having popularized the concept of conversational agents with ChatGPT, is almost entirely dependent on Microsoft for its cloud (Azure), distribution (Copilot), and even technical support for businesses. Microsoft bills, Microsoft embeds, Microsoft supervises, and in this scheme, OpenAI is just an engine.

An engine that is not even captive. Users can easily go elsewhere. From one model to another, friction is low, usage is interchangeable and, unlike the large historical platforms, generative AI has no structural lock-in. No social network, no closed ecosystem, no cross-dependencies.

What’s more, “cognitive loyalty” is very low: usage statistics show that most users exploit these AIs for tasks such as writing messages, managing calendars, or generating content (memos, emails), which involve ad hoc assistance or optimization of individual tasks rather than structural transformation of workflows (AI Assistant Statistics 2025: How AI is Transforming Workflows and Productivity). In addition, individuals switch between different AI systems to try them out, subscribing and canceling their subscriptions with each experiment, which means that revenue forecasts are no longer very meaningful (The ARR no longer says much about the health of a startup).

The result: models bear the costs but capture neither sustainable usage nor recurring revenue. Just as in the gold rush, when the only ones who made money were the pickaxe sellers, the only winner in this story is Nvidia, whose record margins (over 75% on AI-dedicated GPUs) are now financed by an economy that is still unable to prove its viability (Big Tech’s AI spending boom increases risk of a bust).

A structurally unprofitable model

The financial losses of AI players are presented as a cyclical issue, suggesting that we need to give the market time to mature, for uses to become established, and for investments to translate into revenue. And it is true that this is how things have always worked in the tech world, but in the case of AI, there are grounds for doubt because the problem is not cyclical but structural.

Once again, we are often given the example of Google or Amazon, but these businesses did not see their costs increase in proportion to usage; in fact, the opposite was true.

Just because it worked for one type of business and technology does not mean it will work for everyone.

Large language models (LLMs) are not platforms. They are computational infrastructure that consumes enormous amounts of energy during training and use. GPT-4, Claude 3, and Gemini 1.5 are not comparable to a search engine or cloud software: their marginal cost does not decrease with volume; worse, it increases. Each additional user, each query, each thousand tokens has a significant energy and hardware cost (There Is No AI Revolution).

The problem is that revenues are not keeping pace. OpenAI is expected to generate around $4 billion in 2025, but at the same time, the company is expected to see its expenses grow to around $9 billion without managing to balance its model (Will genAI businesses crash and burn?). And as for the future, analysts are concerned that despite revenue growth, the business will continue to see its expenses grow proportionally, especially given that the vast majority of users pay nothing (OpenAI’s profit trajectory is an open question).

Anthropic, backed by Amazon and Google, is in a similar situation. It is valued at around $15 billion for, according to sources, less than $150 million in annual revenue. Again, the multiples are typical of a speculative bet and not of a structurally viable business.

Let’s be clear: every new user, every query, does not bring these companies closer to profitability but contributes to deepening their losses.

At the same time, revenues are largely captured by the wrappers I told you about recently (Wrappers, deeptechs, and generative AI: a profitable but fragile house of cards). The majority of OpenAI’s revenue comes from direct subscriptions to ChatGPT (Plus, Teams, Business, etc.), representing more than 70% of revenue, while the sale of API access (used by integrators such as Notion, Canva, Copilot, Salesforce, etc.) represents only about 15 to 20% of the total. This means that the value generated by uses integrated into other tools mainly goes back to these end platforms, not to OpenAI itself (OpenAI Is A Bad Business), especially since API access is often sold at a loss (The Subprime AI Crisis) to stimulate usage, which is still struggling to take off.

Even when AI players are present in the final products, they have no control over pricing, distribution, or customer relations. OpenAI in Copilot, Claude in Notion, Gemini in Gmail: in each case, AI is integrated but invisible. It is Microsoft, Google, and Amazon that market, bill, build customer loyalty, capture the added value, and can change AI suppliers as they see fit.

And what these same wrappers are discovering today is that profitability is no more obvious on their side. Copilot, which was supposed to be Microsoft’s show of strength in augmented productivity, is struggling to gain traction: 60% of businesses have tested Copilot, but only 16% have moved on to the deployment phase (How to get Microsoft 365 Copilot beyond the pilot stage). Many organizations start by purchasing a limited number of licenses, often to test Copilot on a few pilot teams, and then hesitate to roll it out more widely due to a lack of convincing feedback.

Whether it’s Microsoft or Salesforce, the revenue generated by AI is low and the projections are not very encouraging (Reality Check). But Microsoft and its peers have a considerable advantage over OpenAI and the like: in addition to owning the customer relationship and controlling distribution for some of the pure players, they are not single-product companies. They have cash cows that will enable them to finance their AI efforts for years to come, until the market matures, time that others will not have.

Today, 66% of pilots do not go into production due to immaturity and ROI (88% of AI pilots fail to reach production — but that’s not all on IT), and only 25% of AI initiatives have generated the expected return on investment in recent years, with only 16% deployed across the business (IBM Study: CEOs Double Down on AI While Navigating Enterprise Hurdles).

In this context, costs continue to rise, and business customers are growing weary of testing tools that struggle to demonstrate economic impact because the promised productivity gains are not materializing (Workday CEO: “For all the dollars that’s been invested so far, we have yet to realize the full promise of AI”), with investments growing but net output per worker not increasing proportionally (AI’s productivity paradox: how it might unfold more slowly than we think).

The result: the models are expensive, the perceived value remains unclear, user loyalty is low, industrial profitability is uncertain… and the only ones making net margins in this system are the infrastructure vendors, namely Nvidia, of course, but also Microsoft, Amazon, and Google, not thanks to AI, but thanks to the cloud, bandwidth, and GPUs. It’s a logic of rent on material dependence, not on software value.

Let’s be clear: I’m not saying that AI has no value or contributes nothing; I am deeply convinced of the opposite. I am simply stating that, given the perceived benefits, businesses and individual users are not willing to pay the price that would allow AI providers to become profitable one day, when part of that price is captured by intermediaries.

There is an undeniable asymmetry between the benefits of AI and the investments required to obtain them, and the structural nature of this asymmetry could well lead to a dead end.

A bubble sustained by beliefs but not by facts

This isn’t the first time that tech has been driven more by belief than by results. Nor is it the first time that an entire sector has bet on a promise without checking whether the economic conditions are in place to deliver it. But in the case of generative AI, the gap between expectations and reality (AGI, employment, productivity: the great bluff of AI predictions) is becoming difficult to ignore.

The figures speak for themselves: valuation multiples unrelated to revenue, massive funding rounds based on unverifiable projections, and constant pressure to fuel growth that is still theoretical.

Anthropic is a prime example: an estimated valuation of $15 billion for just $100-150 million in annual revenue. It is a heavily funded organization, but its model still relies heavily on conditional funding and integration agreements with giants such as Amazon, Google, Salesforce, and Zoom.

OpenAI, for its part, is a global showcase… but continues to lose several billion dollars a year, despite the spectacular adoption of ChatGPT. As for the $3 billion in revenue it hopes to generate from its agents in 2025, it appears that this will come from a single customer, namely Softbank, which happens to be a shareholder in OpenAI (Reality Check). It’s a bit like your bank buying your products to make the world believe that you’re doing well, and on top of that, this activity will probably be loss-making too.

The parallel with the internet bubble of the early 2000s is not unreasonable. Generative AI is funded on promises of the future, not on solid assets (The Dot-Com Bubble vs. The AI Boom: Lessons for Today’s Market), the network effect is weak, customer loyalty is uncertain, and dependence on external capital is extreme, although I do see notable differences such as “clear and often very B2B use cases, known and proven business models with a clear way of generating revenue (albeit insufficient) and, above all, government support for the sector.” (AI heading for an economic dead end?).

What fuels this dynamic is a well-known mechanism in venture capital: FOMO (fear of missing out). No one wants to miss out on the next Google. The result is a race for valuation where revenue models matter little, as long as the apparent trajectory is exponential.

What is sustaining the AI bubble today is not the value delivered by the products, but the narrative of an inevitable disruption (Is the AI Revolution Already Losing Steam?), a technological shift that must be believed in even before it materializes.

This dynamic is reinforced by the fact that the major players have every interest in perpetuating this illusion of inevitable change. Microsoft, Amazon, and Google are all massively integrating AI features into their products without any visible price increases. Not because they are immediately profitable, but because they strengthen their grip on ecosystems and suggest that they will be the ones calling the shots.

The idea of a transformation underway is being sold, when in most cases it is just cosmetic packaging or clever rebranding.

And even among players who had engaged in concrete AI applications with well-thought-out use cases, limitations are emerging. The case of Klarna is interesting in this regard: after announcing with great fanfare that it was automating part of its customer support with AI agents, the company had to admit that the results were neither as replicable nor as transformative as expected and that, while they had calculated the gains, they had underestimated what they had to lose ([FR]Klarna shows us the limits of AI agents).

We are gradually seeing the tide turning. Projects are being put on hold, roadmaps are being revised, and deployment plans are slowing down. According to The Times, several giant data center projects planned to absorb the AI wave have been suspended or scaled back as early as the first quarter of 2025 (Big Tech’s $340bn AI spending boom increases risk of a bust), such as at Amazon (Amazon has halted some data center leasing talks, Wells Fargo analysts say), and this appears to be a general trend in the sector.

Finally, even among end users, the phenomenon of cognitive fatigue is beginning to set in. The initial “wow” effect is wearing off. ChatGPT, Claude, and Copilot are still being used, but less to transform than to assist. They have become ad hoc tools, not agents of transformation.

AI therefore seems to have reached the peak of its promise, without reality following suit (AI’s productivity paradox: how it might unfold more slowly than we think).

In other words, everyone is still playing, but no one is really looking at the scoreboard anymore.

What next? Burst or landing?

Technology bubbles don’t always end in a burst. Sometimes they deflate slowly and quietly, the hype subsides, promises diminish, projects are scaled back, and in the end, what remains is a more modest but sometimes healthier infrastructure.

Since the crash scenario is not difficult to understand or explain, let’s look at what I consider the more credible one.

2025-2026: a very discreet reversal

Nothing alarming, but faced with doubts, criticism, and the fading “magic” effect, the market is slowly beginning to shrink. AI budgets are gradually being reduced, CIOs are stopping the proliferation of pilot projects that lead nowhere (88% of AI pilots fail to reach production — but that’s not all on IT), but are learning lessons from them, and finance departments are starting to be strict about ROI.

On the user side, the magic is wearing off. Consumer use of ChatGPT and its ilk is plateauing. People are starting to talk about fatigue, trivialization, and even cognitive saturation. AI is becoming a little tiresome and disappointing, and in the end, its invasive marketing is working against it.

The slowdown in data center construction projects is the news that is changing the AI landscape: while the marketing departments of the major players say that everything is and will be wonderful, their own investment decisions suggest that they do not see demand keeping pace.

2026-2027: a silent purge

This is the year of realignment. Startups disappear or are bought out, but this is nothing like the dot-com crash. It is discreet, and some even say that the “exits” are quite good. This is light years away from the expectations of 2024, but ultimately very reasonable. It proves that it is not AI that is not working, but that we expected too much too quickly.

The survivors are reorganizing and streamlining their investments.

At the same time, the big platforms are regaining control. OpenAI is becoming increasingly integrated with Microsoft and is abandoning its dream of becoming the new Google (Does OpenAI want to, should it, and can it become the new Google?), while Anthropic is merging into Amazon’s cloud offerings.

Models are becoming invisible: they run in the background, integrated into existing products, no longer bearing their own name. Invisible and interchangeable, they are commodities.

Once again, having multiple product lines is what allows you to survive in the long term. OpenAI and its peers were condemned to short-term survival; investors’ initial hesitation undermined their independence, but that was the price they had to pay to stay alive.

In businesses, we no longer talk about AI transformation. We talk about incremental improvement, productivity aids, and document assistance. AI is becoming just another building block.

On the financing side, the tone is also changing: investors are closing the taps, which is partly what has accelerated this phenomenon.

2027–2028: reconstruction in sobriety

Gradually, the lines are stabilizing.

Those who survive are starting over on a different footing.

Smaller, more sober, open source, and specialized models, such as those developed by Mistral or Microsoft’s Phi series; vertical integrations, with AI embedded in business processes; and new business models, no longer based on tokenized usage but on the business value created: for example, generating a qualified customer response, a structured contract, or a validated summary report.

Payment is based on action or even results, and the model attempted by Salesforce a few years earlier is becoming the norm (Agentforce Pricing Update: Salesforce Announces Major Changes).
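To make the contrast concrete, here is a minimal sketch of the two billing schemes; the rates and the resolution scenario are invented for illustration, not drawn from any vendor’s price list:

```python
# Hypothetical comparison of metered (per-token) billing vs. the
# outcome-based billing described above. All rates are assumptions.

TOKEN_RATE_CENTS = 0.001   # assumed: $0.01 per 1,000 tokens
OUTCOME_RATE_CENTS = 50    # assumed: $0.50 per resolved customer request

def token_billing_cents(tokens_used: int) -> float:
    """Customer pays for raw usage, whether or not it produced a result."""
    return tokens_used * TOKEN_RATE_CENTS

def outcome_billing_cents(resolved_requests: int) -> int:
    """Customer pays only for delivered outcomes; the provider absorbs
    the token cost of failed or inefficient attempts."""
    return resolved_requests * OUTCOME_RATE_CENTS

# Scenario: an agent burns 2,000,000 tokens to resolve 100 requests.
print(token_billing_cents(2_000_000) / 100)   # dollars billed under metering
print(outcome_billing_cents(100) / 100)       # dollars billed per outcome
```

The design difference is who carries the efficiency risk: per-token pricing passes the model’s verbosity and retries straight to the customer, while outcome-based pricing forces the provider to make each resolution cheap enough to be profitable.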

AI is no longer a disruptive technology but a tool, and this is where its second life begins.

Those who never chased the dream but focused on components continue to thrive: Nvidia, of course, but also cloud providers, middleware publishers, and specialized integrators.

Generative AI may have been nothing more than a technological transition to something else.

Bottom line

The problem with generative AI is not its potential or the benefits it brings, but the fact that no one is willing to pay what it costs, especially since technology providers are not the ones capturing most of the market’s revenue.

The problem is therefore not the technology, but the narrative surrounding it. A narrative that presents it as inevitable, based on the idea that AI was going to change everything and the belief that all this would create a new market and new sources of income.

The price to pay will be too high for many customers, and the income will probably not materialize, at least for pure players.

The discourse on AI is correct in terms of the trend but, as we have seen, is not based on any figures (AGI, employment, productivity: the great bluff of AI predictions), and this has led to exaggerated expectations that are gradually undermining confidence in the sector.

The benefits are slow to materialize and may never come, costs are rising, and the narrative is wearing thin. This is not unique to AI; it is the history of the tech world, except that with AI everything is faster and more amplified.

And as always, we will bounce back to something less flashy but healthier and more impactful in the very long term. We are not at the end of AI, but we are reaching the end of the experimentation and learning period.

What will remain are more sober models, more targeted uses, and deeper business integrations, but all in a virtually invisible way. AI will no longer be a spectacle, but will deliver in the utmost discretion.

Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)

Bertrand DUPERRIN
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler