You might say that the collapse of a so-called artificial intelligence bubble is not some hypothetical futuristic scenario, and that the question is not “what if?” but “when?” But for now, the AI sector is here to stay, has an undeniable impact on the economy, and if there is a bubble, it has not yet burst.
But let’s imagine that…
Prologue
It is still difficult to pinpoint the exact moment when everything changed, but for years, artificial intelligence was presented as a given: a tool for transformation, a driver of growth, the promise of a more fluid, more efficient, more intelligent world.
In businesses, government departments, and conferences, people talked about strategic opportunities, technological imperatives, and competitive advantages. The point was not to understand AI, but to deploy it quickly, at every level.
Then reality gradually reasserted itself. Uncontrolled costs, unstable models, excessive infrastructure—all the ingredients were there for it to end badly. And, above all, there was one question that was never really answered: what was it for?
The promise machine
At first, the story was simple: artificial intelligence was going to change everything. This promise was repeated endlessly by researchers, platforms, and analysts until it became obvious, like a reflex: we had to invest quickly and heavily. Missing this opportunity meant being left out of history.
And let’s admit that in the early days, the optimism seemed entirely justified. The technical advances were real and sometimes spectacular: language models capable of summarizing texts, generating code, and conversing almost naturally; magically, some said. Enough to fuel many dreams and the wildest business models.
Major players such as Google, OpenAI, Meta, and Nvidia were accelerating, and others were following suit. There was already talk of general AI, civilizational disruption, and radical transformation of society. And since every technological leap (or what was presented as such) triggered a new wave of funding, there was no reason for the party to stop.
But behind the hype, the models remained limited, capricious, and energy-intensive. And, above all, their cost was skyrocketing. Training a model cost hundreds of millions of dollars, and operating it on a large scale required extraordinary infrastructure. The power consumption of a consumer chatbot was approaching that of an entire neighborhood.
But it didn’t matter. We kept going because everyone believed in it or pretended to believe in it.
Leaders could no longer back down without losing face, and investors didn’t dare ask questions. Businesses, customers, and partners followed suit because, in any case, there was only one alternative, and it was unthinkable: admitting that no one really knew where they were going or whether it would ever make money.
At this point, AI was no longer a technology but a narrative leading to an inevitable future.
The frenzy
It was no longer technological innovation but a Pavlovian reflex: as soon as one player announced a new model, the others reacted within the hour. A cycle of continuous one-upmanship where real innovation mattered less than the perception of progress.
Fundraising followed this rhythm. Billions were raised on the promise of a model that was faster, more ethical, and more powerful than yesterday’s, even though behind the scenes, engineers knew that the performance differences were marginal and often imperceptible to the end user. But in the markets, valuations were soaring, and that was what mattered most.
We then witnessed a new phenomenon: circular investment. Nvidia financed the developers of models that used its chips. OpenAI invested in start-ups that used its APIs. Large funds supported entire layers of the ecosystem, from the model to the interface, from the interface to the service. Money was circulating at high speed and in large quantities between all the players, but ultimately no value was being created.
Tools were no longer being produced, only financial flows.
But in reality, this circular investment logic had a much broader effect: it drained the capital market. By reinjecting the same billions into the same structures, the sector deprived everything else of oxygen: industry, the energy transition, public research.
In 2025, AI players such as Nvidia, Microsoft, Alphabet, OpenAI, Anthropic, SoftBank, and others alone accounted for nearly 4% of US GDP, through direct capitalization or induced effects, more than on the eve of the dot-com crash.
It was no longer a question of sectoral risk, but of systemic risk. If the AI bubble burst, it would not only be start-ups that would fall, but entire sections of the interconnected economy: pensions via pension funds, energy infrastructure, public debt based on AI growth projections, and even the stability of financial markets.
And yet, everyone continued to play along.
The warning signs were there. Some analysts noted an excessive concentration of investment in a handful of players, while others were concerned about dependence on cloud giants or the fact that a small number of players were concentrating the material resources, data, energy, financing, and technical tools needed to advance AI.
But funding was reaching record highs, and we all know that the air at the top can be intoxicating…
Models were generating text, images, code, and even making decisions, with agents promising to manage activities from start to finish, and that was enough to maintain the illusion that a revolution was underway and that it should not be slowed down.
We were moving away from rationality, and AI was becoming a kind of faith, a strategic obligation.
But at that point, no one was really in control of the direction it was taking.
Hidden costs and denied costs
That’s when the numbers started to pose a problem.
Training a large model required tens of thousands of specialized chips, months of computation, and megawatts of energy. With each public launch, the need for servers and bandwidth multiplied. Data centers expanded, networks heated up, and, of course, bills climbed.
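To make those orders of magnitude concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (GPU count, run length, hourly price, per-chip power draw) is an illustrative assumption chosen to match the scale described above, not a reported number.

```python
# Back-of-envelope estimate of a large training run.
# All inputs are illustrative assumptions, not vendor or published figures.

NUM_GPUS = 25_000        # "tens of thousands of specialized chips" (assumed)
TRAINING_DAYS = 100      # "months of computation" (assumed)
GPU_POWER_KW = 0.7       # assumed draw per chip, incl. cooling overhead
GPU_HOUR_PRICE = 2.50    # assumed cost per GPU-hour, in USD

gpu_hours = NUM_GPUS * TRAINING_DAYS * 24       # total compute consumed
compute_cost = gpu_hours * GPU_HOUR_PRICE       # training bill, in USD
energy_mwh = gpu_hours * GPU_POWER_KW / 1_000   # total energy, in MWh
sustained_mw = NUM_GPUS * GPU_POWER_KW / 1_000  # steady cluster draw, in MW

print(f"GPU-hours:       {gpu_hours:,.0f}")             # 60,000,000
print(f"Compute cost:    ${compute_cost / 1e6:,.0f}M")  # $150M
print(f"Energy consumed: {energy_mwh:,.0f} MWh")        # 42,000 MWh
print(f"Sustained draw:  {sustained_mw:,.1f} MW")       # 17.5 MW
```

Even with these deliberately round inputs, a single run lands in the low hundreds of millions of dollars and draws well over ten megawatts continuously, before counting failed experiments, retraining, and inference at scale.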
But these costs were not yet a real concern, because what mattered was power, not expense. Executives, however, knew that the business model was not sustainable.
Most consumer users paid nothing. Professionals, on the other hand, remained cautious. Many had tried the support tools, but few had adopted them on a large scale: too many errors, too much instability, too many technical dependencies. As for integrating these systems into critical processes, that was still a promise, not a reality.
So fictitious growth was invented.
Business customers received free credits, “experimental” packages, and open-access models. The goal was to inflate metrics, show increasing adoption, and convince the market that AI was becoming indispensable and that, much like Google in its day, after “buying” the market, it would eventually pay off.
Internally, however, we could feel the first tensions rising. Operating costs were exceeding forecasts, GPUs were becoming a rare, almost speculative resource, and the energy bill was no longer negligible.
But charging the true price was commercially impossible.
AI for $100 a month? Unimaginable.
Pay-as-you-go AI? Off-putting.
AI reserved for certain uses? Unacceptable.
The industry had promised magic for all, and it had to honor that promise. It could no longer backtrack without breaking everything.
Unnatural growth
The more AI gained ground, the more paradoxes multiplied.
Officially, the models made businesses more productive, employees more efficient, and services more accessible.
In practice, however, their uses remained unclear, their benefits uncertain, and their shortcomings increasingly visible: hallucinations, biased decisions, cognitive dependence, blind automation, infrastructure overload, and even political manipulation. Unpublished internal studies showed that the majority of requests processed by AI assistants were superficial, redundant, or unusable.
Conversational agents, which were supposed to relieve customer service departments, often led to an increase in complaints. As for code co-pilots, although very popular, their output still required human review and correction.
But it was necessary to grow and show that AI affected all sectors, all functions, and all uses. So we continued to deploy, automate without method, and promise anything and everything.
Meanwhile, the first social tensions logically began to appear. Teams were eliminated, not because AI was replacing them, but to finance AI projects with no clear economic model. Jobs were undermined by unreliable tools, and burnout set in among the human validation teams, who were paid to filter out inappropriate responses or train models without the slightest recognition.
And then the inevitable happened.
An independent moderator working for an AI platform was found dead after weeks of relentless shifts, alone in the face of a continuous stream of violent, abusive, and absurd content. The investigation revealed that he had repeatedly warned about the mental strain his work was causing, but no one had ever bothered to act on his reports.
The case could have been hushed up, and indeed other similar cases may have been hushed up before, but this one became symbolic of what AI demanded and what was being ignored.
Growth, in this case, was no longer progress but a headlong rush that went against the economy, against the environment, and against humans themselves.
The pact of silence
It would be wrong to think that no one knew at this stage. People knew, but no one said anything.
Engineers saw the limitations, researchers privately denounced the lack of real progress, decision-makers sensed that momentum was fading, and even the most enthusiastic investors were beginning to have doubts, but the system was locked in.
To say publicly that performance was stagnating was to watch your valuation plummet. To acknowledge that the models were not delivering on their promises was to jeopardize your funding. To admit that the costs were no longer sustainable was to open the door to disaster.
So everyone kept quiet, hoping to hold out for another quarter, another version, another round of financing.
The major players maintained the momentum with meticulous care. Yes, AI might, and indeed would, go off the rails, but at worst it would need to be supervised, and under no circumstances stopped. In any case, they argued, they had to be financed as a priority, because otherwise AI would fall into the hands of “irresponsible” players.
The discourse of fear took hold: fear of uncontrollable AI, of a faster-moving geopolitical competitor, of becoming obsolete. It became a fundraising tool: “We may be doing it wrong, but without us it would be worse”.
Meanwhile, consumer applications were losing quality. Models, restricted for security or cost reasons, became increasingly unpredictable, and response times lengthened. Results deteriorated, but the messaging remained unchanged.
While everything seemed fine on the surface, panic was setting in behind the scenes, but, as on the Titanic, everyone played their part until the end.
The beginning of the end
The breakdown did not begin with a scandal or a stock market crash, but with silence.
The launch of a highly anticipated new model was delayed, officially to “improve security,” but in reality because it wasn’t working. Too expensive, too unstable, too inefficient.
In the weeks that followed, the signs came thick and fast. A strategic partnership was suspended without explanation, and a technical leader was dismissed. Then came the targeted layoffs, initially presented as “realignments”, but which quickly took on the scale of a disguised redundancy plan.
At the same time, operating costs continued to skyrocket, and data centers were approaching the physical limits of what the local, regional, or national network could provide. GPUs were being delivered months late. Above all, energy bills were reaching unsustainable levels, especially in regions under strain.
Customers were also beginning to grow impatient as their own expenses increased without a clear return on investment. The promised productivity gains were not materializing, and the more cautious were putting their projects on hold.
That’s when a document was leaked. An internal report from one of the industry leaders, written six months earlier, said that current models had reached a ceiling. The promises of general AI were unrealistic in the short term, costs had become “structurally unsustainable” and economic viability no longer depended on end users, but on a set of cross-subsidies. Revenues from one product financed the losses of another. At the same time, the ecosystem depended on indirect financing, venture capital, cross-partnerships, and public aid, which masked the lack of real profitability.
The effect was immediate.
In the days that followed, several businesses quietly withdrew their consumer offerings, some platforms were shut down without notice, contracts were broken, and dozens of projects were abandoned.
The rest was only a matter of time.
The crash
When it happened, the collapse lasted only eight days.
It all started with the announcement that OpenAI would not be finalizing its next round of fundraising due to a “strategic reassessment” and “new balances to be found”.
But everyone in the industry understood perfectly well what was happening: investors were no longer willing to follow. The next day, Anthropic suspended two advanced research programs. Then Google folded Gemini back into its in-house products as a lightweight assistant; access to premium versions was restricted, officially to reduce costs, unofficially because no one was willing to pay the real price anymore. AI went from being a technological showcase to a simple utility. Meta quietly closed one of its labs. On Thursday, Nvidia lost 28% on the stock market as it became clear that sales of GPUs for AI had plateaued.
The news came as a shock. The media, which had been cautious or even complicit until then, changed their tune and began publishing testimonials that had been accumulating for months: unreliable models, overestimated gains, environmental abuses, misuse, exhausted employees.
On Friday, SoftBank froze all its AI investments, Oracle terminated its contracts with two major players for “failure to meet performance targets,” and a former technical director published a damning blog post, referring to a “collective suicide disguised as strategy”.
On Monday, the Nasdaq index plummeted. AI valuations collapsed and dozens of startups declared bankruptcy, some of which had not even delivered a product, just promises, prototypes, and slides.
This was no longer a sectoral crisis. Pension funds, heavily exposed to Nvidia and Microsoft, suffered heavy losses. Several regional banks suspended credit lines to cloud infrastructure providers. US state governors were already talking about the impact on public finances, as budget projections had incorporated the promises of AI.
In Europe, some telecom operators had to revise their investment plans. In Asia, data center projects were put on hold.
In one week, the illusion of an autonomous sector had collapsed: the AI bubble had contaminated the very structure of the economy.
In business, panic ensued. IT managers prepared to cut critical services, and CIOs sorted through what would still be maintained and what would no longer be. Digital assistants disappeared overnight, models stopped responding, and APIs simply displayed: “service unavailable”.
The technological dream had become a functional void.
The void
The crash had happened, but the most destabilizing part was yet to come: the void after the fall.
Generative AI-based tools were not designed to be autonomous. Their performance depended on constant updates, servers, and continuously retrained models. When the financial flows stopped, these structures followed suit.
In the weeks that followed, thousands of businesses found themselves high and dry: no support, no fixes, no maintenance. Internal teams, trained to consume proprietary models without understanding their architecture, found themselves helpless. Without source code, without data, without infrastructure, there was nothing to recover, nothing to repair.
In government, the damage was immediate. Some front-desk services had relied on AI agents to respond to users; overnight, everything came to a halt. Hospitals that were experimenting with AI for emergency triage had to urgently revert to paper-based systems.
Some local authorities, which had co-financed AI assistants for processing files, social requests, or educational guidance, had to suspend entire sections of their digital services. Modernization projects, despite having been financed at great expense, were abandoned. All that remained were unusable interfaces connected to silent servers.
For individuals, the cut was even more drastic. Conversational interfaces disappeared from smartphones, automatic translators became pay-to-use or inoperative, and content generation tools stopped responding.
For the most dependent users, such as small businesses, freelancers, teachers, and people with disabilities, the blow was direct. The time saved thanks to assistants, the ease of use, the simplification of complex tasks: everything disappeared in a matter of days.
It then became clear that no viable plan B had ever been prepared. There was no backup plan, no free alternative, and even less local control. Only remote services, connected to infrastructure operated by players that had now disappeared.
The survivors
Technological investment had come to a standstill. For several months, no one was financing anything, whether AI, biotech, or climate. Major funds rejected anything that remotely resembled a distant promise. It was not until the markets stabilized that local, useful, and measurable projects began to tentatively regain support.
However, not everything had disappeared in the collapse. Some pockets of resistance had remained, and they proved essential once the bubble burst.
The first to hold out were industrial AIs. Less spectacular but robust, they were used to optimize a site’s energy consumption, detect mechanical failures, and predict peaks in logistics demand. No conversation, no text generated on the fly, just reliable correlations. Little data, little noise, but clear value.
In the public sector, some organizations regained control. Older, locally hosted models were reactivated. Less powerful, they had the advantage of being explainable and maintainable. Local governments, once dependent on third-party services, rediscovered the power of simple, controlled, and viable solutions.
Technology cooperatives also emerged. Engineers tired of the race for giant models created sober alternatives, universities hosted open-source projects trained on transparent corpora, and local authorities financed thematic AI for agriculture, medicine, and the environment: limited, but stable.
Even in the private sector, certain business models adapted. Publishers now offered locally installable AI, without going through the cloud, with a transparent business model: fixed license, technical support, or billing upon deployment. Platforms that had been shut down during the crash reappeared in another form: slower, more expensive, but transparent about their limitations.
And then there were those who invented something else.
A former R&D director at Anthropic launched a veterinary diagnostics start-up based on a local model, trained solely on validated cases; a collective of African engineers developed AI for crop monitoring adapted to rural areas without internet connection; and a former moderator launched a cooperative providing human support for tasks that AI would never do again.
As for the big names in AI, they had experienced mixed fortunes. OpenAI had been quietly absorbed by Microsoft in a move that was more financial than strategic, with the most widely used models retained and the others abandoned. Google refocused on infrastructure, relegating generative AI to the status of an auxiliary tool. Meta cut almost everything except a few internal projects dedicated to moderation and advertising. Nvidia fared better because its technology remained central, even though demand had fallen, and it repositioned itself in robotics and industrial applications. SoftBank lost everything, or almost everything: its Vision AI fund was closed. As for Apple’s shareholders, they are still rubbing their hands with glee over what had long been mocked as the company’s overly cautious approach…
And then what?
What followed was not a return to the past, but a slow, almost artisanal recovery.
Belief in general artificial intelligence, ubiquitous, fluid, and inexpensive, had evaporated, and with it the promises of unlimited growth, effortless gains, and algorithmic control of society.
What survived was more modest but, above all, more solid.
In businesses, AI once again became just one tool among many. It was used where it had proven its reliability. The fantasy of the universal assistant was replaced by local, specialized, explainable solutions.
Decision-makers, for their part, found it difficult to learn from their mistakes. Some continued to search for the “next miracle”, refusing to admit that the previous one was based on a fundamental misunderstanding that confused technological power with value creation.
But elsewhere, another culture developed. A culture of sobriety that accepted uncertainty, that put people back at the center, that considered technical, social, and environmental resources to be limited and therefore precious.
This more sober culture was not only a rational choice, it was also a form of protection.
People no longer believed in narratives of disruption or in visionaries, and even less in technical promises.
Trust had been burned with billions, and even legitimate projects for the future would have to deal with the consequences of these painful memories.
The real price of artificial intelligence was finally set. Not only in euros or kilowatt-hours, but in attention, trust, dependence, and freedom. The general public had never been asked to pay this price, but in fact had paid it in other ways: through loss of control, systemic fragility, and opacity.
Those who continued to build AI systems now did so differently. Gone were the dreams, replaced by clear accounts, accepted limitations, and a certain form of humility.
Bottom Line
Tech bubbles never end in total loss. They leave behind tools, reflexes, and sometimes useful infrastructure. But above all, they expose the imbalances on which they thrived.
In this scenario, the bursting of the AI bubble is not the end of the technology, but the end of a narrative: that of free, universal, omnipotent artificial intelligence, driven by a few overfunded players in the name of collective urgency.
What remains standing after the collapse is not impressive, but it works and it pays for itself: modest uses, and technical communities that have understood that in AI, as elsewhere, scarcity forces us to think differently.
Tomorrow, a new wave will return with other promises and other slogans, but one thing will have changed: our threshold of credulity. But in fact… no. History has shown us that we are unable to say no to unrealistic promises and smooth talkers, and we can safely assume that the same scenario will play out yet again, simply with different actors and a different setting.
To answer your questions…
Why does the collapse of the AI bubble seem inevitable?
The collapse of the AI bubble seems inevitable because the sector relies more on economic narrative than on real value. The costs are enormous, the models unreliable, and profitability absent. When funding dries up, many players will fall. The only businesses that will survive will be those that focus on sober, useful, and transparent uses.
What were the warning signs?
Technical stagnation, unsustainable energy costs, and dependence on a few giants were the clearest signs. Performance was no longer improving, but investment continued. When investors stopped following, unprofitable projects came to a halt. The withdrawal of players such as OpenAI and Nvidia triggered the panic.
How did the bubble sustain itself for so long?
It was based on a closed system in which the same capital circulated without creating value. Businesses offered free credits to simulate adoption and hide the lack of profitability. Energy, infrastructure, and GPUs cost more than the revenue they generated. This speculative model could not last.
What happened when it burst?
Within days, thousands of businesses lost their AI services. Models stopped working, government agencies had to revert to manual methods, and many startups shut down. The entire economy suffered the repercussions of excessive dependence on private infrastructure.
What remains of AI after the crash?
AI has not disappeared, but has refocused on modest, local applications: maintenance, energy, and healthcare. Open-source models and cooperatives have taken over. The illusion of free, universal AI has vanished, giving way to a more modest, pragmatic, and human approach.
Visual credit: Image generated by artificial intelligence via ChatGPT (OpenAI)