Is generative AI doomed to never be profitable?


There is much debate about the ability of AI players to ever become profitable, and this is anything but a trivial question. After all, what is the point of investing in a technology if its players could disappear overnight?

The response to this is that in the past, companies such as Google and Amazon waited many years before becoming profitable, and that not only is this logical for a startup, but it is also part of the strategy of a business such as OpenAI: create a market, flood it, and only then monetize it once you have become its center of gravity.

With nearly 400 million weekly users today, OpenAI seems well on the road to success.

But a gray area remains: AI is not like other SaaS. Its costs, we are also told, increase in proportion to usage, which makes profitability unthinkable. This is the view of Ed Zitron, one of the most extreme proponents of the bubble-about-to-burst scenario. His writing makes for difficult reading at times, but it raises some pertinent questions (The Generative AI Con, OpenAI Is A Systemic Risk To The Tech Industry, and Reality Check).

I fully agree with the hypothesis that we cannot count on the same economies of scale as in traditional SaaS, but while the arguments put forward are indisputable, they deserve to be qualified somewhat.

I should point out that we are talking here about generative AI; there are many other types of AI, and their business model has been viable for ages (AI for dummies who want to see a little more clearly).

In short:

  • The economic model of generative AI faces high usage costs (training, inference, salaries) that do not yet allow for large-scale profitability despite growing revenues.
  • Revenue sources are varied (subscriptions, paid features, APIs), but the low share of paying users and the lack of clear ROI limit the model’s current viability.
  • Usage costs are falling quickly thanks to technical advances (better infrastructure, model optimization), but this trend could slow down and jeopardize the path to profitability.
  • Several risks weigh on the model: lack of perceived value, stagnating efficiency gains, regulation, data scarcity, free competition, and value shifting to other players (applications, local use).
  • Future profitability will depend on the balance between cost reductions, customer willingness to pay, and regulatory constraints, with scenarios ranging from traditional cloud profitability to low-profit public service.

Generative AI cannot be profitable

Since ChatGPT and its counterparts have been in the news, two claims have often been made.

The first is that running artificial intelligence costs a fortune, and the second is that since you have to pay for each query (even if the result is not satisfactory), unlike traditional software, it will never be profitable.

These statements are based on a true observation: generative AI is very expensive to operate. But that does not mean that the business model is doomed to failure.

How much does generative AI cost?

Here, we need to distinguish between two things.

First, there is training, which is the phase where the model is “trained” by making it read billions of pages so that it learns to generate text.

For a model like GPT-4, we’re talking about $100 million for the final training (The Extreme Cost Of Training AI Models).

Then comes inference, which involves mobilizing expensive infrastructure every time a query is entered. In the case of ChatGPT, we are talking about approximately $700,000 per day to run the service.

And let’s not forget salaries: at OpenAI, the average cost per employee exceeds $1 million per year, with a total payroll of $1.5 billion for 1,500 employees.

Does generative AI make money?

Yes, of course, and there are currently three main sources of revenue.

Firstly, consumer subscriptions such as ChatGPT Plus at $20 per month.

Secondly, as a paid feature. This is the case with Microsoft Copilot for businesses, billed at between $30 and $36 per month per user. The AI costs more than the rest of the suite, and my view is that Microsoft has no choice but to charge a fairly aggressive price: the cost structure of AI does not allow for the economies of scale of SaaS, whereas the company can afford massive discounts on its Microsoft 365 suite.

Finally, and this is the model generally used in B2B, there is pay-per-use via API, where you pay per million tokens (blocks of text). With ChatGPT, this brings in between $2 and $8 per million tokens, depending on the case.
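To make that order of magnitude concrete, here is a back-of-envelope sketch. The $2 to $8 per million tokens range comes from the figures above; the conversation size and daily volume are purely illustrative assumptions, not real customer data:

```python
# Back-of-envelope cost of API usage at the per-token prices cited above.
# The price range ($2-$8 per million tokens) is from the article; the
# usage volumes below are purely illustrative assumptions.

PRICE_LOW = 2.0   # dollars per million tokens (low end)
PRICE_HIGH = 8.0  # dollars per million tokens (high end)

tokens_per_exchange = 1_500   # assumed: one prompt plus one response
exchanges_per_day = 200       # assumed: a small internal tool
tokens_per_month = tokens_per_exchange * exchanges_per_day * 30

cost_low = tokens_per_month / 1_000_000 * PRICE_LOW
cost_high = tokens_per_month / 1_000_000 * PRICE_HIGH
print(f"{tokens_per_month:,} tokens/month -> ${cost_low:.2f} to ${cost_high:.2f}")
```

At this hypothetical volume, the bill stays modest; the economics only become painful at consumer scale, with hundreds of millions of users.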

As for advertising, players are considering it, but OpenAI would clearly prefer to avoid it (Does OpenAI want to, should it, and can it become the new Google?).

In June 2025, OpenAI announced annual revenue of $10 to $12 billion, and Anthropic exceeded $3 billion.

But this must be put into perspective.

Of the more than 400 million weekly users of ChatGPT, less than 3% pay, with the rest being free users. It is therefore fair to say that OpenAI loses money on every query.
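The arithmetic implied by those two figures (400 million weekly users, under 3% paying) puts a hard ceiling on consumer subscription revenue. A rough sketch, using only the numbers cited in this article:

```python
# Rough ceiling on consumer subscription revenue implied by the figures
# above: 400M weekly users, less than 3% paying, $20/month for Plus.
weekly_users = 400_000_000
paying_share = 0.03   # upper bound: "less than 3% pay"
plus_price = 20       # dollars per month for ChatGPT Plus

paying_users = weekly_users * paying_share
monthly_revenue = paying_users * plus_price
print(f"At most {paying_users:,.0f} paying users")
print(f"=> at most ${monthly_revenue * 12 / 1e9:.1f}B/year from Plus subscriptions")
```

In other words, subscriptions alone cap out at a few billion dollars a year, which is why the remaining 97% of users are pure cost.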

Many businesses are experimenting, but few have industrialized the tool, and the return on investment is still unclear. According to an IBM study, only 25% of AI projects currently achieve profitability targets (Will genAI businesses crash and burn?). And it was recently confirmed to me that many projects do not make it past the pilot stage because the ROI is hard to quantify against such significant real costs (88% of AI pilots fail to reach production — but that’s not all on IT).

Microsoft is no better off with Copilot, which is struggling to convince ([FR] Copilot: the disappointments of generative AI integrated into Microsoft Office). 60% of businesses have tested Copilot, but only 16% have moved on to the deployment phase (How to get Microsoft 365 Copilot beyond the pilot stage).

It’s not that AI doesn’t work, but rather that we have relied on fanciful projections (AGI, employment, productivity: the great bluff of AI predictions) and that the cost to customers seems excessive given the expectations generated and the benefits observed.

In short, the AI giants are unable to pass on their costs to their customers for lack of perceived added value. Bear in mind that the “real” price of a consumer subscription to ChatGPT should be between $100 and $1,500 per year depending on usage ([FR] ChatGPT: $20 per month? At that price, even your toaster would hallucinate). At that price, anyone would think twice before asking it for anything.

The bottom line is that AI has little impact on the software giants’ results, at least in terms of revenue, because in terms of costs, it’s a different story. Azure AI, Copilot, and GitHub Copilot account for barely 5% of Microsoft’s revenue, Oracle does not disclose this information, and Salesforce is estimated at 2.5%.

Today, for these players, AI is a relatively unprofitable growth driver, and while the subject is still in its infancy, experts believe we will have to wait until AI accounts for more than 10% of revenue and has a positive impact on margins before it moves from being a fun feature to a real line of business (Generative AI: a bubble, a crash, or a turning point?).

Will costs inevitably skyrocket with success?

You might think so, especially compared with traditional SaaS, where the marginal cost is almost zero, which is not at all the case with AI.

Well, contrary to popular belief, no. In fact, the cost of use is falling quickly. In two years, prices have dropped by more than 90%. Why?

Firstly, because servers are becoming more powerful and less energy-intensive: NVIDIA’s new supercomputer is 30 times faster than the one from 2023, while consuming less energy.

Secondly, models are becoming more efficient: we no longer activate the entire model’s brain every time, which reduces the computing effort.

Finally, requests are grouped together and processed more efficiently in parallel.

According to Sam Altman (OpenAI), the cost of use has been divided by 10 every year for the past two years.
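Taken at face value, that compounding is consistent with the price drop of more than 90% over two years cited earlier. A minimal check, with an arbitrary starting cost:

```python
# Compounding check: a cost divided by 10 each year for two years ends up
# 100x lower, i.e. a drop of 99%, consistent with the "more than 90% in
# two years" figure. The starting cost is arbitrary and illustrative.
cost_year0 = 1.0
cost_year2 = cost_year0 / 10 / 10
drop = 1 - cost_year2 / cost_year0
print(f"{drop:.0%} drop over two years")
```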

The result: the further we go, the less each response costs to produce, but it remains true that these costs are still far too high to be passed on to the customer. In fact, OpenAI does not expect to be profitable before 2029.

To put costs and revenues into perspective, the business lost $5 billion in 2024 on revenues of $3.7 billion and is projecting revenues of $14 billion for 2025, with losses of $12 billion.

Where is the risk?

Even if costs fall, profitability is not guaranteed, and here are the risks facing the sector.

1) Customers do not see the value. The less tangible the value and ROI, the less willing they will be to pay, and the more costs will have to be reduced.

2) The cost of use stops falling. This is a real risk. Altman’s law has held for two years in a sector that started from a very low base in terms of optimization, but there is no guarantee that the margin for improvement will not shrink over time.

3) Competitors are slashing prices. We know that DeepSeek was a publicity stunt that cost much more than the advertised price, but if free models become more widespread, fewer and fewer people will be willing to pay.

4) Laws are becoming stricter. It will be necessary to prove that AI respects copyright, the environment, and is auditable and audited. The cost of regulation could weigh heavily on AI in the future, and we are getting a taste of this with the European AI Act (The European AI Act for dummies).

5) Data for training is becoming scarce and expensive. AI has reached the limits of the data available on the web, and it will be necessary to pay a high price for quality content to train future models (Can AI run out of fuel or kill the web?). There is also the issue of copyright, with radically different points of view between AI vendors and authors, and between the two sides of the Atlantic. We will be following the first court cases on this issue with interest ([FR]Why the legal defeat of an AI start-up could have serious consequences for OpenAI and [FR]The trial that could spell disaster for AI).

6) Apps (such as Notion, Canva, etc.) capture all the value. I have already mentioned this, but these “wrappers” that come between the AI provider and the customer can capture most of the market value and commoditize AI manufacturers (Wrappers, deeptechs, and generative AI: a profitable but fragile house of cards).

7) Models run directly on phones. If AI runs locally, no requests are sent to the cloud, which means no more revenue for the provider.

These risks are still theoretical, but according to experts, it would only take two or three of them to occur at the same time for the business model to collapse.

Can AI be profitable or not?

There are actually two scenarios.

In the optimistic version, costs continue to fall, businesses pay more and more, and regulation remains reasonable. In this case, the model becomes profitable with a 40 to 60% gross margin, like the big clouds (Amazon, Google, etc.).

In the pessimistic scenario, prices collapse, costs stagnate, regulation increases, and users do not pay. In this case, AI becomes a “digital public service”: useful, essential… but not very profitable.

Bottom line

It all comes down to basic economics: generative AI will only be profitable if it manages to get enough users to pay for a service whose cost is falling fast enough.
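That race can be stated as a simple inequality: the model becomes viable in the first year where the falling cost of serving a user drops below what that user pays. A toy sketch of the sensitivity to the rate of cost decline, with all numbers purely illustrative:

```python
def breakeven_year(cost, revenue, yearly_factor):
    """First year in which per-user serving cost falls below per-user revenue.

    cost: serving cost per user in year 0 ($/year), an illustrative assumption
    revenue: flat revenue per user ($/year), an illustrative assumption
    yearly_factor: multiplier applied to cost each year (< 1 means costs fall)
    """
    year = 0
    while cost > revenue:
        cost *= yearly_factor
        year += 1
    return year

# Illustrative assumptions: $100/year to serve a user, $20/year of revenue.
for factor, label in [(0.1, "cost divided by 10 per year (Altman's trend)"),
                      (0.5, "cost only halves per year")]:
    print(f"{label}: break-even in year {breakeven_year(100, 20, factor)}")
```

The point of the sketch is not the specific numbers but the sensitivity: if the yearly improvement slows from a factor of 10 to a factor of 2, break-even slips by years, which is exactly risk 2 above.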

It’s a race between technological progress and economic reality, and at this point, it’s anyone’s guess who will win.

Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)

Bertrand DUPERRIN
https://www.duperrin.com/english
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler