AGI, employment, productivity: the great bluff of AI predictions

Every week brings a new batch of figures on artificial intelligence, each more impressive and more worrying than the last. Millions of jobs are at stake, productivity gains are astronomical, and artificial general intelligence, the kind that will render us all obsolete, is just around the corner!

But how much confidence can we place in these figures? How scientifically rigorous are they?

All too often, we confuse forecasts with predictions, credible projections with risky extrapolations.

This is a constant in the world of technology, but once again, it is important to distinguish between what is based on analysis and what is based on belief or communication.

The fact is that most of the figures we read, discuss, and perhaps base our decisions on are rough estimates that are not supported by any serious mathematical model.

In short:

  • The figures put forward on the impact of AI are often based on loose extrapolations, without a solid scientific model, and serve marketing or political interests rather than providing reliable analysis.
  • Confusion between forecasting and prediction fuels misperceptions about the future of AI, with the former being based on proven data and models, and the latter being speculation or even belief.
  • Artificial general intelligence (AGI) is subject to vague definitions and divergent views, making any discussion of its advent highly speculative and unscientific.
  • The predicted effects of AI on employment and productivity are based on fragile assumptions and questionable methodologies, with significant discrepancies between estimates and little empirical evidence to date.
  • The dominant discourse on AI is shaped by actors with an interest in exaggerating its impact, creating an asymmetry between technological promises and economic and social realities, to the detriment of rigorous analysis.

Forecasting vs prediction: two diametrically opposed approaches

I recently highlighted the difference between forecasts and predictions (Why do leaders and experts make big mistakes when it comes to anticipating the future?), but I think it is worth reiterating here.

A forecast is based on observable data, validated mathematical and statistical models, and an assessed probability of occurrence. It is the result of a rigorous methodology similar to that used in economics or meteorology.

Conversely, a prediction is a statement about the future based on little fact; it is subjective, even prophetic.

Let’s put it another way.

Forecasts are made by serious, somewhat sad people who follow strict methodologies where intuition and creativity have no place.

Predictions, on the other hand, are made by people who tell you that the future is going to be terrible and that you will only survive by buying their products and services. Fear marketing has always worked well and, to quote a famous politician, the bigger the lie, the more people believe it.

Remember that in 2015 we were all going to die if we didn’t invest in digital transformation. Who died from digitalization or Uberization? No one (Digital: the empire strikes back). Tomorrow, we’ll be talking about the Metaverse, which five years ago some people valued at between $5 trillion and $13 trillion. And don’t forget that by 2000 we were all supposed to have flying cars.

Back to the point.

When it comes to AI, the confusion between forecasts and predictions is systematic: estimates and even marketing arguments are presented as certainties, and systemic effects projected from local or laboratory tests are turned into truths.

Very few, if any, of these predictions are modeled on solid data or validated by comparing scenarios. They are therefore not based on any robust model.

Anyone who has studied economics knows what a robust model is: it is based on explicit assumptions, verifiable empirical data, the ability to be tested over time, and controlled sensitivity to parameter variations. This is often not the case here.

Making predictions about AI without a rigorous model is like forecasting the weather two weeks from now without looking at the sky or having a satellite.

Indeed, most predictions about AI are based on task matrices, opinion questionnaires, or unmodeled qualitative reasoning. This is more foresight than predictive science.

And the vocabulary used by consultants, journalists, executives, and, of course, vendors’ marketing departments deliberately maintains this ambiguity in order to make us take self-fulfilling prophecies for plausible scenarios.

AGI: an elusive definition, diverging horizons

It inspires dreams, fears, and fantasies: I am, of course, talking about artificial general intelligence (AGI). It is the holy grail, the ultimate AI: a form of artificial intelligence capable of understanding, learning, and performing any human cognitive task with a level of performance at least equivalent to that of a human being, autonomously and in a way that transfers across domains.

It is what will render us all obsolete.

But we still need to agree on what we are talking about, and that is far from being the case because there is no consensus definition of what AGI is.

For some, it is AI capable of performing all human cognitive tasks autonomously and generalizably. For others, it refers to a system capable of transferring skills acquired in one domain to another without human supervision. Still others simply refer to performance equivalent to that of an average human in a variety of tasks.

More recently, Microsoft and OpenAI announced that AGI will be achieved as soon as OpenAI develops an AI system capable of generating at least $100 billion in profits (Microsoft and OpenAI agree on a financial definition of AGI of $100 billion).

Much less ambitious, and experience tells me that when you lower the criteria for success, it is because you expect the goal to be difficult to achieve. A bit like the level of requirements for a high school diploma…

In short, AGI is a bit like the Loch Ness monster: everyone talks about it, some swear they’ve seen it, but no one can clearly define it or prove its existence.

In 2022, a study by AI Impacts surveyed 738 AI researchers (2022 Expert Survey on Progress in AI): 50% believed that AGI would appear before 2059, 25% believed it would not appear before 2100, and a significant minority believed it would never appear.

But in AI terms, 2022 is an eternity ago. So what is the current thinking?

Sam Altman, CEO of OpenAI, has stated that AGI could emerge as early as 2025 (Reflections), a position shared by a few figures in the sector but by a minority of the scientific community. At Anthropic, CEO Dario Amodei believes that AGI could emerge as early as 2026, or even in the next 12 to 24 months (Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity). This is in line with the “AI 2027” scenario developed by former researchers at OpenAI and the Center for AI Policy, who predict that AGI will emerge around 2027.

But others are much more cautious.

For Geoffrey Hinton (formerly of Google), AGI could arrive in 5 to 20 years, i.e. between 2028 and 2043 (Here’s how far we are from AGI, according to the people developing it), but he emphasizes the continuing uncertainty surrounding this timeline and even what AGI really is (‘Godfather of AI’ says there isn’t a consensus on what ‘artificial general intelligence’ means).

Alongside this, various studies place the arrival of AGI between 2030 and 2060.

We are therefore talking about a margin of error of 35 years for something for which no definition exists and whose very existence is doubted by some.

Other authorities such as Yann LeCun (Meta) consider it very distant and poorly defined (Meta’s LeCun Debunks AGI Hype, Says it is Decades Away), while others such as Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute (Pausing AI Developments Isn’t Enough. We Need to Shut it All Down), announce its arrival as imminent and potentially dangerous.

These differences therefore make any discussion about the timing or impacts of AGI entirely speculative and reveal a complete lack of consensus not only on what AGI is, but also on the plausibility of its arrival within a usable time frame.

We are therefore entirely in the realm of self-help at best and marketing at worst. All of this is based more on intuition, philosophical positions, commercial imperatives, and subliminal messages to investors than on measurable or modelable progress.

The AI IQ scam

I read here and there that AI is now achieving IQ scores similar to the most intelligent humans and will quickly surpass them. But I don’t read anyone who, instead of frantically sharing the hype, questions the relevance of the argument.

IQ is a statistical measurement tool designed to assess certain human cognitive abilities such as logical reasoning, memory, and verbal comprehension. It is based on a normalization around a human average for a given age group.
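As a reminder of what that normalization involves, here is the conventional formula for a deviation IQ score (the mean of 100 and standard deviation of 15 are the usual conventions, not figures from this article):

```latex
% A raw score x is converted into an IQ relative to a human reference
% population of the same age group, with mean mu and standard deviation sigma.
IQ = 100 + 15 \cdot \frac{x - \mu_{\text{age group}}}{\sigma_{\text{age group}}}
```

In other words, an IQ score only means something relative to a reference population the test-taker belongs to, which is precisely what is missing when the “test-taker” is a piece of software.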

Applying this concept to artificial intelligence is therefore scientifically questionable because AI does not share human cognitive structure or biological limitations.

Furthermore, AI can excel at certain IQ tasks while failing at tasks that are elementary for humans (contextual understanding, common sense). Try asking ChatGPT to do a calculation or solve an equation, even a simple one, and you’ll laugh: that is normal, because it cannot count, not even the number of characters in a word or the number of occurrences of a letter, and it does not understand the meaning of what it says. For it, the answer it gives you is not the best in terms of meaning, but the most statistically probable.
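To make the “most statistically probable” point concrete, here is a deliberately toy sketch in Python. The prompts, vocabulary, and probabilities are invented for the example and have nothing to do with any real model; the only point is that the selection criterion is probability, not truth or meaning.

```python
# Toy illustration of next-token selection in a language model.
# The prompts and probabilities below are invented for the example;
# a real model estimates such distributions from its training corpus.

next_token_probs = {
    "2 + 2 =": {"4": 0.62, "5": 0.21, "four": 0.12, "22": 0.05},
    "The capital of Australia is": {"Sydney": 0.48, "Canberra": 0.41, "Melbourne": 0.11},
}

def most_probable_continuation(prompt: str) -> str:
    """Return the statistically most likely continuation, not the 'true' one."""
    probs = next_token_probs[prompt]
    return max(probs, key=probs.get)

for prompt in next_token_probs:
    print(prompt, "->", most_probable_continuation(prompt))
```

With these made-up numbers, the second prompt returns “Sydney”: not because the system knows any geography, but because that continuation happens to carry the highest probability in the toy distribution.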

The tests used to calculate the IQ of an AI are also often biased, because AI is often trained on the test corpus itself. It’s a bit like training on the subject of the exam you’re about to take.

Finally, the intelligence of an AI is fundamentally different from that of a human being: it is specialized, contextual, and has neither consciousness nor intention.

IQ is therefore neither relevant nor sufficient for assessing or comparing the cognitive abilities of artificial intelligence.

LeCun says as much himself (Are we all wrong about AI? When academics challenge the Silicon Valley dream, This AI Pioneer Thinks AI Is Dumber Than a Cat, and Meta AI Chief Yann LeCun: Human Intelligence Is Not General Intelligence) when he says that AGI, or whatever you want to call it, will not be achieved by LLMs, which are otherwise touted for their supposed IQ (Meta’s AI chief: LLMs will never reach human-level intelligence). In doing so, he is saying much the same thing as Luc Julia, who tells us that the alarmist talk surrounding AGI is often exaggerated and does not reflect the reality of AI’s current capabilities ([FR] In AI, too much artificiality, not enough intelligence, according to specialist Luc Julia). But I don’t understand why every time I quote the latter on LinkedIn, I see a kind of outcry, as if people were afraid that their dreams and toys were being shattered.

Don’t get me wrong, I’m not saying that one day AI won’t be capable of equaling or surpassing us, at least in certain areas.

I’m just saying that no one agrees on what we’re talking about, when it will happen, or even if it will happen, and in any case, to what extent.

I’m not saying that any of these eminent specialists are right or wrong, I’m just saying that everyone is free to choose the hypothesis they want, none is more valid than another, and none is based on scientific reasoning but on intuition and conviction.

AI and job destruction: a mixture of assumptions and approximations

I recently told you that I did not believe in the widespread replacement of humans by AI, at least in the medium and even long term (AI and jobs: why I don’t believe in the “great replacement” of humans by machines).

Now let’s look at the figures available on the subject.

In 2023, Goldman Sachs spoke of 300 million jobs “at risk” worldwide (Generative AI could raise global GDP by 7%).

Also in 2023, McKinsey predicted that 60 to 70% of working time could be automated by 2030 (The economic potential of generative AI: The next productivity frontier). It is important to note the difference between jobs and working time.

More recently, in 2025, the World Economic Forum anticipated that 92 million jobs would be lost by 2030, but also that 170 million new jobs would be created over the same period (The Future of Jobs Report 2025).

There is no need to go any further: you will find a huge number of figures that do not all say the same thing, and above all, not in the same proportions.

But it is important to bear in mind that a job that is exposed is not a job that is lost. Furthermore, these estimates are not based on any robust mathematical model. They are neither econometric projections nor validated models, but at best extrapolations from matrices matching tasks and AI capabilities, and at worst intuitions transformed into figures for marketing purposes.

There is currently no scientific model that can reliably predict how many jobs will be lost, when, and in which sectors. This is therefore a subjective assessment with no predictive value.

This lesson has been taught before, but it does not seem to have been learned. In fact, as early as 2013, a study by Frey & Osborne told us that between 33% and 47% of US jobs were automatable (The future of employment: how susceptible are jobs to computerization?). Three years later, the OECD reduced this figure to 9% due to a methodological problem: the initial estimates assumed that a job would be fully automated if the majority of its tasks could be automated, which is completely unrealistic.
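The methodological gap is easy to illustrate. Here is a minimal sketch with entirely invented occupations, task shares, and headcounts (it does not reproduce either study, only the counting logic): treating a whole job as automatable as soon as a majority of its tasks are gives a much higher headline figure than counting only the automatable share of tasks, from exactly the same data.

```python
# Hypothetical data: share of tasks considered automatable per occupation,
# and headcount for each occupation. All numbers are invented for illustration.
occupations = [
    {"name": "clerk",     "automatable_task_share": 0.60, "workers": 100},
    {"name": "analyst",   "automatable_task_share": 0.55, "workers": 100},
    {"name": "caregiver", "automatable_task_share": 0.20, "workers": 100},
]

total_workers = sum(o["workers"] for o in occupations)

# Occupation-level logic: a job counts as fully automatable
# if the majority of its tasks can be automated.
jobs_at_risk = sum(o["workers"] for o in occupations
                   if o["automatable_task_share"] > 0.5)

# Task-level logic: only the automatable share of each job is counted.
work_at_risk = sum(o["workers"] * o["automatable_task_share"]
                   for o in occupations)

print(f"Occupation-level estimate: {jobs_at_risk / total_workers:.0%} of jobs at risk")
print(f"Task-level estimate:       {work_at_risk / total_workers:.0%} of work at risk")
```

With these invented figures, the first method announces 67% of jobs at risk and the second 45% of working time: same data, very different headlines, and neither number says anything about actual job losses.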

Closer to home, another study has just tempered the most alarmist scenarios (Generative AI is not replacing jobs or hurting wages at all, economists claim). An analysis of data from 200 million job offers in the United States shows that the arrival of generative AI tools has had no significant impact on job offers or wages in the most exposed sectors.

LeCun also agrees (Meta scientist Yann LeCun says AI won’t destroy jobs forever).

I would like to emphasize once again the importance of rigorous methodology. We list tasks, look at what AI can do, and deduce a potential “replaceability” that may or may not become a reality. But is AI solely responsible for the potential destruction of jobs in 2025? Perhaps variables such as economic tensions or wars play a role, even a minimal one, in a potential economic slowdown? Yet technocentric predictions never take external variables into account.

Again, don’t put words in my mouth.

I am often asked whether “AI is going to steal our jobs.”

My answer is that the question is poorly phrased:

• Will AI take over some of my activities? Certainly. But how much and in what time frame?

• Will it impose unwanted and restrictive tasks on me, such as wasting time writing a one-off prompt or checking its results and correcting errors? Yes. But to what extent?

• Will it create high value-added, more fulfilling tasks? Certainly, but to what extent and for how many people?

• When or how quickly will this happen? I have absolutely no idea, rationally speaking.

I’m not saying it won’t happen, I’m just saying that it’s likely (how likely?) but without knowing to what extent, and that everything we read in terms of scale and timeframe is based on nothing solid.

I can tell you that I will probably go on vacation this summer (the probability is not even 100%), but I don’t know where or when. Given that, I don’t have enough to sustain a conversation with my friends about my future vacations, but if I talked about AI with the same level of certainty, I could pass myself off as a guru.

In the meantime, I have no doubt that everything that can be automated will be automated, that we are deluding ourselves about so-called more fulfilling jobs, and that in the end we will make a lot of mistakes before perhaps going back to where we started (Let’s stop being nAIve with AI in the workplace). But to make the right decisions, we need figures and time frames, and in this area, we are navigating in a fantasy world.

Productivity gains: conditional projections

McKinsey, for its part, argues that generative AI could generate annual productivity gains of +3.3% by 2040. PwC estimates that the impact on global GDP will be +14% by 2030 (Sizing the prize. What’s the real value of AI for your business and how can you capitalize?).

But these figures are conditional: they assume massive and rapid adoption, widespread retraining of employees, and smooth integration into organizations. All things that never happen (You can see the computer age everywhere but in the productivity statistics).

And, above all, they are not based on any rigorous modeling, but only on these optimistic and linear assumptions.

In reality, the gains observed remain localized and often marginal. Integration costs, data quality, managerial culture, and the learning curve are all obstacles that significantly mitigate the promised impact, not to mention acceptability and social impact (The challenges posed by AI are not technological, but must be met today).

Estimates of productivity gains are often based on assumptions stacked like a house of cards: if just one parameter is unrealistic, the whole thing collapses.
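A minimal sketch of this house of cards, with entirely invented multipliers: the headline gain is the product of several optimistic assumptions, so downgrading a single one of them deflates the final figure.

```python
# Hypothetical decomposition of a headline productivity gain: the promised
# gain is simply the product of several assumed factors.
# All values are invented for illustration.
optimistic = {
    "share_of_tasks_ai_can_help_with": 0.60,
    "adoption_rate_across_firms":      0.80,
    "time_actually_saved_on_tasks":    0.50,
    "savings_turned_into_output":      0.70,
}

def headline_gain(assumptions: dict) -> float:
    gain = 1.0
    for value in assumptions.values():
        gain *= value
    return gain

print(f"Optimistic scenario:  {headline_gain(optimistic):.1%} productivity gain")

# Downgrade just one assumption: adoption is slower than promised.
cautious = dict(optimistic, adoption_rate_across_firms=0.30)
print(f"Slower adoption only: {headline_gain(cautious):.1%} productivity gain")
```

With these toy numbers, changing the single adoption assumption cuts the promised gain from roughly 17% to about 6%, and nothing in the public estimates tells us which of the stacked assumptions will actually hold.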

Let us remember that it took a very long time for electricity to truly transform industry, as time was needed to rebuild and reorganize factories that had been designed for other forms of energy, and that the era of prosperity that followed also required violent social struggles. And history tends to repeat itself forever (We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.). As one of my mentors used to say, “It takes time for things to happen quickly.”

Daron Acemoglu, an MIT economist and Nobel laureate, and therefore a credible figure, seems to agree (Daron Acemoglu: What do we know about the economics of AI?): the macroeconomic effects of AI are currently overestimated, for lack of evidence to the contrary. His work suggests a maximum GDP increase of 1.6% over 10 years, with annual productivity gains of around 0.05%, or even negative effects for the least-skilled professions, not to mention the negative side effects. According to him, the central question is not whether AI will transform everything, but how to direct innovation towards uses that truly complement human work rather than systematically seeking to automate it.

This is a far cry from McKinsey’s 14%. Who is right? Perhaps Acemoglu, whose method seems more rigorous from an economic point of view, but I am not qualified to judge.

What I do know, however, is that there is a huge gap between 14% and 1.6%, and that there is no way of knowing whether one is right and the other wrong, unless we blindly follow the hypothesis that suits us best.
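To get a feel for the size of that gap, we can simply compound the two annual rates quoted above over a ten-year horizon (taking them at face value, which is already generous):

```latex
\begin{aligned}
(1 + 0.033)^{10}  &\approx 1.38  &&\Rightarrow\ \text{about } +38\%\ \text{cumulative}\\
(1 + 0.0005)^{10} &\approx 1.005 &&\Rightarrow\ \text{about } +0.5\%\ \text{cumulative}
\end{aligned}
```

Two claims of the same kind, separated by a factor of more than fifty after a decade, and no model on the table that lets us decide between them.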

A poorly phrased question

Rather than asking “Will AI replace humans?”, we should perhaps ask ourselves: for which tasks, with what systemic effects, and at what pace?

AI replaces tasks, rarely entire jobs. It also creates new needs: model supervision, prompt design, human verification. Every technological wave has given rise to new jobs, even if it takes time.

Technology does not solve problems, but helps us solve them and, in doing so, creates new ones that humans will have to solve (Technology Doesn’t Solve Problems).

We also forget about rebound effects: greater productivity can lead to greater demand. Finally, organizations do not change instantly: their inertia slows down the impact of technologies, especially since social and regulatory frameworks do not always follow the same dynamics as technical innovation.

The discourse factory clouds analysis

One final aspect is worth highlighting: these fanciful predictions are not just the result of excessive optimism or a lack of method, but often the product of an ecosystem of conflicting interests.

Consulting firms and the media, which thrive on narratives that mix hype and fear, have every interest in fueling scenarios that boost their offerings, influence, or audience. AI solution providers want to convince investors that the promised land is in sight. As for the investors themselves, after injecting massive amounts of capital into AI technologies, they need to tell the market that the applications are mature and that a return on investment is imminent.

This dynamic produces a self-perpetuating techno-evangelical discourse in which it becomes impossible to distinguish analysis from marketing.

This bias influences political, budgetary, and HR decisions: industry closures, educational reforms, and transformation plans based on unreliable data. In governance as in strategy, making decisions based on dubious figures is tantamount to disguising intuition or even credulity as rationality.

A structural asymmetry between discourse and reality

Discourse on AI is accelerating much faster than organizations, skill sets, and regulations. This asymmetry between the speed of promises and the slowness of reality creates a gap between the technological offering and the capacity of systems to absorb it.

This is not new, it is just taking on unprecedented proportions.

Once again, it is important to make a distinction here: we can agree on the trend but admit that the figures are meaningless.

We can sense that a wave is rising without being able to say where each drop of water will fall.

Yes, AI is already transforming the world of work, but the overall effects of this transformation are still largely unknown. It is not fear or unbridled enthusiasm that should guide our decisions, but rigor. Between rigorous forecasts and spectacular predictions, we must learn to keep a cool head.

The figures being bandied about to predict the future of AI are more speculation than science: no equation has ever shown that a given percentage of jobs will disappear, and no model supports projections over 10 or 20 years. These are estimates based on vague, often unverifiable assumptions.

Bottom line

The predictions may be right about the general trend (yes, AI will transform work), but they are completely flawed when it comes to figures, time frames, and concrete impacts. It would therefore be wise to move away from a quasi-prophetic technological vision that serves no purpose other than to support marketing rhetoric, and return to a solid, humble, and evolving economic and social analysis.

A technology may be promising without its effects being predictable. That is the difficulty of thinking about the future in uncertain times.

I am not saying that nothing will change, far from it, but that the figures put forward tell us absolutely nothing.

Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)

Bertrand DUPERRIN
https://www.duperrin.com/english
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler