LinkedIn posts, conferences, consulting firm reports… it feels like we’re in a race to see who can come up with the most catastrophic and radical predictions.
“80% of tasks will be automated by AI”, “50% of jobs will disappear by 2030”, “in 20 years, everyone will be unemployed”, “if you’re still using this tool in 2024, you’re screwed”, “this tool/practice/job is dead”. And, in the end, there is always a solution, an offer, a service. Ultimately, what appeared to be a serious analysis turns out to be a sales pitch hidden under a pseudo-scientific veneer.
Technological discourse, or at least its marketing, increasingly operates in a binary mode: either you transform immediately, or you disappear. The digital transition is becoming a race against time in which every delay, every hesitation, is presented as an unforgivable strategic mistake. And to fuel this dramatic rhetoric, anything goes: reckless predictions, dubious extrapolations, figures produced without any serious methodology…
In short:
- Alarmist technological discourse often relies on sensationalist, methodologically unsound predictions to elicit an emotional response and sell a solution.
- There is frequent confusion between prediction (opinion or intuition) and forecasting (rigorous analysis), which fuels ambiguity around the real impacts of AI on employment.
- Generative AI currently embodies this rhetoric, with exaggerated promises about productivity or automation that fail to take into account the concrete constraints of organizations.
- Numerous past examples (Second Life, Google Glass, metaverse, blockchain) show a systematic gap between technological promises and their actual adoption.
- The main problem is not the technology itself, but the fear-mongering marketing that surrounds it, based on short-termism and simplistic discourse rather than serious, contextualized analysis.
Predictions vs forecasts: the art of ambiguity
The first misunderstanding, often deliberately maintained, stems from the confusion between prediction and forecast. A prediction is an opinion, an intuition, a belief that may be sincere or calculated, but it commits only the person who makes it. A forecast, on the other hand, involves rigorous work: explicit assumptions, a methodology, margins of error, and verifiable sources.
When an AI expert states that “half of all jobs will disappear”, they are expressing a belief, not a conclusion based on serious modeling. When Kai-Fu Lee writes in AI Superpowers (2018) that “within fifteen years, artificial intelligence will be technically capable of replacing about 40 to 50% of jobs in the United States”, this is a personal estimate with no formal methodological basis.
Conversely, the World Economic Forum cites a much more measured figure in its report Future of Jobs 2023: 23% of jobs are expected to be transformed by 2027—which is very different from mass destruction (World Economic Forum – Future of Jobs Report 2023).
And when a Nobel laureate in economics, whose methods we can assume to be rigorous, is asked about AI (Daron Acemoglu: What do we know about the economics of AI?), he tells us that it will lead to a “modest increase” in GDP of between 1.1 and 1.6% over the next ten years, with an annual productivity gain of around 0.05%.
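To put that annual figure in perspective, here is a quick back-of-envelope calculation (my own illustration, not Acemoglu’s model): a productivity gain of roughly 0.05% per year, compounded over ten years, works out to (1 + 0.0005)^10 − 1 ≈ 0.5%. A real gain, but a long way from the apocalyptic headline figures.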
In any case, it must be acknowledged that while these practices have always been commonplace, they have reached new heights with generative AI (AGI, employment, productivity: the great bluff of AI predictions). I am fairly convinced that if, tomorrow, we experience some form of disillusionment with this technology, it will not be because it does not work, but because too much was promised, too early.
The business of fear
This practice is not prospective analysis but a form of fear marketing. The higher the figures, the greater the urgency. The more brutal the change is made to appear, the more inclined decision-makers are to buy solutions. But behind these alarmist predictions there is almost always a commercial interest: selling a solution, training, an audit, or a product. And since attention spans are short, the dramatization is pushed to the maximum to make an impression on the audience.
In fact, I’ll even give you a tip if reading a 100-page study puts you off: look at who sponsored it and you’ll know what it says.
This is the logic of FOMO (fear of missing out), fed every day by LinkedIn, to name just one of its main channels. We read ready-made phrases such as “If you’re still doing X in 2024, stop right now,” “This practice is dead,” or “Here’s why Y is going to kill Z”. In the end, nothing really dies, and when it does, it takes a very long time. Nothing kills anything overnight, but this anxiety-inducing vocabulary attracts clicks. These phrases are not meant to inform; they are meant to trigger an emotional reaction and, ideally, a commercial commitment.
The latest: generative AI
Generative AI is now the latest incarnation of this logic, which it pushes to its limits. It is certainly very powerful, but its potential is presented as unlimited and its impact as inevitable and imminent.
The figures are staggering: 300% productivity gains, 80% of tasks automated, 60% of work impacted. A McKinsey study suggests that 60 to 70% of working time could potentially be automated, but points out that this is potential, not actual, automation (The economic potential of generative AI: The next productivity frontier). And we know that in tech, the gap between potential and reality is often significant (You can see the computer age everywhere but in the productivity statistics (Robert Solow)).
The problem is that these scenarios forget that organizations are not sterile laboratories, clean rooms, or airplanes simulating weightless flight. The deployment of AI does not depend solely on technology (The limits of technology-driven transformation), but also on managerial culture, data quality, team training, workflow organization, and a host of other factors. These are variables that tech evangelists do not control but prefer to ignore in order to maintain the illusion of a homogeneous and fast revolution.
And then, of course, we have to admit that most of us lack the skills to challenge the claims of an AI expert. Conversely, I doubt that such an expert has the economic, or even socio-economic, expertise to assess the impact of technology on job destruction. To each their own: I’m not going to ask a football player to model and quantify the impact of a championship victory on the sales of the club’s sponsors.
Nothing new under the sun
We’ve seen this story unfold before our eyes many times.
Remember Second Life, presented in 2006 as the future of the web? Businesses rushed to get involved, convinced that they had to be there. The result: they fled as quickly as they had arrived.
Too complex, too unstable, too out of step with real-world uses (The World’s First Metaverse: What Happened To Second Life?).
The same goes for Meta’s metaverse, launched with great fanfare in 2021 and largely abandoned by 2023 due to a lack of clear business interest (Metaverse: where do we stand 5 years later?).
Google Glass? Launched at $1,500 each, hailed as a revolution, then quietly abandoned in 2015 after privacy issues and a total lack of convincing use cases (Why was Google Glass discontinued?).
And what about blockchain, which was supposed to transform everything from logistics to HR governance? In 2018, we read that all businesses would switch to blockchain. By 2023, only 2 to 8% of digital projects made structural use of it (Organizations aren’t adopting blockchain? Study reveals why).
Amara was right
Faced with this constant gap between promise and reality, it is worth remembering Amara’s law:
“We always overestimate the impact of technology in the short term and underestimate it in the long term.”
— Roy Amara
A realistic law, but not a very bankable one, because it doesn’t sell. On the contrary, it encourages caution, a long-term strategy, and investment in genuine adoption rather than in the wow effect (Are you familiar with Amara’s law on the short- and long-term impact of technology?). Not very compatible with short-term marketing logic and the imperative to sign clients quickly by promising them the moon.
Bottom line
Technology is often an easy scapegoat for failed transformation projects, but it is not the technology itself that deserves criticism; it is the rhetoric that accompanies it. It is not innovation we should fear, but the manipulative discourse that rides on its coattails. AI is a serious, complex, multidimensional subject that deserves better than superficial figures, simplistic analogies, and promises of a ready-made revolution.
So no, if you don’t buy the miracle tool right away, you’re not going to die. And no, it’s not a big deal if your business takes six months longer than expected to explore a new field of technology. What matters is not being the fastest to run in a given direction, but making sure you’re running for the right reasons and in the right direction.
Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)