AI and jobs: why I don’t believe in the “great replacement” of humans by machines


AI can be discussed from just about any perspective, but we always end up talking about the elephant in the room: its impact on employment and the predicted widespread replacement of humans by machines.

The story goes that AI will end up performing the tasks currently carried out by humans. We are already confronted with this, but the key question is whether one day all, or almost all, jobs could be taken over by AI.

AI is replacing humans, and that’s a good thing

Let’s start with the good news: yes, AI is replacing humans and making their jobs disappear, and that’s excellent news.

I usually say that under the influence of a Taylorist model that persists despite the changing nature of work, we have designed jobs for robots, but since we didn’t have any robots, we gave them to humans.

Now that we have robots, we might as well free humans from these jobs that they don’t like and in which they don’t thrive.

To do what with them? Give them more meaningful and fulfilling jobs, which of course will allow them to better express the qualities that set them apart from machines. But that’s another subject that we’ll talk about another time.

Replacement should not be a goal

When we talk about replacing humans with technology, the first question to ask is whether we are talking about a goal or a consequence.

In other words, was technology deployed with a business objective in mind, with the consequence that humans are no longer needed, or no longer needed where they were? Or was the aim to part with humans, with technology used as the means to that end?

In the first case, everything is generally fine.

In the second, it’s like self-medicating without really knowing what you’re suffering from, or even treating the wrong disease.

Having spent most of my career in services and tech, I can no longer count the number of times businesses have asked for this or that tool to be deployed with certain ideas in mind, without being aware of their real problem and, sometimes, deliberately wanting to ignore it.

I really enjoyed seeing the look on my interlocutors’ faces when I said “instead of responding to your request, I would prefer to help you with your need”. Depending on the reaction, I knew what I was getting myself into.

All this to tell you that in my opinion at least 75% of businesses are in the second category, that technology will not solve their problems, and that not only will it not allow them to separate from humans but will also create other problems that humans will have to solve.

Computers have never made anyone lose their job in an office

As far as I can remember, I have never seen the spread of computers in businesses make it possible to get rid of people, and especially not white-collar workers, who are precisely the ones we are talking about here.

And yet, over time, all the tools for collaboration, communication and collective intelligence should have brought us there, but this is not the case.

Logically, if we work better and faster together, this should increase individual and collective productivity, so we should need fewer people, right?

Perhaps these technologies have simply increased employees’ bandwidth and, instead of making it possible to part with people, have merely reduced delays. I would like to believe it, and in some cases it must be true.

Perhaps businesses have also sometimes lacked courage, but I don’t believe it.

But there is also the fact that technology requires a period of assimilation for organizations and, above all, that it requires them to adapt, whereas they often want technology so that they don’t have to adapt.

This brings us back to the famous Solow paradox (“You can see the computer age everywhere but in the productivity statistics”), and nothing says that AI will make it obsolete.

The myth of the augmented employee

Proof of this is the promise that AI will “augment” each employee individually to make them more efficient. I want to believe it, even if the figures tell a different story in terms of both adoption and benefits (Generative AI in the workplace: revolution or illusion?).

But we don’t work alone. We are part of a collective, and no matter how work is organized, we are interdependent. The consequence is that the sum of individual productivity gains may not mean any improvement at the collective level if, before deploying AI, we do not question the organization of the work in question (AI in the workplace: going beyond augmentation to actually transform).

AI or not, there will always be a weak link in the system: a person who, through suboptimal use of AI or sometimes through the very nature of their work, will nullify the progress of others. An extreme example: thanks to AI, complex calls for tenders can be answered much faster, but they still require validation by technical experts, then validation of the costing, plus overall commercial sign-off. If these people do not increase their bandwidth to work at the same speed as the others, they will be the limiting factor in any improvement. Worse still, they will be buried under requests arriving faster and in greater numbers, and may end up working even slower than before. AI will turn humans into coordinators and validators, and their cognitive capacity, limited by definition, will cap the gains.

AI will not eliminate bottlenecks (Eliyahu Goldratt’s fictional interview on infobesity and bottlenecks in knowledge work), it will only highlight and multiply them.
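The bottleneck argument above is, at heart, throughput arithmetic. A minimal sketch, with entirely hypothetical figures, of the tender-response example: each stage can handle a certain number of responses per week, and the end-to-end rate is set by the slowest stage, so a 5x speedup in AI-assisted drafting barely moves the needle.

```python
# Hypothetical figures: a tender-response pipeline where each stage can
# process N responses per week. AI speeds up drafting 5x; the human
# validation stages keep their original pace.

def pipeline_throughput(stage_rates):
    """A serial pipeline's end-to-end throughput is bounded by its slowest stage."""
    return min(stage_rates.values())

before = {"drafting": 4, "technical_review": 6, "costing": 5, "commercial_signoff": 5}
after = dict(before, drafting=before["drafting"] * 5)  # AI-assisted drafting: 20/week

print(pipeline_throughput(before))  # 4 responses/week
print(pipeline_throughput(after))   # 5 responses/week: capped by validators, not drafting
```

With these made-up numbers, quintupling drafting capacity lifts overall output by only 25%, which is exactly the point: the constraint simply moves to the validators.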

Humans remain very profitable

For AI to replace humans, several cumulative conditions must be met.

The first is that it does the work of humans with an equivalent level of quality. This is not always the case today, whatever we are led to believe.

The second is that it does it faster. This is the case provided that it is trained on data of sufficient quantity and quality and that it is not applied in a dysfunctional environment in terms of process and data, in which case it will cause the organization to malfunction faster and on a larger scale.

The third is that it must be profitable! If we realize that AI is indeed 10 times faster than humans of equivalent quality but costs 20 times more, it won’t work.
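The profitability condition is simple arithmetic. A sketch, with made-up numbers matching the scenario above: if AI is 10 times faster but 20 times more expensive per unit of time, then per task delivered it actually costs twice as much as the human.

```python
# Hypothetical figures: an AI that is 10x faster than a human
# but 20x more expensive per hour of operation.
human_cost_per_hour, human_tasks_per_hour = 50.0, 1.0
ai_cost_per_hour, ai_tasks_per_hour = 50.0 * 20, 1.0 * 10

human_cost_per_task = human_cost_per_hour / human_tasks_per_hour  # 50.0
ai_cost_per_task = ai_cost_per_hour / ai_tasks_per_hour           # 100.0

# Speed alone doesn't decide: per task delivered, the AI here costs twice as much.
print(ai_cost_per_task / human_cost_per_task)  # 2.0
```

What matters is the cost per unit of acceptable-quality output, not the speed ratio in isolation.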

Today, people are starting to speak openly about this.

A person whose business in a highly regulated sector had to develop its own AI for understandable reasons told me, “given the cost of the request, it can’t be self-service for everyone”.

In general, the ROI of AI, and especially of generative AI, is slow to materialize ([FR]Making GenAI profitable: big companies are still looking for the right model, or Workday CEO: ‘For all the dollars that’s been invested so far, we have yet to realize the full promise of AI’, or Irving Wladawsky-Berger: Is the AI Revolution Already Losing Steam?). Of course we’re only at the beginning and can expect major progress in the next 3 to 5 years, but remember that the same thing was said about the Internet of Things, which today is hardly on any manager’s agenda ([FR]AI: time for clients to scale up?). And what about the Metaverse? For some, generative AI is already living out its last hours ([FR]“Generative AI will soon disappear”).

Today, business customers are struggling to find the ROI of AI, while vendors, in order to promote adoption of the technology, still do not pass on the full costs to them. But there is no guarantee that this situation will last, or that the financiers will not one day call time on the game.

Moreover, things are no better among the vendors themselves. A well-informed source at a market leader recently told me something like this: “We only pass on a fraction of the costs to customers and, despite this, sales of AI products are very low… and even if we increase the prices of the licenses for the historical offer very slightly to include AI, that doesn’t go through either”.

Because while AI represents astronomical sums for these vendors, it is an investment, not income. Microsoft has spent $55 billion on AI for $13 billion in revenue, and Anthropic and OpenAI are losing huge sums (The Generative AI Con). This is normal for so-called emerging technologies, but given the sums involved, the time is not far off when the question of profitability will arise. Granted, we are only talking about generative AI, and there are plenty of forms of AI with a more tangible ROI, but generative AI constitutes the bubble that can take everything else with it when it bursts.

Today, vendors pass on only a fraction of the costs to customers so as not to slow down adoption and to take, or even create, markets. But there is no guarantee that costs will one day fall, or that customers will be willing to pay more. For a technology, even an excellent one, to survive, two things are needed: customers must derive a tangible ROI, and those who build it must recoup their investments and make a substantial margin. If one of the two loses money in the long term, the story does not last.

So yes, businesses will continue to invest so as not to miss the opportunity if it ever arises (ROI Vs. RONI: why businesses should invest in AI despite uncertain ROI)…until the day it becomes untenable.

To put it another way, today, AI often does much better than humans but sometimes less well, incomparably more quickly and at a price that no one can bear if they had to pay it.

You could buy Ferraris to deliver pizzas in town; it would be much faster than couriers on bikes and better for the image, but no one does it, and there are surely reasons for that.

Not everything that can be automated should be automated

My discourse is often very much oriented toward execution and efficiency, but there is more to life than that.

There are tasks, jobs, where quality is not everything or can sometimes even be relegated to the background because what matters is that they are carried out by humans. What is valued is not only the job but the human dimension conveyed by interactions (“I’m Afraid We Are Automating This Work Without Really Understanding It”).

If the argument for replacing humans with AI is, logically, based on criteria of efficiency and value created, it is obvious that in the context of this “connective work” we will destroy value.

This may seem obvious, but we have to admit that past experience shows that in some areas we have not hesitated to destroy value through the misuse of technology (How years of progress have killed customer service) to such an extent that some businesses are now backtracking by emphasizing the rehumanization of customer service.

Take the example of the CEO of Klarna, the Swedish Fintech company that recently declared, after having fired 700 customer service staff and replaced them with robots: “We had an epiphany: in a world of AI, nothing will ever be as valuable as a human being. You can laugh at us for realizing it so late, but we will start working to make Klarna the best at offering humans to talk to.”

But if we have learned from the past and think above all in terms of value, we will have understood that the use of technology must sometimes be kept within reason.

Solutions that always create problems

As unpopular as this opinion may be today, technology does not solve any problems, even if we are under the illusion that it does (Technology Doesn’t Solve Problems).

Technologies often appear to solve problems, but what they actually tend to influence most is the reorganization of human social relationships and power dynamics.

Technologies are physical manifestations of our inherent problem-solving abilities. They can’t solve problems independently; they are extensions of us when we solve problems.

I would add, as I said above, that more often than not, and this is rather ironic for things we call “solutions”, technology creates new problems: either directly, or because by helping us solve some of them, it brings to the fore others that were hidden behind them. And to solve those, we will need humans, even if they use technology to do so.

The myth of the spontaneous generation of qualified employees

To return to the prevailing discourse today, AI will not kill jobs but transform them. I believe this partly for the reasons mentioned above, but we must also bear in mind that it is in no one’s interest to sow fear, even if we were convinced of the opposite.

Tomorrow, humans will supervise AIs and agents, a bit as if everyone became the manager of their own small team of virtual agents and dedicated their time to tasks with very high added value where AIs cannot compete.

I suppose that to reach this level, a combination of things is needed, which I will summarize in two words: skills and experience.

Now I would like someone to explain to me how we end up with experienced employees, experts, managers and decision-makers in a world where we have done away with all the “entry level jobs” that may not have created much value when performed by humans but which allowed those humans to learn before moving on to the next stage of their career (Will AI replace juniors? The false debate that’s only the tip of the iceberg).

Perhaps one day someone will realize that even if AI is faster and more profitable (which is far from being the case today), some of these jobs will have to be reserved for humans, because they are the only training channel that will produce the expert, experienced employees we will need in the future. There are hard skills that can be learned without working, even if putting them into practice is another matter entirely, but when it comes to soft skills, nothing is possible without experience and time. And yet, once AI has taken over most of the work, soft skills and humanity will become the major qualities required of humans (The Digital Renaissance: How Companies Can Become Future-Ready Through New AI and Company Rebuilding and A new operating model for people management: More personal, more tech, more human).

Of course there are self-taught geniuses, but certainly not enough to run the world of tomorrow.

One day we will fear for the future of humanity

AI is not just about technology. As we have seen recently, it is also about geopolitics (AI: who will impose their law on the geopolitical scene?) but above all it is a vision of the world (The challenges posed by AI are not technological, but must be met today and Like the climate, does AI deserve its “COP”?).

At the rate things are going, I think that one day the question will arise in terms of societal choice: which model do we want?

Of course, I already hear people saying that a world where AI takes care of everything that is difficult is a better and deeply desirable world, but that is only a partial argument. What we need to ask ourselves is what a world where the use of AI is pushed to its peak would be like.

So of course there is the work dimension, but there is also the societal dimension. A world where it is no longer necessary to learn, where our cognitive abilities would decline, where AI would make all the decisions, where we would become idle spectators of our own lives, without taking risks, without emotion, without creativity, to the point of becoming decorative elements of a world shaped by AI for AI.

This is not the end of humanity but the end of what makes us human, of our humanity.

There will always be beings with human form, but will they still be human with all that entails in terms of emotions and imperfections?

Do we want that? If not, perhaps for the good of humanity, it will be decided not to take humans out of the world of work because, no matter how their existence is financed afterwards (Towards a golden age of welfare and precariousness?), it would be the beginning of the downgrading not only of each individual but of the human race as a whole (AI in the workplace: avoiding the Wall-E effect).

Merely being alive does not mean we will still experience things, and it would be a shame if AI contradicted this beautiful quote from Nicholas Negroponte: “Computing is not about computers anymore. It’s about living.”

The myth of AGI

When we talk about the global replacement of humans by AI, we are of course thinking of artificial general intelligence, or AGI. Myth or reality?

Today there is no consensus on the subject, even among experts, so it’s up to each of us to make up our own minds.

Sam Altman (OpenAI) and Demis Hassabis (DeepMind) talk about a horizon of 0 to 10 years (Sam Altman lowers the bar for AGI); for Yann LeCun (Meta) it is more like 20 to 50 years; and others tell us it is a pipe dream.

Moreover, Altman recently backtracked on the subject ([FR]ChatGPT: Sam Altman asks for calm on general AI): “twitter hype is out of control again. we are not gonna deploy AGI next month, nor have we built it. we have some very cool stuff for you but pls chill and cut your expectations 100x!”

But behind the talk on the subject there is not only an inventory of technological advances: there is above all a marketing communication that speaks to the market (a little) and to investors (a lot). In other words, the more cash is needed, the more it is announced that the end of the tunnel is near.

But without AGI, the widespread replacement of humans by AI remains a myth.

Bottom line

Yes, AI will replace humans, and sometimes that’s a very good thing. As to whether it will replace all or almost all of them, we can ask the question differently.

Is it possible? Most certainly.

Is it likely? Not in the medium term, and as for the long term, we’re in the dark.

Moreover, didn’t Erik Brynjolfsson say that for every dollar invested in machine learning (The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence), it will be necessary to spend 9 dollars on intangible human capital? This proves that not only is it not tomorrow that AI will kick humans out of business, but also that it will be necessary to invest in developing their uniqueness.

Personally, I think there is too much anxiety-provoking talk on the subject (though ask yourself whose interests it serves), but work and organizations will of course have to be transformed in depth if we don’t want the worst to happen.

But perhaps the worst is desirable for some.

Image: large replacement by AI by Edaccor via Shutterstock.

Bertrand DUPERRIN (https://www.duperrin.com/english)
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler