The challenges posed by AI are not technological, but they must be met today.


Artificial intelligence is making the headlines and is bound to have a huge impact on our jobs, our businesses and the way we work and live in the future.

That said, I’m not very satisfied with the content of the debates on the subject, especially as seen from France. Yes, AI represents enormous economic opportunities, but also a risk of subjugation if national suppliers don’t play in the big league. Yes, AI is a major technological challenge, but it is neither the technology nor its adoption nor its effects that will have the greatest impact on our future. And yes, AI is a philosophical subject, but given what’s at stake, I don’t think we can afford to waste time on the palaver and conceptual tea-room discussions our politicians are so fond of.

In a recent article (Will AI replace juniors? The false debate that’s only the tip of the iceberg) I started from the very specific problem of the acquisition of skills by young employees replaced by AI for basic tasks, and concluded that it was only the tip of a much larger iceberg.

We are in the early days of agentic AI capable of demonstrating autonomy in very specific domains, the last step before artificial general intelligence (AGI), which will be able to replace just about any human for any task, including decision-making.

The replacement of the human as a productive tool is not for tomorrow, but it should be taken literally: it’s for the day after tomorrow, or even the day after that. That shouldn’t reassure us that there is enough time. The Titanic’s trajectory towards the iceberg is known; the only things left to determine are the angle and speed of the impact, which depend solely on the rudder applied today, because it was not applied yesterday.

For Sam Altman, CEO of OpenAI, AGI could arrive as early as 2025 (Altman Predicts AGI by 2025), while DeepMind co-founder Shane Legg talks of 2028 (Google AI Chief Says There’s a 50% Chance We’ll Hit AGI in Just 5 Years). On the conservative side, a 2022 survey of 738 AI experts estimated a 50% chance of AGI by 2059 (When will singularity happen? 1700 expert opinions of AGI), and for MIT roboticist Rodney Brooks, we’re safe until 2300 (Elon Musk predicts AI will be ‘smarter than smartest human’ by end of 2026 and reveals huge upgrade for his Grok chatbot).

Reassuring? Given what’s at stake, I doubt we’ll be ready to face the changes brought about by AGI in 2059. 2300? More likely, but only if we make the right decisions quickly.

The problem is that we’re in the fog and can’t see the iceberg. At least, those on the bridge can’t.

How AGI will transform our economies

Perhaps the easiest impact of AGI to perceive is the one it will have on the economy, and it can be approached from several angles.

Transformation of the labor market

AGI could automate not only repetitive tasks, but also highly-skilled jobs, pushing some sections of the population into very long-term structural unemployment.

The corollary is a massive need for reskilling towards professions involving new skills (such as AI supervision and ethics), which presupposes massive investment by governments and/or companies.

Of course, new professions and sectors will emerge, but given the size of AGI’s footprint, most will be dedicated to it, such as AGI maintenance and ethics. Although… according to Eric Schmidt: “A simple rule is that you won’t succeed if AI doesn’t control AI. This is not without consequences, because the ability to verify something means you’re at least as smart as the thing you’re verifying.” (Eric Schmidt on Henry Kissinger’s surprising warning to the world on AI). It’s true that letting a problem regulate itself has worked so well in the past! From him, we should expect nothing but the antithesis of Kissinger, who saw what was at stake but has sadly left us.

In the end, and this is easy to understand, the countries and companies that dominate the AGI sector will concentrate most of the wealth, and economic inequalities will increase.

AGI: a productivity booster

The good news, at least, is that AGI should lead to a significant drop in production costs, resulting in a generalized fall in prices in virtually all sectors and a mechanical increase in purchasing power.

But if demand fails to materialize (and it is to be feared that AGI’s growth will be more exponential than that of purchasing power), we run the risk of ending up in a crisis of overcapacity, again with unemployment as a consequence.

AGI will drive new business models

AGI will push us even further into the data economy, with all that this implies in terms of ethics and personal data protection, a subject on which war is already being waged between tech companies and institutions, and even between governments.

But the impact of AGI on the job market is also likely to revive the question of a universal basic income. This is good news for those who have long advocated the idea, but the reasons that may take us there also give grounds to fear a much darker reality.

AGI, a factor of social upheaval

We talk about it a lot less, either because we don’t see it or because we refuse to see it, but the social consequences of AGI will be dramatic.

Change in the balance of power

Those who possess AGI will be able to exert unprecedented influence on society, with the ability to direct thought and action. There are already many examples of this today, and we can only wonder whether what lies ahead is the worst or the best!

On the other hand, the most optimistic may say that the economic gains generated, if well redistributed, will be so great that they could contribute to reducing inequalities. After all, this was already the case to a certain extent with the industrial revolution and the development of the tertiary sector, but that’s now ancient history, and since then we’ve seen the opposite.

What place for the human in society?

In my previous article, I spoke of the end of the human being as a production tool, but are we ready for that? Aren’t we heading for an identity crisis, with individuals losing the essence of what defines them socially?

Here again, everything depends on the choices we make. Some are optimistic:

“The JOB will unwind slowly.

There will still need to be work contracts.

Employment patterns will change, enabled by technology. 

But Work is not dead. 

In fact, our most important work is ahead of us – to build better, more successful, more harmonious societies.” (Building a Better World Without Jobs).

I love blissful optimism, but only in the movies: in the real world it doesn’t work.

New models will emerge, and some are already on our doorstep, emphasizing non-automatable human tasks (creativity, empathy, ethical supervision). But are we aware of what this means if it is a global, large-scale change, and, what’s more, a change endured rather than chosen?

We can even envisage the worst-case scenario in which, as some predict, AGI replaces social roles such as teachers, therapists or spiritual guides. Let’s face it: in some cases, it already does.

An ethical and societal crisis?

What kind of crisis could the world be plunged into if AGI were to get out of control, a hypothesis that we can’t rule out on principle?

AGI learns only from humans, and will therefore have a proven tendency to amplify and accelerate our biases if we don’t train it to avoid them. We know this, but will we do it? Will we put the necessary safeguards in place, just in case?

Now imagine a world where a madman lets AGI take the lead on geopolitical and military issues. What would the consequences be?

Finally, there is the risk of cultural impoverishment if creativity were to be dominated by AGI. In a way, this question is already being asked today (Can AI run out of fuel or kill the web?), with AIs increasingly being fed their own self-generated data.

What scenarios for AGI?

You won’t learn anything new here: as with every disruption, there are three possible scenarios.

The first is utopia: AGI will be used to improve the world, address inequality, fight poverty, climate change and improve quality of life for all.

The second is dystopia. Mismanagement of AGI could lead to social collapse, increased inequality and loss of human control.

The third is the controlled status quo. Progressive, regulated adoption would mitigate negative impacts while maximizing benefits.

The first is by definition utopian, the second undesirable, and the third requires global regulation that is highly hypothetical in the current state of affairs.

Today’s unsatisfactory answers

We all agree that the best and the worst can happen, and that, as is often the case, our future will be what we want it to be.

That’s if we understand what’s going to happen, make the right decisions and know how to explain and implement them, because most of the time they won’t please anyone.

Most people who think about the subject converge on three solutions:

– tax companies

– redistribute via training and universal social income

– accept precariousness

No matter which expert you read or listen to, nothing else emerges, and I’m not satisfied with the pseudo-golden age we’re being promised (Towards a golden age of welfare and precariousness?).

Since the solution can only be a global one, can you imagine a world where companies agree to be taxed and voters agree to live in precariousness and welfare?

Of course not. It’s a model that nobody wants a priori, but nothing says there is another.

I don’t know if another path is possible, but it seems to me that this one would be difficult for companies and society as a whole to accept, unless, in the wake of a crisis of unprecedented proportions, we agree to rethink everything. And even then… AGI will have generated its leaders and its winners, and I think they will have every power to prevent the rules of the game from changing.

Bottom line

The shock of AGI will not only affect companies, which alone will not be able to cope with it, and even less employees, who will be the adjustment variable: it is our institutions and our social, and therefore fiscal, models that are at risk (Aligning Our New Technologies Will Require a New Institutionalism).

How long will it take to reinvent and adapt them, ideally with continental or even global coherence?

30 years?

Where is the public debate on the subject? I’m not talking about experts, zealots, gurus or researchers, but about politicians, who today see AI as a technico-philosophical subject, have perhaps never heard of AGI, and who should already be making the decisions that will enable us to face the world to come.

The debate is nowhere to be found. Worse, it reminds me of the one on climate change: we see nothing, then we pretend not to see anything, then we can’t agree to do anything about a phenomenon whose existence we can only acknowledge. Except that it’s going to happen faster, affect people even more concretely, and there will be even more divergent interests on the subject.

What politician has talked about it other than the late Kissinger?

But from there to making it an economic, institutional, fiscal and social issue? We have time. A country and its institutions can be turned around in six months, and the population immediately understands and embraces the change, as we all know.

Bertrand DUPERRIN
https://www.duperrin.com/english
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler
Do you speak French? The French version is just one click away.