The idea of setting up an international conference similar to a “COP for AI”, along the lines of the climate COPs, deserves careful consideration in view of the growing economic and societal impacts of artificial intelligence (AI).
I said recently that we need to stop thinking about AI from a philosophical or purely technological angle, and take a pragmatic look at the economic and social issues that go hand in hand with it (Towards a golden age of welfare and precariousness? and The challenges posed by AI are not technological, but must be met today). The idea is that, even if the far-reaching changes it will bring about won’t happen for another 30 years, nothing guarantees that it isn’t already too late to pivot our institutions, and the response to these challenges can only be international (The World Isn’t Ready for the Next Decade of AI).
From there, it’s only a short step to wondering whether, given the earthquake it’s going to cause, AI doesn’t deserve its own “COP” like the one that exists for climate change, a step I’m happy to take given the despairing vacuity of political discourse on the subject.
Why a “COP for AI”?
First of all, because we’re dealing with a technology with a global reach. AI transcends borders. Its uses, although localized, are often interconnected via global infrastructures and influence critical sectors such as health, education, work or defense. Like climate change, AI is a transnational issue requiring global coordination.
Secondly, because AI poses a risk to society and institutions that can be described as systemic.
There are social risks, such as algorithmic discrimination, the exacerbation of inequalities and threats to employment.
Then there are the political risks of misinformation, electoral manipulation and mass surveillance.
Nor can we overlook the ethical risks and abuses associated with autonomous systems (autonomous weapons, critical decisions without human control).
Finally, we must mention the environmental risks associated with the energy footprint of LLMs (AI Expert Says That Each ChatGPT Query Is Equal To Wasting Half A Litre Of Water, Implying That There Needs To Be A Sustainability Plan In Place and Environmental impacts of artificial intelligence).
There are a number of initiatives (national regulations such as the European AI Act, technological alliances), but they lack overall coherence. A COP for AI could align international efforts, define common ethical standards and promote responsible practices.
Major economic and social challenges
I’m just reiterating what I said in my last article, but it’s always a good idea to remind ourselves exactly what we’re talking about.
From an economic point of view, we’ll have to deal with issues linked to the redistribution of wealth. AI generates profits concentrated in a few large companies (Big Tech), which exacerbates economic inequalities. International governance could promote a more equitable distribution of profits to address issues such as employment.
We are facing a structural change in the job market. The destruction of jobs in certain sectors calls for global retraining and skills-adaptation policies, with some even going so far as to advocate a universal basic income to combat the large-scale impoverishment of society and the end of the human being as a production tool.
Next comes the need for responsible innovation. Technology is neither good nor bad by nature; it all depends on what we use it for. Encouraging AI to serve the common good (health, environment, education) (If Done Right, AI Could Make Policing Fairer) while curbing harmful or useless applications is one solution, but it remains utopian without global governance (Global Governance of AI).
But the impact of AI also has a societal dimension, with digital inequalities on the rise. A gap exists between countries with the resources to develop AI and those without. A COP could support equitable access to technologies and training, and avoid the creation of imbalances that could have major geopolitical impacts.
Then there is the question of data sovereignty. Control over personal and national data requires global coordination. This goes far beyond the issues raised by the GDPR, and when we see the questions linked to its application, or even the struggles between states on the subject, AI demands greater coherence at a more global level.
Finally, there are philosophical questions that are more a matter of values. Should there be universal principles governing the place of AI in sensitive areas such as health or justice (Microsoft bans US police departments from using enterprise AI tool for facial recognition)?
Is a COP for AI a utopia?
If it isn’t complicated to understand why the idea of a COP for AI makes sense, it’s just as easy to understand why it doesn’t yet exist: there are so many obstacles in its way that it will probably never see the light of day, and numerous lobbies would work to thwart its actions anyway.
First of all, there are diverging interests between countries. As with climate change, countries have different priorities depending on their level of development. Some will want to maximize the economic opportunities of AI, while others will seek to limit its disruptive effects, protect their jobs and economies, and favor their national players.
AI is also a field where technological and economic power struggles play out, both between states and between the tech giants that their respective governments want to protect and see take global leadership. Here, the dominance of developed countries and private companies in the development of AI could complicate the establishment of fair rules.
Finally, the subject of AI is technically complex to grasp. Unlike climate change, where the objectives are mainly quantitative (reduction of emissions), the impacts of AI are often more diffuse and complex to measure (algorithmic biases or misinformation).
Some possible approaches to global AI governance
I’m well aware that the debate rages on as to whether or not AI should be regulated; it’s the well-known story that “the USA innovates, China copies and Europe regulates”, which is not entirely untrue, even if such attitudes are sometimes caricatured.
We may agree with Eric Schmidt that there’s no point in regulating AI until it’s capable of regulating itself (Eric Schmidt on Henry Kissinger’s surprising warning to the world on AI). But given what’s at stake, we may also think that the adaptation of society and institutions not only won’t happen by chance, but needs to be coordinated at a global level, so that those who genuinely try to tackle the economic, social and ethical dimensions of the issue don’t pay the price for it, particularly economically.
The question is not so much whether there should be governance, but rather how far we should go.
There are certainly objectives on which a consensus could be reached, such as developing a global AI ethics charter, encouraging the transparency of AI systems (explicability, auditability), reducing the environmental impact of AI systems, protecting human rights and preventing authoritarian excesses.
Others, proposed by numerous experts and researchers, would be more controversial or even cause an outcry. I’m thinking in particular of the introduction of accountability mechanisms for businesses and governments, the creation of a global fund for equitable AI financed by a tax on the profits of tech companies, and the establishment of global indicators to monitor the impact of AI on inequality and human rights.
Bottom line
AI, like climate, is profoundly shaping the future of our societies. If the climate COPs have taken decades to build a consensus, a COP for AI could follow a similar path, but it would have to act quickly in the face of accelerating technology. Such an initiative would send out a strong signal: AI is not just a technological tool, but a factor of systemic transformation that requires collective and responsible governance.
Will we succeed? I have my doubts, and despite what’s at stake, it’s likely to remain a pipe dream. But I’m curious to see what will come out of the summit for action on artificial intelligence to be held in France next February.
The themes addressed are heading in the right direction (AI in the service of the public interest, the future of work, innovation and culture, trusted AI, global governance of AI), but I’m curious to see which heads of state and government will be present, and which experts, researchers and NGOs will be represented, as this is bound to color the discussions.
Will there be a follow-up? Will the subject subsequently become part of public debate and policy? I have my doubts.
But what about you? What do you think of all this?
Photo by Stockphotos.com