There isn’t a business leader who isn’t aware that AI is going to change a lot of things, but there are many for whom the subject remains remote and who wonder what is going to change and how much.
This isn’t the first time that, faced with a so-called emerging technology (in fact, it’s been emerging since the 70s), we’ve said to ourselves “something’s going to happen, but I’ve got the time”… until the day we run out of time.
Although they don’t always have a clear idea of what’s at stake for their business when it comes to AI (Why enterprise AI can’t keep up with consumer AI: beyond ChatGPT, a more complex reality), they are already subject to the European regulation in force since mid-2024, without always being aware of what it means for them. What’s more, they don’t always realize that their business is already using AI, unknowingly, through the solutions they receive and use on a daily basis.
And I have no doubt that we’ll find ourselves in a situation similar to the one we experienced with the GDPR, i.e. “it’s not for me”, then “I’ve got the time” and finally “fines are raining down everywhere, I need to check with my legal team”.
So today I’ve decided to delve into the EU AI Act and extract the gist of it for people like me, who aren’t experts in the technological dimension of the thing, but who as leaders need to know what it’s all about and where they’re going…
We’ll also ask whether Europe is shooting itself in the foot by regulating so much.
Experts, you can skip this one, it’s just a popularization article that won’t tell you anything you don’t already know.
What is the AI Act?
The AI Act is a legislative framework designed to regulate the development and use of AI in Europe, to protect citizens and businesses from the possible abuse or misuse of AI technologies, and to promote responsible and ethical innovation.
The AI Act came into force on August 1, 2024, with phased implementation between February 2025 and August 2027, giving businesses a transition period to comply with the new requirements; given what happened with the GDPR, this is no luxury.
It follows on from the regulatory approach of the European Union, which already played something of a pioneering role with the GDPR (General Data Protection Regulation). This time, the aim is to establish common standards to:
- Reduce risks linked to algorithmic bias, data security or surveillance.
- Ensure transparency in the use of AI systems.
- Maintain European competitiveness while respecting fundamental EU values.
A novel classification system
The hallmark of the AI Act is its innovative classification of AI systems by level of risk. This hierarchy is essential to understanding how the regulations will apply to a given business.
1°) Minimal risk
These are systems deemed safe for users or society.
This includes, for example, anti-spam filters and algorithms recommending music playlists.
At this stage, there are no specific obligations. The business can continue to use these tools without any adjustments.
2°) Limited risk
These are technologies which, by their very nature, require greater transparency.
Here we find chatbots, image or text generators (like ChatGPT).
Here, the only obligation on businesses is to inform users that they are interacting with an AI, not a human.
3°) High risk
These are systems that can have a direct impact on the rights or safety of individuals.
We’re talking here about AI used to recruit employees, make medical diagnoses or decide whether to grant bank loans.
Here, the business must demonstrate that its models are reliable, accurate and free from discriminatory bias. But that’s not all: regular audits will need to be put in place to ensure compliance, and detailed technical documentation provided for inspection purposes.
4°) Unacceptable risk
These are applications that are strictly forbidden by law.
Examples include mass surveillance, cognitive manipulation and systems that exploit specific individual vulnerabilities (AI could map and manipulate our desires, say Cambridge researchers).
In theory, any use of these technologies should result in sanctions.
Want to bet?
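Purely as an illustrative mental model, and in no way a legal determination, the four tiers and their headline obligations could be sketched as a simple lookup. The example systems and the wording of the obligations below simply restate the summaries above; real classification depends on the concrete use case and the regulator’s guidance.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # no specific obligations
    LIMITED = "limited"            # transparency toward users
    HIGH = "high"                  # audits, documentation, bias controls
    UNACCEPTABLE = "unacceptable"  # prohibited outright

# Example systems mapped to tiers, mirroring the examples in this article.
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "music recommendation": RiskTier.MINIMAL,
    "chatbot": RiskTier.LIMITED,
    "image generator": RiskTier.LIMITED,
    "recruitment screening": RiskTier.HIGH,
    "medical diagnosis": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "mass surveillance": RiskTier.UNACCEPTABLE,
    "cognitive manipulation": RiskTier.UNACCEPTABLE,
}

def obligations(tier: RiskTier) -> str:
    """Return the headline obligation for each tier, per the summary above."""
    return {
        RiskTier.MINIMAL: "none",
        RiskTier.LIMITED: "inform users they are interacting with an AI",
        RiskTier.HIGH: "prove reliability, run audits, keep technical documentation",
        RiskTier.UNACCEPTABLE: "prohibited",
    }[tier]

print(obligations(EXAMPLES["chatbot"]))
# prints: inform users they are interacting with an AI
```

The point of the sketch is simply that the obligation follows from the tier, not from the technology itself: the same model can fall into different tiers depending on what it is used for.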
Obligations imposed by the AI Act
Depending on the sector and uses of AI, the AI Act introduces obligations that will have a direct impact on operations. And you’ll see that absolutely everyone is affected to one degree or another.
1°) Reinforced audits and compliance
For systems classified as “high-risk”, businesses will have to prove that their AI complies with the criteria defined by the EU, document algorithm design and development processes, and put in place internal control mechanisms to quickly identify and correct any failures.
2°) Transparency for users
Chatbots will have to point out that they are not human, and algorithmic recommendations will have to be explained to users in an understandable way.
I’m very curious to see the implementation of the second part, but it depends on the level of detail and precision expected…
3°) Sensitive data management
If an AI uses personal data, the company will have to ensure that it is collected, stored and processed in compliance with the GDPR, in addition to the new AI-specific rules.
Each AI system will have to clearly indicate how it works and its limits. Here again, I’m curious to see how this will work, given that one well-known limit of AI is that very often it can’t say it doesn’t know (Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality).
AI Act risks and opportunities for managers
At this stage, you may be thinking that all this represents constraints, a considerable amount of work and potential problems. That’s true, but those who anticipate the subject can gain an advantage over those who are more wait-and-see or negligent.
Let’s say we find ourselves in much the same situation as with the GDPR, but on a different scale.
First and foremost, being compliant quickly reduces legal and financial risks.
Compliance with the AI Act in fact makes it possible to avoid fines in the event of non-compliance (up to 7% of worldwide annual turnover, or €35 million, for the most serious infringements). If we recall the GDPR, it took a long time for the fines to arrive, but arrive they did, even if it is doubtful whether the penalties imposed are truly dissuasive.
Compliance can improve a business’s brand image and reputation.
The days when we could say of personal data that “it doesn’t matter, users don’t understand it” are over; there is now real awareness among the general public, if not fear.
Put another way, there can be no data economy without trust in the players.
Complying with the AI Act is a guarantee of transparency and ethics, two criteria increasingly valued by customers, investors and business partners. This can position a business as a responsible player.
The AI Act is also a differentiating factor.
Compliant businesses will be able to distinguish themselves on the market by guaranteeing ethical and reliable AI systems. This could become a major competitive advantage in winning contracts with governments, large businesses or organizations sensitive to ethics and social responsibility.
We also need to think ahead and prepare for a world where AI will be regulated everywhere.
By aligning themselves with the European standard, businesses will be prepared to operate in an environment where regulations will become increasingly present. Other regions, such as the USA or Asia, could adopt similar frameworks in the years to come, and when you consider that Europe is generally stricter on these subjects, “who can do more can do less”.
The AI Act can also be a good pretext for switching to responsible innovation.
It encourages businesses to review their practices and invest in compliant, innovative solutions. This initial constraint may encourage a more rigorous approach and thus improve the quality of AI.
Finally, there is the question of attractiveness to talent and investors.
Businesses that respect ethical and transparent standards will be more attractive to certain talents in search of meaning, as well as to investors focused on sustainability and social responsibility.
As always, it’s a mix of constraints and opportunities, which I’ll sum up in two words: trust and innovation. I don’t see how we can survive in an AI-dominated economy without them.
Are European businesses still penalized by over-regulation?
It’s impossible not to mention once again the famous adage that “the USA innovates, China copies and Europe regulates”.
One of the major criticisms that can be levelled at the AI Act is that it could put European businesses at a disadvantage against those in countries where AI regulations are less strict or non-existent.
This is neither the first nor the last time this has happened.
Indeed, it cannot be denied that the burden of compliance is a competitive disadvantage, at least in the short term.
The price of compliance and maintaining it will indeed be high, especially for SMEs who will have to invest in audits, certifications and documentation of their algorithms, a burden from which their competitors operating in less stringent jurisdictions will be freed. At least for a while.
On the other hand, they’ll have an advantage when the others have to do it sooner or later, because they’ll have a head start.
The criticism I hear most, particularly from tech entrepreneurs, is that the AI Act will be a brake on innovation.
Indeed, the regulatory framework will undoubtedly slow down the experimentation and marketing of new technologies, which may limit competitiveness in markets where speed is crucial, such as e-commerce or digital services.
Finally, I’ve heard talk of a flight of talent and capital.
The former could indeed be attracted by countries offering a more permissive and less bureaucratic environment (although I have little faith in this), while the latter might prefer regions where the costs associated with regulatory compliance are lower (but more and more companies value social responsibility and won’t be able to do without the European market).
But all is never black and white, and opportunity often arises from constraint.
The first opportunity, let’s not fool ourselves, is to protect against foreign competition under the pretext of ethics, when the truth is that we’re not up to scratch! But if this is an opportunity for AI players, it’s not certain that the same can be said for client companies, who may find themselves deprived of functionalities that their foreign competitors enjoy.
The AI Act can also be seen as a vote of confidence by customers and partners, guaranteeing their clients that their solutions are ethical, secure and respectful of rights.
As we’ve already said, in AI as elsewhere, businesses that anticipate regulations protect themselves against future AI-related scandals, which could damage their reputation, lead to sanctions or exclude them from certain markets.
There’s also the question of taking into account the concerns of consumers, who are increasingly attentive to issues of transparency, ethics and privacy.
We can also talk about strengthening our long-term competitiveness by already being compliant when other regions adopt similar regulations.
Finally, and let’s reiterate, this is an attractive factor for ethical investors, who increasingly favor businesses aligned with ESG (Environment, Social, Governance) standards.
Is the AI Act a price to pay today to be competitive tomorrow? Perhaps, provided that others don’t get ahead of us, acquire experience and develop innovations that we’ll never be able to catch up with.
Bottom line
The AI Act imposes major constraints on European businesses, but it can also be a strategic bet on the future.
While businesses in less regulated countries may appear to have a short-term advantage thanks to lower costs and faster implementation, European businesses can build a sustainable competitive advantage, provided they don’t fall irreparably behind.
Unfortunately, we didn’t need the AI Act to see that, with a few rare exceptions, Europeans carry little weight in the global game.
On the other hand, we can’t deny that regulations on the use of personal data and a form of ethics are welcome and correspond to user expectations, at least in Europe.
In the meantime, to show that it is up to the challenge, Europe is also expected to act on other fronts (The challenges posed by AI are not technological, but must be met today).
Image: European AI Act by Ivan Marc via Shutterstock