You don’t need to trust AI, you need to trust the people who train it.


Trust in artificial intelligence is a major issue for businesses. While many companies are investing heavily in these technologies to improve their operations and decision-making, they often run into employees' reluctance to adopt the recommendations of AI systems, especially since their managers don't seem much more convinced than they are ([FR] How senior executives perceive AI).

But why don’t employees trust AI?

A lack of understanding of AI

We often hear that this mistrust stems from the fact that users don't understand how AI works. So-called "black box" models, which produce results without clearly explaining their internal processes, tend to be rejected.

The hypothesis often put forward is that, without accessible explanations, users are less inclined to trust the recommendations of such models.

Conversely, transparent models, whose methods and results can be understood by non-experts, should be more readily accepted.

However, it comes as no surprise that I’ve found an experiment that says exactly the opposite.

It all depends on who built the black box

A study conducted by researchers from Georgetown University, Harvard and MIT has revealed that users are more likely to follow the recommendations of an opaque model than those of a transparent one (People May Be More Trusting of AI When They Can’t See How It Works).

The study, conducted with a U.S. fashion retailer, focused on in-store inventory allocation decisions. Two models were compared: a simple, explainable model based on fixed rules, and a more sophisticated but opaque model. Employees were given the opportunity to adjust or reject the models’ recommendations.

The results ran counter to what might legitimately have been expected: decisions based on the opaque model were followed far more often than those based on the transparent model, with significantly better performance.

The researchers put forward several explanations for this result.

Firstly, the trust established by peer involvement. Employees knew that the opaque model had been designed with the participation of their colleagues. This peer involvement reinforced the perceived legitimacy of the system. In a way, the problem of trust was shifted from the technology to those who built it and, very importantly in this case, those who trained it.

Then there's the illusion of competence created by the transparent model. With the explainable model, employees felt able to criticize the recommendations and sometimes made unjustified adjustments. This phenomenon, known as "overconfident troubleshooting", rested on unverified and even erroneous assumptions.

Finally, uncertainty management. When the stakes or the uncertainty were high, as in high-volume stores, employees were more inclined to trust the opaque model, which they presumed to perform better.

The importance of the professional context

But to be fully objective, we must acknowledge that this is a professional context with specific expectations and stakes, which may not hold in everyday life, among less educated populations, or when the use of AI carries no real stakes.

Researchers have found that those with less knowledge about AI are actually more open to its use (Knowing less about AI makes people more open to having it in their lives). In countries where AI education is lower, people tend to trust it more than in countries where people are more aware of its limitations.

A survey of American students also showed that the less they understand AI, the more likely they are to use it for homework.

On the other hand, for analytical tasks, the opposite is true: people who are familiar with AI are more inclined to use it, as they value its ability to help them.

Bottom line

AI is not just about technology, but also about human perception.

These results overturn the preconceived notion that transparency is necessary to properly trust AI or, for that matter, any technology. In reality, trust seems to depend less on technical understanding than on the human and organizational context in which AI is trained and implemented.

This shows that transparency is no guarantee of trust: a clear explanation can sometimes encourage users to challenge recommendations, even when they are wrong to do so. Perceived legitimacy, on the other hand, plays a key role: when users know that an AI system was built and tested by experts or their peers, they are more inclined to accept it.

Rather than focusing solely on making AI systems simpler or more powerful, businesses should therefore invest in adoption strategies that include end-users in the design process and build trust.

Image: trust in AI from wenich_mit via Shutterstock

Bertrand DUPERRIN
https://www.duperrin.com/english
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler