Why enterprise AI can’t keep up with consumer AI: beyond ChatGPT, a more complex reality

“But how is it that companies are so far behind in adopting AI internally when we see how fast it’s going on the consumer side?”

We hear this question all day long, and it’s just another episode in the long story of the consumerization of enterprise tools (Consumer AI in the enterprise: the usual story of IS consumerization): consumer tools and uses always end up reaching the enterprise world with a delay. That delay stems from the specificities of the business world, which requires tools and uses to be adapted, even if that means restricting them.

But of course, employees, and sometimes even managers, often don’t understand the slowness of this process. “All they have to do is install ChatGPT and that’s it, everybody knows it works, we use it every day”.

Well, no, it’s much more complicated than “just install ChatGPT”, and since everyone only has eyes for consumer AIs, I’d like to explain here how enterprise AIs differ, even at the risk of oversimplifying.

Enterprise AI vs. consumer AI

Consumer AI as we all know it is essentially aimed at improving the user experience, most often through conversational agents, and addresses tasks such as automation, answer retrieval and content generation. It uses publicly available data and relies on generic models designed to operate on a large scale, without complex customization.

Enterprise AI, on the other hand, aims to optimize processes, automate complex tasks, improve decision-making and obtain answers to complex business questions. It uses data from the company and its internal applications, and is customized to meet the particular needs of each sector, company and business, with more sophisticated models adapted to the organization’s specificities.

This implies fundamental differences, which are also constraints.

Data scope

As mentioned above, one uses public data and the other corporate data.

On the face of it, this seems simple and unproblematic. In fact, the opposite is true: it raises questions of silos, rights and governance.

Application silos: an obstacle to integrated AI

Public data on the web is aptly named: it’s everything that can be accessed online by anyone, without any need to identify themselves. What’s more, it all obeys standards that govern the way content is structured, presented and exchanged on the web.

In the enterprise, it’s the other way around: the internal landscape is made up of applications that host, process and communicate their data according to their own standards.

By definition, nothing is freely accessible within the company except the intranet home page. The rest is customized according to different criteria and levels of rights. As for business applications, they obey the same rules: you must already have the right to use them, and then, depending on the rights given, you can see and do different things.

Here, AI will come up against the same problems as users (Employee experience is sick of the software industry): it will only have the access it is given, to say nothing of the level of rights within each application.

Why is that?

Because each application brick is most often “held” by a vendor that offers its own AI. Of course, a single vendor can supply several bricks, but the fact remains that you will have one AI for CRM, one for ERP, one for office automation, etc., and that each vendor’s interest is to push the use of its own AI, not to open up its data to competitors.

One reason for hope, however, is the example of Workday and Salesforce, which have announced a partnership focused on integrating these two major enterprise applications to stay ahead of the growing demand for integrated AI (Workday Salesforce Partnership: Teaming Up For Enterprise AI).

Can we expect this type of announcement to become more widespread? It’s hard to say: these two software giants are not competitors in the strict sense of the word, and in fact complement each other very well. I can’t see SAP and Oracle doing the same thing, nor can I see Microsoft giving up its office-suite turf.

Rights: AI doesn’t know what you have a right to know

Once the question of access has been resolved, one might think that all we have to do is give the AI the maximum level of access to all the company’s information.

This would be a serious mistake.

Because if the AI has access to everything, it will answer a given question using all the information in its possession, possibly including information the user is not supposed to have access to (Generative AI in the enterprise: a silo breaker or just another layer in an already complex IT landscape?).

An extreme example: HR asks the AI for the projected headcount one year ahead, and the AI is aware of a massive downsizing plan that is not yet public.
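This is why enterprise AI assistants generally filter retrieved content against the rights of the person asking, not against the AI’s own (often much broader) access. Here is a minimal sketch of the idea in Python; the names, the ACL model and the toy “search” are illustrative assumptions, not any particular product’s API:

```python
# Minimal sketch of permission-aware retrieval: before the model sees any
# document, the candidate set is filtered against the *asking user's* rights.
from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    content: str
    allowed_groups: set[str] = field(default_factory=set)  # ACL of the source system

@dataclass
class User:
    name: str
    groups: set[str]

def retrieve_for_user(query: str, corpus: list[Document], user: User) -> list[Document]:
    """Return only matching documents the user is entitled to read."""
    candidates = [d for d in corpus if query.lower() in d.content.lower()]  # toy search
    return [d for d in candidates if d.allowed_groups & user.groups]        # rights check

corpus = [
    Document("Headcount forecast", "projected headcount: stable", {"hr", "all-staff"}),
    Document("Restructuring plan", "projected headcount: -20% (confidential)", {"exec-committee"}),
]

analyst = User("hr analyst", {"hr", "all-staff"})
# The confidential plan never reaches the prompt, so the AI cannot leak it.
print([d.title for d in retrieve_for_user("projected headcount", corpus, analyst)])
# -> ['Headcount forecast']
```

The design choice matters: the filter is applied before generation, because once a document is in the prompt, no instruction reliably prevents the model from using it.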

Governance impacts data quality

Consumer AIs have to deal with fake news, and it’s the mass of available information that helps them separate the wheat from the chaff.

Fake news doesn’t seem to be a problem for companies, but data is sometimes in short supply (we’ll talk about this later) and, strange as it may seem, inaccurate.

Companies can suffer from two governance problems which may seem at odds with each other but which, in practice, often add up.

The first is strong governance in some areas, which slows down the publication and updating of information. New or up-to-date information may exist without the AI being aware of it. Conversely, if the AI knew about it before publication, it could either rely on unvalidated information or leak information before its official release.

The second is the lack of governance of many document or storage spaces, something that personal “drives” and, above all, the multiplication of extranets, intranets and internal sites only make worse.

It’s not uncommon to find different versions of a document, or different explanations of the same subject, as you explore the bowels of an intranet and document databases.
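One practical answer is to enforce governance metadata at indexing time: the AI’s corpus only receives validated documents, and only the latest version of each. A minimal sketch, assuming illustrative fields (doc_id, status, updated_at) that stand in for whatever the document management system actually exposes:

```python
# Minimal sketch: keep only validated content, and only the freshest
# validated version of each document, before anything reaches the AI's index.
from datetime import date

documents = [
    {"doc_id": "travel-policy", "status": "validated", "updated_at": date(2023, 1, 10), "text": "v1"},
    {"doc_id": "travel-policy", "status": "validated", "updated_at": date(2024, 6, 2),  "text": "v2"},
    {"doc_id": "travel-policy", "status": "draft",     "updated_at": date(2024, 9, 1),  "text": "v3 (unvalidated)"},
]

def indexable(docs):
    latest = {}
    for d in docs:
        if d["status"] != "validated":       # drafts never reach the index
            continue
        current = latest.get(d["doc_id"])
        if current is None or d["updated_at"] > current["updated_at"]:
            latest[d["doc_id"]] = d          # keep the most recent validated version
    return list(latest.values())

print([d["text"] for d in indexable(documents)])  # -> ['v2']
```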

More generally, a recent study revealed that companies estimate that over 40% of their data is unusable, unreliable, lacking in quality, out of date, inaccurate, duplicated or inconsistent. It adds that improving the availability of operational data to integrate AI tools is therefore the biggest challenge in implementing AI technologies (Data debt hampers AI investments, sustainable processes drive business value).

Insufficient quantity and diversity of data to train an AI?

Let’s assume that the company’s data is “clean”, which is far from obvious. Between the updating issues I’ve just mentioned and the systems of record, which are sometimes overflowing with poor-quality data (LDAPs are a case in point…), the task is enormous.

But there still needs to be enough data to train an AI. A subject that is minor, confidential or poorly documented, or a company whose size generates little information or whose culture is mainly oral (this does exist), will lack the material to train an AI.

The problems that consumer AI may one day encounter could occur much sooner in the enterprise (Can AI run out of fuel or kill the web?).

This lack of quantity can result in a lack of diversity. And a lack of diversity can exist independently of the quantity of data if practices have been consistent over the long term, which can be a source of bias.

If, for example, you expect AI to help you recruit in a more inclusive way, but your recruitment history favors over-educated white men, your AI will simply set this in stone! Unless, of course, you train it with dummy data showing it what you’d like to see in the future, because it’s certainly not going to learn from your competitors with better practices.
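This is why historical data should be audited before anyone trains on it. A minimal sketch of such a check, using the classic four-fifths (80%) rule on selection rates; the field names and toy data are, of course, illustrative:

```python
# Minimal sketch of a pre-training bias audit on historical hiring decisions:
# compare each group's selection rate to the best-performing group's rate.
from collections import defaultdict

history = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": False}, {"group": "B", "hired": True},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]

totals, hires = defaultdict(int), defaultdict(int)
for record in history:
    totals[record["group"]] += 1
    hires[record["group"]] += record["hired"]

rates = {g: hires[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    verdict = "OK" if rate >= 0.8 * best else "POTENTIAL BIAS"
    print(f"group {group}: selection rate {rate:.0%} -> {verdict}")
# Training on data that fails this check would simply reproduce the skew.
```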

Data security and confidentiality: the risk of consumer AIs

The sensitive issue with consumer AI is the risk to users’ personal data.

With enterprise AI, we can consider that all data is sensitive, which implies strong constraints on the choice of service providers and hosting methods, not to mention, of course, the issues linked to the GDPR.

Paradoxically, this is one of the reasons why companies need to move forward on these issues: if they don’t offer an alternative, they will see their employees sharing sometimes sensitive data with consumer AIs.

In fact, it’s interesting to read ChatGPT’s own response on the subject:

“It is generally risky to provide sensitive corporate data to a public generative AI like ChatGPT, unless you have specific guarantees on confidentiality, security, and compliance. If you’re considering using AI to process critical data, it’s best to choose AI solutions designed for enterprises, offering secure deployment options (such as on a private or local cloud) and better data management.”
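While waiting for such a secure deployment, one common stopgap is a filtering gateway between employees and external AI services, which redacts sensitive spans before a prompt leaves the company. A minimal sketch, assuming naive regex-based detection; real deployments would rely on dedicated PII- and secret-detection tooling:

```python
# Minimal sketch of a redaction gateway: sensitive spans are masked
# before the prompt is forwarded to any external AI service.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}(?: ?[A-Z0-9]{4}){3,7}\b"),
    "PHONE": re.compile(r"\+?\d[\d .-]{8,}\d"),
}

def redact(prompt: str) -> str:
    """Replace detected sensitive spans with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@acme.com about account FR76 3000 6000 0112 3456 7890"))
# -> "Contact [EMAIL REDACTED] about account [IBAN REDACTED]"
```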

No room for error in enterprise AI

As we know, consumer AIs can make mistakes due to limitations in the data used, the algorithms themselves, or their implementation in real-life contexts. Consumer tools warn their users on this point.

In the enterprise, in many cases, the margin for error is close to zero, and four out of ten managers do not trust the data feeding AI to produce accurate results (How senior executives perceive AI).

This brings us back to what has already been written on the subject of governance, but it will also be necessary to win the trust of skeptical or even distrustful users.

An increased need for specialization

Consumer AIs are designed for individual, simple and broad use cases, whereas enterprise AIs are designed for complex cases, on a corpus of specific, specialized data and often collective subjects.

While the operating principle remains the same, achieving the level of customization required is both time-consuming and costly.

No AI without appropriate governance

It would be wrong to say that there is no governance at the level of consumer AIs, but it’s still in its infancy.

Nothing universal exists yet: we are witnessing the beginnings of local regulations that are still poorly coordinated; there are also regional regulations such as the GDPR; and the rest is left to the ethical charters that some players impose on themselves.

All of this applies to enterprise AI as well, with two specific features: the need to take account of sometimes different or even conflicting regulations within the same organization, and the need for the organization to equip itself with its own global governance in order to reassure both its customers and its employees.

A more pressing need for ROI in business

Finally, there is also a different financial paradigm. For the consumer AI user, the question doesn’t arise: he can use free services and switch to a paid plan for a modest fee if he sees value in the advanced features.

At this point, the real question of profitability weighs on the vendors of these solutions. Today, we know that OpenAI’s financial situation is anything but rosy, despite a valuation that bears no relation to its real economic performance.

But if OpenAI or others disappear, it’s not the user’s problem.

It’s a different story in the enterprise, whatever the model.

If it develops its own models, the company wants the certainty of a rapid ROI, since it does not have the means, as OpenAI does, to lose money for years.

If it uses external suppliers, its commitment is smaller, but the pressure is no less, and B2B vendors have understood this (Workday CEO: ‘For all the dollars that’s been invested so far, we have yet to realize the full promise of AI’).

Some are even calling for greater vigilance:

“The problem is that the current level of investment — in startups and by big companies — seems to be predicated on the idea that AI is going to get so much better, so fast, and be adopted so quickly that its impact on our lives and the economy is hard to comprehend. Mounting evidence suggests that won’t be the case.” (Is the AI Revolution Already Losing Steam?).

This naturally slows down the deployment of AI in companies: they only commit to use cases where an ROI is likely (Generative AI, which use cases are profitable in the end?). The individual user of consumer AI is a user; the company that makes AI available to its employees, whatever its technological choices, is an investor.

The inevitability of corporate change management

The individual user who discovers AI learns to master it and ends up evolving his practices as he progresses along the learning curve.

But we can’t say that he’s leading a time-consuming and costly change management process on himself. 

Not so in the enterprise: you have to fight against fears that are more or less justified, train people, encourage adoption… (An AI platform tailored for the enterprise), which often means long and costly change programs.

Conclusion

AI will make its way into the corporate world, but it will take longer than it did with the general public, and, at least at first, only for very specific use cases.

This is not due to a reluctance that would be understandable in view of the stakes involved; that reluctance is not even there. It is above all due to a context and constraints that make the process more complicated and therefore slower, despite a clear will to move forward.

But it would be a mistake to believe that AI will spread as quickly and successfully in the enterprise as it has among the general public: it’s not “just” about deploying ChatGPT.

Bertrand DUPERRIN
https://www.duperrin.com/english
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler