AI in the digital workplace: a brilliant assistant, but an unreliable colleague

I don’t know whether AI is the future of the digital workplace, a question we will no doubt have to answer in the near future, but it is certainly part of it. From helping people find information to automating repetitive tasks and generating content, it is presented as a virtual assistant that is available 24/7 and capable of digesting volumes of information that nobody has read in a long time.

But behind the enticing promise may lie a less appealing truth: if AI draws its value from a disorganized information system, it can become a source of confusion and error on a large scale.

In short:

  • AI integrates into the digital workplace, but its reliability depends entirely on the quality of the available data.
  • Yet information systems are often disorganized: files dumped in bulk, obsolete data, application silos.
  • AI amplifies these problems, producing erroneous or out-of-context responses or revealing confidential information indiscriminately.
  • The problem is less technical than cultural: a lack of governance, cross-functionality and rigour in content management.
  • AI is just a mirror of the system: to take advantage of it, we must first put the information in order.

AI doesn’t discriminate and amplifies what it finds

Asking employees to be as concerned about cleaning up their digital workspaces as they are about their children tidying their rooms might be patronizing, but it’s not necessarily a bad idea.

A quick look at hard disks, online storage spaces and other shared drives is like visiting a cabinet of horrors. You find documents with random names, duplicated, never archived or deleted even though they are no longer useful. There are also “different versions of the same truth”: where do you find the reliable, definitive information when you have v1, v1_final, v1_final_bis, a copy of v1_final_bis, and so on? Finally, there is plenty of obsolete content that is still used, shared and circulated, not because it is reliable but because it is easy to find and was bookmarked a year ago.
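Spotting this mess does not even require AI. Here is a minimal sketch of a script that clusters files whose names differ only by a versioning suffix; the path and the suffix patterns are illustrative assumptions, and any real shared drive will have its own naming folklore.

```python
import re
from collections import defaultdict
from pathlib import Path

# Patterns for "versioning by filename": _v1, _final, _final_bis, "copy of",
# numbered copies like "(2)". Illustrative, not exhaustive.
SUFFIXES = re.compile(r"(_v\d+|_final(_bis)?|_old|_copy|\s*\(\d+\))+$", re.IGNORECASE)
COPY_PREFIX = re.compile(r"^(copy of|copie de)\s+", re.IGNORECASE)

def base_name(path: Path) -> str:
    """Reduce a file name to its 'canonical' stem."""
    stem = COPY_PREFIX.sub("", path.stem)
    return SUFFIXES.sub("", stem).strip().lower()

def find_version_clusters(root: str) -> dict[str, list[Path]]:
    clusters: dict[str, list[Path]] = defaultdict(list)
    for f in Path(root).rglob("*"):
        if f.is_file():
            clusters[base_name(f)].append(f)
    # Keep only names that exist in several competing variants.
    return {k: v for k, v in clusters.items() if len(v) > 1}

if __name__ == "__main__":
    for name, files in find_version_clusters("/mnt/shared_drive").items():
        print(f"{name}: {len(files)} competing versions")
```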

This is not limited to files, where little can be done if employees manage them carelessly. Structured data that one might assume is under control should not be forgotten either.

I have repeatedly worked on projects that, at some point, involved retrieving data from an enterprise directory, ironically named Active Directory, although one quickly comes to question just how active it really is.

Poorly named fields, duplicates, directories in which you can still find employees who left two years ago, managers who no longer have a team, inconsistent reporting lines, and other fanciful, erroneous or outdated data.
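Even a rudimentary audit surfaces these anomalies. A minimal sketch, assuming a hypothetical export of the directory (the field names are my invention, not an Active Directory schema):

```python
from datetime import datetime, timedelta

# Hypothetical directory export; field names and records are invented.
directory = [
    {"id": 1, "name": "A. Martin", "manager_id": 3,    "last_login": "2023-01-10", "active": True},
    {"id": 2, "name": "B. Chen",   "manager_id": 9,    "last_login": "2025-05-02", "active": True},
    {"id": 3, "name": "C. Diaz",   "manager_id": None, "last_login": "2025-05-01", "active": True},
]

STALE_AFTER = timedelta(days=365)
TODAY = datetime(2025, 6, 1)

def audit(entries):
    ids = {e["id"] for e in entries}
    findings = []
    for e in entries:
        last_login = datetime.strptime(e["last_login"], "%Y-%m-%d")
        if e["active"] and TODAY - last_login > STALE_AFTER:
            findings.append(f"{e['name']}: marked active but no login for over a year")
        if e["manager_id"] is not None and e["manager_id"] not in ids:
            findings.append(f"{e['name']}: reports to a manager who is no longer in the directory")
    return findings

for finding in audit(directory):
    print(finding)
```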

The CRM is no better: duplicate customers, out-of-date information, opportunities never updated after being abandoned, and fields filled in hastily just to satisfy a process.

The HRIS is not far behind either: erroneous assignments, obsolete titles, organization charts that do not reflect reality…

This data now serves as raw material for the conversational agents and AI assistants embedded in our tools, with results one can easily imagine.

Contrary to what you can read here and there, an AI does not make mistakes as long as it has enough material on a subject to avoid hallucinating. It can, however, be unreliable because it has been trained on unreliable or obsolete data, or because several sources on the same subject carry conflicting information.

Alternative truths are not the preserve of certain leaders across the Atlantic, nor is the fake news that poisons AIs confined to the consumer web. Business is a machine that constantly produces information, and that in itself is not a problem: it makes sense to have intermediate versions and updates, or for a source to one day become obsolete and useless. The problem is that we leave things as they are and do little to improve the situation.

Artificial yes, intelligent sometimes

I cannot stress this enough: automating a bad practice makes it more efficient, not better (The limits of technology-driven transformation). But doing something bad more efficiently is not a good idea.

In the case of AI, this results in assistants capable of summarizing anything, including outdated or unverified content; of cross-referencing data even when it is incoherent or incomplete; and of proposing convincing but out-of-context answers, or outright hallucinations, because the AI did not have access to all the necessary information.

This brings us to another problem: the compartmentalization of data.

Many AIs are in fact confined to a single vertical application. The digital workplace has its AI, email has its own, the CRM has its own, and the same goes for the HRIS and pretty much every business tool.

Each solution provider offers its own AI and does everything it can to protect its turf, i.e. to prevent a competitor’s AI from using its data. It is a way of making yourself indispensable, a practice that is as bad for tools as it is for people (Making yourself indispensable at work is the worst thing you can do).

So each AI works on its own data, with its own logic, without any ability to reconcile information across tools.

From my point of view, one of the most interesting use cases for AI is putting data from different silos into perspective with one another. A concrete example: fewer than 10% of businesses are able to correlate HR data with business metrics (Is People Analytics the Next Job to Be Outsourced by Technology?). But without collaboration between solutions and without pooled data, AI will not help much.
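To make the point concrete, here is a minimal sketch of what that correlation looks like once, and only once, the silos agree on a common key. The file names and columns are hypothetical; in practice the hard part is precisely that this shared “team” key rarely exists across an HRIS and a CRM.

```python
import pandas as pd

# Hypothetical exports from two silos; file names and columns are assumptions.
hr = pd.read_csv("hris_export.csv")    # columns: team, attrition_rate, engagement_score
crm = pd.read_csv("crm_export.csv")    # columns: team, quarterly_revenue, churn_rate

# In real life, the difficulty is that this join key rarely exists:
# each tool names teams differently, or not at all.
merged = hr.merge(crm, on="team", how="inner")

# Once the silos are reconciled, the correlation itself is one line.
print(merged[["attrition_rate", "engagement_score",
              "quarterly_revenue", "churn_rate"]].corr())
```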

Of course, there are counter-examples, such as Workday and Salesforce, which are working on agents (Workday Salesforce Partnership: Teaming Up For Enterprise AI) capable of operating on the data of their respective products, but I do not see this becoming widespread. The particularity of these two vendors is that their products do not compete with each other. I find it hard to envisage the same thing between SAP and Oracle, for example.

So, for lack of anything better, will businesses have to make do with this limitation, or turn to non-native solutions that are complicated and costly to implement?

The result is an AI that is incapable of contextualizing certain information and putting it into perspective. When the information system and its data are organized in silos, AI thinks and works in silos.

AI that knows too much… or not enough

It’s a paradox: AI doesn’t do what it should know how to do, but sometimes learns what it shouldn’t know.

Or, to be exact, it doesn’t know what it has the right to repeat to whom.

AI is fed massive volumes of internal data and potentially has access to content that the average employee would never see, such as confidential documents that have been misfiled, or simply information that is not meant for every audience.

The entire information estate is used to train it, which is, a priori, good practice. But although the AI is in principle meant to know everything, it does not know which information it is allowed to use when answering a particular person.

A caricature of an example, but it illustrates the point. An intern or a young recruit in the HR department might be asked to compile a summary of the layoff programs the company has carried out in the past. Why not? But if the HR department’s shared drive contains a management-only file referring to a plan currently in preparation, and the AI mentions it in its summary, it goes without saying that this could cause problems.

Once again, the problem is not that the AI has read the document, but that it does not know with whom it is allowed to discuss its content. It may therefore include sensitive information in its answers that gets misinterpreted or taken out of context, which raises questions of confidentiality, compliance and responsibility.

If we think of AIs as super assistants, take the example of a Chief of Staff. He sits on certain governance bodies and knows a lot about strategic and confidential subjects, but when a colleague chats with him at the coffee machine, he knows what he is allowed to say to whom. AI does not.
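Technically, the usual answer is to enforce permissions at retrieval time, before anything reaches the model. A minimal sketch of the idea, with invented documents and group names, and a naive keyword match standing in for real relevance ranking:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    content: str
    allowed_groups: set[str] = field(default_factory=set)  # ACL carried with the document

# Invented corpus echoing the HR example above.
CORPUS = [
    Document("2020 layoff program: final report", "...", {"hr", "management"}),
    Document("Draft 2025 layoff plan (management only)", "...", {"management"}),
]

def retrieve(query: str, user_groups: set[str]) -> list[Document]:
    """Filter on permissions BEFORE anything reaches the model."""
    visible = [d for d in CORPUS if d.allowed_groups & user_groups]
    # A naive keyword match stands in for real relevance ranking.
    return [d for d in visible if query.lower() in d.title.lower()]

# The HR intern gets the historical report, not the plan in preparation.
for doc in retrieve("layoff", {"hr"}):
    print(doc.title)
```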

Conversely, it may ignore information that would be useful to a given employee, simply because that content is stored in a tool it cannot access, or in a format it cannot use.

AI therefore sometimes gives a “plausible” answer, but one based on an unbalanced data scope: too broad on some subjects, too narrow on others. This can even lead to hallucinations.

When it is neither reliable, secure nor contextual, AI can become a source of confusion, even a risk, especially if users do not have the means to understand what its answers are based on and to question them.

And yes, unlike consumer AI, enterprise AI is far from smooth sailing when it comes to deployment (Why enterprise AI can’t keep up with consumer AI: beyond ChatGPT, a more complex reality).

A question of governance, not technology

My intention here is not to put AI on trial: it is a fantastic tool with real potential, and it would be foolish to do without it. Yes, there are technical limitations and precautions to take, but the problem is first and foremost cultural and organizational.

The subjects I mention here may seem obvious to you, things to be framed upstream. But the truth is that many people in companies, often very senior decision-makers, arrive with an ambitious use case without ever having asked themselves questions as basic as these. Often rightly so, because “it’s not their problem”. Yes, but it has to be someone’s problem.

Indeed, AI can also come to the rescue of knowledge management (Will AI save Knowledge Management?), but that still requires will and an awareness of the real problems.

For a long time, tools such as shared drives were used without a usage strategy or any control over content quality. We multiplied flows, storage and formats… without ever asking what happened to information once it was produced.

And now we expect AI to make all of this fluid, relevant and efficient.

AI can do many things, but it is not a magician: it merely reflects the state of the system in which it operates. In trying to restore value to information and knowledge, it only highlights the way they have been managed until now.

Today, the digital workplace is an open system (too open?), with an accumulation of unreliable content and poorly governed practices.

Back to the basics of information management

Information and knowledge management is a discipline as old as office computing, but certainly one of the least attractive, often seen as dusty. This is logical: what some regard as an information asset is, for the people who handle it, just another document, and the two do not get the same care. It is simply a difference of perspective.

But now that these tools have become commoditized, storage space is virtually infinite, technology keeps improving and more information is produced every day than the day before, one might think that none of this matters any more. On the contrary: current technologies have allowed us to do anything and everything with information, and if we want AI to help us survive in this context, we will have to help it by disciplining ourselves a little.

This means putting information governance back at the top of the list of priorities: who produces content? For whom? With what life cycle? Against what quality criteria?

It is then important to (re)learn how to qualify content. A validated file is not equivalent to a draft or a “work in progress”. Up-to-date data takes precedence over old, unrevised entries.
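This qualification can be made operational with very little machinery. A minimal sketch of a trust score combining status and freshness; the statuses, weights and decay rate are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Content:
    title: str
    status: str           # "validated", "draft" or "obsolete"
    last_reviewed: date

# Illustrative weighting: validated, recently reviewed content wins.
STATUS_WEIGHT = {"validated": 1.0, "draft": 0.4, "obsolete": 0.0}

def trust_score(c: Content, today: date = date(2025, 6, 1)) -> float:
    age_years = (today - c.last_reviewed).days / 365
    freshness = max(0.0, 1.0 - 0.25 * age_years)  # decays by a quarter per year
    return STATUS_WEIGHT.get(c.status, 0.2) * freshness

docs = [
    Content("Travel policy v1_final_bis", "draft", date(2021, 3, 1)),
    Content("Travel policy 2025", "validated", date(2025, 2, 15)),
]
for d in sorted(docs, key=trust_score, reverse=True):
    print(f"{trust_score(d):.2f}  {d.title}")
```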

Application silos will then have to be tackled. Without cross-silo context, an AI is still just an improved search engine.

Finally, the aim will be to establish a culture of producing information and actually using it. Information that is neither used nor usable is digital waste. Unused data is a cost, not an asset.

Bottom line

Make no mistake: the arrival of AI raises many problems and is potentially a real source of risk. It will not solve the problems of digital disorder.

On the contrary, it will accelerate and amplify them if nothing is done. Strangely enough, that is the good news.

It acts as a mirror, reflecting exactly what it is given, and since its adoption is no longer in question, it will force us to face reality and take the measures needed to fix it. It is, in a way, a revealer of informational chaos and an exhortation to put an end to it.

It is therefore an opportunity to pay down the digital workplace’s informational debt, by asking ourselves a simple question: what is the real quality of the environment in which we ask AI to operate?

Because in a disorganized digital workplace, AI is brilliant but unreliable.

But this is not inevitable. Just an organizational choice.

AI-generated image by Canva.com

Bertrand DUPERRIN (https://www.duperrin.com/english)
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler