Employees are bringing the AI tools they use in their personal lives into the workplace, which is good news in terms of their maturity with the technology but also creates security risks. That is the finding of a study conducted by Microsoft and LinkedIn, reported here by HRExecutive.
This is no surprise; in fact, the opposite would be surprising. It was even predictable: what the study and the article tell us, with a hint of sensationalism, is just the umpteenth remake of a story we know well, that of the consumerization of enterprise tools.
What are we talking about? The fact that employees bring into the company the tools they use at home, getting ahead of whatever plans their employer might have in this area. The phenomenon has been common since the mid-2000s, with the emergence of SaaS tools offering financially attractive freemium models and interfaces far more user-friendly than what companies provided at the time. My older readers will remember things like the arrival of wikis and social networks in the enterprise; it was also end users who brought the iPhone into corporate fleets and pushed for BYOD policies.
Enterprise and the consumerization of IT: a changing balance of power
While there’s nothing new about this phenomenon, it’s now being tackled in a different way.
In the early days, it was a tug-of-war between employees and CIOs: the former wanted to use simpler, more efficient tools; the latter didn’t want to hear about it.
What were the CIOs’ motivations? Security risks that were more or less real, the desire to retain control, and even, for a time, a way of protecting their own position. The first two are legitimate; the third no longer applies and makes us smile in retrospect: with the spread of the SaaS model, some feared they would no longer be able to justify large teams and substantial budgets, before realizing, on the contrary, the advantages they could draw from it.
Times have changed since then, and we’re no longer in a power struggle. Users are simply, and logically, moving faster than enterprise software vendors, and there is a lag between the moment a technology (AI in our case) becomes available to the general public and the moment versions compatible with business constraints and needs arrive. It’s no longer a battle between users who want it and CIOs who say no; it’s just a question of speed of execution, and things eventually fall into place.
What used to be a power struggle has become a period of observation and learning for CIOs, one they even view positively, though the risks must not be overlooked.
Consumer AI tools: an opportunity for businesses
This situation is interesting for businesses for several reasons.
Validate trends
Rather than embarking on a lengthy and costly project around an emerging technology without being sure it meets a real need or that its adoption will be effective, businesses can use this phase to validate that demand exists among employees.
Validate use cases
A business may be aware of a technology’s potential yet have only a vague idea of its use cases, and in particular of very specific needs in the field. When employees bring mass-market tools into their work environment, it’s an opportunity to observe not only the general use cases the business might have imagined on its own (though not always), but also much more specific needs at the margins.
Learn
When it comes to technology, governance is essential, and defining the governance of an emerging technology is quite complicated. Out of caution, you can be too restrictive at the outset, which can lead to rejection or to the proliferation of under-the-radar tools, with the attendant security risks. But being too lax also has its risks. Observation is the key to learning and to striking the right balance, as long as this discovery phase does not expose sensitive data.
Acculturate, train and prepare for adoption
Change management and technology adoption are always a concern for businesses. This phase enables employees to familiarize themselves with AI, learn how to use it, and understand its limits. And when the enterprise tools arrive, much of the adoption work will have been carried out spontaneously, without the business even having to worry about it.
And this is precisely the argument that enterprise software vendors are using to get back into the game…
Consumer AI tools: a potential risk?
Using business data in an uncontrolled, unsecured environment is potentially dangerous. I say potentially, because it all depends on the criticality of the data in question. But we’ll see that when it comes to AI, the security risk is even greater.
Consumer AI is as secure… as consumer tools
I’m not going to dwell on this point, which to my mind is obvious. When you use public tools, you’re dealing with public, shared hosting; you don’t know where your data is stored, and you have no idea how the data you submit will be used.
In some cases this doesn’t matter (the marketing department using Canva to create visuals), in others it can be very dangerous.
Consumer AI limits business use cases
For the reasons I’ve just mentioned, consumer AI simply cannot be used to process business data, which limits the scope of experimentation.
You don’t need long explanations to understand that if you put sensitive data (HR, finance…) into a mass-market AI, you have no guarantee that it won’t be exploited and reused, even if you’re promised the opposite.
So it’s urgent that we move on to enterprise solutions to make the most of AI’s potential.
Consumer AI even exposes data that doesn’t exist
Here we’re dealing with something specific to AI: the risk no longer concerns data as such but, and this can be worse, ideas, concerns, and plans that don’t yet exist anywhere in writing…
When you use a consumer AI to learn about a subject, asking it for some kind of summary in order to prepare a highly important presentation that is meant to remain confidential, you may stop short of revealing sensitive information, but you are still revealing concerns, subjects of interest, intentions, and projects.
In addition to capturing your data, an AI also captures what’s on your mind and can deduce your future plans. This is industrial espionage taken to its ultimate form and, in my opinion, perhaps the worst danger when it comes to AI, because it’s not a danger users think of spontaneously.
Bottom line
The use of mass-market AI solutions by employees in the business is no more or less dangerous than the use of any other mass-market tool; you just need to be aware of this when deciding what data to put in and what to keep out.
However, it would be a mistake to restrict their use while waiting for solutions designed for the business and its needs: for one thing, they cannot be totally banned (an employee can always use their phone or do it from home…), and for another, this would mean losing opportunities to prepare the ground for enterprise AI. That said, this transitional phase needs to be carefully managed, and people need to be made aware of the risks involved.
Added to this is a new risk unique to AI: its ability not just to capture raw data but also to understand intentions.
Image: AI robot by kate3155 via Shutterstock