Big Data and analytics will spread across enterprises in the coming years, and one of their primary targets will be recruitment. Facing this unavoidable change, I see two kinds of reactions. The first, popular among novelty lovers and change advocates, is to praise the upside and argue that, since the change is unavoidable, we will have to learn to live with it. The second, more conservative, is to worry about a double risk: that the automation of recruitment may lead to hiring more and more clones, and that it will cause more and more discrimination.
First, let's put an end to a fantasy: the hiring process won't be handled by a machine from end to end. The machine (though I prefer the term "intelligent machine") will issue a recommendation; a human will then decide whether or not to follow it. In the end, human judgement will matter.
That said, we can't afford to be in denial and refuse to admit that automating recruitment may lead to recruiting clones and to discrimination. But, and this is only my own view, that fear rests on a misinterpretation of the word "automation".
Today, recruiters already act like machines
Automating the "old way" of recruiting may indeed be risky from this perspective. But what did automation mean until now? A selection based on the college, the degree, experience in specific industries or companies, and the mastery of certain languages, plus some more personal criteria to make sure that, once hired, the candidate would blend perfectly into the mould. Once the characteristics of the "right candidate" were defined, they were applied mechanically, starting with machines reading the resumes in the first place, doing nothing but scanning for keywords before pushing the resume (or not) to human recruiters. And even when humans performed the whole process, the rule was to stick to the criteria, so there was no chance of getting anything but clones. Add to that the fact that recruiters themselves did not want to take any risk. In the end, the final customer, the manager, was never satisfied with the candidates proposed at the end of the process.
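As a hypothetical sketch of that "old way" (the keywords and resumes below are invented for illustration), the mechanical filter amounts to little more than:

```python
# A sketch of keyword-based resume screening, the "old way" of automation:
# a resume reaches a human only if it matches every a-priori keyword,
# which by construction produces clone-like shortlists.
REQUIRED_KEYWORDS = {"mba", "marketing", "10 years"}  # assumed criteria

def old_style_screen(resume_text: str) -> bool:
    """Push the resume to a human recruiter only if all keywords appear."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

print(old_style_screen("MBA, 10 years in marketing"))           # True
print(old_style_screen("History degree, ran major campaigns"))  # False
```

Any candidate who doesn't fit the predefined mould, however capable, is filtered out before a human ever sees the resume.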
Automation, as it happens today, doesn't start from a-priori criteria but from a-posteriori matching. It does not start with the assumption that certain profiles will fit the job better; it acknowledges that certain profiles usually perform better in a given position and a given company than others. Maybe the result will be the same in most cases, but today's automation can also lead to promoting atypical profiles.
Let’s consider a fictitious case.
A given recruiter may think, based on his experience, that someone with a degree in History is not the right person for a Communication Director position. Analysing hundreds of thousands of cases, an intelligent agent may come to the conclusion that the few similar profiles who held the same position tended to outperform the average.
Replace this criterion with any other you like and you get the logic.
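A minimal sketch of that a-posteriori logic, with invented data and field names: instead of filtering on an assumed criterion, score each value of a trait by how past hires sharing it actually performed in the position.

```python
from collections import defaultdict

# A-posteriori matching: learn from outcomes rather than assumptions.
# The performance figures are purely illustrative.
past_hires = [
    {"degree": "Business", "performance": 0.70},
    {"degree": "Business", "performance": 0.68},
    {"degree": "History",  "performance": 0.85},
    {"degree": "History",  "performance": 0.80},
]

def average_performance_by_trait(hires, trait):
    """Average observed performance for each value of a profile trait."""
    totals = defaultdict(lambda: [0.0, 0])
    for hire in hires:
        totals[hire[trait]][0] += hire["performance"]
        totals[hire[trait]][1] += 1
    return {value: total / count for value, (total, count) in totals.items()}

scores = average_performance_by_trait(past_hires, "degree")
# Here the atypical "History" profile outscores the expected "Business" one,
# so the recommendation can surface a candidate a-priori rules would reject.
```

The same aggregation works for any trait in the data; the point is that the criterion emerges from measured outcomes, not from the recruiter's prior beliefs.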
Machines can favor diversity… if they're asked to
I even think we can go further. Machines can, at the enterprise or team level, identify how important diversity is (not because they believe in it but because they have measured its impact on performance, turnover, engagement, and innovation) and rank the diversity factor higher in their recommendations: "This candidate may be better, but considering global and sustainable performance, I recommend another one."
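As a sketch of such a diversity-aware recommendation (the weight, names, and scores below are purely illustrative assumptions), the machine could blend individual fit with a measured diversity gain:

```python
# Diversity-weighted re-ranking: a slightly weaker individual fit can win
# if the candidate improves measured team-level outcomes.
DIVERSITY_WEIGHT = 0.4  # assumed, tuned from measured team performance data

candidates = [
    {"name": "A", "fit": 0.92, "diversity_gain": 0.10},
    {"name": "B", "fit": 0.88, "diversity_gain": 0.60},
]

def recommendation_score(candidate):
    """Blend individual fit with the candidate's contribution to diversity."""
    return ((1 - DIVERSITY_WEIGHT) * candidate["fit"]
            + DIVERSITY_WEIGHT * candidate["diversity_gain"])

ranked = sorted(candidates, key=recommendation_score, reverse=True)
# Candidate A has the better individual fit, but B ranks first overall:
# "this candidate may be better, but I recommend another one."
```

The weight itself is the crucial human choice: set it to zero and the machine reverts to pure individual fit.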
Consider all the cognitive biases that impact the recruitment process, and all the candidates rejected for reasons that have nothing to do with their ability to perform in a given position, even when the recruiter is not conscious of what he's doing. Without claiming that machines are perfect (far from it), I believe in a tangible improvement of recruitment performance, whose goal is not to hire people "like this" or "like that" but people who will be successful in their job and in the company.
Finally, machines will never be anything but machines: they do what they're told and deliver the expected outcome. Building the needed safeguards into the software and being specific about the expected outcomes (performance, diversity, engagement, integration) is the job of humans, humans who will ultimately have the final say when the decision is made.
The remaining unknown is whether this final decision will correct the mistakes machines can make or, on the contrary, reintroduce biases into a process that was bias-free until that last step.
But once again, the problem is not and won't be technology. It's the people who design and use it.