Artificial Intelligence is here to stay, but while its presence is widely accepted, its application in recruitment has many pitfalls. With the coronavirus economy and a reinvigorated debate on inclusion and discrimination, it is important that AI helps us fix bias, not amplify it.

While the coronavirus pandemic has paralysed many industries, some, including e-commerce and logistics, have thrived, creating new jobs and throwing a lifeline to those facing unemployment. Hiring businesses need to act quickly, and to sift the inevitably high volume of applications, many will use AI-powered technology to speed up the process.

With its candidate auto-screening capability, the technology can eliminate repetitive, time-consuming tasks such as the manual screening of applications. But it isn’t fool-proof, and with the reinvigorated debate in the US on inclusion and discrimination, companies need to ensure that AI is a friend to recruitment, not a foe, for both recruiter and candidate.


AI-powered recruitment is becoming popular, but its use is not always risk-free

According to the latest Artificial Intelligence in the Workplace research, 83% of white-collar workers believe that AI will help them perform their tasks more efficiently three years from now, while a staggering 96% expect this technology to add value to their jobs.

More specifically, the main merits of AI-powered recruitment are time savings and consistency: given the same input, by definition, AI will arrive at the same conclusion every time. But there are pitfalls, not least the hidden biases that AI algorithms can learn as they filter candidates for the skills and qualities sought by the company. For example, past hiring patterns could mean the AI software favours graduates from certain universities, or of a certain gender. Amazon famously had to scrap an AI recruiting tool that showed bias against women.

Some companies apply AI and analytics to the facial expressions, movements, and behaviours of candidates to identify higher-quality hires, but research shows that some of these analytics tools produce skewed results that discriminate on the basis of ethnicity.

Brian Kropp, chief of HR research at Gartner, says: “Companies must take the utmost care working with these tools. Often, cases of bias are not immediately obvious, and you have to check for secondary ripple effects, for example, it could be putting BAME candidates into jobs with lower career prospects.”

AI can also be used as a background screening tool. Adecco Italy’s 2019 Work Trends Study highlighted the use of social media platforms by candidates to search and apply for jobs, inadvertently revealing a great deal about their true personality. Hiring companies can use AI to collect and analyse this data to assess the quality and fit of that individual for the role on offer, but is this fair on the candidate?

Nikolas Kairinos, founder and CEO of edtech firm Sorros, argues that drawing correlations between social media activity and someone’s suitability for a specific job is still a largely subjective process. “Much work needs to be done to objectively create these mappings and prevent problematic usages,” he says.

As AI technology advances, companies may want to extend its role in the recruitment process, and by default, reduce the level of human intervention. For example, most recruiters cite culture fit as their top priority when hiring, particularly at the executive level, and assessment of culture fit traditionally requires human experience and intuition. Now it can be assessed by AI. Executive search firm ZRG uses an algorithm that measures an executive’s ‘culture fit score’ by analysing data points from their prior experience and behavioral traits, to assess how well they will fit in with their prospective colleagues.

Many see this trend towards replacing human intervention with AI as a risky development, as succinctly captured by Boston Consulting Group’s Sylvain Duranton in his warning about the ‘human zero mindset’: ‘If human judgement is not kept in the loop, AI will bring a terrifying form of new bureaucracy, where bureaucracy will take more and more decisions without human input.’

Fixing a biased algorithm may be easier than fixing a biased human

In theory, because AI can learn the ingrained biases of an organization, unsupervised AI systems will replicate bias even more reliably than humans: they are trained to identify patterns in data and to recreate them efficiently. But the bigger challenge is the absence of any real dataset free of structural or implicit bias, as Khyati Sundaram, CEO of recruitment tech firm Applied, explains.

“Before the 20th century, very few women worked in the formal labor market, so an uninformed AI using only historic data to select a candidate would naturally conclude that women are less able, and would likely consistently give it to a man,” she says. “No sensible person would use 19th century data to help predict talent, but unsupervised AI tools can’t discern why that would be problematic.”
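Sundaram’s point can be illustrated with a toy model. The sketch below is hypothetical: it uses synthetic data and a hand-rolled logistic regression, not any vendor’s actual screening algorithm. If historical hiring decisions favoured men independently of qualifications, a model trained on those decisions learns the protected attribute itself as a positive predictor.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Synthetic "historical" applicants: a genuine qualification signal
# (years of experience) plus a protected attribute (1 = male).
experience = rng.uniform(0, 10, n)
gender = rng.integers(0, 2, n).astype(float)

# Biased historical decisions: experience mattered, but men also
# received an unjustified boost in their odds of being hired.
logits = 0.5 * experience - 3.0 + 1.5 * gender
hired = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

def fit_logistic(X, y, lr=0.05, steps=5000):
    """Plain gradient-descent logistic regression (no external deps)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        grad = X.T @ (p - y) / len(y)
        w -= lr * grad
        b -= lr * float(np.mean(p - y))
    return w, b

X = np.column_stack([experience, gender])
w, b = fit_logistic(X, hired)

# The model learns a positive weight on gender: being male raises its
# predicted hiring probability even at identical experience levels.
print(f"experience weight: {w[0]:.2f}, gender weight: {w[1]:.2f}")
```

Nothing in the training step is told that gender is irrelevant; the bias is simply the most consistent pattern in the historic outcomes, which is exactly why an unsupervised tool cannot discern that it is problematic.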

Given the speed at which AI technology is evolving, algorithmic bias will be easier to ‘fix’ than unconscious human bias: those learned stereotypes that are unintentional, deeply ingrained, and able to influence behaviour. To that end, AI will continue to play a dominant role in HR technology, but in reality, few would envision a future of recruitment where AI is left to function entirely autonomously.

The best recruitment model brings well-programmed AI and well-trained humans together

Companies need to consider the negative impact on the candidate experience, and on their future ability to attract talent, when application submissions prompt multiple instant online tests with no sense of human involvement in the process. Hiring marketplace Vettery uses AI to vet candidates, but ensures candidates are fully supported throughout the recruiting process by assigning account partners to the firm’s talent executives.

A consensus would surely conclude that optimal hiring outcomes for both employer and candidate rely on a recruitment model where well-programmed AI and well-trained humans operate together, the latter having the final say on which candidates to onboard, based on their personal knowledge, past experience, soft skills, and intuition.

Precisely for this reason, Alain Dehaze, CEO of The Adecco Group, prefers to call AI “augmented intelligence”, because its role should be to leverage human skills such as critical thinking and emotional intelligence, not to replace people’s jobs entirely.

COVID-19 has transformed business operations beyond recognition, which will surely accelerate the adoption of AI in many workplaces. Investment in data science skills, as seen in last year’s Microsoft and General Assembly partnership, to ensure data is analysed and used in an unbiased way, is now a priority, while the reskilling and upskilling of employees to ensure the ethical use of the technology must also be on the agenda of companies relying on AI to power decision-making.

“The danger of AI bias is not an existential threat; rather, it is an opportunity to make lasting changes and improve diversity standards,” says Kairinos. “Once this step is taken, AI will enhance recruiting efforts and support the needs of both candidates and recruiters.”

