A Thinking Business Publication

As the buzz around artificial intelligence gathers pace, promising to transform countless aspects of our personal and professional lives, the recruitment industry has found itself at the sharp end of adopting the new technology.

Advances in the AI tools available to help companies source, recruit and retain staff have been so rapid that many businesses have found it is their HR and People teams that are at the cutting edge of decision-making on how such tools should be deployed.

Time-saving algorithms capable of communicating with candidates on a company’s behalf, shortlisting applicants and setting up their interviews have been around for some time, bringing with them a raft of discrimination and bias risks that companies are becoming increasingly aware of.

Now the next generation of tools goes further, with the ability to write job specifications by identifying what is missing from an existing team, for example, or to assess not just the technical fit of a candidate but also their personality fit, to tell you whether they’ll work well in your business. This presents a whole new range of challenges for employers looking to seize opportunities without tripping up.

Nicola Ihnatowicz, a partner in the employment department at Trowers & Hamlins, says: “There were some horror stories in the early days of AI recruitment, when employers’ models didn’t shortlist any women, for example, because they were relying on data on who had made it to the top of the tech industry in the past decade.”

Companies are now aware of those challenges and much more mindful of ensuring algorithms combat bias and are properly overseen, she says. But from a business point of view, several risks remain.

“The first one is that the technology is moving so fast that it’s hard to keep up and understand what the tools are doing and what their impact might be,” she says. “That is not a reason not to get involved, but it is really important that a business understands what a tool is doing and what the company is trying to achieve with it.”

On discrimination risk, she adds: “That is something businesses need to be alive to, and I think they are. But that doesn’t mean we can assume all tools are okay. You have got to actively engage on this, talk to your AI provider, understand the technology and be inquisitive about the results, asking about the underlying data. If you are implementing something, you need to keep reviewing outcomes, so that you are aware if you are accidentally ending up with results skewed by age, gender or ethnicity, for example, and then you can make changes to stop that happening.”

When it comes to the latest generation of tools capable of assessing the cultural fit of a candidate, and even who they might best work alongside within the business, Ihnatowicz says it is important to be aware of unintended consequences.

“You might want to use terms like ‘a structured working environment’ to differentiate a professional services firm from a start-up tech business, which is fine, but you want to make sure that the AI doesn’t screen out people who might need flexibility for reasons such as childcare responsibilities as a result, or people with long commutes,” she says.

Danielle Ingham, also a partner in the employment and pensions team at Trowers, adds: “One potential problem is, what does ‘cultural fit’ actually mean? Looks like everyone else? Has a similar background and interests? You really have to make sure you are using the technology as a solution for greater inclusion and not unintentionally exacerbating problems. It’s actually a great opportunity for employers to tailor something bespoke and be really thoughtful about that, challenging the requirements in the same way that they would in a traditional recruitment process.”

Anthony Kelly, the founder of AI and blockchain recruitment specialists DeepRec.ai, who recently joined one of our Trowers Tuesday sessions on being a Digital Employer of the Future, says: “The key is to always keep a human in the loop. Whatever process you introduce to automate and make your processes easier, having someone who can say this isn’t quite right or we’re not getting the desired results is the best way to achieve the right alignment with the business and the HR strategy.”

There are many great examples of high-tech algorithms making a positive impact in combating employment bias, including through the use of contextual recruitment tools. At Trowers & Hamlins, we have started using a contextual recruitment tool as part of our graduate recruitment process, as it enables us to identify the best hires from the widest possible pool of candidates.

The software that we use allows us to assess candidates’ achievements in the context of their background. We ask applicants to fill in an entirely optional form at the outset of the process, which asks questions about the schools they attended, whether they were eligible for free school meals, whether they have ever spent time in care, and much more. 

It then produces a social mobility output for a candidate and a performance metric, showing how well someone did in their A-levels compared to everyone else in their school, or their town, and highlighting the extent to which they have outperformed their peers.
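For readers curious about the mechanics, a minimal sketch of how such a performance metric might be calculated appears below. The UCAS-style points mapping, the function names and the sample grades are illustrative assumptions for this sketch, not details of the actual software we use.

```python
# Illustrative sketch of a contextual performance metric: comparing a
# candidate's A-level results with the average at their school.
# The points mapping and names here are assumptions, not the vendor's method.

GRADE_POINTS = {"A*": 56, "A": 48, "B": 40, "C": 32, "D": 24, "E": 16}

def avg_points(grades):
    """Average tariff points across a set of A-level grades."""
    return sum(GRADE_POINTS[g] for g in grades) / len(grades)

def outperformance(candidate_grades, school_avg_grades):
    """Points by which a candidate exceeds their school's average results."""
    return avg_points(candidate_grades) - avg_points(school_avg_grades)

# A candidate with BBC from a school averaging DDE has beaten their peers
# by roughly 16 points, so they are screened *in* even though their raw
# grades alone might not clear a blanket cut-off.
delta = outperformance(["B", "B", "C"], ["D", "D", "E"])
print(f"Outperformance: {delta:+.1f} points")  # Outperformance: +16.0 points
```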

Rachel Chapman, graduate recruitment and development manager at Trowers, says: “We might look at someone who got ABB in their A-levels and pass them over, but if the average at their school was DDD then we don’t want to miss them out. It is very much about screening in rather than screening out, and candidates absolutely win their places with no lowering of standards.”

She adds: “We look for resilience, determination and drive in our graduate recruits, and candidates who have outperformed their peers are much more likely to have those attributes. Recent studies show outperformance at school is likely to lead to outperformance at work.”

Ihnatowicz says the key to success with all these recruitment tools is making sure a human takes the final decision. “That means you also have to be wary of confirmation bias,” she says. “Just because the computer says something, that doesn’t make it right. The other takeaway is to always be critical of the results and keep models under review so that you can keep making improvements.”