Artificial Intelligence and discrimination


Artificial Intelligence (AI) is an attention-grabbing concept. From suggestions of workplaces being overtaken by robots or employees being managed by computer, to automated efficiency and productivity, the promises are huge. Can AI in the workplace live up to expectations, and are there dangers to be wary of?

The problem with algorithms

For all their effectiveness and potential advantages, it is important not to forget that algorithms are computer codes written by humans for human purposes. They are a set of rules which a computer follows to solve a problem.

“The problem with algorithms is that they can discriminate on the grounds of protected characteristics when they become tainted by the unconscious assumptions and attitudes of their creators, or as a result of unbalanced or prejudiced data which they are programmed to rely on and even learn from.”

Dangers of using AI alone

Using AI alone as a means of recruitment can be risky. Amazon had to scrap an AI recruitment tool on the basis that it discriminated against women. The tool used algorithms to give job applicants scores ranging from one to five stars. After a few months of use, Amazon realised that the system was discriminating against applicants for certain jobs, because its computer models had been trained to vet candidates on the basis of patterns in CVs submitted to the company over a ten-year period.

As the majority of CVs came from men, reflecting a male dominance in the tech industry, the machines replicated these patterns and taught themselves that male candidates were to be preferred.

This algorithm was more likely to identify new employees who had experiences, backgrounds and interests similar to those of the current workforce. New recruits were therefore far more likely to be the same gender and race as existing employees. These biases are not necessarily easy to program out, even once the programmers are aware of them.

Discrimination

If a job applicant is automatically rejected because they are different to existing employees, they may be able to bring a claim for indirect discrimination on the basis that the algorithm placed them at a particular disadvantage because of a protected characteristic. There may also be mileage for a direct discrimination claim if there is evidence that entire protected groups were systematically excluded from the recruitment process by the AI technology.

What steps should employers take?

While AI can certainly be used to make the recruitment process more efficient, employers will have to guard against any discriminatory elements it introduces into the process. In particular, they need to protect against "automation bias", which occurs when humans give undue weight to the conclusions presented by automated decision-makers and ignore evidence suggesting they should make a different decision.

Managers need to understand how the AI is making its selection decisions, and should be trained to treat them as recommendations only. This isn't always straightforward, with the software providers naturally keen to protect their IP and innovations. There should also always be HR involvement at an early point in the decision-making process to ensure that automation bias does not occur. Proper processes to vet potential candidates should be set up, involving a mixture of human involvement and AI to ensure that any potential claims of discrimination brought by rejected candidates can be robustly defended. Decisions produced by AI should also be kept under review over a period of time, and checked for potential discriminatory trends.
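That ongoing review can be as simple as periodically comparing selection rates across groups. The sketch below (the group labels and counts are invented, and the 80% threshold is borrowed from the "four-fifths rule" in US EEOC guidance, used here purely as an illustrative benchmark rather than a UK legal test) flags any group whose selection rate falls well below the best-performing group's:

```python
# Hypothetical sketch: periodic adverse-impact check on AI shortlisting
# decisions. Data is invented; the 0.8 threshold is the US EEOC
# "four-fifths rule", used only as an example benchmark.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return groups whose selection rate is below `threshold` times
    the highest group's rate, mapped to their rate ratio."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Invented quarterly review data: (shortlisted, applied) per group.
outcomes = {"group_a": (45, 100), "group_b": (20, 100)}
print(adverse_impact(outcomes))  # flags 'group_b'
```

A flagged group is not proof of discrimination, but it is exactly the kind of trend that should trigger human investigation of the underlying algorithm before a claim forces the issue.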
