Is AI too biased to be helpful to HR?

Published in We are all biased!

Sep 14, 2020

3 mins

Laetitia Vitaud, Lab expert

Future of work author and speaker

AI: a solution to eliminate bias?

For a couple of decades now, there’s been a growing awareness in companies of the human biases that affect recruitment and management. Most recruiters now pay lip service to the idea that recruiting can be made better—fairer and more effective—if bias is eliminated from hiring.

Artificial intelligence, therefore, holds great promise: it can help prevent unconscious human bias and expand the pipeline. Numerous applications and platforms have been developed in recent years to provide HR departments with the proper tools to do better, fight discrimination, and tap into under-exploited pools of female or minority candidates.

For example, Textio uses AI to help recruiters “find the right words by putting the world’s best hiring and language data insights right where you need them”. HireSweet aims to help companies “find exactly these candidates that are perfect for a job but not actively looking”. Drafted is a startup that promises to help you “hire talent through your company network”.

AI tools are made by (flawed) humans

AI learns from the data sets it is given, so bias can often set in, unbeknown to those who use it. AI-powered tools end up reproducing and sometimes amplifying the biases present in data collection and algorithm design. “Algorithmic bias” refers to the systematic errors in a computer system that create unfair outcomes, by privileging one arbitrary group of users over others.

The example of face-recognition software is a great illustration of the problem: “an MIT study of three commercial gender-recognition systems found they had error rates of up to 34% for dark-skinned women — a rate nearly 49 times that for white men.” Trained on incomplete, racially biased data sets, face-recognition software tools are flawed. That’s why IBM decided to stop offering facial recognition software for “mass surveillance or racial profiling” after the killing of George Floyd.

“The deepest-rooted source of bias in AI is the human behavior it is simulating”, says this HBR piece. On many platforms and applications, social biases of race, gender, sexuality, and ethnicity are inadvertently reinforced by algorithms programmed by biased humans. HR decisions helped by AI-powered tools are not immune. As the use of AI and algorithms is expected to grow in recruitment, some experts warn that HR tools should be audited to eliminate major in-built biases.

In tech, the problem of AI bias is pervasive because there is too little diversity: 73% of Amazon’s leadership is male, and only 32.6% of Facebook’s leaders are women. AI-powered tools learn from historical data that reflects this lack of diversity. Furthermore, there is a serious gender-diversity problem in the field of AI itself, and too few AI solutions are developed by women.

How can AI bias be addressed?

The fact that AI reproduces bias does not mean it should be abandoned completely. It means that rigorous processes should be implemented to make it better. Many of the flaws found in AI tools can in fact be addressed. That’s the goal of the OpenAI movement and the Future of Life Institute, which have listed a set of design principles for making AI fairer and more ethical. The most important principle is that “AI should be designed so it can be audited and the bias found in it can be removed. An AI audit should function like the safety testing of a new car before someone drives it.”
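To make the idea of an audit concrete, here is a minimal, purely illustrative sketch of one common audit step: comparing selection rates across candidate groups using the “four-fifths” rule often applied in US hiring-discrimination analysis. The function names, group labels, and data are all hypothetical, not part of any tool mentioned above.

```python
# Illustrative AI-audit step: flag groups whose selection rate falls
# below 80% of the best-treated group's rate (the "four-fifths" rule).
# All names and data here are hypothetical.
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact(decisions, threshold=0.8):
    """Return {group: rate_ratio} for groups below the threshold."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Group A: 40 of 100 selected (40%); group B: 20 of 100 (20%).
decisions = ([("A", True)] * 40 + [("A", False)] * 60
             + [("B", True)] * 20 + [("B", False)] * 80)
print(adverse_impact(decisions))  # group B flagged: 0.20 / 0.40 = 0.5
```

An audit like this would run on a tool’s past decisions before (and regularly after) deployment, just as the quoted principle suggests safety-testing a car before anyone drives it.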

Here are four ways to eliminate AI bias and improve HR decisions:

  • Review your data sets before feeding them to the machine. AI learns from the data, so if the data sets are better, it will learn better. Collect more diverse data and correct existing biases. Ideally, your data sets should reflect your target population. They should reflect your hiring objectives, so you may have to remove biased historical data.

  • Build an ethics framework to address AI bias. That’s what Google aims to do with its “Advanced Technology External Advisory Council” (ATEAC). “With great power comes great responsibility”, as Spiderman’s uncle used to say. Any company using or developing AI should reflect upon that responsibility.

  • Always take bias into account when choosing your HR tools. Some AI solutions are designed to address the problem. Others aren’t. Choose wisely! Fortunately, there’s more and more awareness among innovators and that choice may become easier in the future.

  • Always keep humans involved in your HR processes. Whatever AI tools you use, you will make them more effective by keeping people in the loop. AI works best hand in hand with humans who can check and audit the processes. AI should never work like a black box.
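The first step above, reviewing data sets before training, can be sketched in a few lines: compare each group’s share of the training data with its share of the target population and flag large gaps. This is a hypothetical illustration, not the method of any specific tool; the group labels and tolerance are assumptions.

```python
# Illustrative data-set review: flag groups whose share of the
# training data diverges from the target population by more than
# a chosen tolerance. Labels and numbers are hypothetical.
def representation_gaps(data_groups, target_shares, tolerance=0.05):
    """Return {group: (actual_share, target_share)} for flagged groups."""
    n = len(data_groups)
    counts = {}
    for g in data_groups:
        counts[g] = counts.get(g, 0) + 1
    gaps = {}
    for group, target in target_shares.items():
        share = counts.get(group, 0) / n
        if abs(share - target) > tolerance:
            gaps[group] = (share, target)
    return gaps

# Historical hiring data skewed 80/20 against a 50/50 target:
data = ["men"] * 80 + ["women"] * 20
print(representation_gaps(data, {"men": 0.5, "women": 0.5}))
# flags both groups: men over-represented (0.8), women under (0.2)
```

A check like this is cheap to run before every training pass, and flagged gaps are exactly the places where collecting more diverse data, or pruning biased historical records, pays off.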
