A growing body of research indicates that artificial intelligence systems used for job recruitment, which have become increasingly common, reinforce racial and gender inequality. Now, innovators are hoping to spur a course correction by developing software that promises more accountability and combats, rather than perpetuates, employment discrimination.
When AI was first introduced into the hiring process, its potential seemed promising. Not only would algorithms expedite information processing, developers said, but they could also counter the bias often found in human decision-making. The idea was that a computer would not hold the same biases as a person and would not judge a candidate based on gender or race.
But now that AI has been used for several years, it is clear that software often reinforces inequality in job recruitment, instead of reducing it.
Computer algorithms are not biased because of a technical glitch or because of some sci-fi scenario in which a robot takes on a personality of its own. Many AI hiring systems are trained on a company’s previous applicant history, which has historically favored cisgender white men.
“If you are building a model using what has been historically successful, it automatically skews the rating system to favor what has been historically representative, which we know to be male, and predominantly white,” said Stephanie Lampkin, CEO and founder of Blendoor, a job recruiting platform that hides candidates’ names and photos when matching them with companies, so that individuals can be evaluated based solely on skill or experience.
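The kind of blind matching Lampkin describes can be sketched in a few lines. The field names and the scoring rule below are invented for illustration; they are not Blendoor’s actual implementation:

```python
# Hypothetical sketch of "blind" candidate matching: identifying fields
# are stripped from a profile before scoring, so only skills and
# experience reach the matcher. All field names here are assumptions.

IDENTIFYING_FIELDS = {"name", "photo_url", "gender", "pronouns"}

def redact(profile: dict) -> dict:
    """Return a copy of the profile with identifying fields removed."""
    return {k: v for k, v in profile.items() if k not in IDENTIFYING_FIELDS}

def match_score(profile: dict, required_skills: set) -> float:
    """Toy score: fraction of required skills the candidate lists."""
    skills = set(profile.get("skills", []))
    return len(skills & required_skills) / len(required_skills)

candidate = {
    "name": "Jane Doe",
    "photo_url": "https://example.com/jane.jpg",
    "skills": ["python", "sql", "statistics"],
    "years_experience": 5,
}

blind = redact(candidate)               # no name, no photo
score = match_score(blind, {"python", "sql"})
```

The redaction happens before any scoring, so the matcher never sees the fields that could trigger bias.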
In short, AI systems are biased because people are, and the realization has forced some companies to rethink their use of AI in hiring.
Last year, Amazon scrapped its AI job recruiting tool after discovering that it was biased against women. Amazon had trained its algorithms to rate resumes based on patterns in past applicant history, but because women were so rare in that data set, the algorithm learned that men were preferable and rated women’s applications poorly.
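The mechanism behind that failure is easy to reproduce in miniature. The toy model below is not Amazon’s actual system; it simply scores resume terms by how often they appeared among past hires, using made-up training data in which women were rarely hired. Terms associated with the underrepresented group end up with negative weights, purely as a consequence of the data:

```python
# Toy illustration of how a resume scorer trained on skewed hiring
# history can learn to penalize terms associated with a group.
# Training data is invented; a term's "weight" is simply how often it
# appeared in hired resumes minus how often it appeared in rejected ones.

from collections import Counter

past_hires = [
    "captain chess club python",
    "python java leadership",
    "java chess python",
]
past_rejections = [
    "captain women's chess club python",
    "women's coding society java",
]

hire_counts = Counter(t for r in past_hires for t in r.split())
reject_counts = Counter(t for r in past_rejections for t in r.split())

def term_weight(term: str) -> float:
    """Naive learned weight: hired frequency minus rejected frequency."""
    return hire_counts[term] - reject_counts[term]

def score(resume: str) -> float:
    return sum(term_weight(t) for t in resume.split())

# Two equally skilled candidates; one resume contains the word "women's".
score("python java chess")          # higher
score("python java women's chess")  # lower, purely because of the data
```

No one programmed the model to dislike the word “women’s”; the penalty emerged because that word was rare among past hires, which is the pattern the article describes.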
Some entrepreneurs and developers think there are still effective ways to use AI in recruiting — with some adjustments.
Lampkin started Blendoor four years ago after she found herself repeatedly hitting walls in the hiring process. She said the difficult part of addressing bias is getting access to larger and more inclusive data sets.
At any given point, Blendoor is using algorithms to match about 105,000 candidates to open jobs at various companies. Job seekers can upload their resumes to the site, but the company also makes a specific effort to recruit people who are otherwise underrepresented in the workforce, including people of color, women and people with disabilities. It does so by partnering with professional organizations and with colleges, including historically black colleges and universities.
Lampkin said in one instance, a company that used Blendoor to hire interns recruited six times as many underrepresented minority candidates as it had the previous year.
Even the most advanced AI won’t improve the results of job recruitment if the underlying data doesn’t reflect the full scope of the qualified talent pool, Lampkin explained, adding that companies seem to care about these results, too.
Another way tech gurus are looking to reduce bias in AI hiring is by increasing accountability.
Data scientist Cathy O’Neil, the founder of O’Neil Risk Consulting & Algorithmic Auditing, or ORCAA, is developing the concept of the “accountable algorithm.” The idea is that algorithms should be checked or “audited” for fairness.
“Algorithms aren’t evil or inherently good. They can have negative effects in a given context,” O’Neil said. “Right now we’re asked to put — with blind faith — trust in whatever algorithms give to us. I’m trying to go to the next step and ask, what does it mean to have a trustworthy algorithm?”
For O’Neil, a trustworthy algorithm incorporates a third-party verification model. ORCAA works with companies to test their algorithms and determine whether they are disadvantaging any specific group. ORCAA then helps adjust the algorithms and retests them to make sure the issues are resolved. Once the concerns are addressed, the companies are awarded an ORCAA Seal of Approval, meaning their algorithms have been tested for issues like accuracy, bias, consistency, transparency, fairness and legal compliance.
It’s a third-party seal of approval that O’Neil said she hopes will become more widespread.
“That is one model that I really enjoy thinking about,” said O’Neil. “In the future, stakeholders should be like, wait, why did this work? Why should we trust you?”
Dr. Safiya Umoja Noble, who co-directs the Center for Critical Internet Inquiry at UCLA and has studied how bias appears in search engines like Google, argues that companies themselves have to be held legally accountable. She said it’s insufficient to blame the issue on a product’s algorithm — especially when it’s a product that’s highly profitable or impactful in society.
“We need civil rights and human rights protections around the output of these systems, and what they actually do,” Noble said. “That’s the thing to look at, because there is no ‘nirvana state’ where there is going to be an unbiased algorithm.”