Recruiting tools that claim to use artificial intelligence to avoid gender and racial bias may not improve diversity in hiring and may actually perpetuate those biases, University of Cambridge researchers argued Sunday, describing the programs – which have drawn criticism in the past – as an attempt to use technology to offer a quick fix to a deeper problem.
In an article published in the journal Philosophy and Technology, the researchers examined claims from several companies offering AI-powered recruiting tools, many of which claim to eliminate bias and promote diversity by masking candidates' names, genders and other identifiers, and some of which rank candidates based on resume scans, online assessments and analyses of candidates' speech and facial expressions.
The researchers – two professors from the University of Cambridge's Centre for Gender Studies – argued that these tools may actually promote uniformity in hiring because they replicate the cultural biases of the "ideal candidate," historically imagined as a white or European man.
The tools also may not improve diversity because they are based on past company data and therefore may promote candidates who most closely resemble current employees.
There is "little accountability for how these products are built or tested," said study co-author Eleanor Drage, a researcher at the University of Cambridge's Centre for Gender Studies, adding that the technology could serve as a "dangerous" source of "misinformation about how recruitment can be 'de-biased' and made fairer."
"By claiming that racism, sexism and other forms of discrimination can be eliminated from the hiring process using artificial intelligence, these companies reduce race and gender to insignificant data points, rather than systems of power that shape how we move through the world," Drage said in a statement.
Amazon announced in 2018 that it would stop using an AI-based recruiting tool to review applicants' resumes after finding the system heavily discriminated against women. The computer models it relied on had been developed using resumes submitted to the company over the previous 10 years, which came primarily from male applicants.
Organizations are increasingly turning to AI to help manage recruitment processes. In a 2020 survey of more than 300 human resource managers cited by the authors of Sunday's article, consulting firm Gartner found that 86% of employers use virtual technology in their hiring practices, a trend that has accelerated since the Covid-19 pandemic forced many to shift work online. While some companies have argued that AI can make hiring cheaper and faster, some experts have found that the systems tend to promote – rather than eliminate – racial and gender bias in hiring by replicating existing real-world prejudices.

Several U.S. lawmakers have sought to address bias in artificial intelligence systems, as the technology continues to evolve rapidly and few laws regulate it. The White House released a "Blueprint for an AI Bill of Rights" this week, which argues that hiring algorithms have been shown to "reflect and reproduce existing undesirable inequalities" or to embed new "biases and discrimination." The plan – which is neither legally binding nor official government policy – calls on companies to ensure that AI does not discriminate or violate data privacy, and to inform users when the technology is being used.
In a list of recommendations, the authors of Sunday's Philosophy and Technology paper suggest that companies developing AI technologies focus on broader, systemic inequalities rather than "individualized cases of bias." For example, they suggest software developers examine the categories used to sort, process and categorize applicants, and how those categories can promote discrimination by relying on certain assumptions about gender and race. The researchers also argue that HR professionals should try to understand how AI recruiting tools work and what some of their potential limitations are.
The European Union has classified AI-based recruitment software and performance-assessment tools as "high risk" in its new draft legal framework for AI, meaning the tools would be subject to greater scrutiny and would have to meet certain compliance requirements.
Further reading: DC wants to lead the fight against AI bias (Axios)