AI filtering for job applications 'discriminates' based on race, religion
“Algorithmic discrimination” occurs when the data an AI is trained on already carries biases against certain groups, leading the system to make unfavourable decisions against them, an expert says.
Job recruiters today use filtering systems powered by artificial intelligence (AI) that allegedly “algorithmically discriminate” against people based on their race, gender, language and religion.
Yeliz Bozkurt Gumrukcuoglu, a professor of private law at Ibn Haldun University in Istanbul, told Anadolu Agency that tech giants like Meta, Google and Amazon have been known to use such tools because automatic filtering saves time.
Gumrukcuoglu said “algorithmic discrimination” occurs when the data an AI is trained on already carries biases against certain groups, leading the system to make unfavourable decisions against them.
The prejudices of the people who develop these tools also contribute to the unfairness of AI-powered recruitment, Gumrukcuoglu said, arguing that as a result considerably fewer women and people of colour make it through AI filtering.
She said the issue has already been the subject of lawsuits in several countries, first coming to light in a case involving Amazon.
“Amazon developed its recruitment algorithm with data from the candidates it had accepted over the previous decade, most of whom were men. When put into use, the system filtered out women candidates outright,” she said.
“Facebook similarly faced a case in the US in 2018 over its discriminatory filtering, and the Danish government imposed an administrative fine on the social media platform for the same reason,” she added.
Of music and hairstyles
It does not end there: Gumrukcuoglu noted that some candidates have been rejected by recruitment algorithms for having traditionally Black hairstyles, for living in regions with supposedly high immigrant populations, or even for listening to certain artists and genres of music.
She said developers and companies using these recruitment filtering tools should respect human rights and values, and that competent authorities and governments should closely monitor for ethical violations.
“For example, the EU has decided to introduce a regulation addressing the risks of AI use in this field,” she said.
She noted that a global regulator for the use of AI in job recruitment may be needed to prevent discrimination.
“So long as our data is not properly protected, our digital footprint lands in the hands of companies and can therefore be used to evaluate us, as we are no longer judged solely by our resumes. Such is the case in Türkiye too,” she said.
“Filtering algorithms need to be made more transparent and held accountable for their decisions, so two choices lie before us:
Will we let AI develop as rapidly as it is today, or will we step on the brakes a little and move forward slowly within an ethical framework? We will see the answer in the period ahead,” she added.