Article | Open Access
The Artificial Recruiter: Risks of Discrimination in Employers’ Use of AI and Automated Decision‐Making
Abstract: Extant literature points to how the risk of discrimination is intrinsic to AI systems, owing to their dependence on training data and the difficulty of post hoc algorithmic auditing. Limitations in transparency and auditability are problematic both for companies' prevention efforts and for government oversight, in terms of how artificial intelligence (AI) systems function as well as how large-scale digital platforms support recruitment processes. This article explores the risks, and users' understandings, of discrimination when AI and automated decision-making (ADM) are used in worker recruitment. We draw on data from 110 completed questionnaires: representatives from 10 of the 50 largest recruitment agencies in Sweden and representatives from 100 Swedish companies with more than 100 employees ("major employers"). In this study, we used an open definition of AI to accommodate differences in respondents' knowledge and opinions about how AI and ADM are understood. The study shows a significant difference between direct and indirect use of AI and ADM, which has implications for recruiters' awareness of the potential for bias or discrimination in recruitment. All of those surveyed used large digital platforms such as Facebook and LinkedIn for their recruitment, which raises concerns around transparency and accountability, not least because most respondents did not explicitly consider this to be AI or ADM use. We discuss the implications of direct and indirect use in recruitment in Sweden, primarily in terms of transparency and the allocation of accountability for bias and discrimination during recruitment processes.
Keywords: ADM and risks of discrimination; AI and accountability; AI and risks of discrimination; AI and transparency; artificial intelligence; automated decision‐making; discrimination in recruitment; indirect AI use; platforms and discrimination
Issue: Vol 12 (2024): Artificial Intelligence and Ethnic, Religious, and Gender-Based Discrimination
© Stefan Larsson, James Merricks White, Claire Ingram Bogusz. This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 license (http://creativecommons.org/licenses/by/4.0), which permits any use, distribution, and reproduction of the work without further permission provided the original author(s) and source are credited.