Why the algorithmic recruiter discriminates: The causal challenges of data-driven discrimination

Accepted version

Carter, Marie-Christine 


Automated decision-making systems are commonly used by human resources departments to automate recruitment decisions. Most such systems use machine learning to screen, assess, and give recommendations on candidates. Algorithmic bias and prejudice are common side-effects of these technologies and result in data-driven discrimination. However, proof of such discrimination is often unavailable because of the statistical complexity and operational opacity of machine learning, which prevents complainants from meeting the requisite causal requirements of the EU equality directives. In direct discrimination, the use of machine learning prevents complainants from establishing a prima facie case. In indirect discrimination, the problems mainly manifest once the burden has shifted to the respondent, where causation operates as a quasi-defence by reference to objectively justified factors unrelated to the discrimination. This paper argues that causation must be understood as an informational challenge that can be addressed in three ways: first, through the fundamental rights lens of the EU Charter of Fundamental Rights; second, through data protection measures such as the General Data Protection Regulation; and third, through the future liabilities that may arise under incoming legislation such as the Artificial Intelligence Act and the proposed Artificial Intelligence Liability Directive.

Keywords: Non-discrimination law, machine learning, automated decision-making, recruitment



Journal Title

Maastricht Journal of European and Comparative Law (MJ)


No funding was received for the publication of this paper. Please note I am completing my PhD under the OOO DTP AHRC studentship.