Discriminatory Algorithmic Administrative Decision-Making and Data: Reflections and Proposals
Keywords: Algorithms, Discrimination, Transparency, Data, Administration

Abstract
The increasing use of algorithms in public administration raises significant legal challenges, particularly the risk of discrimination arising from data-driven decision-making. This article explores the structural relationship between algorithms and data, highlighting how biases embedded in datasets may be reproduced and amplified through automated administrative decisions. Drawing on the Italian case of the “Good School” algorithm and other examples of automated decision-making in the allocation of public benefits, the study shows how algorithmic governance may generate discriminatory outcomes, especially in sensitive areas such as access to social assistance. The paper examines the role of algorithmic transparency and the duty to give reasons as key legal safeguards for ensuring the legality, accountability, and reviewability of automated administrative decisions. It also analyses the contribution of European Union law—particularly the Charter of Fundamental Rights and the EU Artificial Intelligence Regulation—to establishing a regulatory framework aimed at preventing algorithmic discrimination. The article concludes by emphasising the central role of administrative law in guiding and regulating the use of algorithms within public administration so as to safeguard fundamental rights and uphold the right to good administration.