From the Department of Shameless Self-Promotion, here is the abstract for my new paper, "Dirty Data Labeled Dirt Cheap: Epistemic Injustice in Machine Learning Systems":
"Artificial Intelligence (AI) and Machine Learning (ML) systems increasingly purport to deliver knowledge about people and the world or to assist people in doing so. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities, suggesting that they are “unfair” to members of those groups. However, critique based on formal concepts of fairness seems increasingly unable to account for these problems, partly because it may well be impossible to simultaneously satisfy intuitively plausible operationalizations of the concept and partly because fairness fails to capture structural power asymmetries underlying the data AI systems learn from. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. I argue that (1) pretrial detention systems and physiognomic AI systems commit testimonial injustice because their target variables reflect inaccurate and unjust proxies for what they claim to measure; (2) classification systems, such as facial recognition, commit hermeneutic injustice because their classification taxonomies, almost no matter how they are derived, reflect and perpetuate racial and other stereotypes; and (3) epistemic injustice better explains what is going wrong in these types of situations than does (un)fairness."
The path from idea to paper here was slow, but I hope the paper is convincing on the point that the literature on epistemic injustice can offer some needed resources for understanding harms caused by (some kinds of) AI/algorithmic systems.