This paper, which has been forthcoming in Journal of Medicine and Philosophy for a while, is my foray into AI and healthcare, particularly medical imaging. It synthesizes some of what I have to say about structural injustice in AI use (and why "bias" isn't the right way to assess it), and uses a really interesting case study from the literature to explore why it's important to understand AI as part of sociotechnical systems - and how doing so makes a big difference in seeing when and how AI can be helpful (or not). Here's the abstract:
Enthusiasm about the use of AI in medicine has been tempered by concern that algorithmic systems can be unfairly biased against racially minoritized populations. This paper uses work on racial disparities in knee osteoarthritis diagnoses to underline that achieving justice in the use of AI in medical imaging will require attention to the entire sociotechnical system within which it operates, rather than isolated properties of algorithms. Using AI to make current diagnostic procedures more efficient risks entrenching existing disparities; a recent algorithm points to some of the problems in current procedures while highlighting systemic normative issues that need to be addressed in designing further AI systems. The paper thus contributes to a literature arguing that bias and fairness issues in AI should be considered aspects of structural inequality and injustice, and highlights ways that AI can help make progress on these problems.