Knee pain is common and debilitating, and it’s often caused by osteoarthritis in the knee. Treatment options range from analgesics (including opioids) to knee-replacement surgery. If you go to the doctor with arthritic knee pain, you can get an x-ray, which can then be interpreted using standard rubrics like the Kellgren–Lawrence Grade (KLG) to quantify the damage to your knee and guide treatment. The KLG isn’t perfect: the correlation between reported pain and objective measures of knee damage is loose. Some people’s knees are a wreck and they report no pain; others report pain beyond what their KLG score indicates. But here’s the thing: Black patients consistently report more knee pain than white patients. They also tend to have more knee damage on the KLG – but even when you factor that in, Black patients report much more knee pain than white patients with comparable KLG scores. What’s going on?
One possibility is that factors external to the knee – stress, for example – explain the higher pain. If that’s the case, then these patients need less knee treatment, not more. But what if their knees really are in worse shape than the KLG suggests? To answer that question, you’d have to ask what features of an x-ray actually indicate a painful knee.
Disease is often measured through indicators, and we know that these indicators can lead to all sorts of complexity. In the context of Covid, for example, there are all sorts of questions about testing and sensitivity that I’ve talked about before. Along the way, I referred to a fantastic paper on malaria testing in sub-Saharan Africa – suffice it to say that “cases of malaria” reported to donor organizations is a difficult number to parse for reasons having to do with vagaries in testing and diagnosis.
In a new paper in Nature Medicine, a team led by Emma Pierson makes ingenious use of artificial intelligence to tackle the problem of racial disparities in knee pain. Since algorithms and data are so often implicated in creating or magnifying racial disparities (see, for example, Safiya Noble on Google, or Timnit Gebru on facial recognition, or Margaret Hu’s chilling “Algorithmic Jim Crow”), it’s encouraging to learn about machine learning working to undermine racial disparities. Ordinarily, you train an algorithm to perform like an excellent clinician. In this case, that would mean training it to look at a radiograph and determine the correct KLG score. The trick here was to instead train it on pain: to determine what features of the x-ray predicted that the patient would report pain. It turns out that the algorithm’s predictions reduced racial disparities in diagnosis by a jaw-dropping 47%.
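The retargeting step can be sketched in miniature. This is only an illustration of the idea – swap the training label from a clinician’s grade to the patient’s own pain report – using synthetic data and a simple ridge regression standing in for the paper’s deep network; none of the numbers or feature names come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 patients, 16 image-derived features per x-ray.
n, d = 200, 16
X = rng.normal(size=(n, d))

# Self-reported pain score (synthetic): a function of the image features
# plus noise. In the real study this is the patient's KOOS pain score.
true_w = rng.normal(size=d)
pain = X @ true_w + rng.normal(scale=0.1, size=n)

# The key move: fit the model to the pain score itself, not to a
# clinician-assigned grade. Here, ridge-regularized least squares:
#   w = (X^T X + lam * I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ pain)

# Evaluate how much of the variation in reported pain the model captures.
pred = X @ w
r2 = 1 - np.sum((pain - pred) ** 2) / np.sum((pain - pain.mean()) ** 2)
print(f"R^2 on pain: {r2:.3f}")
```

The design choice this toy mirrors is the paper’s: the ground truth is the patient’s report, so features the KLG rubric ignores – but that correlate with pain – can be learned rather than discarded.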