By Gordon Hull
Early in the Covid-19 pandemic, I dedicated a post (and a short follow-up) to the idea that our knowledge of Covid-19 is mediated by the indicators we have to represent it, and that those indicators are themselves epistemically tricky. In particular, there is a difficulty in understanding "Covid incidence," because of the difficulty of translating from "positive Covid tests" to the number of Covid cases actually present in a given population. The standard shorthand for addressing this has been to look at the percentage of tests that come back positive, with the guideline that when that percentage is too high, the count of positive tests likely significantly under-represents the true number of cases. The situation reminded me of the ambiguity of "malaria cases" in parts of Africa, where the dashboard tally of cases does not transparently communicate the number of people who actually have malaria.
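The positivity-rate shorthand can be made concrete with a minimal sketch. The numbers below are illustrative rather than real surveillance data, and the 5% benchmark is the oft-cited WHO guideline, not a threshold from the post itself:

```python
# Minimal sketch of the test-positivity shorthand, with made-up numbers.

def positivity_rate(positive_tests: int, total_tests: int) -> float:
    """Fraction of administered tests that came back positive."""
    return positive_tests / total_tests

# Oft-cited WHO benchmark: positivity above ~5% suggests testing is not
# keeping pace with spread, so reported case counts likely undercount
# true incidence.
THRESHOLD = 0.05

rate = positivity_rate(positive_tests=1_200, total_tests=10_000)
print(f"positivity: {rate:.1%}")  # 12.0%
if rate > THRESHOLD:
    print("positive-test counts likely under-represent true cases")
```

The point of the sketch is only that the same raw count of positive tests means very different things depending on how much testing is being done, which is exactly the indicator problem the post describes.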
The last couple of weeks have indicated the extent to which there is a further STS point lurking. The standard test for Covid-19 is a PCR test, which has the advantage of being highly sensitive. It has the disadvantage of requiring complicated reagents and lab processing, and so one reason (and the only even faintly forgivable reason) for the lack of testing in the U.S. is bottlenecks at the lab and supply levels. A number of folks have been arguing that the system, even run well, lacks the capacity to test as many people as need testing. Certainly in the status quo, where lots of people have to wait a week for their results, the test is useless for many purposes: by the time the result arrives, the person tested will probably no longer be contagious. The test, in other words, doesn't produce any actionable information.
Continue reading "Epistemology of Covid: Testing Technologies" »