I thought I would make my inaugural post on NewAPPS a follow-up to Roberta's post about the retraction of the article in Food and Chemical Toxicology. I don't want to continue the debate about whether the retraction was justified; that debate can continue in the original thread. Here, I want to discuss one of the reasons why we should be paying vigilant attention to events such as these, and why their importance transcends the narrow confines of the particular scientific hypotheses considered in the articles in question. What worries me most is the extent to which commercial interests can apply pressure to shift the balance of “inductive risks” from producers to consumers by establishing conventional methodological standards in commercialized scientific research.
Inductive risk occurs whenever we have to accept or reject a hypothesis in the absence of certainty-conferring evidence. Suppose, for example, we have some inconclusive evidence for a hypothesis, H. Should we accept or reject H? Whether we should depends on our balance of inductive risks, that is, on the importance we attach, in the ethical sense, to being right or wrong about H. In simple terms, if the risk of accepting H and being wrong outweighs the risk of rejecting H and being wrong, then we should reject H. But these risks are a function not only of the degree of belief we have in H, but also of the negative utility we attach to each of those possibilities. In the appraisal of hypotheses about the safety of drugs, foods, and other consumables, these are sometimes called “consumer risk” (the risk of saying the item is safe and being wrong) and “producer risk” (the risk of saying the item is not safe and being wrong).
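One standard way to make this balancing explicit (a decision-theoretic gloss of my own, not a formula drawn from the 1950s literature) is to write $p$ for our degree of belief in H, $L_a$ for the disvalue of accepting H when it is false, and $L_r$ for the disvalue of rejecting H when it is true. The two inductive risks are then

$$\text{risk of acceptance} = (1-p)\,L_a, \qquad \text{risk of rejection} = p\,L_r,$$

and the rule above says: reject H whenever $(1-p)\,L_a > p\,L_r$. The comparison turns on both $p$, which the evidence fixes, and the ratio $L_a/L_r$, which is where the ethical weighting enters.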
In recent work, the philosopher Heather Douglas brought this 1950s concept of inductive risk back to our attention, and added a new twist. She argued that it is not only in the appraisal of hypotheses that we strike balances of inductive risk, but also in the choice of methods. Suppose I am investigating the hypothesis that substance X causes disease D in rats. I give an experimental group of rats a large dose of X and then perform biopsies to determine what percentage of them have disease D. But how do I perform the biopsy? Suppose there are two staining techniques I could use; one is more sensitive and the other more specific. That is, one produces more false positives and the other more false negatives. Which one should I choose? Douglas points out that the choice will depend on my inductive risk profile. To the extent that I fear consumer risk, I will choose the more sensitive stain, the one with more false positives. And vice versa. But that, she points out, depends on my social and ethical values.
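To see in miniature how the choice of stain encodes a balance of inductive risks, consider a toy calculation with purely hypothetical numbers (mine, not Douglas'), ignoring base rates for simplicity. Suppose the more sensitive stain A has a 20% false-positive rate and a 2% false-negative rate, while the more specific stain B has a 2% false-positive rate and a 20% false-negative rate. If missing real harm (the consumer-risk side) is weighted ten times as heavily as flagging harm that isn't there, the expected disvalues are

$$\text{stain A: } 0.20 \cdot 1 + 0.02 \cdot 10 = 0.40, \qquad \text{stain B: } 0.02 \cdot 1 + 0.20 \cdot 10 = 2.02,$$

so stain A is the rational choice. Flip the weights, so that a false alarm costs ten and a miss costs one, and the totals swap to 2.02 and 0.40, favoring stain B. Nothing about the evidence changed between the two scenarios; only the values did.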
In a subsequent paper, Torsten Wilholt argued that Douglas' insight gives rise to a puzzle: when does a choice of scientific methodology count as a case of bias? If no methodological choice can be justified in a value-free vacuum, then what is the difference between selecting a method on the basis of values that lean, say, more toward avoiding producer risk, and choosing one that is outright biased in favor of industry? Douglas' insights make this question more puzzling than it might originally have seemed. Wilholt offered a useful suggestion: a methodological choice counts as biased if it flouts an established, even if entirely conventional, methodological standard in the absence of an adequate justification for doing so. Whether or not one accepts Wilholt's solution to the normative puzzle, as a descriptive claim about how distributions of inductive risk get settled it strikes me as exactly right. Methodological standards are, qua conventions, encodings of the conventionally accepted, default balance of inductive risk between consumer and producer.
If you've skipped to the end for the punch line, it is this: we should be very careful about attempts by producers to set the agenda vis-à-vis conventional standards. And without getting bogged down in the particulars of the case Roberta blogged about (we can leave that for the other thread), we should be highly alarmed when there is even the appearance of a conflict of interest on the part of someone exerting influence over the rigidification of a methodological standard for scientific experiments that test the safety of products coming to market. I will leave it to others to decide whether the article Roberta linked to makes the case that such an appearance exists in this instance. But I smell something fishy. Two facts in particular contribute to the ichthyesque odor. The first is the appearance of a conflict of interest that arose when, right before issuing the retraction, the journal appointed a special editor with ties to Monsanto and a GMO-industry-funded group. The second is the focus, in the letter of retraction, on methodological features of the study, including the breed of animal used, a canonical sort of methodological choice that can move the bar of inductive risk in either direction.