(Cross-posted at M-Phi)

In his 2008 paper ‘Logical dynamics meets logical pluralism?’, Johan van Benthem writes (p.185):

> … Many observations in terms of structural rules address mere symptoms of some more basic underlying phenomenon. For instance, non-monotonicity is like ‘fever’: it does not tell you which disease causes it.

I’ve always been puzzled by this observation – among other reasons because I’m a non-monotonicity enthusiast, so it seemed odd to me to claim that non-monotonicity would be the symptom of some disease! But beyond the disease metaphor, it was also not clear to me why Johan saw non-monotonicity as this unspecified, possibly multifaceted phenomenon. After all, there should be nothing esoteric about non-monotonicity: a non-monotonic consequence relation is one where the addition of new premises/information may turn a valid consequence into an invalid one. The classical notion of validity, by contrast, has monotonicity as one of its defining features: once a consequence, always a consequence, come what may. This is why a mathematical proof, if indeed valid/correct, remains indefeasible forever.

The non-monotonic logics developed in recent decades, such as circumscription and default logic, have as their main feature that something that counts as a valid consequence may become invalid upon the arrival of new information. As described by A. Antonelli in the SEP entry on non-monotonic logics:

> The term “non-monotonic logic” covers a family of formal frameworks devised to capture and represent defeasible inference, i.e., that kind of inference of everyday life in which reasoners draw conclusions tentatively, reserving the right to retract them in the light of further information. Such inferences are called “non-monotonic” because the set of conclusions warranted on the basis of a given knowledge base, given as a set of premises, does not increase (in fact, it can shrink) with the size of the knowledge base itself. This is in contrast to standard logical frameworks (e.g., classical first-order logic), whose inferences, being deductively valid, can never be “undone” by new information.
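The shrinking set of conclusions that Antonelli describes can be made vivid with the familiar Tweety example. The following is only a toy sketch of default reasoning in Python (the predicate strings and the single hard-coded default are my illustration, not any standard formalism or library): a conclusion drawn by default is retracted once the knowledge base grows with contrary information.

```python
def consequences(kb):
    """Return the (tentative) conclusions warranted by a knowledge base.

    Toy version of a single default rule, roughly
    bird(x) : flies(x) / flies(x) -- apply it unless blocked
    by explicit information to the contrary.
    """
    concl = set(kb)  # everything in the knowledge base is a conclusion
    if "bird(tweety)" in kb and "~flies(tweety)" not in kb:
        concl.add("flies(tweety)")  # default applies: birds normally fly
    return concl

kb1 = {"bird(tweety)"}
kb2 = kb1 | {"penguin(tweety)", "~flies(tweety)"}  # a *larger* knowledge base

print("flies(tweety)" in consequences(kb1))  # True: the default applies
print("flies(tweety)" in consequences(kb2))  # False: new information defeats it
```

Monotonicity would demand that every conclusion from `kb1` survive in the superset `kb2`; here the conclusion set instead shrinks, which is exactly the defeasibility at issue.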

The monotonic nature of the classical notion of validity is precisely one of the aspects I want to investigate in more detail in the coming years with my ‘Roots of Deduction’ project, and the initial hypothesis is that necessary truth-preservation and monotonicity are not primitive concepts but rather corollaries of the notion of an *indefeasible* argument, understood in a dialogical setting. Now, last week I was in Konstanz presenting some of the preliminary results of the project, in particular how my ‘built-in opponent’ conception of deduction (I have a draft of a paper on this, available upon request) sheds new light on the model-theory vs. proof-theory debate on logical consequence. I was, as usual, focusing on monotonicity and claiming that the points made applied to classical logic and also to other logics where necessary truth-preservation is a defining feature of the consequence relation; I had in mind things like intuitionistic or relevant logic.

But as pointed out to me by Ole Hjortland during Q&A, the relevant consequence relation is *not* monotonic in that weakening fails. Weakening is the following structural property, in its sequent calculus formulation:

    Γ, A => B, Δ
    ------------------
    Γ, A, C => B, Δ

(There is a counterpart on the right side too, but for present purposes it is only left-weakening that matters.) In other words, if B follows from Γ together with A, then adding C to the set of premises is not going to change that – which is pretty much the property of monotonicity. Now, weakening *fails* for relevant logic: given the requirement that the premises must somehow be topically ‘related’ to the conclusion – must be about the same ‘things’ – it naturally follows that one cannot add arbitrary premises and still maintain the required relevance relation between premises (antecedent) and conclusion (consequent).
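The contrast can be sketched concretely. Below is a crude toy in Python (the variable-sharing filter is my own simplification for illustration; it is emphatically *not* relevant logic proper, which works with a use-criterion in proofs rather than a mere atom-sharing test): classical entailment, checked by brute-force truth tables, satisfies weakening, while even a naive relevance filter on the premises makes it fail.

```python
from itertools import product

# Each formula is a pair: (evaluation function on a valuation dict, set of atoms).
A = (lambda v: v["a"], {"a"})
C = (lambda v: v["c"], {"c"})

def classically_entails(premises, conclusion):
    """Brute-force truth-table check: no valuation makes all premises
    true and the conclusion false."""
    alphabet = sorted(set(conclusion[1]).union(*(p[1] for p in premises)))
    for vals in product([False, True], repeat=len(alphabet)):
        v = dict(zip(alphabet, vals))
        if all(p[0](v) for p in premises) and not conclusion[0](v):
            return False
    return True

def relevance_filtered(premises, conclusion):
    """Crude filter: every premise must share at least one atom with the
    conclusion, on top of classical entailment."""
    relevant = all(p[1] & conclusion[1] for p in premises)
    return relevant and classically_entails(premises, conclusion)

print(classically_entails([A], A))     # True
print(classically_entails([A, C], A))  # True: left-weakening holds classically
print(relevance_filtered([A], A))      # True
print(relevance_filtered([A, C], A))   # False: the irrelevant premise C blocks it
```

Note that in the last case the entailment is still necessarily truth-preserving; it is only the (crudely modelled) relevance requirement that fails, which mirrors the point made below about relevant logic.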

But clearly, the reason why relevant logic is non-monotonic is profoundly different from the reason why, say, default logic is non-monotonic. In the first case, necessary truth-preservation remains a necessary – though not sufficient – property for the consequence relation. What relevant logicians want to say is that the fact that it is impossible for the premises to be the case while the conclusion is not the case is necessary but not sufficient for validity; something else is needed, namely a relation of relevance, which in practice blocks or at least restricts the *ex falso* rule, among others. By contrast, in a non-monotonic logic such as default logic, which operates with the notion of minimal/preferred models, necessary truth-preservation is not even a necessary condition for the consequence relation.

There is also something very different about the corresponding responses to the arrival of new information which invalidates a previously established valid consequence: for relevant logic, new information is viewed as an intruder, disrupting the previously established relation of relevance between antecedent and consequent; for non-monotonic logics such as default logic and others, the effect of the arrival of new information on defeasible reasoning is precisely what is of interest.

I still need to think more carefully about the implications of all this for the central role that I ascribe to the monotonicity vs. non-monotonicity dichotomy in my ‘built-in opponent’ story, but at least for now I can say I understand better what Johan van Benthem means when he says that non-monotonicity is like ‘fever’; it arises from phenomena as diverse as relevance concerns and the defeasibility of ‘everyday’ reasoning.
