By the standard view of logic, one way our finite minds are able to reason about infinite things is by a division of labor between proofs and models. If a set of sentences Gamma does entail a sentence alpha, then we grasp this by means of a finite proof. If the set does not, then we grasp that by means of a countermodel making true all of the premises in Gamma while not making true alpha. Thus, by the standard view, a “logic” is an account of provability and an account of models for some language. The logic is then complete when any arbitrary Gamma and alpha in the language are such that either there is a proof of alpha from Gamma or a countermodel making all of Gamma true and alpha false (and sound when a proof and such a countermodel never both exist for the same Gamma and alpha).
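Just to fix notation, here is the standard picture written out as a LaTeX fragment (assuming amsmath/amssymb); nothing here goes beyond the prose above:

```latex
% Completeness: for arbitrary Gamma and alpha, either a proof or a countermodel.
\begin{gather*}
\Gamma \vdash \alpha
  \quad\text{or}\quad
  \exists M \,\bigl(\, M \vDash \gamma \ \text{for every}\ \gamma \in \Gamma
  \ \text{and}\ M \nvDash \alpha \,\bigr)
\end{gather*}
% Soundness: the two disjuncts never hold together; if Gamma proves alpha,
% then no such countermodel M exists.
```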
In “The Withering Away of Formal Semantics?” (Mind & Language, Volume 1, Issue 4, pages 302–318, December 1986) Neil Tennant attempts to give a purely inferentialist account of what is going on when people use representationalist models to show that entailment fails, and he makes a prima facie plausible suggestion for what we inferentialists should say: to say that a model M makes true all of Gamma and makes false alpha is just to say that there exists a set of sentences Phi such that (1) Phi is consistent (does not prove absurdity), (2) Phi proves every member of Gamma, and (3) Phi and alpha jointly prove absurdity. But (if the argument below is sound) something interesting happens to those of us moved by Dummett's argument from inferentialism to intuitionism when we follow that suggestion.
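Schematically, Tennant's suggestion amounts to the following (again just a LaTeX rendering of the three clauses, with the turnstile standing for whatever provability relation is in play and ⊥ for absurdity):

```latex
% Tennant's inferentialist reading of "M makes all of Gamma true and alpha false":
% there exists a set of sentences Phi satisfying clauses (1)-(3).
\begin{gather*}
\exists \Phi \,\Bigl(\,
    \underbrace{\Phi \nvdash \bot}_{(1)}
    \;\wedge\;
    \underbrace{\forall \gamma \in \Gamma \,(\Phi \vdash \gamma)}_{(2)}
    \;\wedge\;
    \underbrace{\Phi \cup \{\alpha\} \vdash \bot}_{(3)}
\,\Bigr)
\end{gather*}
```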
Much of Tennant’s article concerns relevant logical results, such as (a) Kalmár’s theorem that for any row of a sentence’s classical truth table, if the sentence is true on that row, then it is intuitionistically provable from the true atoms and the negated false atoms of that row, and if the sentence is false on that row, one can intuitionistically prove absurdity from those same atoms together with the sentence (I would be really interested in the extent to which this might generalize to infinitary logics!), (b) interesting reduction results guaranteeing that in classical first order logic one can always generate from a countermodel a set of sentences Phi that does what Tennant would have it do (if Phi includes Peano arithmetic, then the sentences are pretty low in the arithmetical hierarchy, if I remember right), and (c) a discussion of how Goedel’s theorem does not undermine his proposal. It’s a really rich discussion, one to which I think anyone interested in inferentialism should pay attention.
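To get a feel for (a), here is a toy illustration of my own (not Tennant's example): take two atoms P and Q and the truth-table row on which P is true and Q is false.

```latex
\begin{gather*}
% P or Q is true on this row, so the row's literals intuitionistically prove it:
P,\ \neg Q \;\vdash_{\mathrm{int}}\; P \vee Q \\
% P and Q is false on this row, so the literals together with the sentence
% intuitionistically prove absurdity (equivalently, the literals prove its negation):
P,\ \neg Q,\ P \wedge Q \;\vdash_{\mathrm{int}}\; \bot
\end{gather*}
```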
However, Tennant does not use his construal of countermodels to address the issue of completeness for intuitionistic logic. For by Tennant’s inferentialist model of model theory, to say a logic is complete would be to say that any arbitrary Gamma and alpha in the logic’s language are such that either (A) there is a proof of alpha from Gamma in the logic, or (B) there exists a set of sentences Phi such that (1) Phi does not prove (in the logic) absurdity, (2) Phi proves (in the logic) every member of Gamma, and (3) Phi and alpha jointly prove (in the logic) absurdity.
But now consider an undecided sentence P of a theory that has no decision procedure. The intuitionist will not accept classical negation rules for this sentence, so we know that ~~P does not intuitionistically prove P. Given Tennant’s inferentialist reconstrual of model theory, then, by disjunctive syllogism (Chrysippus’ Dog) the completeness of intuitionist logic would entail that there exists a set of sentences Phi such that (1) Phi does not intuitionistically prove absurdity, (2) Phi intuitionistically proves ~~P, and (3) Phi and P jointly intuitionistically prove absurdity. But if (3) holds, then by intuitionist negation introduction (or conditional introduction, if not-P is defined as “if P, then absurdity”), Phi intuitionistically proves ~P. From this and (2) it follows that Phi proves both ~P and ~~P. But then by intuitionistic negation elimination (again, conditional elimination if negation is defined in terms of the conditional), it follows that Phi proves absurdity, contradicting (1)! Since this is perfectly general, it follows that there is no set of sentences Phi that can do the job.
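Written out schematically, the argument is just this (nothing here beyond the steps in the paragraph above):

```latex
% Suppose Phi witnesses Tennant's clauses (1)-(3) for Gamma = {~~P} and alpha = P.
\begin{gather*}
\Phi, P \vdash_{\mathrm{int}} \bot \\        % clause (3)
\Phi \vdash_{\mathrm{int}} \neg P \\         % negation introduction, discharging P
\Phi \vdash_{\mathrm{int}} \neg\neg P \\     % clause (2)
\Phi \vdash_{\mathrm{int}} \bot              % negation elimination on ~P and ~~P
\end{gather*}
% The last line contradicts clause (1), so no such Phi can exist.
```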
So if Tennant is correct, then inferentialists should take intuitionistic logic to be necessarily incomplete!
There are a few interesting things here relevant to determining (assuming there is no problem in my argument or presentation of Tennant’s view) whether this is a modus ponens for the incompleteness of intuitionist logic, or a modus tollens against Tennant’s view.
- As far as I know (and I may be totally off here; please correct me if I've got this wrong), all completeness proofs for existing model theories for intuitionistic logic (such as Kripke semantics) require the use of strictly classical reasoning. So the intuitionist has always been perilously close to having to claim that their logic would be complete (mod some model theory) if their logic were incomplete (since stronger logical principles are needed to prove completeness)! But if Tennant is right (and the above argument is sound), maybe this is a feature, not a bug.
- Inferentialists about the content of logical operators tend to be inferentialists about mathematics too. But then Goedel’s incompleteness theorems entail that the inferences that determine meaning are not axiomatizable, or equivalently that the set of meaning-determining inferences is not machine checkable. Dummett’s discussion of this kind of “open-textured” notion of proof in his article on Goedel does seem to spin this as a kind of radical incompleteness of any formalization (maybe a better way to put this would be to be more explicit about the underlying logicism).
- Possible objection: I only got the argument to incompleteness by taking the provability relation in the statement of Tennant’s reformulation of model theory to be intuitionistic provability. But don’t the limitation results tell us exactly not to do this? The answer to this question ends up getting into a much bigger debate. However, I think that in the context of asking whether a given logic is complete, it is licit to read the provability relation in Tennant’s clauses as the provability relation of the very logic under discussion.
- Moreover, I think (and I’d have to check with a logician about this) that it is a consequence of the reduction procedures Tennant talks about in his paper that classical first order logic is complete in his sense. That is, in classical first order logic any arbitrary Gamma and alpha are such that either (A) there is a proof of alpha from Gamma or (B) there exists a set of sentences Phi such that (1) Phi does not classically prove absurdity, (2) Phi classically proves every member of Gamma, and (3) Phi and alpha jointly classically prove absurdity.
- The only thing I can see as fishy about the argument is the way we assume that ~~P does not entail P. When we are talking about general logical laws, this is of course true in intuitionistic logic. But when we instantiate on a particular P, who is to say? Maybe P ends up being part of some interesting decidable subtheory and we hadn't realized this before. But then it will have turned out that, for that P, ~~P did entail P. This is actually a pretty deep issue (I've got a forthcoming paper on Moore's Paradox and am co-writing a paper with Jeff Roland on Putnam's Brain in the Vat argument, both of which show that fallacies arising from confusion surrounding this issue are much more widespread than one would expect; when thought through, it might undermine what I say above, though I think that in itself might yield something interesting).
The main reason this is interesting to me is that I’ve been thinking about purely proof-theoretic analogues to soundness and completeness. The tonk problematics yield good requirements on the soundness and completeness of elimination rules, given introduction rules. Normalizability ensures that the elimination rule is sound (i.e. not too strong) with respect to the introduction rule, prohibiting tonk and quonk (I think I invented quonk while Roy Cook and I were writing our Analysis paper on negation and one-element domains; I used the operator to good effect, refuting Christopher Peacocke's theory of content, in a paper in Logic and Logical Philosophy; anyhow, quonk has the intro rule for the existential quantifier and the elimination rule for the universal). And Belnap’s requirement that a system should allow the proof of uniqueness within the logic entails that the elimination rule is complete (i.e. not too weak) relative to the introduction rule, prohibiting knot (probably someone invented this one before me; knot has the introduction rule for conjunction and the elimination rule for disjunction) and knouq (the knot version of quonk). Likewise, conservative-extension requirements work as soundness requirements upon adding both intro and elimination rules for an operator to a pre-existing set of inferences. [Inferentialists disagree about how exactly this should go: Restall thinks the extension should only be conservative with respect to atoms (and Brandom is committed to at least this much), while Tennant thinks it should be conservative with respect to all the other inferences (I think Brandom should be committed to this stronger principle, though his own recent logical framework is inconsistent with it).]
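For readers who haven't met these connectives, here is a quick sketch of the rules as I'm using the terms, with tonk being Prior's and quonk, knot, and knouq as glossed in the parentheses above:

```latex
\begin{gather*}
% tonk (Prior): disjunction-style introduction, conjunction-style elimination.
\frac{A}{A \ \mathrm{tonk}\ B} \qquad \frac{A \ \mathrm{tonk}\ B}{B} \\
% quonk: existential-style introduction, universal-style elimination,
% so from A(t) for a single term t one may conclude A(s) for any term s.
\frac{A(t)}{\mathrm{quonk}\, x\, A(x)} \qquad \frac{\mathrm{quonk}\, x\, A(x)}{A(s)} \\
% knot: conjunction-style introduction, disjunction-style elimination
% (the elimination is too weak to recover the conjuncts);
% side derivations are written as sequents for brevity.
\frac{A \qquad B}{A \ \mathrm{knot}\ B} \qquad
\frac{A \ \mathrm{knot}\ B \qquad A \vdash C \qquad B \vdash C}{C}
\end{gather*}
% knouq ("the knot version of quonk"), as I read the gloss above: universal-style
% introduction paired with existential-style elimination.
```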
But I haven’t been able to find any analogous purely inferential version of completeness for what you get when you add both introduction and elimination rules. The closest I can come is Tennant’s view discussed above. But maybe this is because the inferentialist should just say that any logical system that has a mechanically checkable proof relation and is rich enough to prove absurdity in the manner required by Tennant's clauses just is incomplete? This is kind of more old-school intuitionism, more Brouwerian than Heytingian, but I think Dummett’s piece on Goedel (as well as his scattered comments on Church’s Thesis, which in two papers I’ve reconstructed into valid proofs; in the one with Jason Megill we argued that Brandom is similarly committed) moved things in that direction in any case.
Anyhow, if anyone reading this has any corrections or suggested readings, that would be great!
Given that I'm not a logician, there is even more than the usual danger of posting a new logicky idea in the middle of the night (that is, that the light of day will expose an appalling error that not only mandates revision, but also renders the issue uninteresting). But even if that turns out to be the case, if anyone has any broader thoughts about what an inferentialist should say about completeness, or knows of any particularly illuminating papers or books that discuss this very issue, I’d be really grateful for the pointers.