I'm not a logician. Nor do I play one on T.V. So please be patient if I'm messing up something basic in what follows. An explanation of what I'm messing up and/or some relevant citations would be pretty helpful too.
Gregory Moore's masterful Zermelo's Axiom of Choice: Its Origins, Development, and Influence contains the following passage:
Vestiges of the first stage - choosing an unspecified element from a single set - can be found in Euclid's Elements, if not earlier. Such choices formed the basis of the ancient method of proving a generalization by considering an arbitrary but definite object, and then executing the argument for that object. This first stage also included the arbitrary choice of an element from each of finitely many sets. It is important to understand that the Axiom was not needed for an arbitrary choice from a single set, even if the set contained infinitely many elements. For in a formal system a single arbitrary choice can be eliminated through the use of universal generalization or a similar rule of inference. By induction on the natural numbers, such a procedure can be extended to any finite family of sets.
I don't get this at all.
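For what it's worth, here is my best reconstruction of the elimination Moore is describing, in ordinary natural deduction terms (the notation is mine, not his). To reason from "choose an arbitrary element a of the nonempty set A," no choice function is needed; the eigenvariable gets discharged by existential elimination:

\[
\begin{array}{rl}
1. & \exists x\,(x \in A) \quad \text{(the set } A \text{ is nonempty)}\\
2. & a \in A \quad \text{(assumption; } a \text{ an eigenvariable occurring nowhere else)}\\
\vdots & \\
n. & \psi \quad \text{(derived from 2, with } a \text{ not free in } \psi\text{)}\\
n{+}1. & \psi \quad \text{(}\exists\text{-elimination, discharging 2)}
\end{array}
\]

Dually, a generalization \(\forall x\,(x \in A \to \psi(x))\) is proved by arguing about an arbitrary eigenvariable and then applying universal generalization. Iterating either pattern finitely many times is, I take it, what Moore means by extending the procedure to any finite family of sets by induction.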
As far as soundness goes,* I do recall strengthened versions of the downward Löwenheim–Skolem theorem where not only is the model denumerable, but its domain is the natural numbers and all of its predicates are definable below a certain cutoff in the arithmetical hierarchy. Might results in this neighborhood show that ZF without choice is strong enough (i.e., strong enough that the proof theory is still sound) to be used in countermodel construction for normal first-order semantics? Or, in terms of Moore's discussion, might this also show in some sense that the use of eigenvariables in standard classical logic is free from implicit use of the axiom of choice?
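If I'm recalling the strengthening correctly, it is something in the vicinity of the arithmetized completeness theorem: for any consistent, recursively axiomatized first-order theory \(T\),

\[
T \text{ has a model } \mathcal{M} \text{ whose domain is } \mathbb{N} \text{ and all of whose relations are } \Delta^0_2\text{-definable}.
\]

The model comes from an explicit Henkin-style construction over the natural numbers, so no appeal to choice is needed to produce it; but take the exact \(\Delta^0_2\) cutoff with a grain of salt, since I'm citing from memory.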
This latter issue is more closely related to the completeness worry. Consider a meta-theoretic demonstration that some existentially quantified sentence is logically true. If we don't avail ourselves of completeness results, this basically requires showing that every single model contains at least one object that renders true the formula you get when a name for that object replaces the bound variable.
Clearly, if the axiom of choice is true, this is a licit procedure. The choice function will take each of the infinitely many models to the object assigned in each model to the variable made free when the existential quantifier is stripped off. But it raises the question of whether some logics are such that the converse holds, that is, whether this is in general a licit procedure only if the axiom of choice is true. This is much closer to Moore's issue, as the treatment of the free variable in the metalanguage mirrors the use of eigenvariables in natural deduction proofs of the sentence.
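Put semi-formally (the notation is mine), the contrast is between the claim one actually verifies model by model and the uniform assignment of witnesses that would underwrite talk of "the" chosen object:

\[
\forall \mathcal{M}\;\exists a \in |\mathcal{M}|\;\, \mathcal{M} \models \varphi[a]
\qquad\text{versus}\qquad
\exists f\,\forall \mathcal{M}\;\bigl(f(\mathcal{M}) \in |\mathcal{M}| \,\wedge\, \mathcal{M} \models \varphi[f(\mathcal{M})]\bigr).
\]

The left-hand claim is all that "logically true" literally demands; the right-hand claim is what treating the freed variable as picking out a witness in each model seems to presuppose, and passing from the former to the latter over infinitely many models is exactly where choice looks like it gets a grip.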
If there's anything to this, then one would expect expressibility issues to loom large. For example, ZF might be fine for the semantics of expressively weaker logics, while ZFC is needed for expressively stronger ones.
Likewise, there are interesting issues of constructivity and choice. Intuitionist logic has the existence property and classical logic does not. That is, if an existentially quantified statement is provable in intuitionist logic, then the formula with the quantifier removed and the resulting free variable replaced by some term is also provable. Along with the similar disjunction property, this is a concrete way in which intuitionist logic is more constructive than classical logic. Intuitively, this connects to the choicyness (even for first-order logic) of meta-linguistically determining that an existentially quantified statement is logically true.
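Stated schematically for closed formulas (and modulo the usual fine print about the language having enough closed terms), the two properties are:

\[
\vdash_{\mathrm{int}} \exists x\,A(x) \;\Longrightarrow\; \vdash_{\mathrm{int}} A(t) \text{ for some term } t,
\qquad\qquad
\vdash_{\mathrm{int}} A \vee B \;\Longrightarrow\; \vdash_{\mathrm{int}} A \text{ or } \vdash_{\mathrm{int}} B.
\]

Classical logic fails both: \(\vdash p \vee \neg p\) with neither disjunct provable, and the drinker sentence \(\vdash \exists x\,(P(x) \to \forall y\,P(y))\) with no provable instance \(P(t) \to \forall y\,P(y)\).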
Moore's book is fantastic, so I suspect I'm missing something. But as I try to make sense of Moore with my limited set theory abilities, it would be cool if anyone wants to set me straight and/or direct me to some literature on these very issues.
[Notes
* I don't like this way of talking. Why not say that the issue is whether the semantics are complete with respect to the proof theory? Seriously, why doesn't Gödel's theorem entail that second-order semantics are unsound, rather than that second-order proof theory is incomplete? Full second-order entailment contains so much mathematics (e.g. if the continuum hypothesis is true, then its second-order formulation is true in every second-order model); why not take a natural deduction formulation of the proof theory to get at its logical core and relegate the extra stuff to mathematics? I should probably ask for an explanation of this in a standalone post.
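If I'm remembering the standard construction right, the continuum hypothesis example can be put even more starkly: there is a sentence of pure second-order logic (call it \(\varphi_{\mathrm{CH}}\); the name is mine) such that

\[
\models_{2} \varphi_{\mathrm{CH}} \;\Longleftrightarrow\; \text{the continuum hypothesis holds},
\]

where \(\models_2\) is validity under full second-order semantics. So whether \(\varphi_{\mathrm{CH}}\) counts as a logical truth depends on the answer to CH, which is the sense in which second-order entailment contains so much mathematics.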
Kreisel worried about related issues so much that he had a hard time seeing what completeness proofs are for in the first place. He thought their only justification was that they entail that truth according to the semantics is recursively (or effectively, at least via Church's Thesis, though maybe also without it; I don't know) enumerable.
It would have been great to talk with Kreisel about modal logic, because it seems clear that completeness proofs there do much more, so much so that it's bizarre that we typically teach meta-theory of first-order logic before teaching modal logic. Often at the end of a well-taught normal meta-theory class, over half the class still has no idea why we've proved all this stuff. At the end of a well-taught modal logic class, this simply doesn't happen. And those very same students will get why one studies meta-theory of regular first-order logic.]