(This will be the first in a series of posts designed to suggest that the mathematics of impredicativity - especially methods of definition that make use of revision-theoretic procedures - is relevant to empirical contexts. Everything I say in these posts grows out of joint work with my math colleague Joe Mourad.)
Two basic points about the notion of impredicativity: first, it is much broader than what non-expert philosophers tend to think of under the rubric of paradoxes, vicious circularity, and the like. Second, it is, in the first instance, a property of definitions - or, more generally, procedures - not of concepts or sets. Given an appreciation of these points, it is not hard to see that the general phenomenon can pose important epistemological issues in contexts in which there are no infinite totalities in play - indeed, in various empirical discussions.
(I apologize for the defective symbols below. If anyone knows how to put proper symbols into the typepad interface, I'd love to hear about it. Anyway, 'A' is a universal quantifier, 'E' an existential one. F, G, etc. will be open formulas - where one would normally use Greek letters. As usual, capital letter variables are second order - ranging over sets of, or functions on, the first order domain. Lower case variables range over the first order domain.)
Consider the following sentence, where X is a variable ranging over functions from integers to integers:
1."AX(F(X) -> X(0) = 1) where F is some unproblematic open formula.
This is a simple P11 (P for Pi; this means that there is only one second order quantifier and it is universal) sentence, but if this sentence is meaningful, then so must be the open formula
AX(F(X) -> X(y)=1) which defines the set S1 of those integers, y, satisfying it.
That set corresponds to the characteristic function:
S(y) = 1 iff AX[F(X) -> X(y)=1]
But now if we try to determine the truth value of 1, we find that one instance - the instance in which X is taken to be S itself - is
1i. F(S) -> S(0)=1
So the truth of the universally quantified 1 is a function of the truth of 1i, which is a compositional function of the truth of the subformula S(0)=1, which is itself equivalent to 1.
Impredicativity.
Now this does not mean that there is no other way to determine the truth of 1 or the elements of S1. Relatedly, note that it doesn't imply that there is anything paradoxical or viciously circular about 1. Some impredicative definitions reach fixed points very nicely. (See Gupta's revision theory of truth, or Gupta and Belnap more generally on fixed point definitions. There is as well a mathematical literature on the topic that goes well beyond this work.) But the basic point is that there may be a natural process of definition that begins with an approximation of S, which feeds into an evaluation of 1, which requires a revision of S, and so on. If this process can be seen to reach a fixed point at some reasonable ordinal, all is well. Despite the impredicativity, the set can be constructively defined.
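To make the shape of such a revision sequence concrete, here is a minimal toy sketch in Python. It is only an illustration under invented assumptions: the domain is tiny, F is an arbitrary stand-in condition, and the second order quantifier is restricted to a small pool of candidate functions that always includes the current hypothesis for S. The point is just the loop: guess S, re-evaluate the defining formula, revise, and repeat until the sequence stabilizes.

```python
# Toy revision sequence for the circularly defined set S, where
# S(y) = 1 iff AX[F(X) -> X(y) = 1]. Functions on the 4-element domain
# are represented as 4-tuples of 0s and 1s.

DOMAIN = range(4)  # a tiny stand-in for the integers

def F(X):
    # An arbitrary "unproblematic" condition on functions, invented for
    # the illustration: X sends 0 to 1.
    return X[0] == 1

def revise(S, pool):
    # One revision step: recompute S(y) = 1 iff every candidate X with
    # F(X) has X(y) = 1. The pool plus the current hypothesis S stands
    # in for the full second order domain.
    candidates = pool + [S]
    return tuple(1 if all(X[y] == 1 for X in candidates if F(X)) else 0
                 for y in DOMAIN)

pool = [(1, 1, 0, 1), (1, 1, 1, 1), (0, 0, 0, 0)]
S = (0, 0, 0, 0)  # initial approximation: the empty set

seen = []
while S not in seen:  # iterate until the sequence repeats
    seen.append(S)
    S = revise(S, pool)

print("stabilized at:", S, "| fixed point:", revise(S, pool) == S)
```

In this toy the sequence stabilizes after a single revision; in the general theory the question is whether, and by what ordinal stage, such stabilization occurs.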
Still, this means that all it can mean to say that a given collection is impredicative - or that the truth of a given generalization is impredicative - is that every "acceptable" procedure that defines the set (or determines the truth value) is impredicative.
And there are lots of things we might mean by "acceptable". Typically, the relevant notion is motivated epistemologically. We might limit ourselves to constructive procedures, or procedures of a certain complexity, or procedures that reach a fixed point by a particular ordinal, or even procedures that could in fact be carried out in a reasonable amount of time given existing computing resources.
This last means that the general notion of impredicativity might well have application to finite contexts of empirical research. I'll discuss this more in next week's posts, but for now, just notice that there are lots of contexts in which a distinction between first and second order quantification arises. Consider chess, for example. In the first order domain, we might quantify over move number-move pairs; <18, Nf6> would be a typical pair. (These are the elements of a typical chess score, though I'm really numbering plies, not moves, since a move is a white move followed by a black move. It's simpler to ignore that. Every odd-numbered ply will, by rule, be white's.)
Now chess is a finite game. The 50-move rule means that there is a fixed finite limit on how long a game can be, and this means that the total tree of possible sequences of moves is also finite. So if we wanted, we could just consider our first-order domain to be finite sequences of legal moves starting from the initial position. But this domain is a hell of a lot larger than the initial domain we discussed. The initial domain is not tiny. If we figure that the maximum game length is about 10,000 moves, and that there are 16 pieces which could each move to any of 64 squares, we would get a domain of roughly 10,000 x 64 x 16 = 10,240,000 pairs. But the total number of games is more like 10 to the 120th power. The former is well within the computing power of a decent iPad. The latter is something that no physically possible mechanism could exhaustively calculate in the life of the universe.
So let's let our first order domain be the pairs <x, y>, where x is an integer less than 10,000 (really, by one calculation that looks right to me, 11,799) and y a move. Then our second order domain is the set of legal sequences of pairs from the first order domain. Chess theory will then include all sorts of claims that quantify over such sequences of moves. ("For any strategy of black in position P, white has a response that involves blockading the queen-side and thereby holding the draw.") Again, the difference between these domains is that the first is epistemically tractable and the second is not. Our access to generalizations over the second will never be via an exhaustive checking of cases, but rather via indirect argument.
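For concreteness, here is a back-of-the-envelope rendering of the two domains in Python, using the rough figures above. The encoding of a move as a piece/square pair is my own crude stand-in, and the 10^120 figure is the familiar Shannon-style estimate, not anything computed here.

```python
# Rough sizes for the first and second order domains.

MAX_PLIES = 10_000            # generous bound on game length, in plies
PIECES, SQUARES = 16, 64      # crude move encoding: piece x target square

# First order domain: move number-move pairs like <18, Nf6>.
first_order = MAX_PLIES * PIECES * SQUARES
print(f"first order domain: {first_order:,} pairs")   # 10,240,000

# Second order domain: legal sequences of such pairs. The usual rough
# estimate for the number of possible games is about 10^120.
second_order = 10 ** 120
ratio = second_order // first_order
print(f"second order domain is ~10^{len(str(ratio)) - 1} times larger")
```

The first number is iPad-sized; the second is not the sort of thing any physically possible process could enumerate.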
Here's a concrete example:
A common claim by GMs is that if one has only a positional advantage - that is, not a material advantage as well - one can force a win only if the opponent has at least two distinct positional weaknesses. Of course this generalization is intended as applying only to games that actually arise in practice. But even with this (vague) restriction, the intractability of the quantification does not diminish. So this is a generalization over games. It says:
2. AG AP((G is a game that could arise in practice & P is a position in G & in P white has only a positional advantage and is winning) -> black has at least two distinct positional weaknesses in P)
Again, if we could simply exhaustively calculate all legal sequences extending from P, this would just be a simple quantification over a fixed domain, with no essential impredicativity. But we can't. So we have a kind of finite epistemological impredicativity. Think about how one would evaluate such a claim. Let's say we try to refute it. We will look at a bunch of positions with solid positional advantages for white but only one weakness for black. Then we calculate a bit, looking at reasonable lines of attack for white and reasonable defenses for black. And at some point in each branch of actual calculation, we will see whether the resulting position looks - to the expert eye - to be winning or drawn. But here's the key: in so evaluating, we will make use of a principle like "white hasn't managed to force a second weakness, so black can draw by defending her one weakness." That is, we'll use the very principle under consideration in evaluating positions as part of testing it.
This does not make the testing procedure trivially circular - as in a truth-teller sentence. After all, we might calculate a forcing line from a given position with one weakness and see that we can force a mate, or force the creation of a second weakness. (All this is complicated by the fact that the principle is only meant to be true "in general," and by the fact that "weakness" means "reasonable positional weakness". If the weakness is so significant that there is a forced mate, say, then it is a different sort of thing altogether - not a positional weakness but a tactical one. None of that affects the main point.) But it does mean that the epistemically accessible methods are impredicative. What gives us confidence in the principle is that it seems stable under calculation. We revise our evaluation of P by looking at ways P can develop into P', and, so far as we can tell, our principle that one weakness is defensible seems stable - a fixed point - under such updating.
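Here is a schematic toy of that testing procedure, once more in Python. Everything chess-specific is faked: a "position" is just a count of black's positional weaknesses plus a flag for a forced mate, and the move generator is a stub. What it displays is the structure: the evaluation function invokes the very principle under test at the horizon of calculation, and the principle counts as passing at a position if deeper and deeper calculation never overturns that horizon verdict.

```python
# Toy rendering of "testing a principle by revision". A position is an
# invented stand-in: (number of black's weaknesses, forced mate available?).

def white_tries(pos):
    # Stub move generator: in this toy, pressing a lone weakness never
    # manufactures a second one and never uncovers a mate.
    return [pos]

def evaluate(pos, depth):
    weaknesses, mate = pos
    if mate:
        return "win"
    if depth == 0:
        # At the horizon we apply the very principle under test:
        # fewer than two weaknesses means black holds the draw.
        return "draw" if weaknesses < 2 else "win"
    return ("win" if any(evaluate(p, depth - 1) == "win"
                         for p in white_tries(pos))
            else "draw")

def principle_is_stable(pos, max_depth):
    # Finite analogue of a revision sequence reaching a fixed point:
    # deeper calculation never overturns the horizon verdict.
    return len({evaluate(pos, d) for d in range(max_depth + 1)}) == 1

print(principle_is_stable((1, False), 8))  # one weakness: stays "draw"
```

If some forcing line did manufacture a second weakness or a mate, deeper calculation would flip the verdict and the principle would fail the stability test at that position - exactly the kind of revision that sends us back to re-evaluate.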
That is just the sort of argument one makes in an infinite context when pursuing transfinite revision-theoretic definitions. And it applies to finite empirical cases as well.