I'm currently running a series of posts at M-Phi with sections of a paper I'm working on, 'Axiomatizations of arithmetic and the first-order/second-order divide', which may be of interest to at least some of the NewAPPS readership. It focuses on the idea that, when it comes to axiomatizing arithmetic, descriptive power and deductive power cannot be combined: axiomatizations that are categorical (formulated in a highly expressive logical language, typically second-order logic) tend to be deductively intractable, whereas axiomatizations with deductively better-behaved underlying logics (typically, first-order logic) will not be categorical -- i.e. they will be true of models other than the intended model, the series of the natural numbers. Based on a distinction proposed by Hintikka between the descriptive use and the deductive use of logic in the foundations of mathematics, I discuss what the impossibility of having our arithmetical cake and eating it (i.e. of combining deductive power with expressive power to characterize arithmetic with logical tools) means for the first-order logic vs. second-order logic debate.
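To make the contrast vivid in symbols (a generic illustration, not an excerpt from the paper): second-order Peano arithmetic states induction as a single axiom quantifying over all subsets of the domain, which is what secures categoricity, whereas a first-order axiomatization can only offer a schema with one instance per formula, which is why compactness then delivers non-standard models.

```latex
% Second-order induction: a single axiom, quantifying over all sets/properties X.
\[ \forall X \,\bigl( X(0) \wedge \forall n\, (X(n) \rightarrow X(n+1)) \;\rightarrow\; \forall n\, X(n) \bigr) \]

% First-order induction: a schema, with one instance for each formula \varphi(x)
% of the first-order language of arithmetic.
\[ \varphi(0) \wedge \forall n\, (\varphi(n) \rightarrow \varphi(n+1)) \;\rightarrow\; \forall n\, \varphi(n) \]
```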
Formal/mathematical philosophy is a well-established approach within philosophical inquiry, having its friends as well as its foes. Now, even though I am very much a formal-approaches-enthusiast, I believe that fundamental methodological questions tend not to receive as much attention as they deserve within this tradition. In particular, a key question which is unfortunately not asked often enough is: what counts as a ‘good’ formalization? How do we know that a given proposed formalization is adequate, so that the insights provided by it are indeed insights about the target phenomenon in question? In recent years, the question of what counts as adequate formalization seems to be for the most part a ‘Swiss obsession’, with the thought-provoking work of Georg Brun, and Michael Baumgartner & Timm Lampert. But even these authors seem to me to restrict the question to a limited notion of formalization, as translation of pieces of natural language into some formalism. (I argued in chapter 3 of my book Formal Languages in Logic that this is not the best way to think about formalization.)
However, some of the pioneers in formal/mathematical approaches to philosophical questions did pay at least some attention to the issue of what counts as an adequate formalization. In this post, I want to discuss how Tarski and Carnap approached the issue, hoping to convince more ‘formal philosophers’ to go back to these questions. (I also find the ‘squeezing argument’ framework developed by Kreisel particularly illuminating, but will leave it out for now, for reasons of space.)
(From the graphic novel Logicomix, taken from this blog post by Richard Zach.)
“He doesn’t want to prove this or that, but to find out how things really are.” This is how Russell describes Wittgenstein in a letter to Lady Ottoline Morrell (as reported in M. Potter’s wonderful book Wittgenstein's Notes on Logic, p. 50 – see my critical note on the book). This may well be the most accurate characterization of Wittgenstein’s approach to philosophy in general, in fact a fitting description of the different phases Wittgenstein went through. Indeed, if there is a common denominator to the first, second, intermediate etc. Wittgensteins, it is the fundamental nature of the questions he asked: different answers, but similar questions throughout. So instead of proving ‘this or that’, for example, he asks what a proof is in the first place.
As some of you may have seen, we will be hosting the workshop ‘Proof theory and philosophy’ in Groningen at the beginning of December. The idea is to focus on the philosophical significance and import of proof theory, rather than exclusively on technical aspects. An impressive team of philosophically inclined proof theorists will be joining us, so it promises to be a very exciting event (titles of talks will be made available shortly).
For my own talk, I’m planning to discuss the main structural rules as defined in sequent calculus – weakening, contraction, exchange, cut – from the point of view of the dialogical conception of deduction that I’ve been developing, inspired in particular (but not exclusively) by Aristotle’s logical texts. In this post, I'll do a bit of preparatory brainstorming, and I look forward to any comments readers may have!
Some months ago I wrote two posts on the concept of indirect proofs: one presenting a dialogical conception of these proofs, and the other analyzing the concept of ‘proofs through the impossible’ in the Prior Analytics. Since then I have given a few talks on this material, receiving useful feedback from audiences in Groningen and Paris. Moreover, this week we hosted the conference ‘Dialectic and Aristotle’s Logic’ in Groningen, and after various talks and discussions I have come to formulate some new ideas on the topic of reductio proofs and their dialectical/dialogical underpinnings. So for those of you who enjoyed the previous posts, here are some further thoughts and tentative answers to lingering questions.
Recall that the dialogical conception I presented in previous posts was meant to address the awkwardness of the first speech act in a reductio proof, namely that of supposing precisely that which you intend to refute by showing that it entails an absurdity. From studies in the literature on math education, it is known that this first step can be very confusing to students learning the technique of reductio proofs. On the dialogical conception, however, no such awkwardness arises, as there is a division of roles between the agent who supposes the initial thesis to be refuted, and the agent who in fact derives an absurdity from the thesis.
“That's the problem with false proofs of true theorems: it's not easy to produce a counterexample.”
This is a comment by Jeffrey Shallit in a post on a purported proof of Fermat’s Last Theorem. (Incidentally, the author of the purported proof comments at M-Phi occasionally.) In all its apparent simplicity, this remark raises a number of interesting philosophical questions. (Being the pedantic philosopher that I am, I'll change the terminology a bit and use the phrase 'incorrect proof' instead of 'false proof', which I take to be a category mistake.)
First of all, the remark refers to a pervasive but prima facie slightly puzzling feature of mathematical practice: mathematicians often formulate alternative proofs of theorems that have already been proved. This may appear somewhat surprising on the assumption that mathematicians are (solely) in the business of establishing (mathematical) truths; now, if a given truth, a theorem, has already been established, what is the point of going down the same road again? (Or more precisely, going to the same place by taking a different road.) This of course shows that the assumption in question is false: mathematicians are not only interested in theorems, in fact they are mostly interested in proofs. (This is one of the points of Rav’s thought-provoking paper ‘Why do we prove theorems?’)
There are several reasons why mathematicians look for new proofs of previously established theorems, and John Dawson Jr.’s excellent ‘Why do mathematicians re-prove theorems?’ discusses a number of these reasons. The original proof may be seen as too convoluted or not sufficiently explanatory – ideally, a proof shows not only that P is the case, but also why P is the case (more on this below). Alternatively, the proof may rely on notions and concepts alien to the formulation and understanding of the theorem itself, giving rise to concerns of purity. Indeed, recall that Colin McLarty motivates his search for a new proof of Fermat’s Last Theorem in these terms: “Fermat’s Last Theorem is just about numbers, so it seems like we ought to be able to prove it by just talking about numbers”. This is not the case for the currently available proof by Wiles, which relies on much heavier machinery.
"A person's Erdős–Bacon number is the sum of one's Erdős number—which measures the "collaborative distance" in authoring mathematical papers between that person and Hungarian mathematician Paul Erdős—and one's Bacon number—which represents the number of links, through roles in films, by which the individual is separated from American actor Kevin Bacon. The lower the number, the closer a person is to Erdős and Bacon, and this reflects a small world phenomenon in academia and entertainment."--Wikipedia. [HT: Wayne Myrvold] So, for example Bertrand Russell's Bacon number is the result of an appearance in a Bollywood film.
Those of you who have been following some of my blog posts will recall my current research project ‘Roots of Deduction’, which aims at unearthing (hopefully without damaging!) the conceptual and historical origins of the very concept of a deductive argument as one where the truth of the premises necessitates the truth of the conclusion. In particular, this past year we’ve been reading the Prior Analytics in a reading group, which has been a fantastic experience (highly recommended!). For next year, the plan is to switch from logic to mathematics, and look more closely into the development of deductive arguments in Greek mathematics.
But here’s the catch: the members of the project are all much more versed in the history of logic than in the history of mathematics, so we can’t count on as much previous expertise for mathematics as we could in the case of (Aristotelian) logic. Moreover, the history of ancient Greek mathematics is a rather intimidating topic, with an enormous amount of secondary literature and a notorious scarcity of primary sources (at least for the earlier pre-Euclid period, which is what we would be interested in). So it seems prudent to focus on a few specific aspects of the topic, and for now I have in mind specifically the connections between mathematics and logic (and philosophy) in ancient Greece. More generally, our main interest is not in the ‘contentual’ part of mathematical theories, but rather in the ‘structural’ part, in particular the general structure of arguments and the emergence of necessarily truth-preserving arguments.
Last week I was in Munich for the excellent ‘Carnap on logic’ conference, which brought together pretty much everyone who’s someone in the world of Carnap scholarship. (And that excludes me -- still don’t know exactly why I was invited in the first place…) My talk was a comparison between Carnap’s notion of explication and my own conception of formalization, as developed in my book Formal Languages in Logic. In particular, I proposed a cognitive, empirically informed account of Carnap’s notion of the fruitfulness of an explication.
Anyway, I learned an awful lot about Carnap, and got to meet some great people I hadn’t yet met. But perhaps the talk I enjoyed most was Steve Awodey’s ‘On the invariance of logical truth’ (for those of you who have seen Steve lecturing before, this will come as no surprise…). The main point of Steve’s talk was to defend the claim that the notion of (logical) invariance that is now more readily associated with Tarski, in particular his lecture ‘What are logical notions?’ (1966, published posthumously in 1986), is already to be found in the work of Carnap of the 1930s. This in itself was already fascinating, but then Steve ended his talk by drawing some connections between the invariance debate in philosophy of logic and his current work on homotopy type theory. Now, some of you will remember that I am truly excited about this new research program, and since I’ve also spent quite some time thinking about invariance criteria for logicality (more on which below), it was a real treat to hear Steve relating the two debates. In particular, he gave me (yet another) reason to be excited about the homotopy type theory program, which is the topic of this blog post.
"The usual implicit assumption is that mathematical English could be formalized in a set-theoretic foundation such as ZFC, and this requires various conventions on what we can and can’t say in mathematical English. The goal of informal type theory is to develop conventions for a version of mathematical English whose “implicit underlying foundation” is instead type theory — specifically, homotopy type theory."--Mike Shulman.
"Writing a 500 pp. book on an entirely new subject, with 40 authors,
in 9 months is already an amazing achievement....But even more astonishing, in my humble opinion, is the mathematical
and logical content: this is an entirely new foundation, starting from
scratch and reaching to ,
the Yoneda lemma, the cumulative hierarchy of sets, and a new
constructive treatment of the real numbers — with a whole lot of new and
fascinating mathematics and logic along the way...But for all that, what is perhaps most remarkable about the book is what is not
in it: formalized mathematics. One of the novel things about HoTT is
that it is not just formal logical foundations of mathematics in principle: it
really is code that runs in a computer proof assistant... At the risk of sounding a bit grandiose, this represents something of a
“new paradigm” for mathematics: fully formal proofs that can be run on
the computer to ensure their absolute rigor, accompanied by informal
exposition that can focus more on the intuition and examples. Our goal
in this Book has been to provide a first, extended example of this new
style of doing mathematics."--Steve Awodey.
"I believe that [W.E.] Johnson, like McTaggart and Aristotle, deserves commentators." A.N. Prior (1949) MIND.
"Mesmerized by Homo economicus, who acts solely on egoism, economists
shy away from altruism almost comically. Caught in a shameful act of
heroism, they aver: "Shucks, it was only enlightened self interest."
Sometimes it is. At other times it may be only rationalization (spurious
for card-carrying atheists): "If I rescue somebody's son, someone will
I will not waste ink on face-saving tautologies. When the governess of
infants caught in a burning building reenters it unobserved in a
hopeless mission of rescue, casuists may argue; "She did it only to get
the good feeling of doing it. Because other-wise she wouldn't have done
it." Such argumentation (in Wolfgang Pauli's scathing phrase) is not
even wrong. It is just boring, irrelevant, and in the technical sense of
old-fashioned logical positivism "meaning-less." You do not understand
the logic and history of consumer demand theory — Pareto, W. E. Johnson,
Slutsky, Allen-Hicks, Hotelling, Samuelson, Houthakker,... — if you
think that is its content."--P. Samuelson (1993), The American Economic Review.
There is a school of thought that locates the origins of analytical philosophy in the Cambridge of the philosopher-economist Sidgwick and his students. After all, in Sidgwick's writings we find all the analytical virtues, and it is, thus, no surprise that Rawls and Parfit treat him as our vital interlocutor. Those (that is, the circle around Sidgwick) recognized in Boole's work -- to quote W.E. Johnson -- "the first great revolution in the study of formal logic...comparable in importance with that of the algebraical symbolists in the sixteenth century." (2.6, p. 136) While it is not the story I tend to tell (say, here and here), I like this approach because it reminds us of the non-trivial overlap between logicians and economists so distinctive of Cambridge between 1870 and 1940, and thus puts Keynes (father and son) and Ramsey back into the origin of analytical philosophy.
Now, the logician-economist, W.E. Johnson (1858 – 1931), is a test-case for this school of thought. (Recall the significance of Johnson to our very own Mohan [and here].) For, while Johnson does not belong to the British Idealists, he does not figure in the stories we tell about our origins at all (selective evidence: neither Landini's Russell nor Candlish's The Russell/Bradley Dispute even mentions Johnson). Even Wikipedia claims that his Logic "was dated at the time of its publication, and Johnson can be seen as a member of the British logic 'old guard' pushed aside" by Russell and Whitehead. Wikipedia fits our narrative of progress; yet what to make of Prior's judgment?
It is fair to say that the ‘received view’ about deductive inference, and about inference in general, is that it proceeds from premises to conclusion so as to produce new information (the conclusion) from previously available information (the premises). It is this conception of deductive inference that gives rise to the so-called ‘scandal of deduction’, which concerns the apparent lack of usefulness of a deductive inference, given that in a valid deductive inference the conclusion is already ‘contained’, in some sense or another, in the premises. This is also the conception of inference underpinning e.g. Frege’s logicist project, and much (if not all) of the discussion in the philosophy of logic of the last several decades. (In fact, it is also the conception of deduction of the most famous ‘deducer’ of all times, Sherlock Holmes.)
That an inference, and a deductive inference in particular, proceeds from premises to conclusion may appear to be such an obvious truism that no one in their sane mind would want to question it. But is this really how it works when an agent is formulating a deductive argument, say a mathematical demonstration?
Continuing on NewAPPS’ recent obsession with number theory, today I came across an interesting Slate article on the new proof of the ‘bounded gaps’ conjecture. The whole article is worth reading, but there is one particularly priceless quote (hyperlink in the original):

If you start thinking really hard about what “random” really means, first you get a little nauseated, and a little after that you find you’re doing analytic philosophy. So let’s not go down that road.
A few days ago Eric had a post about an insightful text that has been making the rounds on the internet, which narrates the story of a mathematical ‘proof’ that is for now sitting somewhere in a limbo between the world of proofs and the world of non-proofs. The ‘proof’ in question purports to establish the famous ABC conjecture, one of the (thus far) main open questions in number theory. (Luckily, a while back Dennis posted an extremely helpful and precise exposition of the ABC conjecture, so I need not rehearse the details here.) It has been proposed by the Japanese mathematician Shinichi Mochizuki, who is widely regarded as an extremely talented mathematician. This is important, as crackpot ‘proofs’ are proposed on a daily basis, but in many cases nobody bothers to check them; a modicum of credibility is required to get your peers to spend time checking your purported proof. (Whether this is fair or not is beside the point; it is a sociological fact about the practice of mathematics.) Now, Mochizuki most certainly does not lack credibility, but his ‘proof’ was made public quite a few months ago, and yet so far there is no verdict as to whether it is indeed a proof of the ABC conjecture or not. How could this be?
As it turns out, Mochizuki has been working pretty much on his own for the last 10 years, developing new concepts and techniques by mixing-and-matching elements from different areas of mathematics. The result is that he created his own private mathematical world, so to speak, which no one else seems able (or willing) to venture into for now. So effectively, as it stands his ‘proof’ is not communicable, and thus cannot be surveyed by his peers.
Kim sympathizes with his frustrated colleagues, but suggests a different reason for the rancor. “It really is painful to read other people’s work,” he says. “That’s all it is… All of us are just too lazy to read them.” Kim is also quick to defend his friend. He says Mochizuki’s reticence is due to being a “slightly shy character” as well as his assiduous work ethic. “He’s a very hard working guy and he just doesn’t want to spend time on airplanes and hotels and so on.” O’Neil, however, holds Mochizuki accountable, saying that his refusal to cooperate places an unfair burden on his colleagues. “You don’t get to say you’ve proved something if you haven’t explained it,” she says. “A proof is a social construct. If the community doesn’t understand it, you haven’t done your job.”--Has the ABC Conjecture been solved? [HT: Clerk Shaw on Facebook]
This piece is a nice inside perspective on the 'political economy' and social epistemology of mathematical proof.
An annoyingly inaccurate, but touching obituary in the Washington Post. Not only did he solve one of the grand conjectures - and one of the easiest to explain to non-mathematicians - but he launched a subliterature in epistemology, by providing the classic case of indirect evidence of the existence of a proof.
Fields-medalist Terence Tao (among other feats, he spotted the mistake in Nelson’s purported proof of the inconsistency of arithmetic back in 2011) has a blog post on the meaning of rigor in mathematical practice. He files this post under the heading ‘career advice’, but the post in fact touches upon some key issues in the philosophy of mathematics, such as: What is the role of intuitions for mathematical knowledge? What is the role of formalism and rigor in mathematics? How are ‘formal’ and ‘informal’ mathematics related?
While Tao’s post is not intended to be a contribution to the philosophy of mathematics as such, and while one may miss some of the depth of the discussions found in the philosophical literature and elsewhere, I find it illuminating to see how a practicing mathematician (and a brilliant one at that) conceptualizes the role of rigor in mathematical practice. (Also, much of what he says fits in nicely with some of the views about formalisms and proofs that I’ve been defending in recent years, as I will argue below -- something that I couldn't let go unnoticed!)
A few days ago I wrote a post on a dialogical conceptualization of indirect proofs. Not coincidentally, much of my thinking on this topic at the moment is prompted by the Prior Analytics, as we are currently holding a reading group of the text in Groningen. We are still making our way through the text, but here are some potentially interesting preliminary findings.
I am deeply convinced that the emergence of the technique of indirect proofs marks the very birth of the deductive method, as it is a significant departure from more ‘mundane’ forms of argumentation (as I argued before). So it is perhaps not surprising that the first fully-fledged logical text in history, the Prior Analytics, offers a sophisticated account of indirect proofs.
In an earlier post, I made reference to Jacob Klein’s essay about Husserl’s history of the origin of geometry. Klein’s own work is very impressive as well (Burt Hopkins has a recent book on both Klein and Husserl; an NDPR review is here), and reading through Klein's book has helped me to see one reason why Deleuze so freely and regularly draws from both mathematics and art, though not just any mathematics or any art. Deleuze was interested in a problematic as opposed to an axiomatic mathematics; and he was interested in a figural as opposed to figurative art. What the two have in common is a certain form of abstraction.
In his commentary on Euclid, the 5th-century Greek philosopher Proclus defines indirect proofs, or ‘reductions to impossibility’, in the following way (I owe this passage to W. Hodges, from […]):

Every reduction to impossibility takes the contradictory of what it intends to prove and from this as a hypothesis proceeds until it encounters something admitted to be absurd and, by thus destroying its hypothesis, confirms the proposition it set out to establish.
Schematically, a proof by reduction is often represented roughly as follows.
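A minimal sketch in natural-deduction style (the usual diagram; the particular notation is my own choice, not Proclus’):

```latex
% Reductio ad absurdum: to establish A, assume its contradictory,
% derive an absurdity, and discharge the assumption.
\[
  \dfrac{\begin{array}{c} [\neg A] \\ \vdots \\ \bot \end{array}}{A}
  \quad\text{(reductio)}
\]
```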
It is well known that indirect proofs pose interesting philosophical questions. What does it mean to assert something with the precise goal of then showing it to be false, i.e. because it leads to absurd conclusions? Why assert it in the first place? What kind of speech act is that? It has been pointed out that the initial statement is not an assertion, but rather an assumption, a supposition. But while we may, and in fact do, suppose things that we know are not true in everyday life (say, in the kind of counterfactual reasoning involved in planning), to suppose something precisely with the goal of demonstrating its falsity is a somewhat awkward move, both cognitively and pragmatically.
(A second in a series, drawn from joint work with K. Joseph Mourad.) How do we measure the complexity of decision procedures in poker? This is a question that is both complex and subtle, and it seems to me an interesting one for thinking about the interplay between formal modeling of epistemological situations and more concrete strategic epistemic thinking.
(This will be the first in a series of posts designed to suggest that the mathematics of impredicativity - especially methods of definition that make use of revision-theoretic procedures - is relevant to empirical contexts. Everything I say in these posts grows out of joint work with my math colleague Joe Mourad.)
Two basic points about the notion of impredicativity: first, it is much broader than what non-expert philosophers tend to think of under the rubric of paradoxes, vicious circularity, and the like. Second, it is a property of definitions - or, more generally, procedures - not of concepts or sets, in the first instance. Given an appreciation of these points, it is not hard to see that the general phenomenon can pose important epistemological issues in contexts in which there are no infinite totalities in play, indeed, in the context of various empirical discussions.
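A stock toy example, just to fix ideas about both points (the sketch below is merely illustrative): "the tallest person in the room" is defined impredicatively, since the definition quantifies over a totality that includes the very individual it picks out, even though that totality is finite and empirical and the person could equally be defined predicatively (say, by name).

```latex
% An impredicative *definition* of a perfectly ordinary, finite, empirical object:
% t is picked out by quantifying over a totality (Room) that contains t itself.
% The impredicativity attaches to the definition (the procedure), not to t.
\[ t \;=\; \iota x \,\bigl( x \in \mathit{Room} \;\wedge\; \forall y \in \mathit{Room}\;\, \mathrm{height}(y) \le \mathrm{height}(x) \bigr) \]
```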
In a recent paper, the eminent psychologist of reasoning P. Johnson-Laird says the following:
[T]he claim that naïve individuals can make deductions is controversial, because some logicians and some psychologists argue to the contrary (e.g., Oaksford & Chater, 2007). These arguments, however, make it much harder to understand how human beings were able to devise logic and mathematics if they were incapable of deductive reasoning beforehand.
This last claim strikes me as very odd, or at the very least as poorly formulated. (To be clear, I side with those, such as Oaksford and Chater, who think that deductive reasoning must be learned to be mastered and competently practiced by reasoners.) It looks like a doubtful inference to the best explanation: humans have in fact devised logic and mathematics, which are crucially based on the deductive method, so they must have been capable of deductive reasoning before that. Something like: birds had to have fully formed wings before they could fly – hmm, I don’t think so… Instead, the wing analogy suggests that there must be some precursors to deductive reasoning skills in untrained reasoners, but the phylogeny of the deductive method (and to be clear, I’m speaking of cultural evolution here) would have been a gradual, self-feeding process.
(OK, so it looks like I’m over-posting a bit today… Just one more!)
Between today and tomorrow, the workshop ‘Groundedness in Semantics and Beyond’ is taking place at MCMP in Munich, co-organized with the ERC project Plurals, Predicates, and Paradox led by Øystein Linnebo. The workshop’s program seems excellent across the board, but the opening talk is what really caught my attention: Patrick Suppes on ‘A neuroscience perspective on the foundations of mathematics’. The abstract:
I mainly ask and partially answer three questions. First, what is a number? Second, how does the brain process numbers? Third, what are the brain processes by which mathematicians discover new theorems about numbers? Of course, these three questions generalize immediately to mathematical objects and processes of a more general nature. Typical examples are abstract groups, high dimensional spaces or probability structures. But my emphasis is not on these mathematical structures as such, but how we think about them. For the grounding of mathematics, I argue that understanding how we think about mathematics and discover new results is as important as foundations of mathematics in the traditional sense.
Number theory is notorious for producing conjectures that are easy to state but difficult to resolve. Fermat's Last Theorem, stated in 1637 (by Fermat, of course, in the margin of his copy of Diophantus’ Arithmetica), requires nothing but a knowledge of basic arithmetic to comprehend fully; it was proved (by Andrew Wiles, building on the work of dozens of predecessors) only in 1995. The Goldbach conjecture (that every even number greater than 2 is the sum of two primes) and the twin primes conjecture (that there are infinitely many pairs of prime numbers p, q such that p + 2 = q), stated long ago, remain open.
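Indeed, the statements are simple enough that one can check them for small numbers with a few lines of code (a little illustration added here for concreteness; needless to say, checking small cases proves nothing):

```python
def is_prime(n):
    """Trial-division primality test; fine for the small n used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_witness(n):
    """Return a pair of primes summing to the even number n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Goldbach witnesses for the even numbers 4..28.
print([goldbach_witness(n) for n in range(4, 30, 2)])

# Twin prime pairs (p, p + 2) with p < 100.
print([(p, p + 2) for p in range(2, 100) if is_prime(p) and is_prime(p + 2)])
```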
A newer conjecture of this sort is the “ABC” conjecture. It has been a topic of excitement among mathematicians lately because a mathematician has made a credible claim to have proved it—but by idiosyncratic methods that other mathematicians will have to master before they can evaluate the proof. Proving it, moreover, would resolve a number of other outstanding problems in number theory.
(See the Wikipedia entry for more; see also Michael Nielsen’s very helpful page and list of references. I should note that of the news stories he refers to, the best is that from Nature; the New Scientist story should be ignored.)
In what follows I will describe in elementary terms the conjecture and its mathematical significance. The methods used by Shinichi Mochizuki in his claimed proof are very far from elementary. I won’t discuss them; follow the links if you want to know more. In a future post I will consider some philosophical questions suggested by the theorem and its proof.
Neil Levy kindly called my attention to the story: "A paper by Marcie Rathke of the University of Southern North Dakota at Hoople had been provisionally accepted for publication in Advances in Pure Mathematics. ‘Independent, Negative, Canonically Turing Arrows of Equations and Problems in Applied Formal PDE’." As LRB reports, "The paper was created using Mathgen, an online random maths paper generator." Unfortunately, "Neither Marcie Rathke nor the University of Southern North Dakota at Hoople is willing to pay the ‘processing charges’ levied by Advances in Pure Mathematics, so we will never know if the work would actually have made it to publication." The exchange between 'author' and journal is priceless.
So, what did this hoax expose? LRB concludes the following:
Academic journals depend on peer review to ensure the rigour and value of submissions. The less prestigious the journal, the harder it is to find competent reviewers and the lower they will have to set the threshold, until at some point we arrive at, essentially, accept-all-comers vanity publishing. The murkier the business model and the lower the standards outside the mainstream, the harder it is for academics to challenge the status of the prestige journals, locking academics into the situation Glen Newey describes.
Short version: Science is often said to be committed to the reals, because physics, for example, essentially makes use of sentences quantifying over the reals. But we have perfectly good countable, well-founded, constructive models of full second-order arithmetic. So why can't physics, for example, simply and explicitly embrace one of these as what it is working over, and thereby radically simplify its alleged ontological commitments?
It seems to me that there is an issue with the epistemology of domains of quantification that has important implications for the epistemology and semantics of math generally, and which has received less attention than it deserves. In quick outline, the point is this:
A quantificational sentence has a determinate meaning only if there is some determinate fact of the matter as to what its domain of quantification is.
So one knows what one is saying with such a sentence only if one knows what domain one is quantifying over. If we are discussing anything as complex as the reals - equivalently, second-order arithmetic - and mean to quantify over the "intended model" - that is, do not specify some constructible model as our domain - then we do not know what we are quantifying over.
Thus, we do not know what we are saying when we make claims with second order arithmetic quantifiers.
Today the whole of the internet seems to be celebrating Alan Turing's 100th birthday -- and rightly so, of course. Google in particular has one of its amazing doodles, depicting an interactive Turing machine. Here's a video on how to solve the doodle: