Like most kids (I suspect), my daughters sometimes play ‘upside down world’, especially when I ask them something to which they should say ‘yes’, but instead they say ‘no’ and immediately regret it: ‘Upside down world!’ The upside down world game basically functions as a truth-value flipping operator: if you say yes, you mean no, and if you say no, you mean yes.
My younger daughter recently came across the upside down world paradox: if someone asks you ‘are you playing upside down world?’, all kinds of weird things happen to each of the answers you may give. If you are not playing upside down world, you will say no; but if you are playing upside down world you will also say no. So the ‘no’ answer underdetermines its truth-value, a bit like the no-no paradox. Now for the ‘yes’ answer: if you are playing upside down world and say ‘yes’, then that means ‘no’, and so you are not playing the game after all if you are speaking truthfully. But then your ‘yes’ was a genuine yes in the first place, and so you are playing the game and said yes, which takes us back to the beginning. (In other words, 'no' is the only coherent answer, but it still doesn't say anything about whether you are actually playing the game or not.)
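The flipping logic can be made explicit with a tiny sketch (my own formalization, not part of the original puzzle; `conveys` and `coherent` are made-up names):

```python
# A minimal model of the 'upside down world' paradox.
# An answer 'conveys' its flipped value when you are playing the game.
def conveys(playing, answer):
    """What an answer actually means, given whether the game is on."""
    return (not answer) if playing else answer

def coherent(playing, answer):
    # The question is 'are you playing upside down world?', so a
    # coherent answer must convey the value of `playing` itself.
    return conveys(playing, answer) == playing

# True = 'yes', False = 'no'
results = {(p, a): coherent(p, a) for p in (True, False) for a in (True, False)}
print(results)
```

Running it confirms the diagnosis above: ‘no’ comes out coherent whether or not you are playing, while ‘yes’ is coherent in neither case.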
I was asked to write a review of Terry Parsons' Articulating Medieval Logic for the Australasian Journal of Philosophy. This is what I've come up with so far. Comments welcome!
Scholars working on (Latin) medieval logic can be viewed as populating a spectrum. At one extremity are those who adopt a purely historical and textual approach to the material: they are the ones who produce the invaluable modern editions of important texts, without which the field would to a great extent simply not exist; they also typically seek to place the doctrines presented in the texts in a broader historical context. At the other extremity are those who study the medieval theories first and foremost from the point of view of modern philosophical and logical concerns; various techniques of formalization are then employed to ‘translate’ the medieval theories into something more intelligible to the modern non-historian philosopher. Between the two extremes one encounters a variety of positions. (Notice that one and the same scholar can at times wear the historian’s hat, and at other times the systematic philosopher’s hat.) For those adopting one of the many intermediary positions, life can be hard at times: when trying to combine the two paradigms, these scholars sometimes end up displeasing everyone (speaking from personal experience).
Terence Parsons’ Articulating Medieval Logic occupies one of these intermediate positions, but very close to the second extremity; indeed, it represents the daring attempt to combine the author’s expertise in natural language semantics, linguistics, and modern philosophy with his interest in medieval logical theories (which arose in particular from his decade-long collaboration with Calvin Normore, to whom the book is dedicated). For scholars of Latin medieval logic, the fact that such a distinguished expert in contemporary philosophy and linguistics became interested in these medieval theories only confirms what we’ve known all along: medieval logical theories have intrinsic systematic interest; they are not only curious museum pieces.
Parsons is not the first to employ modern logical techniques to analyze medieval theories, but his approach is quite unique (one might even say idiosyncratic). It seems fair to say that nobody has ever before attempted to achieve what he wants to achieve with this book. A passage from the book’s Introduction is quite revealing with respect to its goals:
It is no news to anyone that the concept of consistency is a hotly debated topic in philosophy of logic and epistemology (as well as elsewhere). Indeed, a number of philosophers throughout history have defended the view that consistency, in particular in the form of the principle of non-contradiction (PNC), is the most fundamental principle governing human rationality – so much so that rational debate about PNC itself wouldn’t even be possible, as famously stated by David Lewis. It is also the presumed privileged status of consistency that seems to motivate the philosophical obsession with paradoxes across time; to be caught entertaining inconsistent beliefs/concepts is really bad, so blocking the emergence of paradoxes is top-priority. Moreover, in classical as well as other logical systems, inconsistency entails triviality, and that of course amounts to complete disaster.
Since the advent of dialetheism, and in particular under the powerful assaults of karateka Graham Priest, PNC has been under pressure. Priest is right to point out that there are very few arguments in favor of the principle of non-contradiction in the history of philosophy, and many of them are in fact rather unconvincing. According to him, this holds in particular of Aristotle’s elenctic argument in Metaphysics gamma. (I agree with him that the argument there does not go through, but we disagree on its exact structure. At any rate, it is worth noticing that, unlike David Lewis, Aristotle did think it was possible to debate with the opponent of PNC about PNC itself.) But despite the best efforts of dialetheists, the principle of non-contradiction and consistency are still widely viewed as cornerstones of the very concept of rationality.
However, in the spirit of my genealogical approach to philosophical issues, I believe that an important question to be asked is: What’s the big deal with consistency in the first place? What does it do for us? Why do we want consistency so badly to start with? When and why did we start thinking that consistency was a good norm to be had for rational discourse? And this of course takes me back to the Greeks, and in particular the Greeks before Aristotle.
Hand and Kvanvig argue that Tennant's solution would be analogous to a set theorist responding to Russell's Paradox by proposing naive set theory with Frege's comprehension axiom restricted to instances that don't allow one to prove absurdity from that instance and the other axioms (call this theory N'). For Hand and Kvanvig this is a reductio of Tennant, and in my response* I argued that Tennant's solution was not actually analogous to N'.
If I remember right, Hand and Kvanvig argue that N' is bad because it doesn't illuminate the nature of sets in the way we properly expect of solutions to paradoxes. But they don't go into the logical properties of N' at all, and tonight I'm thinking that this is actually an important question in its own right. Let's just consider consistency, completeness, and axiomatizability.
Some time ago, I wrote a blog post defending the idea that a particular family of non-monotonic logics, called preferential logics, offered the resources to explain a number of empirical findings about human reasoning, as experimentally established. (To be clear: I am here adopting a purely descriptive perspective and leaving thorny normative questions aside. Naturally, formal models of rationality also typically include normative claims about human cognition.)
In particular, I claimed that preferential logics could explain what is known as the modus ponens-modus tollens asymmetry, i.e. the fact that in experiments, participants will readily reason following the modus ponens principle, but tend to ‘fail’ quite miserably with modus tollens reasoning – even though the two are equivalent according to classical as well as many non-classical logics. I also defended the claim (e.g. at a number of talks, including one at the Munich Center for Mathematical Philosophy which is immortalized in video here and here) that preferential logics could be applied to another well-known, robust psychological phenomenon, namely belief bias. Belief bias is the tendency of human reasoners to let the believability of a conclusion guide both their evaluation and production of arguments, rather than the validity of the argument as such.
Well, I am now officially taking most of it back (and mostly thanks to working on these issues with my student Herman Veluwenkamp).
This summer I'm trying to get a little bit up to speed on modality issues by doing an independent study with some students.* I've started looking ahead to Williamson's recent magnum opus and this little bit of the preface weirded me out:
Since cosmological theories in physics are naturally understood as embodying no restriction of their purview to exclude Lewis's multiple spatiotemporal systems, many of which are supposed to violate their laws, his cosmology is inconsistent with physicists', and so in competition with them as a theory of total spatiotemporal reality. On such matters, physicists may be felt to speak with more authority than metaphysicians. The effect of Lewis's influential and ingenious system-building was to keep centre stage a view that imposed Quine's puritan standards on modality long after Quine's own eliminativist application of those standards have been marginalized (Williamson 2013, xii)
I don't get this at all.
The connection between Lewisian Genuine Realism and Quine's eliminativism is a promissory note that I assume he'll cash in later, but the first bit just makes no sense to me. In On the Plurality of Worlds, Lewis explicitly says that the nomologically possible worlds will be a subset of all possible worlds and he discusses physically impossible forms of space time in this context. He has to do this, since possible worlds are individuated by the space-time which each world shares with itself. But nowhere does he make claims about which class of worlds will be the nomologically possible ones.
Hitler does not like Gödel's theorem one bit. Perhaps surprisingly, he displays a sophisticated understanding of the implications and presuppositions of the theorem. (In other words, there's some very solid philosophy of logic in the background -- I think I could teach a whole course only on the material presupposed here.)
(Courtesy of Diego Tajer, talented young logician from Buenos Aires, giving continuation to the best Monty Python tradition!)
My co-writer* Joshua Heller is currently working on a project on connections between the vagueness literature, the literature on semantic underdetermination, and new work on metaphysical indeterminacy.**
One thing we're both interested in exploring over the next few weeks is the extent to which Evans' argument against ontic vagueness applies to either semantic underdetermination or metaphysical indeterminacy. But I'm about ten years out of date on the vagueness literature. The last time I dipped my toe in this, it seemed like everyone was trying to save supervaluationism from Williamson's criticisms about wide and narrow entailment and from the charge that it has no advantages over three-valued systems with respect to modelling higher-order vagueness. I didn't think there was any consensus on Evans' argument then.
Is there now anything approaching a consensus among people working on vagueness about Evans' argument? If so, what should I read? Have any of the new people working on metaphysical indeterminacy or semantic underdetermination said anything interesting about Evans' argument?
This is clearly invalid, because there are non-empty irreflexive relations (note that if it were valid one could prove ExEx(Rxx) from ExEy(Rxy), and that ExEx(Rxx) has the same truth conditions as Ex(Rxx)).
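For concreteness, a toy countermodel (my own example, not from the original discussion) can be checked mechanically: the strict order < on {0, 1} makes the premise true and the conclusion false.

```python
# Countermodel to the inference from ExEy(Rxy) to Ex(Rxx):
# interpret R as strict 'less than' on a two-element domain.
domain = [0, 1]

def R(x, y):
    return x < y

premise = any(R(x, y) for x in domain for y in domain)   # ExEy(Rxy)
conclusion = any(R(x, x) for x in domain)                # Ex(Rxx)
print(premise, conclusion)  # → True False
```

Since 0 < 1 holds but nothing is less than itself, the premise is satisfied while the conclusion fails, so any rule licensing the inference is unsound.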
Intuitively, Existential Introduction should be restricted so that one cannot replace a name or eigenvariable with a variable that is already bound in the sentence. But I can't find this restriction in Barker-Plummer/Barwise/Etchemendy. My friend couldn't find it in the old red Mates book either, so we think it's not unlikely that we're both missing something obvious.
I turned to the soundness proof in Barker-Plummer/Barwise/Etchemendy, and they leave the case of Existential Introduction as an exercise for the reader. It has a little star next to it indicating that it is a difficult problem. I'm wondering if it's impossible. But, again, it's more likely that I'm missing something, so if anyone who teaches from the book could take a look, my introductory logic students would appreciate it.
In much of the philosophy of language and mind coming out of the late Wittgenstein and/or early Heidegger, a distinction is made between merely following a norm versus also being able to correctly assess whether others are following that norm. Note that both the Brandom of "Dasein, the Being that Thematizes" (in Tales of the Mighty Dead) and the Mark Okrent of "On Layer Cakes" mark this distinction, though they disagree on whether the latter ability requires language. Okrent (who objects that Brandom's view entails that human aphasics and non-linguistic deaf adults have no minds) writes:
Because all tool use is embedded in a context of instrumental rationality, there is more to using a hammer correctly than using it as others do. Sometimes it is possible to use a hammer better than the others do, even if no one else has ever done it in that way, and no one else recognizes that one is doing so, because the norm that defines this use as ‘better’ is independent of what is actually recognized within the community. That norm is the norm of instrumental rationality: it is good to do that which would achieve one’s ends most completely and most efficiently, were anyone to do it in that way. For the same reason, it is sometimes possible for a member of a society to improve a hammer, or repair it, by giving it a structure that no hammer has previously had in that society.
I am currently supervising a student writing a paper on Wittgenstein’s notion of therapy as a metaphilosophical concept. The paper relies centrally on a very useful distinction discussed in N. Rescher’s 1985 book The Strife of Systems (though I do not know whether it was introduced there for the first time), namely the distinction between prescriptive vs. descriptive metaphilosophy (the topic of chap. 14 of the book).
The descriptive issue of how philosophy has been done is one of factual inquiry largely to be handled in terms of the history of the field. But the normative issue of how philosophy should be done – of significant questions, adequate solutions, and good arguments – is something very different. (Rescher 1985, 261)
Rescher goes on to argue that descriptive metaphilosophy is not part of philosophy at all; it is a branch of factual inquiry, namely the history of philosophy and perhaps its sociology. Prescriptive metaphilosophy, by contrast, is real philosophy: methodological claims on how philosophy should be done are themselves philosophical claims. (Full disclosure: I haven’t read the whole chapter, only what google books allows me to see…) Rescher’s position as described here seems to be quite widespread, encapsulating the ‘disdain’ with which not only descriptive metaphilosophy, but also the history of philosophy in general, is often viewed by ‘real philosophers’. And yet, this position seems to me to be fundamentally wrong (and this is also the claim that my student is defending in his paper).
(Notice that to discuss the status of descriptive metaphilosophy as philosophy, we need to go meta-metaphilosophical! It’s turtles all the way up, or down, depending on how you look at it.)
Genuine Realists about modality typically understand propositional content to be a function of the set of worlds where that proposition is true (the set of worlds might include impossible ones). Actualist Realists take the dependence to go in the other direction, taking a world to be a function of the set of propositions true at that world. Since this function is almost always identity,* let's treat it as such in what follows.
Kaplan established a cardinality paradox against Genuine Realism analogous to an earlier paradox about the set of all propositions put forward by Russell. Russell's paradox** is now taken, analogically, to present a problem for Actualist Realists.
Here's how Kaplan's paradox goes. Assume the set of all possible worlds has cardinality K. Then, by Cantor's Theorem, the powerset of the set of possible worlds has a greater cardinality. But if a proposition is a set of worlds, then the cardinality of the set of propositions is greater than the cardinality of the set of worlds. O.K. so far. But now consider, for each proposition, a world where one being is thinking that proposition.**** Then the set of worlds has at least the cardinality of the set of propositions. Contradiction.
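The argument can be compressed as follows (my notation, with W the set of worlds and P the set of propositions):

```latex
% If propositions are sets of worlds, Cantor's Theorem gives:
|P| = |\mathcal{P}(W)| = 2^{|W|} > |W|.
% But if each proposition p has a world w_p in which some being thinks p,
% and distinct propositions get distinct such worlds, the map
p \mapsto w_p
% is injective, so |P| \le |W|.
% Together: |W| < |P| \le |W|, a contradiction.
```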
I'm currently running a series of posts at M-Phi with sections of a paper I'm working on, 'Axiomatizations of arithmetic and the first-order/second-order divide', which may be of interest to at least some of the NewAPPS readership. It focuses on the idea that, when it comes to axiomatizing arithmetic, descriptive power and deductive power cannot be combined: axiomatizations that are categorical (using a highly expressive logical language, typically second-order logic) will typically be intractable, whereas axiomatizations with deductively better-behaved underlying logics (typically, first-order logic) will not be categorical -- i.e. will be true of models other than the intended model, the series of the natural numbers. Based on a distinction proposed by Hintikka between the descriptive use and the deductive use of logic in the foundations of mathematics, I discuss what the impossibility of having our arithmetical cake and eating it (i.e. of combining deductive power with expressive power to characterize arithmetic with logical tools) means for the first-order logic vs. second-order logic debate.
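The non-categoricity half of this trade-off is the familiar compactness phenomenon; here is the standard textbook sketch (not spelled out in the paper itself) for first-order Peano arithmetic:

```latex
% Add a fresh constant c and infinitely many axioms saying c is large:
T = \mathrm{PA} \cup \{\, c > \underline{n} : n \in \mathbb{N} \,\}
% Every finite subset of T is satisfiable in the standard model
% (interpret c as a large enough number), so by compactness T has a
% model: one containing an element above every standard numeral,
% hence not isomorphic to the intended model.
```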
Over on Facebook, Bijan Parsia asked a really great question.
[... are there] any critical reasoning courses/textbooks out there that focus at the dialectical (or beyond) level rather than at the argument level. My recollection is that they are very focused at the individual argument level with an unhealthy focus on fallacies rather than thinking very much about overall cognitive strategies (esp. in group settings) or other goals than the cognitive. I recall getting a lot of that from phil of science classes and pedagogy and (interestingly) online dissuasion analysis (see the "poisonous people" video floating about, or even troll bestiaries), but not so much from critical reasoning (which often was shoehorned into a symbolic logic class).
While I haven't taught critical reasoning in a few years, I also can't recall having run across anything like what Bijan is looking for here. But I don't think it's difficult to see why materials of this sort would be of great value. In fact, I can see how they would be very helpful not just in the 'critical reasoning' context, but more broadly as part of the kind of instruction in philosophical process we might give in a lot of our classes.
And with that, I throw the question out to the rest of you. Do you know of materials of this sort? Have you developed something of your own that you'd like to share?
I'm not a logician. Nor do I play one on T.V. So please be patient if I'm messing up something basic in what follows. An explanation of what I'm messing up and/or some relevant citations would be pretty helpful too.
Vestiges of the first stage - choosing an unspecified element from a single set - can be found in Euclid's Elements, if not earlier. Such choices formed the basis of the ancient method of proving a generalization by considering an arbitrary but definite object, and then executing the argument for that object. This first stage also included the arbitrary choice of an element from each of finitely many sets. It is important to understand that the Axiom was not needed for an arbitrary choice from a single set, even if the set contained infinitely many elements. For in a formal system a single arbitrary choice can be eliminated through the use of universal generalization or a similar rule of inference. By induction on the natural numbers, such a procedure can be extended to any finite family of sets.
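A sketch of the elimination described in the passage above (my reconstruction, in modern natural-deduction terms rather than any particular ancient formalism):

```latex
% A single 'arbitrary choice' needs no choice principle: from an
% existential premise, instantiate a fresh name a, argue to a
% conclusion \psi in which a does not occur, and discharge:
\frac{\exists x\,\varphi(x) \qquad \varphi(a) \vdash \psi}{\psi}
\quad (a \text{ fresh, not occurring in } \psi)
% Iterating this rule n times handles choices from finitely many
% sets S_1, \dots, S_n; only an infinite family of simultaneous
% choices calls for the Axiom.
```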
Formal/mathematical philosophy is a well-established approach within philosophical inquiry, having its friends as well as its foes. Now, even though I am very much a formal-approaches-enthusiast, I believe that fundamental methodological questions tend not to receive as much attention as they deserve within this tradition. In particular, a key question which is unfortunately not asked often enough is: what counts as a ‘good’ formalization? How do we know that a given proposed formalization is adequate, so that the insights provided by it are indeed insights about the target phenomenon in question? In recent years, the question of what counts as adequate formalization seems to be for the most part a ‘Swiss obsession’, with the thought-provoking work of Georg Brun, and Michael Baumgartner & Timm Lampert. But even these authors seem to me to restrict the question to a limited notion of formalization, as translation of pieces of natural language into some formalism. (I argued in chapter 3 of my book Formal Languages in Logic that this is not the best way to think about formalization.)
However, some of the pioneers in formal/mathematical approaches to philosophical questions did pay at least some attention to the issue of what counts as an adequate formalization. In this post, I want to discuss how Tarski and Carnap approached the issue, hoping to convince more ‘formal philosophers’ to go back to these questions. (I also find the ‘squeezing argument’ framework developed by Kreisel particularly illuminating, but will leave it out for now, for reasons of space.)
A few days ago, while trying to open the interwebs thingy to allow me to start entering my grades, I was prevented from doing so by a pop-up menu that referenced LSU's Policy Statement 67. The text included unsubstantiated and highly dubious claims such as that most workplace problems are the result of drug and alcohol abuse by workers. And this was only a few weeks after all of the chairs at LSU had to provide verification that every single faculty member had read a hysterical message from our staff and administrative overlords that justified expanding the extension of pee-tested employees at LSU to now include faculty. The wretched communiqué justified pee-testing faculty because of new evidence showing that marijuana is harmful to 13 year olds.*
Anyhow, when I scrolled to the bottom of the popup, I had to click a button saying not only that I read the document but also that I "agreed" with it.
I honestly don't get this. Are my beliefs a condition of employment at LSU? There was no button that said I read it but didn't agree with it.
( From the graphic novel Logicomix, taken from this blog post by Richard Zach.)
“He doesn’t want to prove this or that, but to find out how things really are.” This is how Russell describes Wittgenstein in a letter to Lady Ottoline Morrell (as reported in M. Potter’s wonderful book Wittgenstein's Notes on Logic, p. 50 – see my critical note on the book). This may well be the most accurate characterization of Wittgenstein’s approach to philosophy in general, in fact a fitting description of the different phases Wittgenstein went through. Indeed, if there is a common denominator to the first, second, intermediate etc. Wittgensteins, it is the fundamental nature of the questions he asked: different answers, but similar questions throughout. So instead of proving ‘this or that’, for example, he asks what a proof is in the first place.
This week, we’ve had a new round of discussions on the ‘combative’ nature of philosophy as currently practiced and its implications, prompted by a remark in a column by Jonathan Wolff on the scarcity of women in the profession. (Recall the last wave of such discussions, then prompted by Rebecca Kukla’s 3AM interview.) Brian Leiter retorted that there’s nothing wrong with combativeness in philosophy (“Insofar as truth is at stake, combat seems the right posture!”). Chris Bertram in turn remarked that this is the case only if “there’s some good reason to believe that combat leads to truth more reliably than some alternative, more co-operative approach”, which he (apparently) does not think there is. Our own John Protevi pointed out the possible effects of individualized grading for the establishment of a competitive culture.
As I argued in a previous post on the topic some months ago, I am of the opinion that adversariality can have a productive, positive effect for philosophical inquiry, but not just any adversariality/combativeness. (In that post, I placed the discussion against the background of gender considerations; I will not do so here, even though there are obvious gender-related implications to be explored.) In fact, what I defend is a form of adversariality which combines adversariality/opposition with a form of cooperation.
This is a beautiful review. It is clear on technical issues; it is very critical, albeit respectful. It is informative to experts and non-experts alike; the formal apparatus is used to provide clarity, not to create an esoteric, gated garden. It calls attention to unexplored alternative positions, and does so not just to keep scholarly score, but especially in order to illuminate the philosophical possibility space. It also contains a touch of wicked humor. (I return to that below.)
Note that Takashi Yagisawa (the reviewer) does not offer a detailed summary of the book; it is, thus, not balanced. Readers have to trust his judgment that he has focused on the central issues that are relevant to the community. Only competent readers of the whole book can decide, thus, if the review is fair. For some, the lack of summary may be a fatal flaw. Such people think that the main duty of a review is to tell people what's in a book. While that is important (which is why judicious summaries are often part of great reviews), it need not trump other considerations of the sort mentioned in the first paragraph.
Two weeks ago, I wrote a post proposing a dialogical perspective on structural rules. In fact, at that point I offered an analysis of only one structural rule, namely left-weakening, and promised that I would come back for more. In this post, I will discuss contraction and exchange (for both, I again restrict myself to the left cases). (I will assume that readers are familiar with the basic principles of my dialogical approach to deductive proofs, as recapped in my previous post on structural rules.)
Contraction, in particular, is very significant, given the recent popularity of restrictions on contraction as a way to block the derivation of paradoxes such as the Liar and Curry. What does contraction mean, generally speaking? Contraction is the rule according to which two or more copies of a given formula in a sequent can be collapsed into each other (contracted); in other words, the idea is that the number of copies should not matter for the derivation of the conclusion:
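In standard sequent-calculus notation (my rendering of the left case under discussion):

```latex
% Left contraction: two copies of A collapse into one.
\frac{\Gamma, A, A \vdash C}{\Gamma, A \vdash C}\;(\mathrm{LC})
```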
Evolutionary accounts of deductive reasoning have been enjoying a fair amount of popularity in the last decades. Some of those who have defended views of this kind are Cooper, Maddy, and more recently Joshua Schechter. The basic idea is that an explanation for why we have developed the ability to reason deductively (if indeed we have developed this ability!) is that it conferred a survival advantage on those individuals among our ancestors who possessed it, who in turn were reproductively more successful than those individuals in the ancestral population who did not possess this ability. In other words, deductive reasoning would have arisen as an adaptation in humans (and possibly in non-human animals too, but I will leave this question aside). Attractive though it may seem at first sight (and I confess to having had a fair amount of sympathy for it for a while), this approach faces a number of difficulties, and in my opinion is ultimately untenable. (Some readers will not be surprised to hear this, if they recall a previous post where I argue that deductive reasoning is best seen as a cultural product, not as a biological, genetically encoded endowment in humans.)

In this post, I will spell out what I take to be the main flaw of such accounts, namely the fact that they seem incompatible with the empirical evidence on deductive reasoning in human reasoners as produced by experimental psychology. In this sense, these accounts fall prey to the same mistake that plagues many evolutionary accounts of female orgasm, in particular those according to which female orgasm has arisen as an adaptation in the human species. To draw the parallel between the case of deductive reasoning and the case of the female orgasm, I will rely on Elisabeth Lloyd’s fantastic book The Case of the Female Orgasm (which, as it so happens, I had the pleasure of re-reading during my vacation last
As some of you may have seen, we will be hosting the workshop ‘Proof theory and philosophy’ in Groningen at the beginning of December. The idea is to focus on the philosophical significance and import of proof theory, rather than exclusively on technical aspects. An impressive team of philosophically inclined proof theorists will be joining us, so it promises to be a very exciting event (titles of talks will be made available shortly).
For my own talk, I’m planning to discuss the main structural rules as defined in sequent calculus – weakening, contraction, exchange, cut – from the point of view of the dialogical conception of deduction that I’ve been developing, inspired in particular (but not exclusively) by Aristotle’s logical texts. In this post, I'll do a bit of preparatory brainstorming, and I look forward to any comments readers may have!
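For reference, the four (left) structural rules in question, in standard sequent notation:

```latex
\frac{\Gamma \vdash C}{\Gamma, A \vdash C}\,(\text{weakening})
\qquad
\frac{\Gamma, A, A \vdash C}{\Gamma, A \vdash C}\,(\text{contraction})
\qquad
\frac{\Gamma, A, B, \Delta \vdash C}{\Gamma, B, A, \Delta \vdash C}\,(\text{exchange})
\qquad
\frac{\Gamma \vdash A \quad \Delta, A \vdash C}{\Gamma, \Delta \vdash C}\,(\text{cut})
```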
Some months ago I wrote two posts on the concept of indirect proofs: one presenting a dialogical conception of these proofs, and the other analyzing the concept of ‘proofs through the impossible’ in the Prior Analytics. Since then I gave a few talks on this material, receiving useful feedback from audiences in Groningen and Paris. Moreover, this week we hosted the conference ‘Dialectic and Aristotle’s Logic’ in Groningen, and after various talks and discussions I have come to formulate some new ideas on the topic of reductio proofs and their dialectical/dialogical underpinnings. So for those of you who enjoyed the previous posts, here are some further thoughts and tentative answers to lingering questions.
Recall that the dialogical conception I presented in previous posts was meant to address the awkwardness of the first speech act in a reductio proof, namely that of supposing precisely that which you intend to refute by showing that it entails an absurdity. From studies in the literature on math education, it is known that this first step can be very confusing to students learning the technique of reductio proofs. On the dialogical conception, however, no such awkwardness arises, as there is a division of roles between the agent who supposes the initial thesis to be refuted, and the agent who in fact derives an absurdity from the thesis.
Those of you who have been following some of my blog posts
will recall my current research project ‘Roots of Deduction’, which aims at
unearthing (hopefully without damaging!) the conceptual and historical origins
of the very concept of a deductive argument as one where the truth of the
premises necessitates the truth of the conclusion. In particular, this past
year we’ve been reading the Prior
Analytics in a reading group, which has been a fantastic experience (highly
recommended!). For next year, the plan is to switch from logic to mathematics,
and look more closely into the development of deductive arguments in Greek
But here’s the catch: the members of the project are all much
more versed in the history of logic than in the history of mathematics, so we
can’t count on as much previous expertise for mathematics as we could in the
case of (Aristotelian) logic. Moreover, the history of ancient Greek
mathematics is a rather intimidating topic, with an enormous amount of
secondary literature and a notorious scarcity of primary sources (at least for
the earlier pre-Euclid period, which is what we would be interested in). So it
seems prudent to focus on a few specific aspects of the topic, and for now I
have in mind specifically the connections between mathematics and logic (and
philosophy) in ancient Greece. More generally, our main interest lies not in the
‘contentual’ part of mathematical theories, but rather in the ‘structural’
part, in particular the general structure of arguments and the emergence of
necessarily truth-preserving arguments.
Last week I was in Munich for the excellent ‘Carnap on logic’ conference, which brought together pretty much everyone who’s someone in the world of Carnap scholarship. (And that excludes me -- still don’t know exactly why I was invited in the first place…) My talk was a comparison between Carnap’s notion of explication and my own conception of formalization, as developed in my book Formal Languages in Logic. In particular, I proposed a cognitive, empirically informed account of Carnap’s notion of the fruitfulness of an explication.
Anyway, I learned an awful lot about Carnap, and got to meet some great people I hadn’t yet met. But perhaps the talk I enjoyed most was Steve Awodey’s ‘On the invariance of logical truth’ (for those of you who have seen Steve lecturing before, this will come as no surprise…). The main point of Steve’s talk was to defend the claim that the notion of (logical) invariance that is now more readily associated with Tarski, in particular his lecture ‘What are logical notions?’ (1966, published posthumously in 1986), is already to be found in the work of Carnap of the 1930s. This in itself was already fascinating, but then Steve ended his talk by drawing some connections between the invariance debate in philosophy of logic and his current work on homotopy type theory. Now, some of you will remember that I am truly excited about this new research program, and since I’ve also spent quite some time thinking about invariance criteria for logicality (more on which below), it was a real treat to hear Steve relating the two debates. In particular, he gave me (yet another) reason to be excited about the homotopy type theory program, which is the topic of this blog post.
As some readers will recall, we’ve been holding a reading group on the Prior Analytics
in Groningen over the last academic year, which then prompted me to write (too?) many posts inspired by
this venerable work (here and here, for example). We are nearly finished, only
three more chapters to go (so just one more session). But interestingly,
towards the end things are getting increasingly strange. Up to chapter 18 of
book B (which traditionally receives much less attention than its more famous
sibling, book A), things were still following the usual Aristotelian pattern of
extreme systematicity and strenuous examination of cases. But as we got to
chapter B19, there was a sudden change of gears: B19 and B20 are explicit
applications of the theory of syllogistic to dialectical situations (needless
to say, these made me very happy), and B21 is really about epistemology
and quite out of place in the context of the Prior Analytics (though also very interesting). (Some scholars
think that these are older layers of the text, which then somehow ended up
being placed at the very end.)
At B22 it looked like we were back on track
with the usual analysis of cases in the figures, but there was still a surprise
in store. Towards the end of the chapter, Aristotle presents a
puzzling discussion of ‘opposites’, one of which is preferable over the other.
He writes (Smith translation):
When A and B are two opposites, of which A
is preferable to B, and D is preferable in the same way to its opposite C, then
if <the combination of> A and C is preferable to <the combination
of> B and D, then A is preferable to D. (68a25-28)
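In more modern notation, writing ‘$\succ$’ for ‘is preferable to’ and ‘$+$’ for the combination of two items, the principle can be rendered roughly as follows (my own schematic reconstruction, not Aristotle’s notation):

```latex
% A, B are opposites with A \succ B; C, D are opposites with D \succ C.
\[
\bigl( A \succ B \;\wedge\; D \succ C \;\wedge\; (A + C) \succ (B + D) \bigr)
\;\Rightarrow\; A \succ D
\]
```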
It is fair to say that the ‘received view’ about deductive inference, and about inference in general, is that it proceeds from premises to conclusion so as to produce new information (the conclusion) from previously available information (the premises). It is this conception of deductive inference that gives rise to the so-called ‘scandal of deduction’, which concerns the apparent lack of usefulness of a deductive inference, given that in a valid deductive inference the conclusion is already ‘contained’, in some sense or another, in the premises. This is also the conception of inference underpinning e.g. Frege’s logicist project, and much (if not all) of the discussion in the philosophy of logic of the past several decades. (In fact, it is also the conception of deduction of the most famous ‘deducer’ of all time, Sherlock Holmes.)
That an inference, and a deductive inference in particular, proceeds from premises to conclusion may appear to be such an obvious truism that no one in their sane mind would want to question it. But is this really how it works when an agent is formulating a deductive argument, say a mathematical demonstration?
[The following is the consequence of discussion with F.A. Muller, Lieven Decock, and Victor Gijsbers. They should be blamed for my mistakes.--ES]
A few weeks ago I complained that Ted Sider's approach to "knee-jerk realism" is dismissive toward views that do not share his (ahum) fundamental outlook (and I mused a bit about the sociology of knowledge that facilitates such dismissiveness). One worrisome consequence is that Sider fails to see objections to his view when they ought to be staring him in the face. Consider the following two passages from Ted Sider's Writing the Book of the World:
I hold that the fundamental is also determinate. "The fundamental is determinate" is not particularly clear, and improving the situation is difficult because there are so many different ways to understand what "determinacy" amounts to, but perhaps we can put it thus. First, no special-purpose vocabulary that is distinctive of indeterminacy...carves at the joints. Second, fundamental languages obey classical logic. The combination of these two claims is perhaps the best way to cash out the elusive dogma that vagueness and other forms of indeterminacy are not "in the world." (137)
The continuum hypothesis is sometimes said to be indeterminate. But suppose that mundane set-theoretic truths, such as the axiom of extensionality, are fundamental. Then by the combinatorial principle, the continuum hypothesis must be determinate, since it can be stated using only expressions that occur in mundane set-theoretic truths (namely, logical expressions and the predicate ∈). Thus we have a surprising result: the fundamentality of the mundane truths of set-theory requires the non-mundane continuum hypothesis to be determinate. (151)
Now, first, the "sometimes said to be" is an odd locution. After all, it was proven that if ZFC is consistent then the continuum hypothesis can neither be proven nor disproven in it (see here for a good intro). Second, in the context of Sider's program ("mundane set-theoretic truths"), abandoning ZFC is not on the table. Third, I know that one's modus ponens is another's modus tollens, but Sider has no "result" here--he ought to face up to the fact that there is a straightforward objection against his claim that the "fundamental languages" obey classical logic and mundane set theory: there is no reason to think the continuum hypothesis is determinate. To think otherwise is an act of faith (recall my observation about the odd religiosity of his so-called "knee-jerk realism"). So, I stand by my earlier claim that there is something troubling about an agenda-setting book that wishes away obvious problems with the program.
A few days ago
Eric had a post about an insightful text that has been making the rounds on the
internet, which narrates the story of a mathematical ‘proof’ that is for now
sitting somewhere in a limbo between the world of proofs and the world of
non-proofs. The ‘proof’ in question purports to establish the famous ABC
conjecture, which remains (thus far) one of the main open questions in number theory. (Luckily,
a while back Dennis posted an extremely helpful and precise exposition of the
ABC conjecture, so I need not rehearse the details here.) It has been proposed
by the Japanese mathematician Shinichi
Mochizuki, who is widely regarded as an extremely talented mathematician. This
is important, as crackpot ‘proofs’ are proposed on a daily basis, but in many
cases nobody bothers to check them; a modicum of credibility is required to get
your peers to spend time checking your purported proof. (Whether this is fair
or not is beside the point; it is a sociological fact about the practice of
mathematics.) Now, Mochizuki most certainly does not lack credibility, but his
‘proof’ was made public quite a few months ago, and yet there is still no
verdict as to whether it is indeed a proof of the ABC conjecture or not. How
could this be?
As it turns out, Mochizuki
has been working pretty much on his own for the last 10 years, developing new
concepts and techniques by mixing and matching elements from different areas of
mathematics. The result is that he has created his own private mathematical world,
so to speak, which no one else seems able (or willing) to venture into for now.
So effectively, as it stands his ‘proof’ is not communicable, and thus cannot
be surveyed by his peers.