I'm currently running a series of posts at M-Phi with sections of a paper I'm working on, 'Axiomatizations of arithmetic and the first-order/second-order divide', which may be of interest to at least some of the NewAPPS readership. It focuses on the idea that, when it comes to axiomatizing arithmetic, descriptive power and deductive power cannot be combined: axiomatizations that are categorical (formulated in a highly expressive logical language, typically second-order logic) will be deductively intractable, in that the underlying logic has no sound, complete and effective proof procedure, whereas axiomatizations whose underlying logics are deductively better behaved (typically, first-order logic) will not be categorical -- i.e. they will be satisfied by models other than the intended model, the structure of the natural numbers. Based on a distinction proposed by Hintikka between the descriptive use and the deductive use of logic in the foundations of mathematics, I discuss what the impossibility of having our arithmetical cake and eating it too (i.e. of combining deductive power with expressive power to characterize arithmetic with logical tools) means for the first-order vs. second-order logic debate.
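To give the flavor of the contrast (my gloss here, not a passage from the paper): the second-order induction axiom quantifies over all subsets of the domain,

$$\forall X\,\big[\big(X0 \wedge \forall n\,(Xn \rightarrow X(n+1))\big) \rightarrow \forall n\,Xn\big],$$

and this is what secures categoricity; its first-order counterpart is only a schema, with one instance per formula $\varphi(x)$ of the language:

$$\big(\varphi(0) \wedge \forall n\,(\varphi(n) \rightarrow \varphi(n+1))\big) \rightarrow \forall n\,\varphi(n).$$

The schema inherits the completeness and effectiveness of first-order proof procedures, but by compactness it admits non-standard models; the second-order axiom excludes them, at the price of a consequence relation that cannot be effectively axiomatized.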
Over on Facebook, Bijan Parsia asked a really great question.
[... are there] any critical reasoning courses/textbooks out there that focus at the dialectical (or beyond) level rather than at the argument level. My recollection is that they are very focused at the individual argument level with an unhealthy focus on fallacies rather than thinking very much about overall cognitive strategies (esp. in group settings) or other goals than the cognitive. I recall getting a lot of that from phil of science classes and pedagogy and (interestingly) online dissuasion analysis (see the "poisonous people" video floating about, or even troll bestiaries), but not so much from critical reasoning (which often was shoehorned into a symbolic logic class).
While I haven't taught critical reasoning in a few years, I also can't recall having run across anything like what Bijan is looking for here. But I don't think it's difficult to see why materials of this sort would be of great value. In fact, I can see how they would be very helpful not just in the 'critical reasoning' context, but more broadly as part of the kind of instruction in philosophical process we might give in a lot of our classes.
And with that, I throw the question out to the rest of you. Do you know of materials of this sort? Have you developed something of your own that you'd like to share?
I'm not a logician. Nor do I play one on T.V. So please be patient if I'm messing up something basic in what follows. An explanation of what I'm messing up and/or some relevant citations would be pretty helpful too.
Vestiges of the first stage - choosing an unspecified element from a single set - can be found in Euclid's Elements, if not earlier. Such choices formed the basis of the ancient method of proving a generalization by considering an arbitrary but definite object, and then executing the argument for that object. This first stage also included the arbitrary choice of an element from each of finitely many sets. It is important to understand that the Axiom was not needed for an arbitrary choice from a single set, even if the set contained infinitely many elements. For in a formal system a single arbitrary choice can be eliminated through the use of universal generalization or a similar rule of inference. By induction on the natural numbers, such a procedure can be extended to any finite family of sets.
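A minimal sketch of the point in modern natural-deduction terms (my illustration, not the text's): to use an arbitrary element of a single non-empty set, no choice principle is needed, because existential elimination does the work. From $\exists y\,\varphi(y)$, assume $\varphi(c)$ for a fresh name $c$, derive a conclusion $\psi$ in which $c$ does not occur, and discharge:

$$\frac{\exists y\,\varphi(y) \qquad [\varphi(c)] \vdash \psi}{\psi}\;(\exists\mathrm{E},\ c \text{ fresh})$$

For finitely many sets $S_1, \dots, S_n$, one simply iterates this step $n$ times, and induction on $n$ shows the procedure works for any finite family. It is only for infinitely many simultaneous choices that the Axiom is required.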
Formal/mathematical philosophy is a well-established approach within philosophical inquiry, having its friends as well as its foes. Now, even though I am very much a formal-approaches-enthusiast, I believe that fundamental methodological questions tend not to receive as much attention as they deserve within this tradition. In particular, a key question which is unfortunately not asked often enough is: what counts as a ‘good’ formalization? How do we know that a given proposed formalization is adequate, so that the insights provided by it are indeed insights about the target phenomenon in question? In recent years, the question of what counts as adequate formalization seems to be for the most part a ‘Swiss obsession’, with the thought-provoking work of Georg Brun, and Michael Baumgartner & Timm Lampert. But even these authors seem to me to restrict the question to a limited notion of formalization, as translation of pieces of natural language into some formalism. (I argued in chapter 3 of my book Formal Languages in Logic that this is not the best way to think about formalization.)
However, some of the pioneers in formal/mathematical approaches to philosophical questions did pay at least some attention to the issue of what counts as an adequate formalization. In this post, I want to discuss how Tarski and Carnap approached the issue, hoping to convince more ‘formal philosophers’ to go back to these questions. (I also find the ‘squeezing argument’ framework developed by Kreisel particularly illuminating, but will leave it out for now, for reasons of space.)
A few days ago, while trying to open the interwebs thingy that would allow me to start entering my grades, I was prevented from doing so by a pop-up menu that referenced LSU's Policy Statement 67. The text included unsubstantiated and highly dubious claims, such as the claim that most workplace problems are the result of drug and alcohol abuse by workers. And this was only a few weeks after all of the chairs at LSU had to provide verification that every single faculty member had read a hysterical message from our staff and administrative overlords that justified expanding the extension of pee-tested employees at LSU to include faculty. The wretched communiqué justified pee-testing faculty on the basis of new evidence showing that marijuana is harmful to 13-year-olds.*
Anyhow, when I scrolled to the bottom of the popup, I had to click a button saying not only that I read the document but also that I "agreed" with it.
I honestly don't get this. Are my beliefs a condition of employment at LSU? There was no button that said I read it but didn't agree with it.
(From the graphic novel Logicomix; taken from this blog post by Richard Zach.)
“He doesn’t want to prove this or that, but to find out how things really are.” This is how Russell describes Wittgenstein in a letter to Lady Ottoline Morrell (as reported in M. Potter’s wonderful book Wittgenstein's Notes on Logic, p. 50 – see my critical note on the book). This may well be the most accurate characterization of Wittgenstein’s approach to philosophy in general, in fact a fitting description of the different phases Wittgenstein went through. Indeed, if there is a common denominator to the first, second, intermediate etc. Wittgensteins, it is the fundamental nature of the questions he asked: different answers, but similar questions throughout. So instead of proving ‘this or that’, for example, he asks what a proof is in the first place.
This week, we’ve had a new round of discussions on the ‘combative’ nature of philosophy as currently practiced and its implications, prompted by a remark in a column by Jonathan Wolff on the scarcity of women in the profession. (Recall the last wave of such discussions, then prompted by Rebecca Kukla’s 3AM interview.) Brian Leiter retorted that there’s nothing wrong with combativeness in philosophy (“Insofar as truth is at stake, combat seems the right posture!”). Chris Bertram in turn remarked that this is the case only if “there’s some good reason to believe that combat leads to truth more reliably than some alternative, more co-operative approach”, which he (apparently) does not think there is. Our own John Protevi pointed out the possible effects of individualized grading on the establishment of a competitive culture.
As I argued in a previous post on the topic some months ago, I am of the opinion that adversariality can have a productive, positive effect on philosophical inquiry, but not just any adversariality/combativeness. (In that post, I placed the discussion against the background of gender considerations; I will not do so here, even though there are obvious gender-related implications to be explored.) In fact, what I defend is a form of adversariality that combines opposition with a form of cooperation.
This is a beautiful review. It is clear on technical issues; it is very critical, albeit respectful. It is informative to experts and non-experts alike; the formal apparatus is used to provide clarity not to create an esoteric, gated garden. It calls attention to unexplored alternative positions, and does so not just to keep scholarly score, but especially in order to illuminate the philosophical possibility space. It also contains a touch of wicked humor. (I return to that below.)
Note that Takashi Yagisawa (the reviewer) does not offer a detailed summary of the book; the review is, thus, not balanced. Readers have to trust his judgment that he has focused on the central issues that are relevant to the community. Only competent readers of the whole book can decide, then, whether the review is fair. For some, the lack of summary may be a fatal flaw: such readers think that the main duty of a review is to tell people what's in a book. While that is important (which is why judicious summaries are often part of great reviews), it need not trump other considerations of the sort mentioned in the first paragraph.
Two weeks ago, I wrote a post proposing a dialogical perspective on structural rules. In fact, at that point I offered an analysis of only one structural rule, namely left-weakening, and promised that I would come back for more. In this post, I will discuss contraction and exchange (for both, I again restrict myself to the left cases). (I will assume that readers are familiar with the basic principles of my dialogical approach to deductive proofs, as recapped in my previous post on structural rules.)
Contraction, in particular, is very significant, given the recent popularity of restricting contraction as a way to block the derivation of paradoxes such as the Liar and Curry. What does contraction mean, generally speaking? Contraction is the rule according to which two or more copies of a given formula in a sequent can be collapsed into a single copy (contracted); in other words, the idea is that the number of copies should not matter for the derivation of the conclusion.
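In standard sequent-calculus notation (my rendering of the rule just described; presentations vary slightly across calculi), left contraction is:

$$\frac{\Gamma, A, A \Rightarrow \Delta}{\Gamma, A \Rightarrow \Delta}\;(\mathrm{LC})$$

Reading bottom-up, a single occurrence of $A$ on the left may be ‘used twice’; reading top-down, duplicate occurrences may be merged.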
Evolutionary accounts of deductive reasoning have been enjoying a fair amount of popularity in recent decades. Some of those who have defended views of this kind are Cooper, Maddy, and more recently Joshua Schechter. The basic idea is that the explanation for why we have developed the ability to reason deductively (if indeed we have developed this ability!) is that it conferred a survival advantage on those individuals among our ancestors who possessed it, who in turn were reproductively more successful than those individuals in the ancestral population who did not possess this ability. In other words, deductive reasoning would have arisen as an adaptation in humans (and possibly in non-human animals too, but I will leave this question aside). Attractive though it may seem at first sight (and I confess to having had a fair amount of sympathy for it for a while), this approach faces a number of difficulties, and in my opinion is ultimately untenable. (Some readers will not be surprised to hear this, if they recall a previous post where I argue that deductive reasoning is best seen as a cultural product, not as a biological, genetically encoded endowment in humans.)
In this post, I will spell out what I take to be the main flaw of such accounts, namely the fact that they seem incompatible with the empirical evidence on deductive reasoning in human reasoners produced by experimental psychology. In this sense, these accounts fall prey to the same mistake that plagues many evolutionary accounts of the female orgasm, in particular those according to which the female orgasm arose as an adaptation in the human species. To draw the parallel between the case of deductive reasoning and the case of the female orgasm, I will rely on Elisabeth Lloyd’s fantastic book The Case of the Female Orgasm (which, as it so happens, I had the pleasure of re-reading during my vacation last …)
As some of you may have seen, we will be hosting the workshop ‘Proof theory and philosophy’ in Groningen at the beginning of December. The idea is to focus on the philosophical significance and import of proof theory, rather than exclusively on technical aspects. An impressive team of philosophically inclined proof theorists will be joining us, so it promises to be a very exciting event (titles of talks will be made available shortly).
For my own talk, I’m planning to discuss the main structural rules as defined in sequent calculus – weakening, contraction, exchange, cut – from the point of view of the dialogical conception of deduction that I’ve been developing, inspired in particular (but not exclusively) by Aristotle’s logical texts. In this post, I'll do a bit of preparatory brainstorming, and I look forward to any comments readers may have!
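For reference, here are the four rules in their usual Gentzen-style formulations (I give the left versions of weakening, contraction and exchange; exact presentations vary across calculi):

$$\mathrm{LW}: \frac{\Gamma \Rightarrow \Delta}{\Gamma, A \Rightarrow \Delta} \qquad \mathrm{LC}: \frac{\Gamma, A, A \Rightarrow \Delta}{\Gamma, A \Rightarrow \Delta} \qquad \mathrm{LE}: \frac{\Gamma, A, B, \Gamma' \Rightarrow \Delta}{\Gamma, B, A, \Gamma' \Rightarrow \Delta} \qquad \mathrm{Cut}: \frac{\Gamma \Rightarrow \Delta, A \qquad A, \Gamma' \Rightarrow \Delta'}{\Gamma, \Gamma' \Rightarrow \Delta, \Delta'}$$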
Some months ago I wrote two posts on the concept of indirect proofs: one presenting a dialogical conception of these proofs, and the other analyzing the concept of ‘proofs through the impossible’ in the Prior Analytics. Since then I gave a few talks on this material, receiving useful feedback from audiences in Groningen and Paris. Moreover, this week we hosted the conference ‘Dialectic and Aristotle’s Logic’ in Groningen, and after various talks and discussions I have come to formulate some new ideas on the topic of reductio proofs and their dialectical/dialogical underpinnings. So for those of you who enjoyed the previous posts, here are some further thoughts and tentative answers to lingering questions.
Recall that the dialogical conception I presented in previous posts was meant to address the awkwardness of the first speech act in a reductio proof, namely that of supposing precisely that which you intend to refute by showing that it entails an absurdity. From studies in the literature on math education, it is known that this first step can be very confusing to students learning the technique of reductio proofs. On the dialogical conception, however, no such awkwardness arises, as there is a division of roles between the agent who supposes the initial thesis to be refuted, and the agent who in fact derives an absurdity from the thesis.
Those of you who have been following some of my blog posts will recall my current research project ‘Roots of Deduction’, which aims at unearthing (hopefully without damaging!) the conceptual and historical origins of the very concept of a deductive argument as one where the truth of the premises necessitates the truth of the conclusion. In particular, this past year we’ve been reading the Prior Analytics in a reading group, which has been a fantastic experience (highly recommended!). For next year, the plan is to switch from logic to mathematics, and look more closely into the development of deductive arguments in Greek mathematics.
But here’s the catch: the members of the project are all much more versed in the history of logic than in the history of mathematics, so we can’t count on as much prior expertise for mathematics as we could in the case of (Aristotelian) logic. Moreover, the history of ancient Greek mathematics is a rather intimidating topic, with an enormous amount of secondary literature and a notorious scarcity of primary sources (at least for the earlier, pre-Euclid period, which is what we would be interested in). So it seems prudent to focus on a few specific aspects of the topic, and for now I have in mind specifically the connections between mathematics and logic (and philosophy) in ancient Greece. More generally, our main interest lies not in the ‘contentual’ part of mathematical theories, but rather in the ‘structural’ part, in particular the general structure of arguments and the emergence of necessarily truth-preserving arguments.
Last week I was in Munich for the excellent ‘Carnap on logic’ conference, which brought together pretty much everyone who’s anyone in the world of Carnap scholarship. (And that excludes me -- I still don’t know exactly why I was invited in the first place…) My talk was a comparison between Carnap’s notion of explication and my own conception of formalization, as developed in my book Formal Languages in Logic. In particular, I proposed a cognitive, empirically informed account of Carnap’s notion of the fruitfulness of an explication.
Anyway, I learned an awful lot about Carnap, and got to meet some great people I hadn’t yet met. But perhaps the talk I enjoyed most was Steve Awodey’s ‘On the invariance of logical truth’ (for those of you who have seen Steve lecturing before, this will come as no surprise…). The main point of Steve’s talk was to defend the claim that the notion of (logical) invariance that is now more readily associated with Tarski, in particular his lecture ‘What are logical notions?’ (1966, published posthumously in 1986), is already to be found in the work of Carnap of the 1930s. This in itself was already fascinating, but then Steve ended his talk by drawing some connections between the invariance debate in philosophy of logic and his current work on homotopy type theory. Now, some of you will remember that I am truly excited about this new research program, and since I’ve also spent quite some time thinking about invariance criteria for logicality (more on which below), it was a real treat to hear Steve relating the two debates. In particular, he gave me (yet another) reason to be excited about the homotopy type theory program, which is the topic of this blog post.
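For readers not familiar with the invariance criterion: Tarski’s proposal, roughly stated, is that the logical notions are those invariant under all permutations of the domain of objects. In schematic form:

$$O \text{ is logical} \quad\text{iff}\quad \pi(O) = O \ \text{ for every bijection } \pi: M \to M,$$

where $O$ is a notion (an operation or relation, possibly of higher type) over the domain $M$, and $\pi$ is lifted to higher types in the usual way. The quantifiers, identity, and the truth-functions all pass this test; notions that distinguish particular individuals do not.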
As some readers will recall, we’ve been holding a reading group of the Prior Analytics in Groningen over the last academic year, which then prompted me to write (too?) many posts inspired by this venerable work (here and here, for example). We are nearly finished, only three more chapters to go (so just one more session). But interestingly, towards the end things are getting increasingly strange. Up to chapter 18 of book B (which traditionally receives much less attention than its more famous sibling, book A), things were still following the usual Aristotelian pattern of extreme systematicity and strenuous examination of cases. But as we got to chapter B19, there was a sudden change of gears: B19 and B20 are explicit applications of the theory of syllogistic to dialectical situations (needless to say, these made me very happy), and B21 is really about epistemology and quite out of place in the context of the Prior Analytics (though also very interesting). (Some scholars think that these are older layers of the text, which then somehow ended up being placed at the very end.)
At B22 it looked like we were back on track with the usual analysis of cases in the figures, but there was still a surprise in store. Towards the end of the chapter, Aristotle presents a puzzling discussion of ‘opposites’, one of which is preferable over the other. He writes (Smith translation):
When A and B are two opposites, of which A is preferable to B, and D is preferable in the same way to its opposite C, then if <the combination of> A and C is preferable to <the combination of> B and D, then A is preferable to D. (68a25-28)
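In modern notation (my reconstruction of the passage, writing $\succ$ for ‘is preferable to’ and $\&$ for combination): given the opposite pairs $(A, B)$ and $(C, D)$,

$$\big(A \succ B\big) \wedge \big(D \succ C\big) \wedge \big((A \,\&\, C) \succ (B \,\&\, D)\big) \;\Rightarrow\; A \succ D.$$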
It is fair to say that the ‘received view’ about deductive inference, and about inference in general, is that it proceeds from premises to conclusion so as to produce new information (the conclusion) from previously available information (the premises). It is this conception of deductive inference that gives rise to the so-called ‘scandal of deduction’, which concerns the apparent lack of usefulness of a deductive inference, given that in a valid deductive inference the conclusion is already ‘contained’, in some sense or another, in the premises. This is also the conception of inference underpinning e.g. Frege’s logicist project, and much (if not all) of the discussion in the philosophy of logic of the last many decades. (In fact, it is also the conception of deduction of the most famous ‘deducer’ of all time, Sherlock Holmes.)
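One familiar way to make the ‘containment’ talk precise (a standard gloss from the semantic-information tradition of Bar-Hillel and Carnap, not a claim made in this post): let $\mathrm{Inf}(\varphi)$ be the set of possibilities (worlds) that $\varphi$ excludes. Then

$$\Gamma \vDash \psi \quad\text{iff}\quad \mathrm{Inf}(\psi) \subseteq \mathrm{Inf}\big(\textstyle\bigwedge \Gamma\big),$$

so a valid conclusion excludes no possibility not already excluded by the premises; relative to them, its semantic information is zero. Hence the ‘scandal’: on this measure, deduction never yields new information.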
That an inference, and a deductive inference in particular, proceeds from premises to conclusion may appear to be such an obvious truism that no one in their sane mind would want to question it. But is this really how it works when an agent is formulating a deductive argument, say a mathematical demonstration?
[The following is the consequence of discussion with F.A. Muller, Lieven Decock, and Victor Gijsbers. They should be blamed for my mistakes.--ES]
A few weeks ago I complained that Ted Sider's approach to "knee-jerk realism" is dismissive toward views that do not share his (ahum) fundamental outlook (and I mused a bit about the sociology of knowledge that facilitates such dismissiveness). One worrisome consequence is that Sider fails to see objections to his view when they ought to be staring him in the face. Consider the following two passages from Ted Sider's Writing the Book of the World:
I hold that the fundamental is also determinate. "The fundamental is determinate" is not particularly clear, and improving the situation is difficult because there are so many different ways to understand what "determinacy" amounts to, but perhaps we can put it thus. First, no special-purpose vocabulary that is distinctive of indeterminacy...carves at the joints. Second, fundamental languages obey classical logic. The combination of these two claims is perhaps the best way to cash out the elusive dogma that vagueness and other forms of indeterminacy are not "in the world." (137)
The continuum hypothesis is sometimes said to be indeterminate. But suppose that mundane set-theoretic truths, such as the axiom of extensionality, are fundamental. Then by the combinatorial principle, the continuum hypothesis must be determinate, since it can be stated using only expressions that occur in mundane set-theoretic truths (namely, logical expressions and the predicate ∈). Thus we have a surprising result: the fundamentality of the mundane truths of set-theory requires the non-mundane continuum hypothesis to be determinate. (151)
Now, first, the "sometimes said to be" is an odd locution. After all, it was proven that if ZFC is consistent then the continuum hypothesis can neither be proven nor disproven in it (see here for a good intro). Second, in the context of Sider's program ("mundane set-theoretic truths"), abandoning ZFC is not on the table. Third, I know that one's modus ponens is another's modus tollens, but Sider has no "result" here--he ought to be facing up to the fact that there is a straightforward objection against his claim that the "fundamental languages" obey classical logic and mundane set theory: there is no reason to think the continuum hypothesis is determinate. To think otherwise is an act of faith (recall my observation about the odd religiosity of his so-called "knee-jerk realism"). So, I stand by my earlier claim that there is something troubling about an agenda-setting book that wishes away obvious problems with the program.
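For the record, the formal situation (standard results, stated here for convenience): the continuum hypothesis is the claim that

$$2^{\aleph_0} = \aleph_1,$$

i.e. that there is no cardinality strictly between that of the natural numbers and that of the reals. Gödel (1940) showed that $\mathrm{Con}(\mathrm{ZFC})$ implies $\mathrm{Con}(\mathrm{ZFC} + \mathrm{CH})$, and Cohen (1963) showed that $\mathrm{Con}(\mathrm{ZFC})$ implies $\mathrm{Con}(\mathrm{ZFC} + \neg\mathrm{CH})$; so ZFC, if consistent, settles the matter in neither direction.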
A few days ago Eric had a post about an insightful text that has been making the rounds on the internet, which narrates the story of a mathematical ‘proof’ that is for now sitting somewhere in a limbo between the world of proofs and the world of non-proofs. The ‘proof’ in question purports to establish the famous ABC conjecture, one of the (thus far) main open questions in number theory. (Luckily, a while back Dennis posted an extremely helpful and precise exposition of the ABC conjecture, so I need not rehearse the details here.) It was proposed by the Japanese mathematician Shinichi Mochizuki, who is widely regarded as an extremely talented mathematician. This is important, as crackpot ‘proofs’ are proposed on a daily basis, but in many cases nobody bothers to check them; a modicum of credibility is required to get your peers to spend time checking your purported proof. (Whether this is fair or not is beside the point; it is a sociological fact about the practice of mathematics.) Now, Mochizuki most certainly does not lack credibility, but his ‘proof’ was made public quite a few months ago, and yet so far there is no verdict as to whether it is indeed a proof of the ABC conjecture or not. How could this be?
As it turns out, Mochizuki has been working pretty much on his own for the last 10 years, developing new concepts and techniques by mixing-and-matching elements from different areas of mathematics. The result is that he created his own private mathematical world, so to speak, which no one else seems able (or willing) to venture into for now. So effectively, as it stands his ‘proof’ is not communicable, and thus cannot be surveyed by his peers.
A few weeks ago Jeff Bell had a post with the suggestive title ‘What is philosophy?’, referring to the eponymous work by Deleuze and Guattari. Now, as it turns out, a similar question, ‘What is logic?’, has preoccupied David Marans, a lecturer in logic at St. Thomas University in Miami, for quite some time. He thus created the Logic Gallery, a compilation of statements by philosophers and logicians across the centuries (starting with Aristotle and ending with Tim Williamson) on the nature of logic (or else comments from which one can infer the author’s conception of logic). It offers a useful glimpse of the different views on logic held across time, and as such is well worth spending some time on. In particular, it illustrates the early influence of the conception of logic as pertaining to debating and discussing, and the later predominance of the conception of logic as pertaining to thought and mental processes.

Marans welcomes suggestions for improvement and additional submissions. (I should perhaps add the proviso that there are no exact references to the original locations of the quotes, and also that there may be some issues of translation.)
I have a fairly technical post over at M-Phi on desiderata for formal/axiomatic theories of truth, picking up on some ideas I discussed in a NewAPPS blog post two years ago. My feeling is that it is a bit too technical/specific for cross-posting here at NewAPPS, but do check it out if formal theories of truth are what you like to read about on a Sunday afternoon!
A few days ago I wrote a post on a dialogical conceptualization of indirect proofs. Not coincidentally, much of my thinking on this topic at the moment is prompted by the Prior Analytics, as we are currently holding a reading group of the text in Groningen. We are still making our way through the text, but here are some potentially interesting preliminary findings.
I am deeply convinced that the emergence of the technique of indirect proofs marks the very birth of the deductive method, as it is a significant departure from more ‘mundane’ forms of argumentation (as I argued before). So it is perhaps not surprising that the first fully-fledged logical text in history, the Prior Analytics, offers a sophisticated account of indirect proofs.
In his commentary on Euclid, the 5th century Greek philosopher Proclus defines indirect proofs, or ‘reductions to impossibility’, in the following way (I owe this passage to W. Hodges):
Every reduction to impossibility takes the contradictory of what it intends to prove and from this as a hypothesis proceeds until it encounters something admitted to be absurd and, by thus destroying its hypothesis, confirms the proposition it set out to establish.
Schematically, a proof by reduction is often represented as follows.
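In modern notation (my reconstruction of the usual presentation): to prove $\varphi$, assume its contradictory $\neg\varphi$, derive an absurdity $\bot$, and discharge the assumption:

$$\frac{[\neg\varphi] \;\vdash\; \bot}{\vdash\; \varphi}$$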
It is well known that indirect proofs pose interesting philosophical questions. What does it mean to assert something with the precise goal of then showing it to be false, i.e. because it leads to absurd conclusions? Why assert it in the first place? What kind of speech act is that? It has been pointed out that the initial statement is not an assertion, but rather an assumption, a supposition. But while we may, and in fact do, suppose things that we know are not true in everyday life (say, in the kind of counterfactual reasoning involved in planning), to suppose something precisely with the goal of demonstrating its falsity is a somewhat awkward move, both cognitively and pragmatically.
(A second in a series, drawn from joint work with K. Joseph Mourad.) How do we measure the complexity of decision procedures in poker? This is a question that is both complex and subtle, and one that seems to me interesting for thinking about the interplay between formal modeling of epistemological situations and more concrete strategic epistemic thinking.
(This will be the first in a series of posts designed to suggest that the mathematics of impredicativity - especially methods of definition that make use of revision-theoretic procedures - is relevant to empirical contexts. Everything I say in these posts grows out of joint work with my math colleague Joe Mourad.)

Two basic points about the notion of impredicativity: first, it is much broader than what non-expert philosophers tend to think of under the rubric of paradoxes, vicious circularity, and the like. Second, it is a property of definitions - or, more generally, procedures - not of concepts or sets, in the first instance. Given an appreciation of these points, it is not hard to see that the general phenomenon can pose important epistemological issues in contexts in which there are no infinite totalities in play, indeed in the context of various empirical discussions.
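A stock example may help fix ideas (my illustration, not the authors'): the least upper bound of a non-empty set $S$ of reals bounded above is standardly defined as

$$\sup S \;=\; \text{the } x \text{ such that } \forall s \in S\,(s \le x) \ \wedge\ \forall y\,\big(\forall s \in S\,(s \le y) \rightarrow x \le y\big).$$

The definition quantifies over all reals $y$, a totality that includes $\sup S$ itself; it is thus impredicative as a definition, yet it involves no paradox and no vicious circularity. This is the sense in which impredicativity attaches to the procedure of definition rather than to the object defined.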
A well-known phenomenon in the empirical study of human reasoning is the so-called Modus Ponens-Modus Tollens asymmetry. In reasoning experiments, participants almost invariably ‘do well’ with MP (or at least something that looks like MP – see below), but the rate of MT success drops considerably (from almost 100% for MP to around 70% for MT – Schroyens and Schaeken 2003). As a result, any theory purporting to describe human reasoning accurately must account for this asymmetry. Now, given that in classical logic (and in many non-classical systems) MP and MT are equally valid, plain vanilla classical logic fails rather miserably in this respect.
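For concreteness, the two rules in their standard formulations:

$$\mathrm{MP}: \frac{p \rightarrow q \qquad p}{q} \qquad\qquad \mathrm{MT}: \frac{p \rightarrow q \qquad \neg q}{\neg p}$$

From a classical point of view these stand or fall together (MT is just MP plus contraposition), which is precisely why the robust empirical asymmetry between them calls for explanation.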
As noted by Oaksford and Chater (‘Probability logic and the Modus Ponens-Modus Tollens asymmetry in conditional inference’, in this 2008 book), some theories of human reasoning (mental rules, mental models) explain the asymmetry at what is known as the algorithmic level (a terminology proposed by Marr (1982)) – that is, in terms of the mental processes that (purportedly) implement deductive reasoning in a human mind. So according to these theories, performing MT is harder than performing MP (for a variety of reasons), which is why reasoners, while still trying to reason deductively, have difficulties with MT. Other theorists maintain that participants are not in fact trying to reason deductively at all, so the asymmetry is not related to some presumed competence-performance gap. (Marr’s term for the general goal of the processes, rather than the processes themselves, is ‘computational level’ – the terminology is somewhat unnatural, but it has now become standard.) Oaksford and Chater are among those favoring an analysis at the computational level, in their case proposing a Bayesian, probabilistic account of human reasoning as a normative theory not only explaining but also justifying the asymmetry.
In a recent paper, the eminent psychologist of reasoning P. Johnson-Laird says the following:
[T]he claim that naïve individuals can make deductions is controversial, because some logicians and some psychologists argue to the contrary (e.g., Oaksford & Chater, 2007). These arguments, however, make it much harder to understand how human beings were able to devise logic and mathematics if they were incapable of deductive reasoning beforehand.
This last claim strikes me as very odd, or at the very least as poorly formulated. (To be clear, I side with those, such as Oaksford and Chater, who think that deductive reasoning must be learned to be mastered and competently practiced by reasoners.) It looks like a doubtful inference to the best explanation: humans have in fact devised logic and mathematics, which are crucially based on the deductive method, so they must have been capable of deductive reasoning before that. Something like: birds had to have fully formed wings before they could fly – hum, I don’t think so… Instead, the wing analogy suggests that there must be some precursors to deductive reasoning skills in untrained reasoners, but the phylogeny of the deductive method (and to be clear, I’m speaking of cultural evolution here) would have been a gradual, self-feeding process.
I am currently finishing a paper on the semantic and logical properties of 'seem'. As 'seem' is a subject-raising verb, we can treat 'it seems' as a sentential operator. This raises the question of how this operator behaves logically. Is it hyperintensional? Does it distribute over conjunction? Over disjunction? Over conditionals? Does it commute with negation?
I think it's fairly obvious that 'it seems' is hyperintensional. It seems to Lois Lane that Superman is not Clark Kent, but it doesn't seem to her that Superman is not Superman. The other questions are harder.
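To fix notation (my formulations of the questions at issue, writing $S$ for 'it seems that'), the open questions can be put as candidate principles whose status is up for grabs:

$$S(p \wedge q) \overset{?}{\leftrightarrow} (Sp \wedge Sq), \qquad S(p \vee q) \overset{?}{\leftrightarrow} (Sp \vee Sq), \qquad S\neg p \overset{?}{\leftrightarrow} \neg Sp$$

Hyperintensionality, in turn, is the failure of substitution of necessary equivalents: $\Box(p \leftrightarrow q)$ and $Sp$ do not jointly entail $Sq$, as the Lois Lane case illustrates.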
(OK, so it looks like I’m over-posting a bit today… Just one more!)
Between today and tomorrow, the workshop ‘Groundedness in Semantics and Beyond’ is taking place at MCMP in Munich, co-organized with the ERC project Plurals, Predicates, and Paradox led by Øystein Linnebo. The workshop’s program seems excellent across the board, but the opening talk is what really caught my attention: Patrick Suppes on ‘A neuroscience perspective on the foundations of mathematics’. The abstract:
I mainly ask and partially answer three questions. First, what is a number? Second, how does the brain process numbers? Third, what are the brain processes by which mathematicians discover new theorems about numbers? Of course, these three questions generalize immediately to mathematical objects and processes of a more general nature. Typical examples are abstract groups, high dimensional spaces or probability structures. But my emphasis is not on these mathematical structures as such, but how we think about them. For the grounding of mathematics, I argue that understanding how we think about mathematics and discover new results is as important as foundations of mathematics in the traditional sense.
Thomas Bradwardine (first half of the 14th century) is well known for his decisive contributions to physics (he was one of the founders of the Merton School of Calculators) as well as for his theological work, in particular his defense of Augustinianism in De Causa Dei. He also led an eventful life, accompanying Edward III to the battlefield as his confessor, and dying of the Black Death in 1349 one week after a hasty return to England to take up his new appointment as the Archbishop of Canterbury.
What is thus far less well known about Bradwardine is that, prior to these adventures, in the early to mid-1320s, he worked extensively on logical topics. In this period, he composed his logical tour de force: his treatise on insolubles. Insolubles were logical puzzles to which Latin medieval authors devoted a considerable amount of attention (Spade & Read 2009). What is special about insolubles is that they often involve some kind of self-reference or self-reflection. The paradigmatic insoluble is what is now known as the liar paradox (not a term used by the medieval authors themselves): ‘This sentence is not true’. If it is true, then it is not true; but if it is not true, then what it says about itself is correct, namely that it is not true, and thus it is true after all. Hence, we are forced to conclude that the sentence is both true and false, which violates the principle of bivalence. It is interesting to note that, in the hands of Tarski, Kripke and other towering figures, the liar and similar paradoxes re-emerged in the 20th century as one of the main topics within philosophy of logic and philosophical logic, and they remain to this day much discussed.
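For those who like the modern regimentation (a standard presentation, not Bradwardine’s own analysis): let $L$ be a sentence such that $L \leftrightarrow \neg T(\ulcorner L \urcorner)$, where $T$ is a truth predicate. Combined with the Tarskian schema $T(\ulcorner \varphi \urcorner) \leftrightarrow \varphi$, this yields

$$T(\ulcorner L \urcorner) \;\leftrightarrow\; L \;\leftrightarrow\; \neg T(\ulcorner L \urcorner),$$

a contradiction. This is just the compact modern form of the reasoning rehearsed above.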