A few days ago
Eric had a post about an insightful text that has been making the rounds on the
internet, which narrates the story of a mathematical ‘proof’ that is for now
sitting somewhere in a limbo between the world of proofs and the world of
non-proofs. The ‘proof’ in question purports to establish the famous ABC
conjecture, one of the main open questions in number theory. (Luckily, a while back Dennis posted an extremely helpful and precise exposition of the ABC conjecture, so I need not rehearse the details here.) The ‘proof’ was proposed by the Japanese mathematician Shinichi Mochizuki, who is widely regarded as an extremely talented mathematician. This
is important, as crackpot ‘proofs’ are proposed on a daily basis, but in many
cases nobody bothers to check them; a modicum of credibility is required to get
your peers to spend time checking your purported proof. (Whether this is fair
or not is beside the point; it is a sociological fact about the practice of
mathematics.) Now, Mochizuki most certainly does not lack credibility, but his
‘proof’ was made public quite a few months ago, and yet so far there is no
verdict as to whether it is indeed a proof of the ABC conjecture or not. How
could this be?
As it turns out, Mochizuki
has been working pretty much on his own for the last 10 years, developing new
concepts and techniques by mixing-and-matching elements from different areas of
mathematics. The result is that he created his own private mathematical world,
so to speak, which no one else seems able (or willing) to venture into for now.
So effectively, as it stands his ‘proof’ is not communicable, and thus cannot
be surveyed by his peers.
Kim sympathizes with his frustrated colleagues, but suggests a
different reason for the rancor. “It really is painful to read other
people’s work,” he says. “That’s all it is… All of us are just too lazy
to read them.” Kim is also quick to defend his friend. He says Mochizuki’s reticence
is due to being a “slightly shy character” as well as his assiduous
work ethic. “He’s a very hard working guy and he just doesn’t want to
spend time on airplanes and hotels and so on.” O’Neil, however, holds Mochizuki accountable, saying that his refusal to cooperate places an unfair burden on his colleagues. “You don’t get to say you’ve proved something if you haven’t
explained it,” she says. “A proof is a social construct. If the
community doesn’t understand it, you haven’t done your job.”--Has the ABC Conjecture been solved? [HT: Clerk Shaw on Facebook]
This piece is a nice inside perspective on the 'political economy' and social epistemology of mathematical proof.
An annoyingly inaccurate but touching obituary in the Washington Post. Not only did he solve one of the grand conjectures - and one of the easiest to explain to non-mathematicians - but he launched a subliterature in epistemology by providing the classic case of indirect evidence of the existence of a proof.
Fields-medalist Terence Tao (among other feats, he spotted the mistake in Nelson’s purported proof of the inconsistency of arithmetic back
in 2011) has a blog post on the meaning of rigor in
mathematical practice. He files this post under the heading ‘career advice’,
but the post in fact touches upon some key issues in the philosophy of
mathematics, such as: What is the role of intuitions for mathematical knowledge?
What is the role of formalism and rigor in mathematics? How are ‘formal’ and ‘informal’ mathematics related to each other?
While Tao’s post is not intended to be a contribution to
the philosophy of mathematics as such, and while one may miss some of the depth
of the discussions found in the philosophical literature and elsewhere, I find
it illuminating to see how a practicing mathematician (and a brilliant one at
that) conceptualizes the role of rigor in mathematical practice. (Also, much of
what he says fits in nicely with some of the views about formalisms and proofs
that I’ve been defending in recent years, as I will argue below -- something that I couldn't let go unnoticed!)
A few days ago I wrote a post on a dialogical
conceptualization of indirect proofs. Not coincidentally, much of my thinking
on this topic at the moment is prompted by the Prior Analytics, as we are currently holding a reading group of the
text in Groningen. We are still making our way through the
text, but here are some potentially interesting preliminary findings.
I am deeply convinced that the emergence of the technique of
indirect proofs marks the very birth of the deductive method, as it is a
significant departure from more ‘mundane’ forms of argumentation (as I argued
before). So it is perhaps not surprising that the first fully-fledged logical text in
history, the Prior Analytics, offers a sophisticated account of indirect proofs.
In an earlier post, I made reference to Jacob Klein’s essay
about Husserl’s history of the origin of geometry. Klein’s own work is very
impressive as well (Burt Hopkins has a recent book on both Klein and Husserl; an NDPR review is here),
and reading through Klein's book has helped me to see one reason why Deleuze so freely
and regularly draws from both mathematics and art, though not just any
mathematics or any art. Deleuze was interested in a problematic as opposed to
axiomatic mathematics; and he was interested in a figural as opposed to
figurative art. What the two have in common is a certain form of abstraction.
In his commentary on Euclid, the 5th century
Greek philosopher Proclus defines indirect proofs, or ‘reductions to
impossibility’, in the following way (I owe this passage to W. Hodges):
Every reduction to impossibility takes the contradictory of
what it intends to prove and from this as a hypothesis proceeds until it
encounters something admitted to be absurd and, by thus destroying its
hypothesis, confirms the proposition it set out to establish.
Schematically, a proof by reduction to impossibility is often represented as follows.
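What follows is my own plain rendition of the familiar schema, simply spelling out Proclus’ description step by step:

    1. Suppose ¬A, the contradictory of the proposition A to be established.
    2. From this supposition, together with things already admitted, derive something absurd.
    3. The supposition ¬A is thereby destroyed, and A itself is established.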
It is well known that indirect proofs pose interesting
philosophical questions. What does it mean to assert something with the precise
goal of then showing it to be false, i.e. because it leads to absurd
conclusions? Why assert it in the first place? What kind of speech act is that?
It has been pointed out that the initial statement is not an assertion, but
rather an assumption, a supposition. But while we may, and in fact do, suppose
things that we know are not true in everyday life (say, in the kind of
counterfactual reasoning involved in planning), to suppose something precisely
with the goal of demonstrating its falsity is a somewhat awkward move, both
cognitively and pragmatically.
(A second in a series, drawn from joint work with K. Joseph Mourad.) How do we measure the complexity of decision procedures in poker? This question is both complex and subtle, and it seems to me an interesting one for thinking about the interplay between formal modeling of epistemological situations and more concrete strategic epistemic thinking.
(This will be the first in a series of posts designed to suggest that the mathematics of impredicativity - especially methods of definition that make use of revision-theoretic procedures - is relevant to empirical contexts. Everything I say in these posts grows out of joint work with my math colleague Joe Mourad.)
Two basic points about the notion of impredicativity: first, it is much broader than what non-expert philosophers tend to think of under the rubric of paradoxes, vicious circularity, and the like. Second, it is a property of definitions - or, more generally, procedures - not of concepts or sets, in the first instance. Given an appreciation of these points, it is not hard to see that the general phenomenon can pose important epistemological issues in contexts in which there are no infinite totalities in play, indeed, in the context of various empirical discussions.
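Two standard illustrations may help here (my choice of examples, not drawn from the posts themselves). The classic mathematical case is the least upper bound of a bounded set S of reals:

    lub(S) = the real number x such that (i) s ≤ x for every s in S, and
             (ii) x ≤ y for every real y that is an upper bound of S.

The definiendum x lies within the range of the quantifier ‘every real y’ in clause (ii), so the definition is impredicative, yet no paradox or vicious circularity is anywhere in sight. And for a perfectly finite, empirical case (essentially Ramsey’s): picking out someone as ‘the tallest person in the room’ defines that person by quantifying over a totality, the people in the room, that includes the very person being singled out.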
In a recent paper, the eminent psychologist of reasoning P. Johnson-Laird says the following:
[T]he claim that naïve individuals can make deductions is controversial, because some logicians and some psychologists argue to the contrary (e.g., Oaksford & Chater, 2007). These arguments, however, make it much harder to understand how human beings were able to devise logic and mathematics if they were incapable of deductive reasoning beforehand.
This last claim strikes me as very odd, or at the very least as poorly formulated. (To be clear, I side with those, such as Oaksford and Chater, who think that deductive reasoning must be learned to be mastered and competently practiced by reasoners.) It looks like a doubtful inference to the best explanation: humans have in fact devised logic and mathematics, which are crucially based on the deductive method, so they must have been capable of deductive reasoning before that. Something like: birds had to have fully formed wings before they could fly – hum, I don’t think so… Instead, the wing analogy suggests that there must be some precursors to deductive reasoning skills in untrained reasoners, but the phylogeny of the deductive method (and to be clear, I’m speaking of cultural evolution here) would have been a gradual, self-feeding process.
(OK, so it looks
like I’m over-posting a bit today… Just one more!)
Between today and
tomorrow, the workshop ‘Groundedness in Semantics and Beyond’ is taking place
at MCMP in Munich, co-organized with the ERC project Plurals,
Predicates, and Paradox led by Øystein
Linnebo. The workshop’s program seems excellent across the board, but the
opening talk is what really caught my attention: Patrick Suppes on ‘A
neuroscience perspective on the foundations of mathematics’. The abstract:
I mainly ask and partially answer three questions. First,
what is a number? Second, how does the brain process numbers? Third, what are
the brain processes by which mathematicians discover new theorems about
numbers? Of course, these three questions generalize immediately to mathematical
objects and processes of a more general nature. Typical examples are abstract
groups, high dimensional spaces or probability structures. But my emphasis is
not on these mathematical structures as such, but how we think about them. For the grounding of mathematics, I argue
that understanding how we think about mathematics and discover new results is
as important as foundations of mathematics in the traditional sense.
Number theory is notorious for producing conjectures that are easy to state but difficult to resolve. Fermat’s Last Theorem, stated in 1637 (by Fermat, of course, in the margin of his copy of Diophantus’ Arithmetica), requires nothing but a knowledge of basic arithmetic to comprehend fully; it was proved (by Andrew Wiles, building on the work of dozens of predecessors) only in 1995. The Goldbach conjecture (that every even number greater than 2 is the sum of two primes) and the twin primes conjecture (that there are infinitely many pairs of prime numbers p, q such that p + 2 = q), both stated long ago, remain open.
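To get a feel for how easy such statements are to test, though emphatically not to prove, here is a minimal sketch in Python (my own illustration, not part of the original post) that searches for Goldbach decompositions of small even numbers:

    def is_prime(n):
        """Primality by trial division; fine for small n."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def goldbach_witness(n):
        """Return a pair of primes summing to the even number n, or None."""
        for p in range(2, n // 2 + 1):
            if is_prime(p) and is_prime(n - p):
                return (p, n - p)
        return None

    # Every even number from 4 to 100 has a witness -- but finitely many
    # checks, of course, do not a proof make.
    print(all(goldbach_witness(n) for n in range(4, 101, 2)))  # True
    print(goldbach_witness(100))                               # (3, 97)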
A newer conjecture of this sort is the “ABC” conjecture. It has been a topic of excitement among mathematicians lately because a mathematician has made a credible claim to have proved it—but by idiosyncratic methods that other mathematicians will have to master before they can evaluate the proof. Proving it, moreover, would resolve a number of other outstanding problems in number theory.
(See the Wikipedia entry for more; see also Michael Nielsen’s very helpful page and list of references. I should note that of the news stories he refers to, the best is that from Nature; the New Scientist story should be ignored.)
In what follows I will describe in elementary terms the conjecture and its mathematical significance. The methods used by Shinichi Mochizuki in his claimed proof are very far from elementary. I won’t discuss them; follow the links if you want to know more. In a future post I will consider some philosophical questions suggested by the conjecture and its claimed proof.
Neil Levy kindly called my attention to the story: "A paper by Marcie Rathke of the University of Southern North Dakota at Hoople had been provisionally accepted for publication in Advances in Pure Mathematics. ‘Independent, Negative, Canonically Turing Arrows of Equations and Problems in Applied Formal PDE’." As LRB reports, "The paper was created using Mathgen, an online random maths paper generator." Unfortunately, "Neither Marcie Rathke nor the University of Southern North Dakota at Hoople is willing to pay the ‘processing charges’ levied by Advances in Pure Mathematics, so we will never know if the work would actually have made it to publication." The exchange between 'author' and journal is priceless.
So, what did this hoax expose? LRB concludes the following:
Academic journals depend on peer review to ensure the rigour and value
of submissions. The less prestigious the journal, the harder it is to
find competent reviewers and the lower they will have to set the
threshold, until at some point we arrive at, essentially,
accept-all-comers vanity publishing. The murkier the business model and
the lower the standards outside the mainstream, the harder it is for
academics to challenge the status of the prestige journals, locking
academics into the situation Glen Newey describes.
Short version: Science is often said to be committed to the reals, because physics, for example, essentially makes use of sentences quantifying over the reals. But we have perfectly good countable, well-founded, constructive models of full second order arithmetic. So why can't physics, for example, simply explicitly embrace one of these as what it is working over, and thereby radically simplify its alleged ontological commitments?
It seems to me that there is an issue with the epistemology of domains of quantification that has important implications for the epistemology and semantics of math generally, and which has received less attention than it deserves. In quick outline, the point is this:
A quantificational sentence has a determinate meaning only if there is some determinate fact of the matter as to what its domain of quantification is.
So one knows what one is saying with such a sentence only if one knows what domain one is quantifying over. If we are discussing anything as complex as the reals - equivalently, second order arithmetic - and mean to quantify over the "intended model" - that is, if we do not specify some constructible model as our domain - then we do not know what we are quantifying over: there is, after all, a plethora of non-isomorphic candidate domains - countable ones included, by Löwenheim-Skolem-style considerations - and the worry is that nothing in our practice singles one out.
Thus, we do not know what we are saying when we make claims with second order arithmetic quantifiers.
Today the whole of the internet seems to be celebrating Alan Turing's 100th birthday -- and rightly so, of course. Google in particular has one of its amazing doodles, depicting an interactive Turing machine, along with a video on how to solve the doodle.
This week I read an extremely interesting paper by Kenny Easwaran, ‘Probabilistic proofs and transferability’, which appeared in Philosophia Mathematica in 2009. Kenny had heard me speak at the Formal Epistemology Workshop in Munich a few weeks ago, and thought (correctly!) that there were interesting connections between the concept of transferability that he develops in the paper and my ‘built-in opponent’ conception of logic and deductive proofs; so he kindly drew my attention to his paper. Because I believe Kenny is really on to something deep about mathematics in his paper, I thought it would be a good idea to elaborate a bit on these connections in a blog post, hoping that it will be of interest to a number of people besides the two of us!
… Many observations in terms of structural rules address mere symptoms of some more basic underlying phenomenon. For instance, non-monotonicity is like ‘fever’: it does not tell you which disease causes it.
I’ve always been puzzled by this observation – among other reasons because I’m a non-monotonicity enthusiast, so it seemed odd to me to claim that non-monotonicity would be like the symptom of some disease! But beyond the disease metaphor, it was also not clear to me why Johan saw non-monotonicity as this unspecified, possibly multifaceted phenomenon. After all, there should be nothing esoteric about non-monotonicity: a non-monotonic consequence relation is one where addition of new premises/information may turn a valid consequence into an invalid one. The classical notion of validity has monotonicity as one of its defining features: once a consequence, always a consequence, come what may. This is why a mathematical proof, if indeed valid/correct, remains indefeasible for ever and ever, come what may.
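To make the contrast concrete, here is a minimal sketch in Python (a toy example of my own, the familiar Tweety case; not anything from Johan's text) of a defeasible inference pattern that a monotonic consequence relation would never allow:

    def concludes_flies(premises):
        """Defeasible default rule: birds fly, unless known to be penguins."""
        return "bird" in premises and "penguin" not in premises

    print(concludes_flies({"bird"}))             # True: from {bird} we conclude 'flies'
    print(concludes_flies({"bird", "penguin"}))  # False: the added premise defeats it

Under a monotonic consequence relation, by contrast, anything derivable from {bird} would remain derivable from every superset of it.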
And now, something much more serious from The Guardian: an opinion piece by African-American mathematician Jonathan Farley on racism in mathematics.
[T]here are no black winners of the Fields medal, the "Nobel prize of mathematics". [...] In reality, black mathematicians face career-retarding racism that white Fields medallists never encounter. Three stories will suffice to make this point.
He then goes on to narrate three very depressing stories, the last one about himself. It makes for sobering reading, and it does resonate with the stories we've been hearing about what it's like to be a member of a racial minority in the philosophy profession as well.
UPDATE: On a positive note, it occurred to me that, in this context, it would also be fitting to highlight the Infinite Possibilities series of conferences, whose aim is to celebrate and promote diversity in the mathematical sciences along both the gender and the ethnic/racial dimensions. It is a conference "designed to promote, educate, encourage and support minority women interested in mathematics and statistics." The latest installment took place a few weeks ago, and had my fellow countrywoman Valeria de Paiva among the keynote speakers. A wonderful initiative!
Paul Livingston’s paper presents a comparative analysis of Gödel’s incompleteness results, Priest on diagonalization, and Derrida on différance. One of the goals seems to be to show that there are significant analogies between these different concepts, which would be at odds with the fact that Derrida’s ideas encounter great hostility among most philosophers in the analytic tradition. In particular, Derrida’s différance and the concept/technique of diagonalization are presented as being each other’s counterpart (a view earlier defended by Graham Priest in Beyond the Limits of Thought).
But crucially, while différance is presented as cropping up in any discourse whatsoever, for a particular language/formal system to have the kind of incompleteness identified by Gödel specifically with respect to arithmetic, certain conditions must hold of the system. So a fundamental disanalogy between what could be described as the ‘Gödel phenomenon’ (incompleteness arising from diagonalization and the formulation of a so-called Gödel sentence) and Derrida’s différance concerns the scope of each of them: the latter is presented as a perfectly general phenomenon, while the former is provably restricted to a specific (albeit significant) class of languages/formal systems. Although Livingston does not fail to mention that a system must have certain expressive characteristics for the Gödel phenomenon to emerge, it seems to me that he downplays this aspect in order to establish the comparison between différance and diagonalization. (There is much more that I could say on Livingston’s thought-provoking piece, but for reasons of space I will focus on this particular aspect.)
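For reference, the standard shape of the phenomenon (a textbook formulation, not specific to Livingston's paper): for a suitable theory T, the diagonal lemma yields a sentence G such that

    T ⊢ G ↔ ¬Prov_T(⌜G⌝).

If T is consistent, effectively axiomatized, and capable of representing enough arithmetic, then T does not prove G; and with a slightly stronger assumption (or Rosser's refinement of the construction), T does not prove ¬G either. Each of these conditions can fail, and with it the ‘Gödel phenomenon’.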
I just sent out the announcement for the summer school on formal methods in philosophy that I am organizing. It seems to me that more sustained methodological discussions of applications of formal methods in philosophy are at this point much needed. The summer school is an attempt to foster such debates and motivate students and young researchers to be attentive to the methodological issues underlying their work. See below for the official announcement, and check the webpage of the summer school for further details.
The Fibonacci numbers are those in the following sequence of integers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34 etc. By definition, the first two numbers are 0 and 1, and each subsequent number is the sum of the previous two. The sequence is named after Fibonacci, aka Leonardo of Pisa, who introduced the sequence (known already in Indian mathematics) to Western audiences in his famous book Liber Abaci (1202) – which, by the way, is also one of the main sources for the dissemination of Hindu-Arabic numerals in Europe, no less. (Fibonacci had learned ‘Eastern’ mathematics while studying to become a merchant in North Africa -- see an earlier post on the importation of Indian and Arabic mathematics into Europe through a sub-scientific, merchant tradition.)
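The defining recurrence is straightforward to put into code; here is a minimal sketch in Python (my own illustration):

    def fibonacci(n):
        """Return the first n Fibonacci numbers, starting from 0 and 1."""
        seq = []
        a, b = 0, 1  # the two seed values fixed by the definition
        for _ in range(n):
            seq.append(a)
            a, b = b, a + b  # each new number is the sum of the previous two
        return seq

    print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]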
What do mathematics, chess and philosophy have in common? Among many other things, they have a glaring gap between men and women. And the reason in all three cases may be cultural, rather than biological.
NewAPPS will soon reach its millionth page view. How many is that? The image below contains a million dots.
To see them all, download it (right-click or control-click on the image and choose one of the “Download” options), view it in Preview or something similar and blow it up five or six times. The file’s name is “OybTz.png”, and even though it contains a million dots it is very small.
Just for you compulsive types, I’ve made one dot red.
In the exchange over Dennis' wonderful post on infinitesimals, Dennis writes, "Casting doubt on someone’s pronouncements is very far from devising a consistent theory to show them false (consistent because it has models in the category of sets). A mathematician would regard that as the difference between gilt and gold."
I call Dennis' move here (and it is one that Russell also was frequently attracted to) an instance of "Newton's Challenge to Philosophy." That is, a philosopher appeals to natural science (or mathematics) to settle a dispute within philosophy. Let me grant Dennis' claim about the "mathematician." But within philosophy burden-shifting is no small achievement. Note, in particular, that Russell appeals to mathematics in order to condemn Leibniz's wrong turn to "speculation." That is to say, it is one thing to get the math wrong or to be unable to provide a mathematical proof for a claim within mathematics. It is another thing to make a claim to the effect that the metaphysics of mathematics has been settled. I suspect it was inevitable that Russell would be wrong about the latter.
Last week, the foundations of mathematics community was shaken by yet another claim of the inconsistency of Peano Arithmetic (PA). This time, it was put forward by Edward Nelson, professor of mathematics at Princeton, who claimed to have found a proof of the inconsistency of PA. A few months ago, quite a stir had been caused when Fields medalist V. Voevodsky seemed to be saying that the consistency of PA was an open question; but Nelson’s claim was much more radical: he claimed to have proved that PA is outright inconsistent! (Here is a great post by Jeff Ketland with a crash-course on PA and a discussion of ways in which it might be inconsistent.)
Nelson announced his results on the FOM mailing list on September 26th 2011, providing links to two formulations of the proof: one in book form and one in short-summary form. Very quickly, a few math-oriented blogs had posts up on the topic; we all wanted to understand the outlines of Nelson’s purported proof, and most of us bet all our money on there being a mistake in it somewhere. External evidence strongly suggests that PA is consistent, not least because so many robust mathematical results would have to be revised if PA were inconsistent (not to mention several proofs of the consistency of arithmetic in alternative systems, such as Gentzen’s proofs -- see here).
Indeed, it did not take long for someone to find an apparent loophole in Nelson’s purported proof, and not just anyone: math prodigy and Fields medalist Terence Tao (UCLA), who is considered by many to be the most brilliant mathematician currently active. The venue in which Tao first formulated his reservations was somewhat original: the G+ thread opened by John Baez on the topic. (So those who dismiss social networks as a pure waste of time now have at least one instance of actual top-notch science being done on a social network to worry about!)
Today the shortlist for the 2011 Royal Society Winton Prize for Science Books was announced. As it turns out, I’m a big fan of popular science books; when they are good, they are not only entertaining to read, but they also offer insights and ideas that I then go on to use in my academic philosophical work. Of course, they are mostly a starting point, as you still need to do your homework and check the actual scientific articles/sources, but comprehensive overviews can be a valuable source of insight and inspiration.
The nominated books are:
· Alex’s Adventures in Numberland by Alex Bellos (Bloomsbury)
· Through the Language Glass: How Words Colour Your World by Guy Deutscher (William Heinemann)
· The Disappearing Spoon by Sam Kean (Doubleday (UK); Little, Brown and Company (USA))
· The Wavewatcher’s Companion by Gavin Pretor-Pinney (Bloomsbury)
· Massive: The Missing Particle That Sparked the Greatest Hunt in Science by Ian Sample (Basic Books (USA); Virgin Books (UK))
· The Rough Guide to The Future by Jon Turney (Rough Guides)
In the official announcement you can read short descriptions of each of them, and download the first chapter of each. So far, I’ve only read Alex’s Adventures in Numberland, which I very much enjoyed (as reported in this M-Phi post), but pretty much all the others look like they could be really interesting.
A few weeks ago I wrote a post on blind mathematicians, discussing the case of Bernard Morin and the eversion of the sphere in particular. I had been thinking about blind mathematicians then because I was working on a paper on the role of external symbolic systems (written systems such as notations in particular) for mathematical reasoning and mathematical practice. I have now completed a first, preliminary draft of the paper, and uploaded it to my academia website (it's at the top of the list under 'Papers'). Should anyone be interested in taking a look, comments would be most welcome! I discuss the case of Bernard Morin at the very end of the paper, as well as the case of Jason Padgett, the man with acquired savant syndrome who sees shapes as fractals and can hand-draw fractals of pretty much any image you can think of. Here is the abstract: