Yesterday the Guardian published the results of research conducted on the over 70 million comments posted on Guardian articles over the years. The question was: is there a pattern in who gets the most abusive comments? Given the Guardian’s policy of having moderators block comments that are not aligned with the spirit of constructive debate, this constitutes an extremely valuable dataset for exploring online behavior (it is reassuring, by the way, that only 2% of the 70m comments were blocked!). It has long been felt that women, and in particular women speaking from a feminist perspective, receive much online abuse in reaction to what they write. (Comment sections are one such venue, but think also of Twitter and other social media platforms.) But crunching the numbers is the right way to go if one wants to move from the level of ‘impressions’ to more concrete corroboration. The results will probably not come across as surprising:
Although the majority of our regular opinion writers are white men, we found that those who experienced the highest levels of abuse and dismissive trolling were not. The 10 regular writers who got the most abuse were eight women (four white and four non-white) and two black men. Two of the women and one of the men were gay. And of the eight women in the “top 10”, one was Muslim and one Jewish.
And the 10 regular writers who got the least abuse? All men.
All of you reading this will certainly have witnessed the uproar this week in response to a paper published in Synthese which is problematic, to say the least, for a number of different reasons. (It is worth noting, as many have observed, that this paper has been online for 22 months; but presumably, having appeared in the latest printed edition of Synthese, those on the Synthese mailing list received a notification, and someone actually took the trouble to check the paper. From there on, it went ‘viral’ through the usual channels – Facebook, blogs etc.) In particular, it contains a passage with clear homophobic and sexist content. [But see UPDATE below.] But this is not the only issue with the paper, which overall seems to be below the level of scholarship that one would expect in a journal like Synthese.
[Full disclosure: I’ve known the author, JYB, for many years, and have attended a number of the events he regularly organizes. He was supportive of my career at its early stages. I know two of the Synthese editors-in-chief quite well, and the third I have close indirect contacts with (he is a regular collaborator of one of my closest colleagues). I have 5 papers published in Synthese, two of which are forthcoming in two different special issues.]
To my mind, there is no question that this paper should not have been published in its current form. JY Béziau made important contributions to logic earlier in his career, but in recent years his work has not been of the same caliber as his earlier work (this is also the opinion of a number of people I talked to well before this episode). So purely on the basis of the paper’s merits, the decision to publish it in Synthese (whoever made the decision) seems to have been misguided. Add to that the homophobic and sexist content, and the decision to publish it is not only misguided but also deeply disturbing. But the issue I want to discuss here is: what does this say about the editorial process at Synthese? Does this episode warrant calls for the resignation of the current editors-in-chief?
It is well known that philosophers like to argue, and one of the things they like to argue about is arguing itself. Argumentation is frequently (and rightly, to my mind) taken to be a core feature of philosophical practice, and thus how to argue becomes a central topic for philosophical methodology. But many have claimed that the centrality of argumentation within philosophy is a weakness rather than a strength, deploring the excessively adversarial nature of argumentation in philosophy. Critics point out that philosophers are trained to find objections, counterexamples, rebuttals etc. to what their philosophical interlocutors say, who are tellingly described as one’s opponents. On this conception, argumentation is a duel between two opponents, and only one of them can win; blood will often ensue. Much of the criticism has been motivated by feminist concerns: aggressive, adversarial styles of argumentation are oppressive towards women and other disadvantaged groups, emphasizing competition (often presented as an essentially ‘male’ feature) at the expense of cooperative, presumably more productive endeavors. Among the authors who have defended ideas along these lines are Janice Moulton and Andrea Nye (see here for a survey article by C. Hundleby).
A few years ago I became interested in how the presumed adversarial nature of philosophical argumentation affected not only the practice but also the outcome of philosophical investigation. It seemed to me that, while some of the feminist criticism definitely struck a chord if not with the theory at least with the practice of philosophy in some (well, many) quarters, the general critical stance that is characteristic of philosophical interactions was still an essential and epistemically valuable feature of the philosophical method. (Btw, it may be worth noting that this is not unique to philosophy; mathematics seems to proceed by ‘proofs and refutations’ (Lakatos), and in many if not all of the empirical and social sciences, objections and criticism are the bread-and-butter of the theorist.)
I’ve just been promoted to (junior)* full professor in Groningen, and while I’m still duly enjoying the accompanying feeling of achievement and recognition, it got me thinking about how I got here. It does not take much to conclude that, while I've worked incredibly hard for this, I was also *extremely* lucky. I know countless people who work just as hard as I do (or more), and who are as good as I am at what they do (or better), and yet do not get similar professional recognition. It takes an incredible amount of luck and, yes, privilege, for things to work out. So let me comment on two kinds of luck that may play a role in one’s professional development.
The first kind is simply the luck to have been dealt rather generous cards in life. While I am a woman in a male-dominated field, and while I had to overcome hurdles related to coming from the ‘periphery’ of academic action (originally from Brazil, and then developing my career in the Netherlands, which is ok but frankly not Top of the Pops), for the rest I’ve been extremely privileged. My parents were both academics (my mother still is), so in terms of academic support at home I was particularly well served. For a number of reasons, I also never had to worry about economic hardship and financial stability, and thus I could choose the risk of an academic career without having to worry whether one day I’d have no food on my plate. And, last but not least, I am white, not differently abled, cis, and I fit reasonably well within certain stereotypical standards of beauty.
Let me refer to this kind of luck as privilege-luck; it is still a matter of luck because I might just as well have been born in different circumstances, and things might have been very different. One way in which privilege-luck manifests itself very conspicuously is with the so-called ‘pedigree’ phenomenon: depending on where you go to school (both undergraduate and graduate), your career will develop in very different ways. But we all know very well that the school you end up going to is almost entirely determined by the kind of socio-economic background you can fall back on.
45 years ago, Michael Jackson and his troupe of brothers famously claimed that counting is easy peasy. But how easy is it really? (We’ll leave aside the matter of the simplicity of A B C and do re mi for present purposes!)
Counting and basic arithmetic operations are often viewed as paradigmatic cases of ‘easy’ mental operations. It might seem that we are all ‘born’ with the innate ability for basic arithmetic, given that we all seem to engage in the practice of counting effortlessly. However, as anyone who has cared for very young children knows, teaching a child how to count is typically a process requiring relentless training. The child may well know how to recite the order of numbers (‘one, two, three…’), but from that to associating each of them with specific quantities is a big step. Even when they start getting the hang of it, they typically do well with small quantities (say, up to 3), but things get mixed up when it comes to counting more items. For example, they need to resist the urge to point at the same item more than once in the counting process, something that is in no way straightforward!
The later Wittgenstein was acutely aware of how much training is involved in mastering the practice of counting and basic arithmetic operations. (Recall that he was a schoolteacher for many years in the 1920s!) Indeed, counting and adding objects can be described as a specific and rather peculiar language game which must be learned by training, and which raises all kinds of philosophical questions pertaining to what it is exactly that we are doing when we count things. Perhaps my favorite passage in the whole of the Remarks on the Foundations of Mathematics is #37 in part I:
What does philosophy have to say about difficult life decisions? Recently, there has been quite some interest in what philosophers have to say on this; for example, Ruth Chang’s TED talk on how to make hard choices has had over 3.5 million views. And recently, the new book by L.A. Paul, Transformative Experience, has been making quite a splash in the American mainstream media, with references in venues such as the New Yorker and the New York Times. A transformative experience is one that so fundamentally changes the person who undergoes it that she acquires a new self altogether, because she is transformed in a profound way. (See here for the shorter, article version of this idea.) The quintessential transformative experience for Paul is becoming a parent; other examples include the death of a loved one and emigrating to a new country.
One of the upshots of this conception of transformative experience is that, for many of the most important decisions in life, we simply have no way of evaluating the pros and cons of each side because we have no idea of what we’re getting into. As put by the influential journalist David Brooks in the New York Times,
Paul’s point is that we’re fundamentally ignorant about many of the biggest choices of our lives and that it’s not possible to make purely rational decisions. “You shouldn’t fool yourself,” she writes. “You have no idea what you are getting into.”
This is the second and final part of my 'brief introduction' to formal methods in philosophy to appear in the forthcoming Bloomsbury Philosophical Methodology Reader, being edited by Joachim Horvath. (Part I is here.) In this part I present in more detail the four papers included in the formal methods section, namely Tarski's 'On the concept of following logically', excerpts from Carnap's Logical Foundations of Probability, Hansson's 2000 'Formalization in philosophy', and a commissioned new piece by Michael Titelbaum focusing in particular (though not exclusively) on Bayesian epistemology.
Some of the pioneers in formal/mathematical approaches to philosophical questions had a number of interesting things to say on the issue of what counts as an adequate formalization, in particular Tarski and Carnap – hence the inclusion of pieces by each of them in the present volume. Indeed, both in his paper on truth and in his paper on logical consequence (in the 1930s), Tarski started out with an informal notion and then sought to develop an appropriate formal account of it. In the case of truth, the starting point was the correspondence conception of truth, which he claimed dated back to Aristotle. In the case of logical consequence, he was somewhat less precise and referred to the ‘common’ or ‘everyday’ notion of logical consequence.
These two conceptual starting points allowed Tarski to formulate what he described as ‘conditions of material adequacy’ for the formal accounts. He also formulated criteria of formal correctness, which pertain to the internal exactness of the formal theory. In the case of truth, the basic condition of material adequacy was the famous T-schema; in the case of logical consequence, the properties of necessary truth-preservation and of validity-preserving schematic substitution. Unsurprisingly, the formal theories he then went on to develop both passed the test of material adequacy he had formulated himself. But there is nothing particularly ad hoc about this, since the conceptual core of the notions he was after was presumably captured in these conditions, which thus could serve as conceptual ‘guides’ for the formulation of the formal theories.
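For concreteness, the two conditions of material adequacy can be stated schematically as follows (a sketch following standard presentations of Tarski's two papers; the notation is mine, not Tarski's original):

```latex
% T-schema: for each sentence p of the object language,
% with X a name (e.g. a quotation) of p:
X \text{ is true if and only if } p
% Instance: `snow is white' is true if and only if snow is white.

% Logical consequence: X follows from the class K of premises
% if and only if every model of K is also a model of X:
K \vDash X \iff \forall M\,\bigl(M \models K \rightarrow M \models X\bigr)
```

The second condition captures necessary truth-preservation: there is no interpretation on which all the premises in K hold while X fails.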
There is a Bloomsbury Philosophical Methodology Reader in the making, being edited by Joachim Horvath (Cologne). Joachim asked me to edit the section on formal methods, which will contain four papers: Tarski's 'On the concept of following logically', excerpts from Carnap's Logical Foundations of Probability, Hansson's 2000 'Formalization in philosophy', and a commissioned new piece by Michael Titelbaum focusing in particular (though not exclusively) on Bayesian epistemology. It will also contain a brief introduction to the topic by me, which I will post in two installments. Here is part I: comments welcome!
Since the inception of (Western) philosophy in ancient Greece, methods of regimentation and formalization, broadly understood, have been important items in the philosopher’s toolkit (Hodges 2009). The development of syllogistic logic by Aristotle and its extensive use in centuries of philosophical tradition as a formal tool for the analysis of arguments may be viewed as the first systematic application of formal methods to philosophical questions. In medieval times, philosophers and logicians relied extensively on logical tools other than syllogistic (which remained pervasive though) in their philosophical analyses (e.g. medieval theories of supposition, which come quite close to what is now known as formal semantics). But the level of sophistication and pervasiveness of formal tools in philosophy has increased significantly since the second half of the 19th century. (Frege is probably the first name that comes to mind in this context.)
It is commonly held that reliance on formal methods is one of the hallmarks of analytic philosophy, in contrast with other philosophical traditions. Indeed, the birth of analytic philosophy at the turn of the 20th century was marked in particular by Russell’s methodological decision to treat philosophical questions with the then-novel formal, logical tools developed for axiomatizations of mathematics (by Frege, Peano, Dedekind etc. – see (Awodey & Reck 2002) for an overview of these developments), for example in his influential ‘On denoting’ (1905). (Notice though that, from the start, there is an equally influential strand within analytic philosophy focusing on common sense and conceptual analysis, represented by Moore – see (Dutilh Novaes & Geerdink forthcoming).) This tradition was then continued by, among others, the philosophers of the Vienna Circle, who conceived of philosophical inquiry as closely related to the natural and exact sciences in terms of methods. Tarski, Carnap, Quine, Barcan Marcus, Kripke, and Putnam are some of those who have applied formal techniques to philosophical questions. Recently, there has been renewed interest in the use of formal, mathematical tools to treat philosophical questions, in particular with the use of probabilistic, Bayesian methods (e.g. formal epistemology). (See (Papineau 2012) for an overview of the main formal frameworks used for philosophical inquiry.)
This is the final post in my series on reductio ad absurdum from a dialogical perspective. Here is Part I, here is Part II, here is Part III, here is Part IV, and here is Part V. I now return to the issues raised in the earlier posts equipped with the dialogical account of deduction, and of reductio ad absurdum in particular.
A general dialogical schema for reductio ad absurdum, following Proclus’ description but inspired by the Socratic elenchus, might look like this:
Interlocutor 1 commits to A (either prompted by a question from interlocutor 2, or spontaneously), which corresponds to assuming the initial hypothesis.
Interlocutor 2 leads the initial hypothesis to absurdity, typically by relying on additional discursive commitments of 1 (which may be elicited by 2 through questions).
Interlocutor 2 concludes ~A.
The main difference between the monological and the dialogical versions of a reductio is thus that in the latter there is a kind of division of labor that is absent from the former (as noted above). The agent making the initial assumption is not the same agent who will lead it to absurdity, and then conclude its contradictory. And so, the perceived pragmatic awkwardness of making an assumption precisely with the goal of ‘destroying’ it seems to vanish. Moreover, the adversarial component provides a compelling rationale for the general idea of ‘destroying’ the initial hypothesis; indeed, while the adversarial component is present in all deductive arguments (in particular given the requirement of necessary truth preservation, as argued above), it is even more pronounced in the case of reductio arguments, that is, the procedure whereby someone’s discursive commitments are shown to be collectively incoherent since they lead to absurdity. There remains the question of why interlocutor 1 would want to engage in the dialogue at all, but presumably she simply wishes to voice a discursive commitment to A. From there on, the wheel begins to spin, mostly through 2’s actions.
This is the fifth installment of my series of posts on reductio ad absurdum from a dialogical perspective. Here is Part I, here is Part II, here is Part III, and here is Part IV. In this post I discuss a closely related argumentative strategy, namely dialectical refutation, and argue that it can be viewed as a genealogical ancestor of reductio ad absurdum.
Those familiar with Plato’s Socratic dialogues will undoubtedly recall the numerous instances in which Socrates, by means of questions, elicits a number of discursive commitments from his interlocutors, only to go on to show that, taken collectively, these commitments are incoherent. This is the procedure known as an elenchus, or dialectical refutation.
The ultimate purpose of such a refutation may range from ridiculing the opponent to nobler didactic goals. The etymology of elenchus is related to shame, and indeed at least in some cases it seems that Socrates is out to shame the interlocutor by exposing the incoherence of their beliefs taken collectively (for example, so as to exhort them to positive action, as argued in (Brickhouse & Smith 1991)). However, as noted by Socrates himself in the Gorgias (470c7-10), refuting is also what friends do to each other, a process whereby someone rids a friend of nonsense. An elenchus can also have pedagogical purposes, in interactions between masters and pupils.
There has been much discussion in the secondary literature on what exactly an elenchus is, as well as on whether there is a sufficiently coherent core of properties for what counts as an elenchus, beyond a motley of vaguely related argumentative strategies deployed by Socrates (Carpenter & Polansky 2002). (A useful recent overview is (Wolfsdorf 2013); see also (Scott 2002).) For our purposes, it will be useful to take as our starting point the description of the ‘Socratic method’ in an influential article by G. Vlastos (1983) (a much shorter version of the same argument is to be found in (Vlastos 1982), and I'll be referring to the shorter version). Vlastos distinguishes two kinds of elenchi, the indirect elenchus and the standard elenchus:
This is the fourth installment of my series of posts on reductio ad absurdum arguments from a dialogical perspective. Here is Part I, here is Part II, and here is Part III. In this post I offer a précis of the dialogical account of deduction which I have been developing over the last years, which will then allow me to return to the issue of reductio arguments equipped with a new perspective in the next installments. I have presented the basics of this conception in previous posts, but some details of the account have changed, and so it seems like a good idea to spell it out again.
In this post, I present a brief account of the general dialogical conception of deduction that I endorse. Its relevance for the present purposes is to show that a dialogical conception of reductio ad absurdum arguments is not in any way ad hoc; indeed, the claim is that this conception applies to deductive arguments in general, and thus a fortiori to reductio arguments. (But I will argue later on that the dialogical component is even more pronounced in reductio arguments than in other deductive arguments.)
Let us start with what can be described as functionalist questions pertaining to deductive arguments and deductive proofs. What is the point of deductive proofs? What are they good for? Why do mathematicians bother producing mathematical proofs at all? While these questions are typically ignored by mathematicians, they have been raised and addressed by so-called ‘maverick’ philosophers of mathematics, such as Hersh (1993) and Rav (1999). One promising vantage point to address these questions is the historical development of deductive proof in ancient Greek mathematics, and on this topic the most authoritative study remains (Netz 1999). Netz emphasizes the importance of orality and dialogue for the emergence of classical, ‘Euclidean’ mathematics in ancient Greece:
Greek mathematics reflects the importance of persuasion. It reflects the role of orality, in the use of formulae, in the structure of proofs… But this orality is regimented into a written form, where vocabulary is limited, presentations follow a relatively rigid pattern… It is at once oral and written… (Netz 1999, 297/8)
This is the third installment of my series of posts on reductio ad absurdum arguments from a dialogical perspective. Here is Part I, and here is Part II. In this post I discuss issues pertaining specifically to the last step in a reductio argument, namely that of going from reaching absurdity to concluding the contradictory of the initial hypothesis.
One worry we may have concerning reductio arguments is what could be described as ‘the culprit problem’. This is not a worry clearly formulated in the protocols previously described, but one which has been raised a number of times when I presented this material to different audiences. The basic problem is: we start with the initial assumption, which we intend to prove to be false, but along the way we avail ourselves of auxiliary hypotheses/premises. Now, it is the conjunction of all these premises and hypotheses that leads to absurdity, and it is not immediately clear whether we can single out one of them as the culprit to be rejected. For all we know, others may be to blame, and so there seems to be some arbitrariness involved in singling out one specific ingredient as responsible for things turning sour.
To be sure, in most practical cases this will not be a real concern; typically, the auxiliary premises we avail ourselves of are statements in which we have a high degree of epistemic confidence (for example, because they have been established by proofs that we recognize as correct). But it remains of philosophical significance that absurdity typically arises from the interaction between numerous elements, any of which can, in theory at least, be held responsible for the absurdity. A reductio argument, however, relies on the somewhat contentious assumption that we can isolate the culprit.
However, culprit considerations do not seem to be what motivates Fabio’s dramatic description of this last step as “an act of faith that I must do, a sacrifice I make”. Why is this step problematic then? Well, in the first instance, what is established by leading the initial hypothesis to absurdity is that it is a bad idea to maintain this hypothesis (assuming that it can be reliably singled out as the culprit, e.g. if the auxiliary premises are beyond doubt). How does one go from it being a bad idea to maintain the hypothesis to it being a good idea to maintain its contradictory?
This is a series of posts with sections of the paper on reductio ad absurdum from a dialogical perspective that I am working on right now. This is Part II, here is Part I. In this post I discuss issues in connection with the first step in a reductio argument, that of assuming the impossible.
We can think of a reductio ad absurdum as having three main components, following Proclus’ description:
(i) Assuming the initial hypothesis.
(ii) Leading the hypothesis to absurdity.
(iii) Concluding the contradictory of the initial hypothesis.
I discuss two problems pertaining to (i) in this post, and two problems pertaining to (iii) in the next post. (ii) is not itself unproblematic, and we have seen for example that Maria worries whether the ‘usual’ rules for reasoning still apply once we’ve entered the impossible world established by (i). Moreover, the problematic status of (i) arises to a great extent from its perceived pragmatic conflict with (ii). But the focus will be on issues arising in connection with (i) and (iii).
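To make the three-step structure concrete, here is a minimal sketch of a reductio in the Lean proof assistant (my illustration, not part of Proclus’ description or the paper under discussion; the auxiliary premises h1 and h2 are placeholders):

```lean
-- Reductio ad absurdum, following the three-step schema:
-- (i) assume the initial hypothesis A, (ii) lead it to absurdity
-- using the auxiliary premises, (iii) conclude the contradictory ¬A.
example (A B : Prop) (h1 : A → B) (h2 : A → ¬B) : ¬A := by
  intro hA                 -- (i) assume A as the initial hypothesis
  have hB  : B  := h1 hA   -- (ii) derive B from A ...
  have hnB : ¬B := h2 hA   --      ... and also ¬B: absurdity
  exact hnB hB             -- (iii) the contradiction discharges A, yielding ¬A
```

Note that in Lean, as in the ‘culprit problem’ discussed below, the absurdity follows from A together with h1 and h2; it is the proof’s structure that designates A, rather than one of the auxiliary premises, as the hypothesis to be rejected.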
A reductio proof starts with the assumption of precisely that which we want to prove is impossible (or false). As we’ve seen, this seems to create a feeling of cognitive dissonance in (some) reasoners: “I do not know what is true and what I pretend [to be] true.” (Maria) This may seem surprising at first sight: don’t we all regularly reason on the basis of false propositions, such as in counterfactual reasoning? (“If I had eaten a proper meal earlier today, I wouldn’t be so damn hungry now!”) However, as a matter of fact, there is considerable empirical evidence suggesting that dissociating one’s beliefs from reasoning is a very complex task, cognitively speaking (to ‘pretend that something is true’, in Maria’s terms). The belief bias literature, for example, has amply demonstrated the effect of belief on reasoning, even when participants are told to focus only on the connections between premises and conclusions. Moreover, empirical studies of reasoning behavior among adults with low to no schooling show their reluctance to reason with premises of which they have no knowledge (Harris 2000; Dutilh Novaes 2013). From this perspective, reasoning on the basis of hypotheses or suppositions may well be something that requires some sort of training (e.g. schooling) to be mastered.
As some readers may recall, I ran a couple of posts on reductio proofs from a dialogical perspective quite some time ago (here and here). I am now *finally* writing the paper where I systematize the account. In the coming days I'll be posting sections of the paper; as always, feedback is most welcome! The first part will focus on what seem to be the cognitive challenges that reasoners face when formulating reductio arguments.
For philosophers and mathematicians who have been suitably ‘indoctrinated’ in the relevant methodologies, the issues pertaining to reductio ad absurdum arguments may not be immediately apparent, given their familiarity with the technique. And so, to get a sense of what is problematic about these arguments, let us start with a somewhat dramatic but in fact quite accurate account of what we could describe as the ‘phenomenology’ of producing a reductio argument, in the words of math education researcher U. Leron:
We begin the proof with a declaration that we are about to enter a false, impossible world, and all our subsequent efforts are directed towards ‘destroying’ this world, proving it is indeed false and impossible. (Leron 1985, 323)
In other words, we are first required to postulate this impossible world (which we know to be impossible, given that our very goal is to refute the initial hypothesis), and then required to show that this impossible world is indeed impossible. The first step already raises a number of issues (to be discussed shortly), but the tension between the two main steps (postulating a world, as it were, and then proceeding towards destroying it) is perhaps even more striking. As it so happens, these are not the only two issues that arise once one starts digging deeper.
To obtain a better grasp of the puzzling nature of reductio arguments, let us start with a discussion of why these arguments appear to be cognitively demanding – that is, if we are to believe findings in the math education literature as well as anecdotal evidence (e.g. of those with experience teaching the technique to students). This will offer a suitable framework to formulate further issues later on.
For my MA course on Wittgenstein earlier this year, students had to write a short essay, blog post-style, on the Tractatus. One of them, Joseph Wilcox, took up the challenge of asking what exactly it means to say that Wittgenstein's project in the Tractatus is essentially a Kantian project -- something I kept hammering on them relentlessly. (To me at least this seems like the best and perhaps the only way I can make sense of the Tractatus!) The result is the insightful post below. (Proud teacher here!)
By Joseph Wilcox
Wittgenstein [in the Tractatus] is a Kantian philosopher. Or so I'm told.
What exactly does it mean to say that someone is a Kantian philosopher? I always find it hard to grasp what is meant by such comparisons. Is it some fundamental belief that they share? Is it a field of thought that they both enter into? Is it a common goal that guides their thinking?
As often seems to be the case when it comes to philosophy, I am inclined to say that all the options must have some truth to them. In the case of Wittgenstein, however, I've been led to believe that it is the goal he sets out to achieve that forms the main connection between him and the lifework of his Prussian predecessor. What is it then, that both of these thinkers desire above everything else? The answer is to limit. To designate a point or level beyond which something does not or may not extend or pass. To place a restriction on the size or amount of something permissible or possible. At first glance, this doesn't seem like a very encouraging, confident or even useful objective. Why in the world would we bother to spend our precious time thinking about that which we can't reach? Isn't it far more interesting to seek to pass over such borders? Isn't it more inspiring to think that the impossible can serve as a beacon to aspire to? Isn't the thought of placing limits a token of the kind of pessimism that might cause one to give up hope?
The European Society for Analytic Philosophy was created in 1990, with the mission to promote collaboration and exchange of ideas among philosophers working within the analytic tradition, in Europe as well as elsewhere. It has thus been responsible for organizing major conferences every 3 years, the highly successful ECAPs.
The current Steering Committee (of which I am a member), under the leadership of current president Stephan Hartmann, is seeking to expand the ways in which we can serve the (analytic) philosophical community in Europe. We will of course continue to organize ECAP, which will take place in 2017, and for which we already have a fantastic lineup of invited speakers (check it out!). But we are also considering various ways in which we can provide valuable services to ESAP members, such as negotiating journal access with publishers (this is still in the making), among other initiatives. In particular, the brand-new website of ESAP is now online, and the goal is, among other things, to concentrate useful information for (analytic) philosophers working in Europe all in one place.
However, we are only getting started, and at this point suggestions on how ESAP can truly support and galvanize the analytic philosophy community in Europe (as well as strengthen ties with colleagues elsewhere) are most welcome! We haven't even started with an official membership system yet, precisely because we first want to have a number of services in place so as to make ESAP membership an attractive proposition. What are the initiatives and services we could provide that would really make a difference and facilitate the activities of our members? Comments with suggestions below would be much appreciated!
The best teacher I've ever had in my life was my history teacher in my first year at the Lycée Claude Monet in Paris: Monsieur (Denis) Corvol. Aged 14, I had just arrived from Brazil to spend two years in France with my parents, who were on an extended research leave from their positions as professors of medicine in São Paulo. I barely spoke French upon arrival, and to say that the first months were tough is an understatement. Many of the teachers seemed to be particularly harsh on me, and one (the math teacher) said in front of everyone in class: "if you can't solve this problem, and you obviously don't speak French very well, I wonder what you are doing in this class".
But there was Monsieur Corvol, whose unorthodox teaching methods included talking about a variety of topics that seemed to have no connection whatsoever with the content we were supposed to be learning (the French Revolution and so forth – for that, he told us to go read the textbook on our own). (Years later I realized he was some sort of Habermasian, emphasizing inter-subjective communication and rational discourse.) When I arrived, he spent some two or three classes talking about Brazil -- what a remarkable country it was, how much the French could learn from Brazil -- in an obvious maneuver to make me feel more welcome, and to invite my classmates to engage with me in more positive ways.
From time to time I remember Monsieur Corvol with much fondness; many of the things I heard from him for the first time still reverberate with me. One of them, which I am reminded of now with the ongoing disaster of the migrant crisis in Europe, was: "Migrants are the bravest people in the world." Migrants are the people who have the courage to fight for a better life in a new, unknown, possibly inhospitable country; for that, they must be resourceful and determined. Lucky is the country that can count on the drive and ambition of migrants, as a wonderful recent campaign in the UK has also highlighted. The 800 people who died in the Mediterranean Sea, many of them children, should be remembered as among the bravest people in the world.
(I am currently finishing a paper on the definition of the syllogism according to Aristotle, Ockham, and Buridan. I post below the section where I present a dialogical interpretation of Aristotle's definition.)
Aristotle’s definition of ‘syllogismos’ in Prior Analytics (APri) 24b18-22 is one of the most commented-upon passages of the Aristotelian corpus, by ancient as well as (Arabic and Latin) medieval commentators. He offers very similar definitions of syllogismos in the Topics, Sophistical Refutations, and the Rhetoric, but the one in APri is the one that has received the most attention from commentators. In the recent Striker (2009) translation, it goes like this (emphasis added):
A ‘syllogismos’ is an argument (logos) in which, (i) certain things being posited (tethentôn), (ii) something other than what was laid down (keimenôn) (iii) results by necessity (eks anagkês sumbainei) (iv) because these things are so. By ‘because these things are so’ I mean that it results through these, and by ‘resulting through these’ I mean that no term is required from outside for the necessity to come about.
It became customary among commentators to take ‘syllogismos’ as belonging to the genus ‘logos’ (discourse, argument), and as characterized by four (sometimes five) differentiae:
In recent times there has been quite some discussion on the phenomenon of internet shaming. Two important recent events were the (admirable, brave) TED talk by Monica Lewinsky, and the publication of Jon Ronson’s book So you’ve been publicly shamed. Lewinsky’s plight mostly pre-dates the current all-pervasiveness of the internet in people’s lives, but she was arguably one of the first victims of this new form of shaming: shaming that takes world-wide(-web) proportions, no longer confined to the locality of a village or a city. Pre-internet, people could move to a different city, if need be to a different country, and start over again. Now, only changing your name would do, to avoid being ‘googled down’ by every new person or employer you meet.
As described in Ronson’s book (excerpt here, interview with Ronson here), lives can be literally destroyed by an internet shaming campaign (the main vehicle for that seems to be Twitter, judging from his stories). Justine Sacco, formerly a successful senior director of corporate communications at a big company, had her life turned upside down as a result of one (possibly quite unfortunate, though in a sense also possibly making an anti-racist point) tweet: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” From there on, her life became a tragedy of Kafkaesque proportions, and she’s only one of the many people discussed in Ronson’s book who have faced similar misfortunes. Clearly, people truly delight in denouncing someone as ‘racist’, as in Sacco’s case; it probably makes them feel like they are making a contribution (albeit a small one) to a cause they feel strongly about. But along the way, for the sake of ‘justice’, they drag through the dirt someone whose sole ‘crime’ was to post a joke of debatable tastefulness on Twitter. But who has never said anything unfortunate on the internet, which they later came to regret?
I am deeply grateful for the wonderful feedback I received from readers along the way (also in the form of comments and discussions over at Facebook). I could never have written this paper if it wasn't for all this help, given that much of the material falls outside the scope of my immediate expertise. So, again, thanks all!
(And now, on to start working on a new paper, on the definition of the syllogism in Aristotle, Ockham and Buridan. In fact, it will be an application of the conceptual genealogy method, so it all ties together in the end.)
Today is International Women’s Day, so here is a short post on what it means to be a feminist to me, to mark the date. Recently, a (male) friend asked me: “Why do you describe yourself as a ‘feminist’, and not as an ‘equalist’”? If feminism is about equality between women and men, why focus on the female side of the equation only? This question is of course related to the still somewhat widespread view that feminism is at heart a sexist doctrine: to promote the rights and wellbeing of women at the expense of the rights and wellbeing of men. Admittedly, the idea that it’s a zero-sum game is reminiscent of so-called second-wave feminism, in particular given the influence of Marxist ideas of class war. However, there is a wide range of alternative versions of feminism that focus on the rights and wellbeing of both men and women (as well as of those who do not identify as either), and move away from the zero-sum picture.
The much-watched TED talk by Chimamanda Ngozi Adichie, ‘We should all be feminists’ (bits of which were sampled in Beyoncé’s ‘Flawless’), offers precisely one such version, which I personally find very appealing. (After recently reading Americanah, I’ve been nurturing a crush on this woman; she is truly amazing.) The talk is worth watching in its entirety (also, it’s very funny!), and while she describes a number of situations that might be viewed as specific to their originating contexts (Nigeria in particular), the gist of it is entirely universal. It is towards the very end that Adichie provides her preferred definition of a feminist:
A feminist is a man or a woman* who says: yes, there is a problem with gender as it is today, and we must fix it. We must do better.
A few days ago the link to an interesting piece popped up in my Facebook newsfeed: ‘Three reasons why every woman should use a vibrator’, by Emily Nagoski. I wholeheartedly agree with the main claim, but what makes the piece particularly interesting for philosophers at large is a reference to Andy Clark and the extended mind framework:
Some women feel an initial resistance to the idea of using a vibrator because it feels like they “should” be able to have an orgasm without one. But there is no “should” in sex. There’s just what feels good. Philosopher Andy Clark (who’s the kind of philosopher who would probably not be surprised to find himself named-dropped in an article about vibrators) calls it “scaffolding,” or “augmentations which allow us to achieve some goal which would otherwise be beyond us.” Using paper and pencil to solve a math equation is scaffolding. So is using a vibrator to experience orgasm.
This is an intriguing suggestion, which deserves to be further explored. (As some readers may recall, I am always happy to find ways to bring together some of my philosophical interests with issues pertaining to sexuality – recall this post on deductive reasoning and the evolution of female orgasm.) Within the extended mind literature, the phenomena discussed as being given a ‘boost’ through the use of bits and pieces of the environment are typically what we could describe as quintessentially cognitive phenomena: calculations, finding your way to the MoMA etc. But why should the kind of scaffolding afforded by external devices and parts of the environment not affect other aspects of human existence, such as sexuality? Very clearly, they can, and do. (Relatedly, there is also some ongoing discussion on the ethics of neuroenhancement for a variety of emotional phenomena.)
I've been asked to write a review of Williamson's brand new book Tetralogue for the Times Higher Education. Here is what I've come up with so far. Comments are very welcome, as I still have some time before submitting the final version. (For more background on the book, here is a short video where Williamson explains the project.)
Disagreement in debates and discussions is an interesting phenomenon. On the one hand, having to justify your views and opinions vis-à-vis those who disagree with you is perhaps one of the best ways to induce a critical reevaluation of these views. On the other hand, it is far from clear that a clash of views will eventually lead to a consensus where the parties come to hold better views than the ones they held before. This is one of the promises of rational discourse, but one that is all too often not kept. What to do in situations of discursive deadlock?
Timothy Williamson’s Tetralogue is precisely an investigation into the merits and limits of rational debate. Four people holding very different views sit across from each other on a train and discuss a wide range of topics, such as the existence of witchcraft, the superiority and fallibilism of scientific reasoning, whether anyone can ever be sure to really know anything, what it means for a statement to be true, and many others. As one of the most influential philosophers currently active, Williamson is well placed to give the reader an overview of some of the main debates in recent philosophy, as his characters debate their views.
In this post, I discuss in more detail the two main categories of genealogy that were mentioned in previous posts: vindicatory and subversive genealogies.
III. Applications of genealogy
In the spirit of the functionalist, goal-oriented approach adopted here, a pressing question now becomes: what’s the point of a genealogy? What kind of results do we obtain from performing a genealogical analysis of philosophical concepts? I’ve already mentioned vindication and subversion/debunking en passant along the way, but now it is time to discuss applications of genealogy in a more systematic way.
III.1 Genealogy as vindicatory or as subversive
By now, it should be clear that genealogy is a rather plastic concept, one which can be (and has been) instantiated in a number of different ways. Craig offers a helpful description of a range of options:
[Genealogies] can be subversive, or vindicatory, of the doctrines or practices whose origins (factual, imaginary, and conjectural) they claim to describe. They may at the same time be explanatory, accounting for the existence of whatever it is that they vindicate or subvert. In theory, at least, they may be merely explanatory, evaluatively neutral (although as I shall shortly argue it is no accident that convincing examples are hard to find). They can remind us of the contingency of our institutions and standards, communicating a sense of how easily they might have been different, and of how different they might have been. Or they can have the opposite tendency, implying a kind of necessity: given a few basic facts about human nature and our conditions of life, this was the only way things could have turned out. (Craig 2007, 182)
In this section, I pit genealogy against its close cousin archeology in order to argue that genealogy really is what is needed for the general project of historically informed analyses of philosophical concepts that I am articulating. And naturally, this leads me to Foucault. As always, comments welcome! (This is the first time in something like 20 years that I've done anything remotely serious with Foucault's ideas: why did it take me so long? Lots of good stuff there.)
I hope to have argued more or less convincingly by now that, given the specific historicist conception of philosophical concepts I’ve just sketched, genealogy is a particularly suitable method for historically informed philosophical analysis. In the next section, a few specific examples will be provided. However, and as mentioned above, I take genealogy to be one among other such historical methods, so there are options. Why is genealogy a better option than the alternatives? In order to address this question, in this section I pit genealogy against one of its main ‘competitors’ as a method for historical analysis: archeology. Naturally, this confrontation leads me directly to Foucault.
[UPDATE: It seems that my post is being interpreted by some as a criticism of the Charlie Hebdo collaborators. Nothing could be further from the truth; I align myself completely with their Enlightenment ideals -- so I'm intolerant too! -- and in fact deem humor to be a powerful tool to further these ideals. Moreover, perhaps it is worth stressing the obvious: their 'intolerance' does not in any way justify their barbaric execution. It is not *in any way* on a par with the intolerance of those who did not tolerate their humor and thus went on to kill them.]
I grew up in a thoroughly secular household (my father was a communist; I never had any kind of religious education). However, I did get a fair amount of exposure to religion through my grandmothers: my maternal grandmother was a practicing protestant, and my paternal grandmother was a practicing catholic. In my twenties, for a number of reasons, I became more and more drawn to Catholicism, or at least to a particular interpretation of Catholicism (with what can be described as a ‘buffet’ attitude: help yourself only to what seems appetizing to you). This led to me getting baptized, getting married in the Catholic church, and wearing a cross around my neck. (I have since then distanced myself from Catholicism, in particular since I became a mother. It became clear to me that I could not give my daughters a catholic ‘buffet’ upbringing, and that they would end up internalizing all the dogmas of this religion that I find deeply problematic.)
At the same time, upon moving to the Netherlands in the late 90s, I had been confronted with the difficult relations between this country and its large population of immigrants and their descendants sharing a Muslim background, broadly speaking. At first, it all made no sense to me, coming from a country of immigrants (Brazil) where the very concept of being a ‘second-generation immigrant’ is quite strange. Then, many years ago (something like 13 years ago, I reckon), one day on the train, I somehow started a conversation with a young man who appeared to be of Arabic descent. I don’t quite recall how the conversation started, but one thing I remember very clearly: he said to me that it made him happy to see me wearing the catholic cross around my neck. According to him, the problem with the Netherlands is the people who have no religion – not so much people who (like me at the time) had a religion different from his own.* This observation has stayed with me since.
Anyway, this long autobiographical prologue is meant to set the stage for some observations on the recent tragic events in Paris. As has been remarked by many commentators (see here, for example), the kind of humor practiced by the Charlie Hebdo cartoonists must be understood in the context of a long tradition of French political satire which is resolutely left-wing, secular, and atheist. Its origins go back at least to the 18th century; it was an integral part of the Enlightenment movement championed by people like Voltaire, who used humor to provoke social change. In particular in the context of totalitarian regimes, satire becomes an invaluable weapon.
Here's a short piece by the New Scientist on the status of Mochizuki's purported proof of the ABC conjecture. More than two years after the 500-page proof was made public, the mathematical community still hasn't been able to decide whether it's correct or not. (Recall my post on this from May 2013; little seems to have changed since then.)
Going back to my dialogical conception of mathematical proofs as involving a proponent who formulates the proof and opponents who must check it, this stalemate can be viewed from at least two perspectives: either Mochizuki is not trying hard enough as a proponent, or the mathematical community is not trying hard enough as opponent.
[Mochizuki] has also criticised the rest of the community for not studying his work in detail, and says most other mathematicians are "simply not qualified" to issue a definitive statement on the proof unless they start from the very basics of his theory.
Some mathematicians say Mochizuki must do more to explain his work, like simplifying his notes or lecturing abroad.
(Of course, it may well be that both are the case!) And so, for now, the proof remains in limbo, as the New Scientist piece aptly puts it. Mathematics, oh so human!
The great historian of logic and mathematics Ivor Grattan-Guinness passed away about a month ago, aged 73. I only heard about it yesterday, when Stephen Read posted a link to the Guardian obituary on Facebook. From the obituary:
He rescued the moribund journal Annals of Science, founded the journal History and Philosophy of Logic, and was on the board of Historia Mathematica from its inception. A member of the council of the Society for Psychical Research, he wrote Psychical Research: A Guide to Its History (1982). In 1971 the British Society for the History of Mathematics was founded: Ivor served as its president (1986-88) and instituted a formal constitution.
Indeed, many of us owe him eternal gratitude for founding the journal History and Philosophy of Logic, which continues to be the main journal for studies combining historical and philosophical perspectives on logic. Ivor's work and scholarship span an impressive range of topics and areas, and are bound to continue to influence many generations of scholars to come. It is a great loss.
Those of you who can’t get yourselves to be offline even during the Xmas break have most likely been following the events involving Brian Leiter (BL), Jonathan Ichikawa (JI) and Carrie Jenkins (CJ), and the lawyers’ letters regarding the legal measures BL says he is prepared to take as a reaction to what he perceives as defamatory statements by (or related to) Ichikawa and Jenkins. As a matter of fact, a blog post of mine back in July seems to have played a small role in the unfolding of these unfortunate developments, and so I deem it appropriate to add a few observations of my own.
On multiple occasions (and in particular in comments at Facebook posts), Leiter claims to have been directed to Jenkins’ ‘Day One’ pledge by this blog post of mine of July 2nd 2014, in defense of Carolyn Dicey Jennings. It begins:
Most readers have probably been following the controversy involving Carolyn Dicey Jennings and Brian Leiter concerning the job placement data post where Carolyn Dicey Jennings compares her analysis of the data she has assembled with the PGR Rank. There have been a number of people reacting to what many perceived as Brian Leiter’s excessively personalized attack of Carolyn Dicey Jennings’s analysis, such as in Daily Nous, and this post by UBC’s Carrie Ichikawa Jenkins on guidelines for academic professional conduct (the latter is not an explicit defense of Carolyn Dicey Jennings, but the message is clear enough, I think). [emphasis added]
Leiter claims that this observation is what led him to Jenkins’ post in the first place, which he perceived as a direct attack on him. He also claims that this is what the passage above implies, and he continues to repeat that the post “was intended by Jenkins as a criticism of me (as everyone at the time knew, and as one of her friends has now admitted), and thus explains my private response, which she chose to make public.” However, nothing in the post explicitly indicates that it was intended as a criticism of BL, beyond the fact that he read it this way and keeps repeating it.
I now discuss the five main features of the historicist conception of philosophical concepts that motivates and justifies the method of conceptual genealogy for philosophical concepts. In a sense, this is the backbone of the paper and of the whole project, so I'm particularly interested in feedback from readers now.
We are now in a better position to describe in more detail what I take to be the five main characteristics of the historicist conception of philosophical concepts that I defend here, borrowing elements from Nietzsche’s conception of genealogy and Canguilhem’s concept-centered historical approach. In short, these are (each will be discussed in turn below):
Superimposition of layers of meaning
Multiple lines of influence
Connected to (extra- or intra-philosophical) practices and goals