I am deeply grateful for the wonderful feedback I received from readers along the way (also in the form of comments and discussions over at Facebook). I could never have written this paper if it wasn't for all this help, given that much of the material falls outside the scope of my immediate expertise. So, again, thanks all!
(And now, on to start working on a new paper, on the definition of the syllogism in Aristotle, Ockham and Buridan. In fact, it will be an application of the conceptual genealogy method, so it all ties together in the end.)
I'm teaching Wittgenstein this semester--for the first time ever--to my Twentieth-Century Philosophy class. My syllabus requires my students to read two long excerpts from the Tractatus Logico-Philosophicus and the Philosophical Investigations; bizarrely enough, in my original version of that exalted contract with my students, I had allotted one class meeting to a discussion of the section from the Tractatus. Three classes later, we are still not done; as you can tell, it has been an interesting challenge thus far.
As you know, I was the gentleman that made that remark in a private Facebook thread with a close friend. If I recall correctly, people in that thread were asking about whether certain kinds of thought experiments were typically referred to as “Gettier Cases”. I said that they were, despite how inaccurate or uninformative it might be to do so, in part because of the alternative traditions you cite. I’m sorry you interpreted my remark as silencing my friends on Facebook. Personally I believe that philosophers should abandon the notion of “Gettier cases” and that the practice of labeling thought experiments in this way should be discouraged. If you are interested, I have recently argued for this in two articles here (http://philpapers.org/rec/BLOGCA) and here (http://philpapers.org/rec/TURKAL).
A few months ago, I noticed an interesting and telling interaction between a group of academic philosophers. A Facebook friend posted a little note about how one of her students had written to her about having encountered a so-called "Gettier case," i.e., she had acquired a true belief for invalid reasons. In the email, the student described how she had been told the 'right time' by a broken clock. The brief discussion that broke out in response to my friend's note featured a comment from someone noting that the broken clock example is originally due to Bertrand Russell. A little later, a participant in the discussion offered the following comment:
Even though the clock case is due to Russell, it's worth noting that "Gettier" cases were present in Nyāya philosophy in India well before Russell, for instance in the work of Gaṅgeśa, circa 1325 CE. The example is of someone inferring that there is fire on a faraway mountain based on the presence of smoke (a standard case of inference in Indian philosophy), but the smoke is actually dust. As it turns out, though, there is a fire on the mountain. See the Tattva-cintā-maṇi or "Jewel of Reflection on the Truth of Epistemology." [links added]
In this post, I discuss in more detail the two main categories of genealogy that were mentioned in previous posts: vindicatory and subversive genealogies.
III. Applications of genealogy
In the spirit of the functionalist, goal-oriented approach adopted here, a pressing question now becomes: what’s the point of a genealogy? What kind of results do we obtain from performing a genealogical analysis of philosophical concepts? I’ve already mentioned vindication and subversion/debunking en passant along the way, but now it is time to discuss applications of genealogy in a more systematic way.
III.1 Genealogy as vindicatory or as subversive
By now, it should be clear that genealogy is a rather plastic concept, one which can be (and has been) instantiated in a number of different ways. Craig offers a helpful description of a range of options:
[Genealogies] can be subversive, or vindicatory, of the doctrines or practices whose origins (factual, imaginary, and conjectural) they claim to describe. They may at the same time be explanatory, accounting for the existence of whatever it is that they vindicate or subvert. In theory, at least, they may be merely explanatory, evaluatively neutral (although as I shall shortly argue it is no accident that convincing examples are hard to find). They can remind us of the contingency of our institutions and standards, communicating a sense of how easily they might have been different, and of how different they might have been. Or they can have the opposite tendency, implying a kind of necessity: given a few basic facts about human nature and our conditions of life, this was the only way things could have turned out. (Craig 2007, 182)
In this section, I pitch genealogy against its close cousin archeology in order to argue that genealogy really is what is needed for the general project of historically informed analyses of philosophical concepts that I am articulating. And naturally, this leads me to Foucault. As always, comments welcome! (This is the first time in like 20 years that I've done anything remotely serious with Foucault's ideas: why did it take me so long? Lots of good stuff there.)
I hope to have argued more or less convincingly by now that, given the specific historicist conception of philosophical concepts I’ve just sketched, genealogy is a particularly suitable method for historically informed philosophical analysis. In the next section, a few specific examples will be provided. However, and as mentioned above, I take genealogy to be one among other such historical methods, so there are options. Why is genealogy a better option than the alternatives? In order to address this question, in this section I pitch genealogy against one of its main ‘competitors’ as a method for historical analysis: archeology. Naturally, this confrontation leads me directly to Foucault.
I now discuss the five main features of the historicist conception of philosophical concepts that motivates and justifies the method of conceptual genealogy for philosophical concepts. In a sense, this is the backbone of the paper and of the whole project, so I'm particularly interested in feedback from readers now.
We are now in a better position to describe in more detail what I take to be the five main characteristics of the historicist conception of philosophical concepts that I defend here, borrowing elements from Nietzsche’s conception of genealogy and Canguilhem’s concept-centered historical approach. In short, these are (they will each be discussed in turn subsequently):
Superimposition of layers of meaning
Multiple lines of influence
Connected to (extra- or intra-philosophical) practices and goals
This is the third installment of my series of posts with different sections of the paper on conceptual genealogy that I am working on. Part I is here; Part II.1 is here; a tentative abstract of 2 years ago, detailing the motivation for the project, is here.
I now turn to Canguilhem as an author exemplifying the kind of approach I have in mind when I speak of 'conceptual genealogy'. The main difference is that Canguilhem focused on scientific concepts (especially from biology and medicine), whereas I am articulating a methodology for the investigation of philosophical concepts (though of course, often the line between the two groups will be rather blurry). The same caveat of the previous installment on Nietzsche applies: this is a very brief and inevitably superficial discussion of Canguilhem's ideas, on which there is obviously much more to say.
The thesis of the relevance of historical analysis for philosophical theorizing rests crucially on a historicist conception of philosophical concepts, namely that they are not (or do not correspond to) a-historical essences or natural kinds. However, ‘historicism’ can have different meanings (Beiser 2011, Introduction), so let me now spell out in more detail in what sense I defend a historicist conception of philosophical concepts.
This is the second installment of my series of posts with different sections of the paper on conceptual genealogy that I am working on. Part I is here; a tentative abstract of 2 years ago, detailing the motivation for the project, is here.
I now present some of the basics of Nietzschean genealogy which will then be central for my general project. The goal here is thus not to offer a thorough account of Nietzsche's thought on the matter, obviously (a lifelong project!), but it should still be an accurate presentation of some aspects of it. If that is not the case, please do let me know! (I rely mostly on Geuss' and Leiter's interpretations.) Feedback in general is more than welcome.
The mundane, commonsensical sense of genealogy is typically related to the idea of vindication, i.e. of validation of one’s authority through the narrative of one’s origins. This is particularly conspicuous in historical disputes for political power within the traditional monarchic model: a contestant has a claim to the throne if she can prove to be a descendant of the right people, namely previous monarchical power-holders. In such cases, a genealogy is what Geuss (1994, 274) describes as ‘tracing a pedigree’, a practice as old as (Western?) civilization itself. The key idea is the idea of transmission of value: a person with noble ancestry inherits this status from her ancestors.
As some readers may recall (see this blog post with a tentative abstract -- almost 2 years ago!), I am working on a paper on the methodology of conceptual genealogy, which is the methodology that has thus far informed much of my work on the history and philosophy of logic. Since many people have expressed interest in this project, in the next couple of days I will post the sections of the paper that I've already written. Feedback is most welcome!
Today I post Part I, on the traditionally a-historical conception of philosophy of analytic philosophers. Tomorrow I will post Part II.1, on Nietzschean genealogy; on Thursday and Friday I will post Part II.2, on the historicity of philosophical concepts, in two installments.
Williams (2002) and Craig (2007) fittingly draw a distinction between genealogies that seek to expose the reprehensible origins of something and thereby decrease its value, and genealogies that seek to glorify their objects by exposing their ‘noble’ origins. The former are described as ‘subversive’, ‘shameful’ or ‘debunking’, while the latter may be dubbed ‘vindicatory’. (I will have much more to say on this distinction later on.) Nietzsche’s famous genealogical analysis of morality is the archetypal subversive genealogy, and has given rise to a formidable tradition of deconstruction of concepts, values, views, beliefs etc. by the exposure of their pudenda origo, their shameful origins. As described by Srinivasan (2011, 1),
Nietzsche’s innovation prompted a huge cultural shift towards subversive genealogical thinking – what might be called the ‘Genealogical Turn’ – including Freudian analysis, 20th-century Marxism, Foucault’s historical epistemology, certain strands of postcolonial and feminist theory, and much of what goes by the label ‘postmodernism’. These ideological programmes operate by purporting to unmask the shameful origins – in violence, sexual repression, gender or racial hegemony and economic and social oppression – of our concepts, beliefs and political structures.
In 'Five Parables' (from Historical Ontology, Harvard University Press, 2002), Ian Hacking writes,
I had been giving a course introducing undergraduates to the philosophers who were contemporaries of the green family and August der Stark. My hero had been Leibniz, and as usual my audience gave me pained looks. But after the last meeting, some students gathered around and began with the conventional, 'Gee, what a great course.' The subsequent remarks were more instructive: 'But you could not help it...what with all those great books, I mean like Descartes...' They loved Descartes and his Meditations.
I happen to give terrible lectures on Descartes, for I mumble along saying that I do not understand him much. It does not matter. Descartes speaks directly to these young people, who know as little about Descartes and his times as I know about the green family and its time. But just as the green family showed itself to me, so Descartes shows himself to them....The value of Descartes to these students is completely anachronistic, out of time. Half will have begun with the idea that Descartes and Sartre were contemporaries, both being French. Descartes, even more than Sartre, can speak directly to them....I do find it very hard to make sense of Descartes, even after reading commentaries, predecessors, and more arcane texts of the same period. The more I make consistent sense of him, the more he seems to me to inhabit an alien universe.
Religious disagreements are conspicuous in everyday life. Most societies, except perhaps for theocracies or theocracy-like regimes, show a diversity of religious beliefs, a diversity that young children already are aware of. One emerging topic of interest in the social epistemology of religion is how we should respond to religious disagreement. How should you react if you are confronted with someone who seems equally intelligent and thoughtful, who has access to the same evidence as you do, but who nevertheless ends up with very different religious beliefs? Should you become less confident about your beliefs, or suspend judgment? Or is it permissible to accord more weight to your own beliefs than to those of others?
In November and December 2014, I surveyed philosophers about their views on religious disagreement. I was not only interested in finding out what philosophers think about disagreements about religious topics in the profession (for instance, do they consider other philosophers as epistemic peers, or do they take the mere fact of disagreement as an indication that the other can't be right?), but also in the influence of personal religious beliefs and training. I present a brief summary of results below the fold; a longer version can be found here.
Today is UNESCO’s World Philosophy Day, which is celebrated on the third Thursday of November every year. As it so happens, November 20th is also the United Nations’ Universal Children’s Day (here is a blog post I wrote for the occasion 2 years ago). I am truly delighted that these two days coincide today, as children and philosophy are two of my greatest passions. But the intimate connection between children and philosophy runs much deeper than my particular, individual passions, and so it should be celebrated.* As Wittgenstein famously (but somewhat dismissively) put it:
Philosophers are often like little children, who first scribble random lines on a piece of paper with their pencils, and now ask an adult "What is that?" (Philosophical Occasions 1912-1951)
My own favorite definition of philosophy is that philosophy is at heart the activity of asking questions about things that appear to be obvious but are not. (True enough, it also involves attempting to provide answers and giving arguments to support one’s preferred answers.) And so it is incumbent on the philosopher to ask for example ‘What is time, actually?’, while everybody else goes about their daily business taking the nature of time for granted. Indeed, philosophy is intimately connected with curiosity and inquisitiveness, and this idea famously goes back all the way to the roots of philosophy as we know it:
I was asked to write a review of Terry Parsons' Articulating Medieval Logic for the Australasian Journal of Philosophy. This is what I've come up with so far. Comments welcome!
Scholars working on (Latin) medieval logic can be viewed as populating a spectrum. At one extreme are those who adopt a purely historical and textual approach to the material: they are the ones who produce the invaluable modern editions of important texts, without which the field would to a great extent simply not exist; they also typically seek to place the doctrines presented in the texts in a broader historical context. At the other extreme are those who study the medieval theories first and foremost from the point of view of modern philosophical and logical concerns; various techniques of formalization are then employed to ‘translate’ the medieval theories into something more intelligible to the modern non-historian philosopher. Between the two extremes one encounters a variety of positions. (Notice that one and the same scholar can at times wear the historian’s hat, and at other times the systematic philosopher’s hat.) For those adopting one of the many intermediary positions, life can be hard at times: when trying to combine the two paradigms, these scholars sometimes end up displeasing everyone (speaking from personal experience).
Terence Parsons’ Articulating Medieval Logic occupies one of these intermediate positions, but very close to the second extreme; indeed, it represents the daring attempt to combine the author’s expertise in natural language semantics, linguistics, and modern philosophy with his interest in medieval logical theories (which arose in particular from his decade-long collaboration with Calvin Normore, to whom the book is dedicated). For scholars of Latin medieval logic, the fact that such a distinguished expert in contemporary philosophy and linguistics became interested in these medieval theories only confirms what we’ve known all along: medieval logical theories have intrinsic systematic interest; they are not only curious museum pieces.
Despite not being the first to employ modern logical techniques to analyze medieval theories, Parsons' approach is quite unique (one might even say idiosyncratic). It seems fair to say that nobody has ever before attempted to achieve what he wants to achieve with this book. A passage from the book’s Introduction is quite revealing with respect to its goals:
It is no news to anyone that the concept of consistency is a hotly debated topic in philosophy of logic and epistemology (as well as elsewhere). Indeed, a number of philosophers throughout history have defended the view that consistency, in particular in the form of the principle of non-contradiction (PNC), is the most fundamental principle governing human rationality – so much so that rational debate about PNC itself wouldn’t even be possible, as famously stated by David Lewis. It is also the presumed privileged status of consistency that seems to motivate the philosophical obsession with paradoxes across time; to be caught entertaining inconsistent beliefs/concepts is really bad, so blocking the emergence of paradoxes is top-priority. Moreover, in classical as well as other logical systems, inconsistency entails triviality, and that of course amounts to complete disaster.
Since the advent of dialetheism, and in particular under the powerful assaults of karateka Graham Priest, PNC has been under pressure. Priest is right to point out that there are very few arguments in favor of the principle of non-contradiction in the history of philosophy, and many of them are in fact rather unconvincing. According to him, this holds in particular of Aristotle’s elenctic argument in Metaphysics gamma. (I agree with him that the argument there does not go through, but we disagree on its exact structure. At any rate, it is worth noticing that, unlike David Lewis, Aristotle did think it was possible to debate with the opponent of PNC about PNC itself.) But despite the best efforts of dialetheists, the principle of non-contradiction and consistency are still widely viewed as cornerstones of the very concept of rationality.
However, in the spirit of my genealogical approach to philosophical issues, I believe that an important question to be asked is: What’s the big deal with consistency in the first place? What does it do for us? Why do we want consistency so badly to start with? When and why did we start thinking that consistency was a good norm to be had for rational discourse? And this of course takes me back to the Greeks, and in particular the Greeks before Aristotle.
Last week I was ‘touring’ in Scotland, first in St. Andrews for a workshop on medieval logic and metaphysics, and then in Edinburgh for a workshop on philosophical methodologies, organized by the Edinburgh Women in Philosophy Group. In the latter, I presented a paper entitled ‘Virtuous adversariality as a model for philosophical inquiry’, which grew out of a number of blog posts on the topic I’ve been writing in the recent past (here, here and here). Quoting from the abstract:
In my talk, I will develop a model for philosophical inquiry that I call 'virtuous adversariality', which is meant to be a response to critics from both sides [those who criticize and those who endorse adversariality in philosophy]. Its key feature is the idea that a certain form of adversariality, more specifically disagreement and debate, is indeed at the heart of philosophy, but that philosophical inquiry also has a strong cooperative, virtuous component which regulates and constrains the adversarial component. The main inspiration for this model comes from ancient Greek dialectic.
And so I gave my talk, and somewhat against the spirit of it, everybody in the audience seemed to agree with pretty much everything I had said – where are these opponents when you need them? But one person, Amia Srinivasan (Oxford), raised what is perhaps the most serious objection to any adversarial mode of inquiry, virtuous or not: it may well minimize our endorsement of false beliefs, but it does so at the risk of also minimizing our endorsement of true beliefs.
There’s a discussion going on over at Leiter about the results of his latest poll: which modern philosopher had the “most pernicious influence” on philosophy? Heidegger was the strong #1, both in terms of the number of people who hated him, and the intensity of their hatred. This doesn’t seem that surprising, given that Leiter’s readers, um, lean analytic, and given that Leiter took the Derrida option off the table.
Much more interesting, it seems to me, is the historical skew of the results. Most of the figures in the top 20 are 20th-century philosophers, and all but three (Descartes, Berkeley, and Kant) are 19th- or 20th-century (and it wouldn’t be conceptually wrong to put Kant in with the 19c). Does this reflect poor historical training? Do influential but controversial positions get absorbed into the ‘mainstream’?
“Rehearse this thought every day, that you may be able to depart from life contentedly; for many men clutch and cling to life, in the same way that those who are carried down a rushing stream clutch and cling to briars and sharp rocks.” -- Seneca, Letter 4
“A free man thinks of death least of all things; and his wisdom is a meditation not of death but of life.” -- Spinoza, Ethics 4P67
“One would require a position outside of life, and yet have to know it as well as one, as many, as all who have lived it, in order to be permitted even to touch the problem of the value of life; reasons enough to comprehend that this problem is for us an unapproachable problem.” --Nietzsche, Twilight of the Idols
It’s been more than ten years, but the memory is very much alive of the night my stepmother called to tell me that my father had died, suddenly, while on a run the day before he was to race in the L.A. marathon. Of the many overwhelming thoughts and emotions that came upon me that night, one of the first was the realization that everything had suddenly and irreversibly changed; things, I realized, will no longer be the same. This thought was much more than an intellectual grasp and insight; it felt much more real than that. Within 24 hours I was on a plane and back home in Laguna Beach. Walking in town that beautiful March night I couldn’t help but think of how, despite the fact that my father was no longer walking the streets of Laguna, the moon and stars that lit up the night sky were the same as the night before and will continue to be the same long after I succumb to the same fate as my father, the night sky being implacable and unaffected by the changes that affect our lives. It is no wonder the Ancients referred to the night sky as the heavenly sphere, the eternal realm distinct from the earthly sphere of changing human affairs.
I know that my thoughts and feelings regarding my father’s death are not unusual – it is from what I can tell a very common reaction to the loss of a significant person in one’s life. My reaction is also probably not unique to sudden deaths either. I had a similar reaction to my stepfather’s death from colon cancer. Although we knew his death was coming, the actual event of his death left me with a similar feeling of the transformative nature of what had happened. But there is something about sudden deaths that accentuates, or brings to an extreme, an important truth about our relation to death.
It is this truth about our relation to death that motivates, I would argue, the claims made in the quotes that lead off this post.
I am currently supervising a student writing a paper on Wittgenstein’s notion of therapy as a metaphilosophical concept. The paper relies centrally on a very useful distinction discussed in N. Rescher’s 1985 book The Strife of Systems (though I do not know whether it was introduced there for the first time), namely the distinction between prescriptive vs. descriptive metaphilosophy (the topic of chap. 14 of the book).
The descriptive issue of how philosophy has been done is one of factual inquiry largely to be handled in terms of the history of the field. But the normative issue of how philosophy should be done – of significant questions, adequate solutions, and good arguments – is something very different. (Rescher 1985, 261)
Rescher goes on to argue that descriptive metaphilosophy is not part of philosophy at all; it is a branch of factual inquiry, namely the history of philosophy and perhaps its sociology. Prescriptive metaphilosophy, by contrast, is real philosophy: methodological claims on how philosophy should be done are themselves philosophical claims. (Full disclosure: I haven’t read the whole chapter, only what Google Books allows me to see…) Rescher’s position as described here seems to be quite widespread, encapsulating the ‘disdain’ with which not only descriptive metaphilosophy, but also the history of philosophy in general, is often viewed by ‘real philosophers’. And yet, this position seems to me to be fundamentally wrong (and this is also the claim that my student is defending in his paper).
(Notice that to discuss the status of descriptive metaphilosophy as philosophy, we need to go meta-metaphilosophical! It’s turtles all the way up, or down, depending on how you look at it.)
With Robert Brandom (and for recognizably Hegelian reasons) I think that Whig histories are necessary. I also agree with conservative critics that American English departments damaged their own enrollments when the 1980s attacks on the canon led to too sweeping curricular changes. In every field, it's very important for students to master a Whig history that allows them to critically engage with contemporary work and that gives them an analogical jumping off point to apply their knowledge elsewhere. And students know this.
I also agree about 90% with Brandom on how this Whig history should be put together for philosophy. A philosopher must understand Kant, how Kant led to Hegel, how (and hopefully why with respect to the 19th century) Hegel was finally suppressed in the "back to Kant" movement, how phenomenology and logical positivism pushed the neo-Kantian moment to its breaking point, and how contemporary philosophy is a reaction to the agonies and ecstasies of positivism and phenomenology.
In my regular visits to Munich as an external member of the MCMP, a frequent item on my program is meeting with Peter Adamson, of ‘History of Philosophy without any Gaps’ fame, to talk about, well, the history of philosophy (there are still gaps to be filled!). So last week, after another lovely 2-hour session that felt like 10 minutes, Peter told me about a chapter of Julian Barnes’ A History of the World in 10 ½ Chapters, where everyone goes to heaven and gets to do whatever they want for however long they want. After some years of pleasurable life, almost everyone then gives up and wants to die ‘for real’, but a particular group of people is remarkably resilient: the philosophers, who are happy to go on discussing with each other for decades and decades. They are the ones who last the longest in heaven. (I haven’t read the book yet, but coincidentally I was reading another one of Barnes’ books.)
Coincidence or not, a day later I came across an article by Nigel Warburton, of ‘Philosophy Bites’ fame, on how philosophy is above all about conversation. (Those podcasters like their talking alright.) The article points out that, while the image of the philosopher as the lone thinker, associated with Descartes, Boethius, and Wittgenstein, is still influential, it is simply a very partial, if not entirely wrong, picture of philosophical practice. Warburton relies on John Stuart Mill to emphasize the importance of conversation and dissent for philosophical inquiry:
Formal/mathematical philosophy is a well-established approach within philosophical inquiry, having its friends as well as its foes. Now, even though I am very much a formal-approaches-enthusiast, I believe that fundamental methodological questions tend not to receive as much attention as they deserve within this tradition. In particular, a key question which is unfortunately not asked often enough is: what counts as a ‘good’ formalization? How do we know that a given proposed formalization is adequate, so that the insights provided by it are indeed insights about the target phenomenon in question? In recent years, the question of what counts as adequate formalization seems to be for the most part a ‘Swiss obsession’, with the thought-provoking work of Georg Brun, and Michael Baumgartner & Timm Lampert. But even these authors seem to me to restrict the question to a limited notion of formalization, as translation of pieces of natural language into some formalism. (I argued in chapter 3 of my book Formal Languages in Logic that this is not the best way to think about formalization.)
However, some of the pioneers in formal/mathematical approaches to philosophical questions did pay at least some attention to the issue of what counts as an adequate formalization. In this post, I want to discuss how Tarski and Carnap approached the issue, hoping to convince more ‘formal philosophers’ to go back to these questions. (I also find the ‘squeezing argument’ framework developed by Kreisel particularly illuminating, but will leave it out for now, for reasons of space.)
Francesco Del Punta, a well-known and much admired scholar of medieval philosophy, sadly passed away yesterday. Upon hearing the news from his former student Luca Gili, I asked Luca to write an obituary for NewAPPS, and here it is.
After a long and dreadful illness, Francesco Del Punta (1941-2013) passed away yesterday evening. He was a great scholar and a great man, and he will be much missed. Del Punta is well known especially for his edition of Ockham’s commentary on Aristotle’s Sophistici Elenchi (St. Bonaventure, New York, 1978), and for his edition of Paul of Venice’s treatises De veritate et falsitate propositionis and De significato propositionis (Oxford, 1978).
This week, we’ve had a new round of discussions on the ‘combative’ nature of philosophy as currently practiced and its implications, prompted by a remark in a column by Jonathan Wolff on the scarcity of women in the profession. (Recall the last wave of such discussions, then prompted by Rebecca Kukla’s 3AM interview.) Brian Leiter retorted that there’s nothing wrong with combativeness in philosophy (“Insofar as truth is at stake, combat seems the right posture!”). Chris Bertram in turn remarked that this is the case only if “there’s some good reason to believe that combat leads to truth more reliably than some alternative, more co-operative approach”, which he (apparently) does not think there is. Our own John Protevi pointed out the possible effects of individualized grading for the establishment of a competitive culture.
As I argued in a previous post on the topic some months ago, I am of the opinion that adversariality can have a productive, positive effect on philosophical inquiry, but not just any adversariality/combativeness will do. (In that post, I placed the discussion against the background of gender considerations; I will not do so here, even though there are obvious gender-related implications to be explored.) What I defend is a form of adversariality that combines opposition with a form of cooperation.
On the topic of useful teaching material available online (following up on Roberta's post on material for logic courses), I recently came across the series ‘60-Second Adventures in Thought’ produced by the Open University. These are short (60 seconds!) animated videos explaining some of the most intriguing philosophical puzzles:
“No variation of things arises from blind metaphysical necessity, which must be the same always and everywhere.” [A cæca necessitate metaphysica, quæ utique eadem est semper & ubique, nulla oritur rerum variatio.]--Isaac Newton, General Scholium (1713), Principia.
This week we're celebrating three hundred years since Newton published the General Scholium, attached to the second edition of the Principia. The passage above was only inserted in the final (1726), third edition. The argument of the sentence seems to be something like this:
A1: (Metaphysical) Necessity <--> Homogeneity
A2: Homogeneity and Variety are disjunctive alternatives (suppressed premise)
P: We observe variety
Therefore, no metaphysical necessity
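Since the surrounding posts worry about what counts as a good formalization, it may be worth noting that the reconstruction above checks out as a valid propositional argument. Here is a minimal sketch in Lean; the names `Nec`, `Hom`, and `Var` are my own labels (not Newton's), and A2 is rendered as the claim that homogeneity and variety exclude one another, which is one natural reading of the suppressed premise:

```lean
-- Hedged propositional sketch of the reconstructed argument.
-- Nec = metaphysical necessity, Hom = homogeneity, Var = observed variety.
variable (Nec Hom Var : Prop)

example
    (A1 : Nec ↔ Hom)       -- necessity holds iff nature is homogeneous
    (A2 : ¬(Hom ∧ Var))    -- homogeneity and variety are exclusive alternatives
    (P  : Var)             -- we observe variety
    : ¬Nec :=
  fun hNec => A2 ⟨A1.mp hNec, P⟩
```

On this reading the argument is straightforwardly valid; the philosophical weight falls entirely on whether A1 and the suppressed A2 are acceptable.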
From textual context it is very clear that Newton wants to defend the legitimacy of final causes. In particular, in circles around Newton it was common to refer to Spinoza as having defended the system of “Blind and Unintelligent Necessity” (S. Clarke (1705) Demonstration, 12.102); or “A Blind and Eternal Fatality” (S. Clarke, Demonstration, Intro.8);
or "blind mechanical necessity” (H. More, Confutation of Spinoza, 91). In fact, More refers to Spinoza as that “completely blind and stupid philosophaster” (H. More, 91; recall!) So, because Spinoza denies final causes, the necessity he defends is unguided and undirected, that is, "blind." In the quoted passage above, Newton is thus offering an empirical argument against a metaphysical thesis: observed variety is not compatible with Spinoza's proposed system of nature. Moreover, the General Scholium argues more generally that we do not just observe variety, we observe quite determinate and peculiar variety (of the sort that leads Newton to offer his famous argument to a designer).
A naturall foole that could never learn by heart the order of numerall words, as One, Two, and Three, may observe every stroak of the Clock, and nod to it, or say one, one, one; but can never know what houre it strikes...Nor is it possible without Letters for any man to become either excellently wise, or (unless his memory be hurt by disease, or ill constitution of organs) excellently foolish. For words are wise mens counters, they do but reckon by them: but they are the mony of fooles...one man calleth Wisdome, what another calleth Feare; and one Cruelty, what another Justice; one Prodigality, what another Magnanimity...such names can never be true grounds of any ratiocination. No more can Metaphors, and Tropes of speech: but these are less dangerous, because they profess their inconstancy; which the other do not.--Leviathan, 1.4
Night nursed not him in whose dark mind
The clambering wings of birds of black revolved,
Making harsh torment of the solitude.
The walker in the moonlight walked alone,
And in his heart his disbelief lay cold.--Wallace Stevens.
Despite the helpful reminder of 3AM Magazine, we at NewAPPS failed to celebrate the ninetieth birthday of Wallace Stevens' Harmonium. Seneca's mysterious, terse (under 325 words) tenth Letter brought me back to Stevens' early poetry. Stevens talks of the (nightly) "torment of solitude" faced by the poetic mind (who happens to be a religious skeptic). Yet Seneca seems to suggest that some of the very best people should seek solitude; in particular, they should live with their conscience [conscientia] (recall the eighth letter). But presumably Stevens's poetic disbeliever is expressing his conscience faithfully.
"I have no great faith in political arithmetick, and I mean not to warrant the exactness of either of these computations." Adam Smith (1776) Wealth of Nations.
While Ancient writers (Pliny) certainly noted the existence of fossils, the meaning of the existence of fossils was explosive during the eighteenth century. In his posthumously published Discourse on Earthquakes (1705), the secretary of the Royal Society, Robert Hooke, had, while surveying fossil evidence, suggested that "There have been many other Species of Creatures in former Ages, of which we can find none at present; and that 'tis not unlikely also but that there may be divers new kinds now, which have not been from the beginning." (here)
As it happens, Adam Smith's two best friends in old age, James Hutton and Joseph Black, the editors of his posthumous (1795) work, Essays on Philosophical Subjects (EPS), understood what was at stake. For in 1785 Hutton gave a public lecture, “Concerning the System of the Earth, Its Duration, and Stability,” at the University of Edinburgh. Due to Hutton's illness, Black gave the lecture on Hutton’s behalf. In the lecture Hutton used geological and fossil evidence to argue that the Earth was almost certainly older than 6000 years. We do not know for sure if Smith attended the lecture, although he was in town. The argument was elaborated in far greater detail in Hutton's (1788) Theory of the Earth, which made him an international celebrity. The significance of this episode to the history of geology and Darwinism is much studied.
But what does this have to do with the history of economics?
Smith's closeness to Hutton may provide additional clues for one of the enduring mysteries of the history of economics: why did Adam Smith forsake the deployment of a mathematical model in the Wealth of Nations (1776)?
All nine of the Schock winners thus far were or are eminent philosophers, and most of us can only aspire to emulate the quality of their work as best we can. Even if one allows that "The Schock" only seems to go to male, analytical philosophers, each winner is an important and interesting philosopher, deserving of significant honor. Having said that, the Schock Prize judges had four or five chances to honor David Lewis, and failed to do so. (Lewis died in the Fall of 2001.) Lewis is arguably the most significant and influential (analytical) philosopher of the last quarter of the twentieth century. (Perhaps Deleuze is the only contemporary who will match his enduring significance, but he and Foucault died before the Schock got up and running.) So, while one can excuse the members of the Royal Swedish Academy of Sciences (RSAS) for playing it safe and not awarding the prize to, say, Derrida (and thus avoiding the predictable outcry), not giving it to Lewis means they failed to grasp the nature of analytical philosophy in their own time. That, in addition, they passed on Gadamer, Ricœur, Goodman, and, thus far, Habermas suggests that the Schock has a long way to go before it can establish itself as the ultimate arbiter of general philosophical excellence.