[X-posted at Prosblogion] In the epistemology of religion, authors like Swinburne and Alston have argued influentially that mystical experience of God provides prima facie justification for some beliefs we hold about God on the basis of such experiences, e.g., that he loves us, is sovereign, etc. Belief in God, so they argue, is analogous to sense perception. If I have a mystical experience that God loves me, then, prima facie, I am justified in believing that God loves me.
Alston relies critically on William James' Varieties of Religious Experience (1902). This seminal but now dated psychological study draws on self-reports by mystics to characterize mystical experience. The mystical experiences James (and others) describe are unexpected and unbidden; they immediately present something (God) to one's experience, i.e., they provide a direct, unmediated awareness of God. More recent empirical work on the phenomenology of religious experience, such as that conducted by Tanya Luhrmann and other anthropologists, suggests that ordinary sense experience is a poor and misleading analogy for religious experience.
Even most Kuhn-haters are committed to the popular idea that an important measure of scientific truth is consensus among the experts. The downside of this commitment is on display in Roberta Millstein's important piece on "the removal of gray wolves from endangered species status in the U.S." One could debate the merits of the proposal, of course, but a scary aspect of her account is the manufacture of scientific consensus by a government agency. I quote the relevant bits first, and comment below the fold:
The problem is, as the authors of the proposal acknowledge, that there is a lack of consensus among scientists on what species are, what subspecies are, and how many species and subspecies of wolves there are. Nonetheless, they declare that one paper, Chambers et al. (2012), "is the only peer-reviewed synthesis of its kind conducted for North American wolves and summarizes and synthesizes the best available scientific information on the issue."...Chambers et al. (2012) appears in the journal North American Fauna, a publication of the FWS [Fish and Wildlife Service] itself; it is unclear why the paper wasn't sent to a more recognized peer-reviewed journal in the field, such as Conservation Biology. According to the website for North American Fauna, Chambers et al. (2012) is the only publication since 1991; it appears as though the journal was reborn specifically to publish the wolf study only to languish again afterward....In short, the FWS has taken one of the most well-studied animals, about which there is great controversy...and benighted one paper, authored by itself (perhaps by some of the same people who wrote the proposal), as "the best available scientific information"....What is the urgency – so urgent that the FWS must hastily designate itself as the source of the best available science – to make policy when the science is so unsettled?
On June 13, 2013, the U.S. Fish and Wildlife Service (FWS) proposed removing the gray wolf, Canis lupus, from the List of Endangered and Threatened Wildlife. Under this proposal, the species of gray wolves – which was at one time protected across the 48 contiguous U.S. states – would no longer be so protected. Only one subspecies of gray wolf would be granted protection: the Mexican wolf (Canis lupus baileyi). This would, in effect, protect only 75 of the wolves in the U.S. (the current size of the wild population of Mexican wolves). The FWS is soliciting comments on this proposed ruling here, where you can also find the text of the proposal. If you find this proposed ruling as problematic as I do, I would urge you to submit a comment.
Philosophers of biology such as myself, many of whom are well-versed in the challenges of defining the terms "species," "subspecies," and "population," not to mention skilled in evaluating arguments, are particularly well placed to see the flaws in the proposed ruling. I was first alerted to potential problems in the proposal upon reading this editorial, which blamed the ruling on pressure from "a loose coalition of hunters' groups, outfitters, and ranchers." While I don't have any evidence for this assertion, after reading through the ill-defended proposal one has to wonder, "Why this? Why now? And why has the fact-finding process of the Endangered Species Act been corrupted with a 'government-manufactured scientific consensus'?" (Thanks to Eric Schliesser for the felicitous phrase.)
"Horn 1: no predicates carve at the joints. Here only two attractive options seem open. One is Goodmania: all talk of objecive joints in reality is simply mistaken." T. Sider (2011) Writing The Book of the World, 186.
"Let it be clear that the question here is not of the possible worlds that many of my contemporaries, especially those near Disneyland, are busy making and manipulating." N. Goodman, Ways of Worldmaking, 2.
One nugget in the Healy citation data is the near-absence of Nelson Goodman. The only work by Goodman found in the top 500 is The Structure of Appearance (1951), with 11 citations in the H4 (between 1993 and 2013). There is no mention of Goodman's Fact, Fiction, and Forecast (over 3,000 citations according to Google Scholar) or of his foundational work in mereology; there is no harm in that--even Harvard professors can be forgotten (recall Jon on Collingwood paradoxicality; it seems Goodman's reputation went into decline around 1996). A more serious matter is that Goodman's Languages of Art (published in 1976), which has over 4,000 citations, is invisible in the H4. So, this tells you at once that nominalism is discussed and aesthetics is seriously neglected in our supposedly generalist journals. This should come as no surprise (recall this post earlier in the week). Given the significance of the widely deployed aesthetic principles embedded in the theoretical virtues of simplicity, elegance, and harmony, this absence is something of a scandal (over and beyond the intrinsic value of aesthetics). (By the way: in the Stanford Encyclopedia, Goodman's aesthetics is the only entry primarily devoted to Goodman--it even contains a biographical sketch, as if the editors doubt there ever will be a systematic Goodman entry.)
The absence of Goodman's (1978) Ways of Worldmaking (WOW) is in some sense more important. (For the record: I never met Goodman.) Let me explain.
For over sixty years it has been widely accepted that twinning, the process that results in identical or conjoined twins, normally occurs about two weeks after fertilization, or conception. This assumption has been used as a premise in what philosophers call the "twinning argument." The idea behind the twinning argument is that since one thing cannot be identical to two things, a twin cannot be identical to a zygote, or fertilized egg. Arguing from that to the general conclusion that none of us is identical to a zygote is more complicated and involves the further premise that it is not fully determined at the time of fertilization whether a zygote will undergo twinning.
Brian Weatherson has done some nice data crunching of his own inspired by the Healy data (eliciting an important comment by Jennifer Nagel), especially here. In particular, he looked at citations in a broader range of journals than the H4. Now Brian's crucial point is: "Lesson 1: It’s easy to be wrong about what people are talking about, if you try to generalise from personal experience." This is important to keep in mind for all of us who need to make decisions about the future of the field (i.e., future job candidates and those who supervise them). Brian is a privileged observer, having helped shape cutting-edge M&E as a presence at a variety of top departments. (I made a similar observation last week in light of converging evidence.) His data also show that the recent debate over philosophical intuitions (see Catarina [and Mohan and here]) is generating a lot of discussion outside of the H4. This is part of a larger phenomenon: while H4 articles are cutting edge (by stipulation), their citations are a lagging indicator of current trends (see also here and here). This may be due, in part, to their relatively slow refereeing process (thus building in delay from submission to acceptance), or it may be due to in-crowd refereeing (so that, in effect, citations of high-status males are privileged).
Here I focus on Brian's surprise about "336 citations for a paper about mechanisms!" This is a reference to "Thinking about Mechanisms" by P. Machamer, L. Darden, and C.F. Craver in Philosophy of Science (2000) [here for direct access]. (Hereafter MDC.) There is no evidence that Brian thinks MDC does not merit attention. Rather, Brian's lack of awareness of the importance of this paper is indicative of a genuine split between (a) post-Lewisian folk who take the metaphysics and epistemology of science, especially inspired by textbook physics, seriously, and (b) those of us who focus more on scientific practice, past and present, in our philosophy of science, epistemology, and even metaphysics. Moreover, Brian's tendency to suggest that the high citation rate for MDC must be due to a non-philosophical audience suggests (with N=1) that folk in (a) tend to underestimate how large the community of (b) is within philosophy.
Williamson's final paragraph begins: "In making these comments, it is hard not to feel like the headmaster of a minor public school at speech day, telling everyone to pull their socks up after a particularly bad term". I cannot speak for the participants at the conference, but my own reaction to being compared to a wayward British schoolboy was: So who died and made you headmaster?--Tim Maudlin
I much prefer searching self-criticism to kicking the outsider. So I was about to start really liking Williamson. But it turns out Williamson is not above kicking down. For the very same passage continues: "within the analytic tradition many philosophers use arguments only to the extent that most ‘continental’ philosophers do: some kind of inferential movement is observable, but it lacks the clear articulation into premises and conclusion and the explicitness about the form of the inference that much good philosophy achieves." (11) Okay, so the point is: most analytical philosophers think they are superior in philosophical virtue to continental philosophers, but they are as bad as the legitimately despised continental philosophers. Yes, in context Williamson says he is deploying "crude stereotypes," but he is not disowning the stereotype about continental philosophy! (Cf. "Much even of analytic philosophy moves too fast in its haste to reach the sexy bits." (15; emphasis added--ES)) The main point of Williamson's piece is to double down on the stereotypical virtues of analytical philosophy, "precision" and "rigour" (15), and to do so in opposition to the despised 'other.' In fact, the un-argued hostility toward Kant, which I noticed yesterday, is a trope in Williamson: "if we aim to be rigorous, we cannot expect to sound like Heraclitus, or even Kant: we have to sacrifice the stereotype of depth." (15; logically that allows Kant to be rigorous, of course, but if you sound like Kant, etc...) [Doubling down is not the whole story, but about that more tomorrow.]
The idea that there is something like an efficient market in scientific ideas (EMISI), supporting a ruling 'paradigm,' is very dangerous in the policy sciences. Even if we assume that scientists are individually pure truth-seekers, imperfections in scientific markets can produce non-epistemic (and epistemic) externalities (recall here, including criticism of a famous paper by Aumann). EMISI provides cover for 'The Everybody Did It' (TEDI) Syndrome (recall here). With Merel Lefevere, I have been exploring in what circumstances the presence of TEDI Syndrome is indicative of collective negligence (or negative externalities). One possible consequence of our approach is that those scientists/institutions that interface with policy should seek out critics and critical alternatives to the existing paradigm. Jon Faust, an economist, sometimes acts as such an in-house critic at the United States Federal Reserve (the Fed) and the Riksbank. Two of his relatively non-technical papers (here and here) prompted this post.
Central banks rely, in part, on models developed by academic economists to set monetary policy. Yet Faust notes two problems in the way the intellectual supply chain works: (i) there is almost no venue for "high-level conversation" about "academic work and its relation to actual practice" (53); (ii) state-of-the-art models are often applied without full knowledge of all their possible consequences in the real world, because these models "have substantial areas of omission and coarse approximation." (55) In light of (i) and (ii), Faust's aim (iii) is to help central bankers and the modellers develop "a formal literature on best methods and practices for using materially flawed models in practical policymaking" (55), or "how to make the most responsible use in policymaking of what we now know." (60) My first reaction was, 'it is about time'; my second, more generous response was warmth in my philosophical heart that Faust is engaging in philosophy of scientific methodology and non-ideal regime/institution construction. His main idea is to adapt a kind of policy protocol from a literature that "goes under names like “human relevance of animal studies” and “interspecies extrapolation”" (57) in the practice(s) of toxicology.
If you find it useful, please feel free to share with students and colleagues. And if you see things that are missing or mistaken, please let me know. I am happy to make additions and corrections. Apologies in advance if I have overlooked your page of publicly available HPB articles or other important internet source for HPBers.
But mainly, having put in the effort to maintain it, I'd love to know that others besides myself are using it!
DeLanda provides a doubled difference, a differentiation and differenciation, of Deleuze. While DeLanda certainly provides a straightforward explanation of the process Deleuze calls counter-actualization (moving from the actual to the virtual), he does so not by an interpretation of Deleuze’s full philosophical output, but by a reconstruction of the ontology and epistemology of Difference and Repetition and The Logic of Sense: ‘This line of argumentation ... is, in fact, not Deleuze’s own, although it follows directly from his ontological analysis’ (39). As DeLanda puts it: Deleuze’s world rather than his words. But this folds Deleuze back on himself, giving us a virtualization of Deleuze, moving from the actual productions of Deleuze (his books) to the differentiated structures of his production process (the network of his concepts) in order to produce a new, divergent, differenciation (DeLanda’s book). By virtue of being a book on Deleuze, of course, this product has itself the all-important fold of explaining the structures of all processes (or, more precisely, explaining that all processes are structured, and that the structure of the realm of those structures, the virtual, can itself be explicated).
And here's an outline of ISVP I did for a course I taught back then.
1. Leibnizian substance: Something is a substance if and only if it evolves by the fundamental laws.
2. Russellian laws: The cosmos is the one and only thing that evolves by the fundamental laws.
3. Spinozan monism: The cosmos is the one and only substance. (from 1 and 2)
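The inference is transparently valid; for readers who like it spelled out, here is one first-order rendering. (The formalization and predicate letters are my own, not Schaffer's: read S(x) as "x is a substance," E(x) as "x evolves by the fundamental laws," and c as the cosmos.)

```latex
% A hedged first-order sketch of Schaffer's argument; notation is mine, not his.
% S(x): x is a substance;  E(x): x evolves by the fundamental laws;  c: the cosmos.
\begin{align*}
&\text{(1) Leibnizian substance:} && \forall x\,\bigl(S(x) \leftrightarrow E(x)\bigr)\\
&\text{(2) Russellian laws:}      && \forall x\,\bigl(E(x) \leftrightarrow x = c\bigr)\\
&\text{(3) Spinozan monism:}      && \forall x\,\bigl(S(x) \leftrightarrow x = c\bigr)
   \quad \text{from (1) and (2), by transitivity of } \leftrightarrow
\end{align*}
```

So rendered, the logic is uncontroversial; the philosophical work all falls on whether premises (1) and (2) are acceptable characterizations of substance and law.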
As Schaffer is well aware, there is lots of irony in all of this. (At NewAPPS we have discussed Russell's reservations about Spinoza several times here, here, and also Jeff. [Recall also Russell's debts to Boole on Clarke vs Spinoza, and Stebbing on Spinoza.]) Now, my objection to this argument is inspired by my reading of Spinoza's so-called "Letter on the Infinite," but what follows is not meant to be a historical argument (or a gotcha, 'you got the history wrong' moment). Recall that I read Spinoza as claiming that characterizing and grasping substance as such does not involve our ordinary scientific 'utensils' (e.g., measures, mathematics, laws of nature), but rather concepts like essence and eternity. Mathematical physics can only give a partial view of substance as such. Now one reason for this is that the mathematical physics of Spinoza's day treats some part of nature as a closed system (governed by its own 'conservation' rules/laws). Moreover, Spinoza would deny that, fundamentally, the universe evolves. For applying temporal concepts to the universe is, however useful it may be, always a less than fully adequate conceptualization of the universe.
In an earlier post, I began to write about John Hawthorne and Daniel Nolan’s analysis of teleological causation. (Eric has written about related topics too.) My aim there was primarily to summarize H&N’s analysis. Here I have some critical thoughts—I have only been thinking about this for a couple of weeks, so my opinions are far from final. H&N ask two questions: Is teleology coherent? Is teleology consistent with contemporary physics (can it be added on)? In my opinion, their analysis demonstrates coherence. (That's not a very high bar, but they clear it with ease.) I am less clear about consistency with physics.
Let’s start by considering the motion of a single particle. (I’ll consider ensembles of particles in a further post.) H&N distinguish three types of process (all more fully described in my earlier post): mechanical (for simplicity’s sake, Newtonian), retrotemporally mechanical (like Newton’s, but moving backward in time), and teleological or goal-directed. Since Newtonian trajectories are reversible—the temporal reversal of a trajectory is possible if the trajectory is possible—the paths of single particles do not distinguish between the first two options. If they are consistent with Newtonian mechanics, they are also consistent with time-reversed Newtonian mechanics. (See Eric Winsberg's comments on my earlier post.)

Now, Hawthorne and Nolan open the door to two ways of distinguishing goal-directed or teleological causation from both mechanical and retro-mechanical causation at the level of single particles.
Evolutionary naturalism provides an account of our capacities that undermines their reliability, and in doing so undermines itself...I agree with Alvin Plantinga that...the application of evolutionary theory to the understanding of our own cognitive capacities should undermine, though it need not completely destroy, our confidence in them. Mechanisms of belief formation that have selective advantage in the everyday struggle for existence do not warrant our confidence in the construction of theoretical accounts of the world as a whole. I think the evolutionary hypothesis would imply that though our cognitive capacities could be reliable, we do not have the kind of reason to rely on them that we ordinarily take ourselves to have in using them directly--as we do in science. Thomas Nagel, Mind and Cosmos, 27-28 (emphasis in original)
A non-trivial (albeit not the most fundamental) feature of Nagel's book (recall my here, here, here; see Feser's response to me and also Mohan's posts: here, here, here and here) is his reliance on Plantinga's so-called evolutionary argument against naturalism (hereafter EAAN; see also pp. 74-78). Let's leave aside the fact that Nagel pretends in his book that this (evolving) EAAN argument has not been subject to significant criticism. (It must be convenient to think that one is obliged to engage only with one's referee [Sober, although even his criticism of EAAN is ignored], one's colleague [Street], one's cheerleader [Plantinga], and one's deus ex machina [Hawthorne & Nolan].) Here I explore a response to this style of argument that is overlooked by Nagel and, I think, not explored in the literature (but I would love to learn otherwise--it's not my field). So, let's grant -- for the sake of argument -- the claim that "Mechanisms of belief formation that have selective advantage in the everyday struggle for existence do not warrant our confidence in the construction of theoretical accounts of the world as a whole." What follows from this?
My quick and dirty answer is: nothing. For the crucial parts of science really do not rely on such mechanisms of belief formation. Much of scientific reasoning is or can be performed by machines; as I have argued before, ordinary cognition, perception, and locution do not really matter epistemically in the sciences.
Since an article by Macfie (1971), scholars have recognized that Smith uses the phrase “invisible hand” three times in his corpus: once in the Wealth of Nations, and once in The Theory of Moral Sentiments. Recently the late, great Warren Samuels bequeathed us a lifetime of scholarship on the enormous variety of interpretations that Smith’s “invisible hand” has generated. In this post I focus on the "third" use, which occurs in Smith's "History of Astronomy" -- one of the founding documents of the philosophy of science (and simultaneously of the history of the philosophy of science) -- published posthumously in 1795.
Hence the origin of Polytheism, and of that vulgar superstition which ascribes all the irregular events of nature to the favour or displeasure of intelligent, though invisible beings, to gods, daemons, witches, genii, fairies. For it may be observed, that in all Polytheistic religions, among savages, as well as in the early ages of Heathen antiquity, it is the irregular events of nature only that are ascribed to the agency and power of their gods. Fire burns, and water refreshes; heavy bodies descend, and lighter substances fly upwards, by the necessity of their own nature; nor was the invisible hand of Jupiter ever apprehended to be employed in those matters. But thunder and lightning, storms and sunshine, those more irregular events, were ascribed to his favour, or his anger. Man, the only designing power with which they were acquainted, never acts but either to stop, or to alter the course, which natural events would take, if left to themselves. Those other intelligent beings, whom they imagined, but knew not, were naturally supposed to act in the same manner; not to employ themselves in supporting the ordinary course of things, which went on of its own accord, but to stop, to thwart, and to disturb it. And thus, in the first ages of the world, the lowest and most pusillanimous superstition supplied the place of philosophy.
At his blog Edward Feser has been responding to Thomas Nagel's critics (no, not me (yet)!). In response to Sober's review he concludes with the following sociological remark:
This, I think, is precisely what is going on -- the “presuppositions that Nagel is trying to transcend” run so deep in contemporary academic philosophy that it is difficult for most philosophers to get any critical distance on them. They lack, as Nietzsche might have said, the courage for an attack on their own convictions. And yet the evidence that there is something deeply wrong with the consensus is all around them even in “mainstream” academic philosophy -- the work of renegade naturalists like Nagel, Searle, Fodor, McGinn, et al.; dualists like Chalmers, Brie Gertler, Howard Robinson, John Foster, et al.; and the “new essentialist” metaphysicians and philosophers of science (Ellis, Martin, Heil, Mumford, et al.) and the analytical Thomists (Haldane, et al.). It’s psychologically easy (even if philosophically sleazy) to dismiss one or two of these as outliers who needn’t be taken seriously. But as their ranks slowly grow, it will be, and ought to be, harder both psychologically and philosophically to dismiss them.

Which is no doubt why the more ideological naturalists would very dearly like to strangle this growing challenge to the consensus while it is still in its crib -- hence the un-philosophical nastiness with which Nagel’s views have been greeted in some quarters. But Sober, to his credit, is not an ideologue, and is sober enough to acknowledge at least the possibility that Nagel is on to something.--Edward Feser.
Analytical philosophy has made great progress over the last century. But its original, necessary biases did some harm, too. In particular, detailed working knowledge of the history of philosophy and metaphysics was banished for several generations. While metaphysics is thriving again, we still lack (despite the brilliance of David Lewis' modular approach) complete systems of thought that can rival in depth and interlocking breadth the past masters (say, Suarez, Leibniz, etc.). The damage has also been more narrow. For example, one of the most obvious so-called ‘Kuhn Losses’ is our relative ignorance of the nature and implications of the Principle of Sufficient Reason (PSR). This is no surprise, because analytical philosophy was founded in the act of rejecting the PSR. Our forefathers’ attempt to balance between common sense and the truths of science meant -- as science and the PSR parted ways -- the willing submission to brute, ultimate facts (recall this post).
In Mind & Cosmos, Thomas Nagel happily embraces “a form of the principle of sufficient reason” (17) in support of his "common sense" (5, 7, etc.) and against the recent “orthodox scientific consensus.” (10; 5) Rather than accepting this "ideological consensus," (128) Nagel insists -- regularly using language reminiscent of the great Feyerabend -- that "almost everyone in our secular culture has been browbeaten into regarding the reductive research program as sacrosanct." (7) While Nagel insists that the champions of scientific enlightenment are bullies, he treats the "defenders of intelligent design" with "gratitude" (Plantinga returns the gratitude), even though Nagel clearly recognizes that once one embraces one's inner sensus divinitatis one is also compelled in one's judgments. (12)
A classic statement of the PSR is Spinoza's "For each thing there must be assigned a cause, or reason, both for its existence and for its nonexistence." (Ethics 1p11d2) That is to say, any PSR worth having imposes significant explanatory demands (especially of non-arbitrariness) on any philosophical system in which it is deployed. Below the fold I critically discuss Nagel's way of combining the PSR and his attempted revisionary science, but here I just register the marvelousness of Nagel's deployment of the PSR as an instrument in the service of common sense! (cf. 91-2) This is certainly an original move in the history of metaphysics--one that, in a single, magical stroke, overturns Lovejoy's long narrative.
When I was a child one of my favorite books was about a pair of identical twins who decided to switch clothes. They looked so much alike that their parents had had to dress one in blue and the other in green. The twin boys fooled their parents for a long, long time. An obedient 3-year-old, I was thrilled by their ingenuity and boldness.

While parents can usually tell the difference between their identical twins, grandparents, teachers, neighbors, and peers sometimes cannot. And for good reasons. Identical twins very often look almost exactly alike. No surprise there, if identical twins share all of their DNA.
Yet, it seems that one can reasonably ask why one should join a society like the PSA these days. It used to be that many people joined in order to get the journal, Philosophy of Science. But now most academics get access to the journal through their universities. Some old-fashioned types like me join societies whose goals they want to promote and whose communities they feel a part of; in addition to the PSA, I am a member of HSS, ISHPSSB, ISEE, and the APA.
However, if one is not moved on that score, why join? Why join the PSA, or any academic society, for that matter?
Daniel Kahneman’s Thinking, Fast and Slow is making quite a splash (the other day, I saw at Bristol airport that it is currently at the top of the bestseller list for non-fiction -- naturally, it still can’t compete with Fifty Shades of Grey). I haven’t read it yet, but people whose opinion I hold in high esteem tell me that it has been successful in striking the difficult balance between being accessible to a wider audience and scientifically accurate (for the most part at least) at the same time. The book summarizes research on cognitive and reasoning biases of the last decades, a research program in which Kahneman himself has been a major player. The conceptual cornerstone of the book is the (still) popular distinction between System 1 and System 2, the two systems which allegedly run in parallel underpinning all our cognitive processes, and which often conflict with each other.
Now, as I’ve stated a few times before (here for example), I am no fan of System 1/System 2 talk at all (not even of weaker versions, the so-called dual-process theories of cognition), even though I agree that the empirical findings on cognitive biases should be taken very seriously. (I also agree that there is something to the idea of debiasing as suppressing automatic processes.) So I was curious to see how Kahneman himself introduces the System 1/System 2 distinction, and took a quick look at the book (my husband was reading it during our holiday a few weeks ago, after having gotten it from me as a birthday present – that’s what you get for having a nerdy wife). The first thing that struck me is that, in footnote 20, he lists some of the pioneers of dual-system theories, including Jonathan Evans, Steve Sloman, and Keith Stanovich, and adds: “I borrow the terms System 1 and System 2 from early writings of Stanovich and West that greatly influenced my thinking” (he refers to their 2000 BBS article on individual differences in reasoning). But what is puzzling is that Stanovich himself now overtly rejects the conceptualization of the distinction in terms of systems, which unduly suggests reified entities, and now uses the process terminology instead (as does Jonathan Evans).
But perhaps most striking is what Kahneman says in the conclusion of the book:
In the late 1970s, Benjamin Libet showed that motor cortex activity preparing for an action occurs before the conscious act of willing that action. (Here is a nice demonstration of the experiment by Patrick Haggard.)
Libet's result has been replicated countless times (as above), and though it is perhaps rash to generalize too broadly, let's just say we have strong evidence for:
Conscious acts of "willing" an action occur after the brain activity that cause the action, and so
Conscious acts of willing do not cause action.
As a philosopher, which of the following conclusions can I legitimately draw?
"(A proximate mechanism is an immediate direct cause, while an ultimate explanation is the last in the long chain of factors leading up to that immediate cause. For example, the proximate cause of a marriage breakup may be a husband's discovery of his wife's extramarital affairs, but the ultimate explanation may be the husband's chronic insensitivity and the couple's basic incompatibility that drove the wife to affairs.) Physiologists and molecular biologists regularly fall into the trap of overlooking this distinction, which is fundamental to biology, history and human behavior. Physiology and molecular biology can do no more than identify proximate mechanisms; only evolutionary biology can provide ultimate causal explanations."--Jared Diamond Why is Sex Fun?, 1997.
Let's grant Diamond the coherence of the distinction between proximate mechanisms and ultimate explanations. Let's also grant it to him in the way he has articulated it, despite, perhaps, a lingering sense that being "last in the long chain of factors" does not quite capture the fundamentality of an ultimate explanation. I was struck by (i) Diamond's insistence (without evidence) that his fellow scientists "regularly" overlook the distinction, and by (ii) his further claim that "only" evolutionary biology can get at ultimate causal explanations. Diamond has a nice sense of the hierarchy within the intellectual division of labor. (If you plug in "metaphysics" for evolutionary biology and "mechanics" for molecular biology, you get a standard 18th-century picture embraced by Berkeley and Leibniz, I think.)
Here I am not interested in the hankering after even more fundamental than fundamental explanations (you know, God, the Principle of Sufficient Reason, Final Causes, etc.). Rather, this morning I was gripped by this thought: why think that in the real world there is anything over and above the proximate causes and mechanisms? Why isn't the whole idea of "ultimate" explanations just our chasing after patterns? Now, what would persuade me otherwise is if the evidence for the ultimate causes can (a) systematically avoid relying on the evidence for the proximate causes and (b) also be more robust, higher quality, etc. But given that it is so hard to do controlled experiments in the service of (b), (b) doesn't seem to be so easy to achieve. I am too ignorant about the details to have any strong opinion on (a). Anyway, I bet there are standard answers to my gripped questions.
The Philosophy of Biology Lab that I co-run with Jim Griesemer here at UC Davis is re-reading Wesley Salmon's Scientific Explanation and the Causal Structure of the World. I am reminded how, when doing such re-readings, one can find little nuggets of wisdom that may have been overlooked on the first read. Even in the Preface.
Although much modern work on scientific explanation has been rather formal and technical -- often treating various quasi-formal 'models' in great detail -- I shall dwell extensively on less formal considerations. There are two reasons for this emphasis. In the first place, I have been convinced for some time that many recent philosophical discussions of scientific explanation suffer from a lack of what Rudolf Carnap called "clarification of the explicandum." As Carnap has vividly shown, precise philosophical explications of important concepts can egregiously miss the mark if we do not have a sound prior informal grasp of the concept we are endeavoring to explicate.
It's my impression that this is a lesson that many have failed to learn; all too often I see formalism with little attempt to explain what the formalism is intended to represent (and not just in philosophy of science).
The lore we are told inspired by, say, Putnam (not a disinterested spectator) and more recently Huw Price, who thinks we delude ourselves, is roughly this: after the founders of analytical philosophy had successfully rid philosophy of its thirst for metaphysics, Quine, discerning a crack in Carnap's edifice, re-opened the door to our deposed Queen, μεταφυσική, in "On What There Is" (and "Two Dogmas"); with the door ajar and Alvin Goldman and Dan Dennett distracted by 'naturalizing' everything, Hilary Putnam developed a Quinean argument from the authority of science for the really real existence of numbers and, more significantly, David Lewis -- perhaps spurred on by some Antipodeans -- drove a truck through the opening by embracing modal realism.
We love linear stories [Carnap --> Quine --> Lewis], don't we, so even the descriptive metaphysics of Strawson's Individuals (1959) can't quite be squished into, shall we say, our conceptual scheme. Now consider the following paragraph written in 1930:
The pursuit of metaphysics as the study of generic characters of existence has been slowly regaining its professional adherents. Once its central theme, reaction to the unchecked flights of nineteenth century romantic speculation has well nigh banished metaphysics as a legitimate subject matter for philosophy. But the problems which professional philosophers refused to consider became acutely pressing in the special sciences. It was to be expected that ere long comprehensive treatises on the nature of existence would appear, fashioned by philosophers who were sensitive to the advances of recent science as well as to the ancient tradition that philosophy is the systematic study of being. To the series of distinguished essays on metaphysics which contemporary philosophers have contributed, these volumes [by Whitehead--ES] are a notable addition.--Ernest Nagel (1930 "Alfred North Whitehead," republished in Sovereign Reason, p. 154.)
Thomas Nagel's Mind and Cosmos is drawing quick responses. (Can't wait to read Mohan's!) Both in the hostile review by Brian Leiter and Michael Weisberg as well as in the more cautious strategic pivot by Alva Noë (who doesn't engage critically with Nagel's book), mythic history of the scientific revolution plays significant rhetorical roles.
Let's start with Noë:
If there is mind — and of course the great scientific revolutionaries such as Descartes and Newton would not deny that there is mind — it exists apart from and unconnected to the material world as this was conceived of by the New Science.--Alva Noë (NPR)
Let's accept Noë's point about Descartes. But Newton thought minds had to be somewhere in space and in time, extended but "indivisible." Incidentally, this is also Newton's doctrine about "the Maker and Lord of all things," who "cannot be never and no where." (Principia, General Scholium.) And at one point earlier in his career, Newton also flirted with the idea that an extended body had to be the kind of thing that was capable of exciting various perceptions in the senses and imagination of minds (this is from a piece known as "De Gravitatione"; I am linking to a very nice treatment by Zvi Biener and Chris Smeenk.) [Note that I am not drawing on the infamous sensorium passage at all.]
"It has been shown, for example, that a pollster can in principle always publish his prediction of an election result in such a form that, despite the reactions of voters to the forecast, the prediction is not falsified by those reactions. Cf. Herbert A. Simon"--Ernest Nagel ((1960) The Structure of Science, 473, n. 13)
"it was shown that it is always possible in principle to make a public prediction that will be confirmed by the event...It was shown that correct prediction requires at least some knowledge of the reaction function."--Herbert Simon ((1954) 253; emphasis in original)
Simon won a Nobel in economics in 1978. (In his intellectual autobiography he recounts his debts to Carnap and his ongoing interests in philosophy of physics.) In philosophy he is probably now best remembered for his work on bounded rationality/satisficing. The work discussed by Nagel (who didn't just invent the category analytical philosophy as we know it, but was also a leading figure in the discipline in the 50s and 60s [Nagel's Structure was much cited]) above and from which I cite is part of a larger literature in which Simon played a non-trivial role as acknowledged in the key (1954) paper by Grunberg and Modigliani (who won the Nobel in 1985). Simon & Grunberg/Modigliani provide a framework for showing under what conditions social scientific predictions need not be self-refuting--a welcome result to the economics profession that was warming up to Milton Friedman's (1953) proposal that what mattered was not the realism of the assumptions in an economic model but its predictive power (Simon was critical by the way).
Nagel does not name his targets in dealing with a "difficulty confronting the social sciences, sometimes cited as the gravest one they face." (466) But if one goes to Simon's paper it's not so hard to figure out; after a nod to Aristotle, he names Frank Knight and Hayek in the first footnote. Now, there are very important differences between Hayek and the now-forgotten Knight (not the least of which is that in TJ Rawls encouraged attention to Knight (he even tells us to read the footnotes) and not so with Hayek--a judgment of relative value that the discipline has reversed), but one important commonality is that in the 1940s and 50s economists from free-marketeers like Alchian to high theory types like Arrow were extremely eager to reject Knight's and Hayek's skepticism about the technocratic turn of the discipline. Nagel is certainly aware of Hayek's skepticism. (In addition, Grunberg & Modigliani also point to another forgotten economist, Vining--recall my post.)
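The Nagel/Simon point quoted above is, at bottom, a fixed-point argument: if the voters' reaction to a published prediction is a continuous function of that prediction, Brouwer's fixed-point theorem guarantees a prediction whose publication does not falsify it. Here is a minimal sketch; the particular reaction function is hypothetical, invented purely for illustration, and Simon's original argument is of course more general:

```python
def reaction(p: float) -> float:
    """Hypothetical continuous reaction function: the vote share the
    leading candidate actually receives once prediction p is published.
    (Invented for illustration; any continuous f: [0,1] -> [0,1] works.)"""
    return 0.4 + 0.3 * (1.0 - p)


def fixed_point(f, lo=0.0, hi=1.0, tol=1e-9):
    """Find p* with f(p*) = p* by bisection on g(p) = f(p) - p.
    Since f maps [0,1] into [0,1], g(lo) >= 0 and g(hi) <= 0."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) - mid > 0:
            lo = mid  # g still positive: root lies to the right
        else:
            hi = mid  # g non-positive: root lies to the left
    return (lo + hi) / 2.0


# The pollster publishes p_star; the ensuing reaction confirms it,
# since reaction(p_star) equals p_star (up to tolerance).
p_star = fixed_point(reaction)
```

Publishing any other prediction would be falsified by the reaction it provokes; publishing p_star is self-confirming, which is the sense in which Simon's 1954 result rescues social-scientific prediction from the charge of being necessarily self-refuting.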
The hazards of trying to draw conclusions about all of science by focusing narrowly on physics were learned at the end of the last century. However, including biology and chemistry are only the beginning, not the end, of the project of trying to develop a more well-rounded picture of science.--Alisa Bokulich
The quoted passage is from a terrific NDPR review by Bokulich that Catarina discussed yesterday. Bokulich notes that "Conspicuously absent from this list are any of the social sciences." Bokulich goes on to call attention to the works of four recent leading philosophers of social science--all of whom happen to be women. [By the way, when later in the review Bokulich calls attention to the lack of representation of women in the volume (and the lack of focus on philosophy of race/feminism) she does not refer back to her earlier discussion. This justifies Catarina's claim that Bokulich should be praised for the "elegant way" in which these issues are raised.]
As the epigraph to this post suggests, our current understanding of the development of philosophy of science is that we are "trying" to develop it away from an exclusively physics focus to other sciences during the last few decades. (It was gratifying to read Bokulich's claim that philosophy of economics is "thriving.") But this leaves me with a puzzle: if one opens Ernest Nagel's (1961) The Structure of Science, one notes that three of its fifteen chapters are exclusively focused on the philosophy of the social sciences and history. These together comprise 25% of the text. (This understates the situation because earlier chapters also discuss relevant material. There is also a chapter on biology, by the way.) So, half a century ago one of the most widely cited works in the philosophy of science (although probably unread these days unless one is interested in Nagel-reduction) by one of the professional leaders of the discipline (who arguably invented analytical philosophy as a category) at one of the then elite departments already was offering a "well-rounded" picture of science. How come this was not the norm in the profession?
(It is usually Eric’s job to comment on noteworthy features of various NDPR reviews, but this time he’s having technical issues with his computer and thus asked me to cover this one.)
Besides the underrepresentation of women at conferences and other academic events, here at NewAPPS and elsewhere we also talk about the possible negative consequences of all-male volumes (see an old post by me on this here). Now, one is starting to see comments on the gender distribution of volume line-ups also in book reviews, such as in this excellent NDPR review by Alisa Bokulich (Boston University) of The Continuum Companion to the Philosophy of Science, edited by Steven French and Juha Saatsi:
A problematic feature of this volume is that out of 20 contributors spanning the entire philosophy of science, there is not a single female philosopher of science included. While this omission may be understandable for a very small collection on a highly specialized topic, it is more difficult to excuse for a volume of this size and breadth. While I am sure this was an unintentional oversight, it is part of a disturbing larger pattern within the philosophy of science, and philosophy more broadly. Such omissions are particularly troubling when it comes to pedagogical works, such as this Companion, that are designed to help recruit the next generation of philosophers of science. These volumes then become not only a symptom of this problem, but also part of its source, by giving the impression that the philosophy of science is not a field to which women make significant contributions.
[A] At PLOS our mission is to accelerate progress in science and medicine by leading a transformation in research communication. We firmly believe that acceleration also requires being open about correcting the literature as needed so that research can be built on a solid foundation. Hence as editors and as a publisher we encourage the publication of studies that replicate or refute work we have previously published. We work with authors (through communication with the corresponding author) to publish corrections if we find parts of articles to be inaccurate. [B] If a paper’s major conclusions are shown to be wrong we will retract the paper. By doing so, and by being open about our motives, we hope to clarify once and for all that there is no shame in correcting the literature.--By Virginia Barbour and Kasturi Haldar (writing at PLOS) [HT retractionwatch]
[B] "If a paper’s major conclusions are shown to be wrong we will retract the paper" is very controversial. But the stated reason for it, "in order to ensure that errors (from whatever means – unintentional or intentional) are not simply incorporated uncritically into the scientific literature," is not silly, though readers may wish to disagree. And what about [A] "our mission is to accelerate progress in science and medicine"? Maybe that is part of the problem?
[This is the third post after this and this on Maudlin's The Metaphysics Within Physics that I am reading with F.A. Muller, Victor Gijsbers, and Lieven Decock.--ES]
[A] "If a law governs a particular space-time region then the physical states will so evolve." (Maudlin, 17)
[B] "My analysis of laws is no analysis at all. Rather I suggest we accept laws as fundamental entities in our ontology." (Maudlin, 18)
[C] "The laws can operate to produce the rest of the Mosaic exactly because their existence does not ontologically depend on the Mosaic." (Maudlin, 175; emphases in original)
[D] "The universe, as well as all the smaller parts of it, is made: it is an ongoing enterprise, generated from a beginning and guided towards its future by physical law." (Maudlin, 182; emphasis in original)
The book is fantastic. It gives you a sense of what metaphysics looks like if one has an advanced education in recent physics; it is also rooted in "scientific practice." With laser-like precision it focuses on the most fundamental weaknesses of the most important alternative approaches (Quine, Lewis, van Fraassen, etc.), and it makes obscure physics seem easy to digest. What would stop somebody sympathetic to Maudlin's general orientation from accepting laws in one's ontology [B]?
Maudlin calls the fundamental laws "FLOTEs" (for Fundamental Laws of Temporal Evolution). Together with "adjunct principles," FLOTEs describe how states (may) evolve into later states. (17) Initial conditions are examples of such principles. One can certainly understand physics such that its business is mainly discovering FLOTEs. So far so good. But [A, C, D] describe the laws themselves as the productive sources of change. If Maudlin were writing in the seventeenth century we would describe his position about laws either as "second causes" (Cartesian language) or as a special modern instance of "formal causation" (in the way that platonizing mathematicians thought of these [see Mancosu's book])--inspired by Kuhn (and anticipated by Burtt), I think such formal causes were conceptually transformed into laws of nature by Bacon and Newton.
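To fix ideas (my illustration, not a quotation from Maudlin), a textbook instance of such a law of temporal evolution is the Schrödinger equation, with the initial state supplying the sort of adjunct principle the passage mentions:

```latex
% A schematic FLOTE: the Schrodinger equation governs how the state
% \psi evolves; the initial condition \psi(0) is an adjunct principle.
i\hbar \,\frac{\partial}{\partial t}\,\psi(t) = \hat{H}\,\psi(t),
\qquad \psi(0) = \psi_0 .
```

On Maudlin's reading the law does not merely describe the trajectory from \psi_0; it produces it, which is exactly the point at issue in [A, C, D].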
Short version: Science is often said to be committed to the reals because physics, for example, essentially makes use of sentences that quantify over the reals. But we have perfectly good countable, well-founded, constructive models of full second-order arithmetic. So why can't physics, for example, simply embrace one of these explicitly as what it is working over, and thereby radically simplify its alleged ontological commitments?
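The model-theoretic fact being appealed to can be made explicit. If second-order arithmetic Z2 is read as a two-sorted first-order theory (i.e., with Henkin semantics; with full second-order semantics the theory is categorical and has no countable models), the downward Löwenheim-Skolem theorem does the work. A sketch of the standard fact:

```latex
% Downward Lowenheim-Skolem applied to Z_2 taken as a two-sorted
% first-order (Henkin) theory: number sort N, set sort S.
\text{If } Z_2 \text{ has a model, then it has a countable model }
\mathcal{M} = (N,\ \mathcal{S},\ 0,\ S,\ +,\ \times,\ \in),
\quad |N| = |\mathcal{S}| = \aleph_0 .
```

The philosophical question is then whether the physicist's quantification "over the reals" could be reinterpreted as quantification over the countable set sort of such a model without loss.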