At our lab in St. Louis we are working with several people with superhuman abilities, also known as “savant skills.” My research assistant Kristian Marlow and I are also currently finishing a book entitled The Superhuman Mind (under contract with an agency, see updates here). We are blogging about these cases almost daily over at Psychology Today. The following are four brief stories about some of the individuals we are working with.
Daniel Kahneman’s Thinking, Fast and Slow is making quite a splash (the other
day, I saw at Bristol airport that it is currently at the top of the bestseller list for non-fiction -- naturally, it still can’t compete with Fifty Shades of Grey). I haven’t read it
yet, but people whose opinion I hold in high esteem tell me that it has been
successful in striking the difficult balance between being accessible to a
wider audience and scientifically accurate (for the most part at least) at the
same time. The book summarizes research on cognitive and reasoning biases of
the last decades, a research program in which Kahneman himself has been a major
player. The conceptual cornerstone of the book is the (still) popular
distinction between System 1 and System 2, the two systems which allegedly run
in parallel underpinning all our cognitive processes, and which often conflict
with each other.
Now, as I’ve stated a few times before (here for example), I
am no fan of System 1/System 2 talk at all (not even of weaker versions, the
so-called dual-process theories of cognition), even though I agree that the
empirical findings on cognitive biases should be taken very seriously. (I also
agree that there is something to the idea of debiasing as suppressing automatic
processes.) So I was curious to see how Kahneman himself introduces the System
1/System 2 distinction, and took a quick look at the book (my husband was
reading it during our holiday of a few weeks ago, after having gotten it from
me as a birthday present – that’s what you get for having a nerdy wife). The
first thing that struck me is that, in footnote 20, he lists some of the
pioneers of dual-system theories, including Jonathan Evans, Steve Sloman and
Keith Stanovich, and adds: “I borrow the terms System 1 and System 2 from early writings
of Stanovich and West that greatly influenced my thinking” (he refers to their
2000 BBS article on individual differences in reasoning). But what is puzzling
is that Stanovich himself now overtly rejects the conceptualization of the distinction in terms of systems, which unduly suggests reified entities, and has switched to process terminology instead (as has Jonathan Evans).
But perhaps most striking is what Kahneman
says in the conclusion of the book:
A few days ago Eric linked
to a report
by Lori Gruen (Ethics and Animals blog here; Wesleyan University
website here) on the renewal
of cruel maternal deprivation research on primates. The comments on Eric's post
were such that we asked Lori to write a guest post for us. She graciously
agreed; the post follows: [UPDATED 1:40 pm 16 Oct. See below for contact info for Madison's Provost.]
“…steps in scientific progress are sometimes followed closely by outbursts of foolishness.
New discoveries have a way of exciting the imagination of the well-meaning and
misguided, who see theoretical potentialities in new knowledge that may prove
impossible to attain.” – Dr.
Sherwin Nuland, Yale School of Medicine
Does the system we have in place to curtail scientific
“outbursts of foolishness” and protect research subjects from “misguided”
scientific curiosity work?
There was no oversight system in place back in the
days when Harry Harlow’s experiments psychologically tormenting baby monkeys
were making news. Surely that sort of
horrible work in which infant primates are taken from their mothers to make
them crazy wouldn’t be approved of today. On my recent visit to the University
of Wisconsin I was shocked to learn otherwise.
The oversight committee chairs told me they have never rejected a
proposal. Not one.
And one of the protocols they did not reject is a renewal
of maternal deprivation research. Disturbingly, it has been approved by not
one, but two oversight committees. A
psychiatry professor who has a distinguished record of research on anxiety
disorders plans to separate more monkey babies from their mothers, leave them
with wire “surrogates” covered in cloth (a practice developed by Harlow) to
emulate “adverse early rearing conditions,” then pair them with another
maternally deprived infant after 3-6 weeks of being alone. The infants will then be exposed to fearful
conditions. The monkeys in this group, and another group of young monkeys who will be reared with their mothers, will
then be killed and their brains examined. (The experimental protocol is here.)
The research in question is a new type of maternal deprivation research designed
to study anxiety by creating adverse early rearing conditions and then exposing
the maternally deprived young monkeys to a snake and other frightening stimuli. The monkeys will be killed after the
experiment is over and their brains will be studied. I believe this experiment
is unethical and I also think it violates the spirit, if not the promulgated
regulations, of the Animal Welfare Act which explicitly requires that the
psychological well-being of primates be promoted (not intentionally destroyed). --Lori Gruen
In 2007, a study by Hamlin, Wynn and
Bloom was published in Nature claiming to show that preverbal babies had what
could be described as a ‘moral compass’ (not the authors’ own term in the article). From the abstract:
Here we show that 6- and 10-month-old infants take
into account an individual's actions towards others in evaluating that
individual as appealing or aversive: infants prefer an individual who helps
another to one who hinders another, prefer a helping individual to a neutral
individual, and prefer a neutral individual to a hindering individual. These
findings constitute evidence that preverbal infants assess individuals on the
basis of their behaviour towards others. This capacity may serve as the
foundation for moral thought and action, and its early developmental emergence
supports the view that social evaluation is a biological adaptation.
Over the last week, there have been quite a few blog posts prompted
by Tim Williamson’s recent critique of experimental philosophy in his review of
J. Alexander’s Experimental Philosophy.
In particular, at NewAPPS Eric Schliesser and Berit Brogaard shared some of their
views on the debate. Here, however, I want to discuss a post by Eric
Schwitzgebel at Splintered Mind, as I think he identifies an important and
overlooked component of the whole debate. Eric puts forward the distinction
between X-Phi in a narrow and in a wide sense. On the narrow conception, the work canonically identified as "experimental philosophy" surveys ordinary people's judgments (or "intuitions") about philosophical concepts, and it does so by soliciting people's responses to questions about hypothetical cases. The wide conception is more difficult to define, and Eric basically offers a definition by exclusion:
In this broad sense, philosophers who do empirical work aimed at addressing traditionally philosophical questions are also experimental philosophers, even if they don't survey people about their intuitions.
(I’ve been through a ridiculously busy period of work-related traveling and thus scarce blogging, and in the next four weeks I’m supposed to be on holiday, so again scarce blogging. But there is still one topic I really want to discuss before the summer break, so here it is.)
Here are a couple of brain-teasers for your amusement on this Monday morning/afternoon (depending on your time zone):
(1) A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? _____ cents
(2) If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets? _____ minutes
(3) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake? _____ days
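(These are, as it happens, the three items of Shane Frederick's Cognitive Reflection Test, which Kahneman discusses at length. If you want to check your answers afterwards, each puzzle can be brute-forced or computed in a few lines -- spoiler warning:)

```python
# (1) Bat and ball: find the ball price (in cents) satisfying both constraints.
ball = next(b for b in range(111) if b + (b + 100) == 110)

# (2) Widgets: 5 machines x 5 minutes = 25 machine-minutes for 5 widgets,
# so each widget costs 5 machine-minutes of work.
machine_minutes_per_widget = (5 * 5) / 5
minutes = 100 * machine_minutes_per_widget / 100  # 100 widgets, 100 machines

# (3) Lily pads: the patch doubles daily, so it covered half the lake
# exactly one day before it covered all of it.
days = 48 - 1

print(ball, minutes, days)  # the intuitive (wrong) answers would be 10, 100, 24
```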
Just now on NPR, there was a discussion about toddlers and iPads that could have really used a Heideggerian intervention. The issue was, more or less, what is happening when you give a 2-year-old an iPad and they get completely absorbed for 5 hours straight? Is this good for them or not? And does it help them to learn what they need to learn in order to mature into smart, productive kids and adults? NPR seems to love this stuff; there’s a shorter article on the same topic here.
A range of experts was consulted, most of whom said that we don’t have enough (empirical) research to answer these questions yet, but that we shouldn’t panic – we just need to make sure that kids get a balance of screen time and face-to-face interaction with other people. But the question that started the whole discussion was a father’s question about what is going on for his son when he “zones out” in front of the iPad. This question remained unaddressed, as far as I could tell from my own zoning in and out of the radio discussion. But isn’t this basically a matter of Benommenheit, or captivation, literally “being taken,” being absorbed in an object to the point where everything else fades away?
I have written about our case study of a person with acquired synesthesia and savant syndrome in an earlier post on this blog. To make a long story short, JP was hit on the head in a mugging incident and acquired traumatic brain injury.
After the incident he started experiencing the world in terms of geometrical figures. He had also lost his ability to see smooth boundaries and smooth motion. He sees objects as separated from their surroundings in terms of tiny tangent and secant lines. He experiences motion in picture frames. When objects are moving relative to him or he is moving relative to objects, three-dimensional geometrical figures form before his eyes.
Right after the incident he started drawing some of these images by hand. They turned into beautiful pieces of art that have received several awards. After some elementary math training following the accident, JP also experienced automatic visual imagery in response to certain mathematical formulas.
Ingrid Robeyns, professor of practical philosophy at the Erasmus University in Rotterdam, is known among other things for her work on the capability approach (see her SEP entry on the topic, and her review of Martha Nussbaum's Creating Capabilities), and as a blogger at the interdisciplinary blog Crooked Timber. This week, she will be running a series of posts on autism at Crooked Timber -- the first one is here, the second one here. Ingrid is herself the mother of an autistic child, and the combination of philosophical insight with her first-person experience is bound to yield a very interesting perspective on the topic.
Autism is a topic with many important philosophical implications, ranging from theories of cognition and philosophy of mind to ethics. So I for one look forward to the upcoming posts, and I suspect that many NewAPPS readers will be equally interested. Go check it out; in fact, today is World Autism Awareness Day, so as good a day as any!
With the growth of controversies conducted through blogs, the really existing norms in various scientific disciplines can sometimes be revealed (perhaps unintentionally). In this blistering post, Yale psychologist John A. Bargh, Ph.D., criticizes a study that had not replicated his earlier results. Here I ignore the substance of his charges (for useful criticism see here). In his criticism he vehemently attacks the online journal PLoS ONE. But he follows with a most revealing, self-undermining comment: "If I'd been asked to review it (oddly for an article that purported to fail to replicate one of my past studies, I wasn't) I could have pointed out at that time the technical flaws." The parenthesis teaches us that the (once-standard?) norm among the peer-reviewed journals in his niche is that if one is targeted (and high status?) one can expect to be the referee. Perhaps the vehemence of the little spat is indicative that an old-boys-network is on the way out? [Hat-tips to Bryce Huebner and Antti Kauppinen on Facebook.]
Many readers will have already seen Jesse Prinz’s recent blog post criticizing a psychological study defending the Male Warrior hypothesis, according to which men have evolved to seek out violent conflicts in order to get women. He now has a reply to the objections raised by two other bloggers, one of them one of the authors of the study (H/T Feminist Philosophers). I’m not sure this is appropriate language for blogging, but I just can’t help myself: Prinz is really kicking ass; there is no better way to describe it. Some excerpts:
One of the subjects I work with, JP, has acquired synesthesia and acquired savant syndrome. This happened as a result of a brutal assault in 2002, during which he was kicked and hit on the head. He was subsequently diagnosed with a bleeding kidney and an unspecified head injury. What the doctors didn't know was that JP no longer saw the world the way he used to. Objects suddenly did not have smooth boundaries. Things no longer moved smoothly. Motion took place in picture frames. It looked like someone paused and unpaused the flow of the world very rapidly. Even more amazing: JP was suddenly able to see vivid fractal images of objects with a fractal structure (such as broccoli).
JP's response to his new way of seeing the world was to withdraw from it. He spent the following three years in his apartment and refused to leave unless it was strictly necessary. After three years in complete isolation JP figured that he would try to draw what he saw, so he could make people understand him. He started drawing. And he continued. He drew and drew and drew, using only a pencil, a ruler and a compass. The results were beautiful hand-drawn fractal-like images. JP didn't know then that he was the first in the world to hand-draw mathematical fractals and that he would later win prizes for his drawings. He didn't even know what he was drawing, except that it was what he saw.
Almost a year ago I wrote a post on the dubious scientific status of psychoanalysis. One might think that this is an old and dated Popperian question, but in view of the influential position still occupied by psychoanalysis at least in some quarters, it remains a topical issue. In effect, via the Feminist Philosophers I came across this NYT article on a documentary which heavily criticizes psychoanalytic approaches to autism in France.
According to the article, psychoanalysis remains the standard approach to autism there, but not for particularly good reasons. In fact, the results seem to be quite discouraging (for example, a much smaller percentage of children with an autism diagnosis are sufficiently autonomous to be able to attend school in France than in e.g. the UK), and yet the grip of psychoanalysis remains strong – needless to say, arguably to the disadvantage of the children in question and their caregivers.
In the Feminist Philosophers’ post there is also a link to the documentary; it is well worth watching, but also quite depressing.
Another well-worn example bites the dust? You remember that famous study in which the participants, if primed with words connoting agedness, walked more slowly when leaving the lab?
A new study by the Belgian team of Stéphane Doyen, Olivier Klein, Cora-Lise Pichon, and Axel Cleeremans not only failed to replicate the effect, but also appeared to show that the effect observed in the original study was owing to the experimenters’ expectations.
This has been going around the internet over the last couple of days, but for those who have not seen it yet: The Nation has an excellent overview article of the Hauser affair, by distinguished psychology professor Charles Gross. Let me quote some of the concluding paragraphs, which discuss in particular the damaging effect of the affair on the whole field of animal cognition, and the secretive way in which the investigations have been handled.
As mentioned before, recently I read Cordelia Fine’s A Mind of its Own, a highly informative and accessible account of some of the traits of human psychology, as documented by empirical research, indicating that our cognitive and emotional apparatus is highly unreliable. From the introduction:
[…] the truth of the matter – as revealed by the quite extraordinary and fascinating research described in this book – is that your unscrupulous brain is entirely undeserving of your confidence. It has some shifty habits that leave the truth distorted and disguised. (p. 2)
The rhetoric is quite (too?) strong, and one may raise an eyebrow or two at the conflation of brain with human cognition and psychology generally speaking. Nevertheless, the evidence presented by Fine is compelling and unsettling. The chapters have the following titles: the vain brain, the emotional brain, the immoral brain, the deluded brain, the pigheaded brain, the secretive brain, the weak-willed brain, the bigoted brain, and finally the vulnerable brain. (You get the picture…) I highly recommend the book, especially for philosophers who still hold on to the idea that human cognition is for the most part reliable and truth-conducive.
As many of you have probably already seen, Rebecca Kukla has an excellent post up at Leiter’s blog on the effects of implicit biases, specifically as affecting hiring practices. However, as she is done with her job of guest-blogger over there, the post is not open for comments, and with Rebecca’s agreement, I figured it might be useful to have a discussion here.
Rebecca is making very good points about the effects of implicit biases in hiring practices, and in particular how hard (in fact, nearly impossible) it is to shield yourself from them if you are on the decision-making side of things. Now, as it turns out, one of the books I read over my vacation last week was Cordelia Fine’s A Mind of its Own (as mentioned before, co-blogger John Protevi and I are big fans of her work). One of the chapters of the book is ‘The Bigoted Brain’, and she discusses precisely some of the findings from experimental psychology (on the ways implicit biases operate) that Rebecca refers to. As she mentions, one of the surprising features of implicit biases is that, if you actively try to suppress them, they in fact re-emerge later on with additional strength. (In fact, it is not so surprising given that suppressing specific thoughts is likely to have a priming effect.) Here’s an excerpt from the book:
I’m just back from an extremely enjoyable family vacation in sunny Fuerteventura, which also means that I am swamped by a zillion work-related things that need to be attended to asap. I also want to resume blogging, and have a few posts already lined up in my head (in particular, one on the ‘climate for women’ discussion which has re-emerged), but where do I find time for all this? (One almost regrets going on holiday and forgetting about it all for a while, given the harsh conditions upon return!)
But anyway, today I came across two interesting links, via the New Scientist twitter feed, and thought it might be a good topic to resume blogging. As it turns out, Steven Pinker’s most recent interest is the history of violence, which he takes to be a privileged window for his long-standing interest in human nature (broadly construed). In his new book The Better Angels of our Nature, he claims that there has been a significant decrease in homicides and violent deaths over the centuries: ‘Humans are less violent than ever’. This becomes particularly clear if the death tolls of historical occurrences of horror are estimated on the basis of the human population at the time, and what the proportion would mean in terms of the current human population in the world. This was done by finding the per-capita death rate at the midpoint of the event's range of years, based on population estimates from McEvedy and Jones.
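The normalization Pinker describes is simple arithmetic; here is a minimal sketch of it, with death-toll and population figures that are rough ballpark estimates of my own (not Pinker's, and not from McEvedy and Jones):

```python
def scaled_death_toll(deaths, world_pop_then, world_pop_now=8.0e9):
    """Express a historical death toll as the equivalent number of
    deaths given today's world population."""
    per_capita_rate = deaths / world_pop_then
    return per_capita_rate * world_pop_now

# WWII: very roughly 55 million deaths out of a world population of ~2.3 billion.
ww2_equivalent = scaled_death_toll(55e6, 2.3e9)
print(f"{ww2_equivalent / 1e6:.0f} million")  # ~191 million in today's terms
```

The point of the exercise is that the same absolute toll looms much larger against a smaller historical population, which is why Pinker's per-capita rankings differ so sharply from rankings by raw body count.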
A few weeks ago, Helen reported on a wonderful conversation she had with her 7-year-old daughter on the ontological status of numbers. Helen also remarked that the children of scientists and researchers are often the subject of all kinds of ‘experiments’ unbeknownst to them. I must confess that I’ve performed a wide range of cognitive ‘tests’ on my kids, but before social services are called I can assure you all that they greatly enjoyed it and saw it all as a fun game. I have in particular done the false belief task with both, at different ages, and can report that they fall squarely within the expected results!
Now, as some readers may recall, I am working quite extensively on reasoning, deductive reasoning in particular, both from a philosophical and a psychological perspective. So I’ve been through most of the voluminous literature on the psychology of reasoning (my own account of the findings can be found in chapter 4 of my forthcoming book, draft available here), and as is well known, in experiments with deductive tasks, participants overwhelmingly fail to give the ‘right’ response from the point of view of the canons of deduction as traditionally construed. And yet, these studies were almost all conducted with participants having a fairly homogeneous educational background, namely undergraduates of North-American and Western European universities. My hypothesis is that even the modicum of ‘logical competence’ that does emerge from the experiments is by and large a product of the formal education they received. To test this hypothesis, one would have to isolate the education component and thus undertake the same or similar experiments with participants with a very different educational background, in particular unschooled subjects. Unfortunately, very few studies of this kind have been conducted, but the ones which have do suggest that unschooled participants tend to engage with the task materials in *very* different ways.
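(The best-known task of this kind, not named above but ubiquitous in this literature, is Wason's selection task: given cards showing A, K, 4 and 7, which must you turn over to test the rule "if a card has a vowel on one side, it has an even number on the other"? A brute-force check of the canonical answer:)

```python
# Each card has a letter on one side and a number on the other.
# The rule is violated only by a card pairing a vowel with an odd number,
# so a card must be turned over iff its hidden side could yield that pair.

def must_turn(visible):
    vowels = set("AEIOU")
    if visible.isalpha():
        # Hidden side is a number; a violation is possible only if the
        # visible letter is a vowel (the hidden number might be odd).
        return visible in vowels
    # Hidden side is a letter; a violation is possible only if the
    # visible number is odd (the hidden letter might be a vowel).
    return int(visible) % 2 == 1

cards = ["A", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # the deductively correct selection
```

Most schooled participants pick A and 4 rather than A and 7, which is exactly the sort of divergence from the canons of deduction at issue here.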
Every reasonably neurologically healthy person has some fear of public speaking. How much varies hugely from individual to individual. But I suspect that it is very common among philosophers. Why? Because the majority of people who enjoy receiving a lousy salary in return for an insane amount of work have got to have some very good reasons. One good reason, I believe, is that they enjoy working in the comfort of their own home and enjoy the solitude and the control they have over their own time and direction of their work. They are good old-fashioned introverts, who don't really truly enjoy large assemblies of people but who may have adjusted to them and who may even come across as extroverts on a good day. Do introverts fear public speaking more than extroverts? I don't know. But I believe that they do. If you dislike large groups of people or prefer your own company to that of other people, it is not likely that you by nature are super-comfortable speaking to a large group of people. That said, I don't want to rule out that some people went into the profession because of the possibility of fame and attention.
As for my own case, I started out with an extreme fear of public speaking. I recall taking a large lecture class in molecular biology the first year of college. Despite it being a large lecture class, we were all expected to do a presentation. I hadn't spoken in front of a lot of people before, so I had no idea that I had a fear of public speaking. I was assigned a topic, and over-prepared. I made about 50 slides. This was before the age of PowerPoint. So my slides were the old-fashioned transparent kind that you put on an overhead projector. They were all lying in my lap in the correct order when I was sitting in the lecture hall waiting for the professor to call my name. I felt my heart pump very fast and hard even before he called my name. When he called on me, I stumbled down the steps to the front of the lecture hall. My hands were shaking. My legs felt like rubber. Then as I was about to put the first slide on the overhead projector, I dropped all the slides on the floor. The 200 students in the lecture hall were not making a single noise. It was so quiet that I could hear my heart pound. I had no idea what to do. Like an idiot, I hadn't numbered the slides and now they were all lying in a big mess on the floor. No one said anything, not even the professor. I collected the slides from the floor in a big messy pile in my arms, mumbled that I just couldn't do this and then went back to my seat. No one said anything. The professor started lecturing like nothing had happened. I felt terrible.
In several of my posts, I mentioned the book on formal languages that I've been working on for the last few years. I now have a draft of the book ready for (moderate!) public consumption, which is now available here. The two final chapters are still missing, but the draft is already something of a coherent whole, or so I hope.
Many people have kindly expressed their interest in checking out the material, hence my decision to make it available online at this point, despite the fact that it is still a somewhat rough draft (references are still a mess). Needless to say, comments are always welcome :)
A new paper by Nieuwenhuis, Forstmann, & Wagenmakers in Nature Neuroscience argues that roughly half of all papers in five top neuroscience journals assert differences between the effects of interventions when the most they are entitled to assert is that an intervention has had a statistically significant effect. Their argument is explained very well in a Guardian article by Ben Goldacre. The authors write in their introduction:
Are all these articles wrong about their main conclusions? We do not think so. First, we counted any paper containing at least one erroneous analysis of an interaction. For a given paper, the main conclusions may not depend on the erroneous analysis. Second, in roughly one third of the error cases, we were convinced that the critical, but missing, interaction effect would have been statistically significant (consistent with the researchers’ claim), either because there was an enormous difference between the two effect sizes or because the reported methodological information allowed us to determine the approximate significance level. Nonetheless, in roughly two thirds of the error cases, the error may have had serious consequences.
So the headline should not be: “Half of Neuroscience Papers are Wrong”, but rather “Half of Neuroscience Papers are Insufficiently Well Argued/One-Third Need Fixing”. We’ll see what the headline-writers do…
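The error at issue (treating "effect A is significant, effect B is not" as evidence that A and B differ) is easy to see with a toy calculation. In the made-up numbers below (mine, not the paper's), one effect is significant and the other is not, yet the test of their difference is itself nonsignificant:

```python
from math import erf, sqrt

def p_two_sided(estimate, se):
    """Two-sided p-value for a normal test statistic estimate/se."""
    z = abs(estimate / se)
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# Hypothetical effects with equal standard errors.
effect_a, se_a = 0.25, 0.10   # z = 2.5 -> significant
effect_b, se_b = 0.10, 0.10   # z = 1.0 -> not significant

# Correct comparison: test the *difference* between the effects directly.
diff = effect_a - effect_b
se_diff = sqrt(se_a**2 + se_b**2)

print(p_two_sided(effect_a, se_a))  # ~0.012
print(p_two_sided(effect_b, se_b))  # ~0.317
print(p_two_sided(diff, se_diff))   # ~0.29: no significant difference
```

The third test (the interaction) is what the flagged papers omit; the first two alone license no claim about a difference between the effects.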
A sidelight: the authors, whose affiliations are Dutch, use “intuition” in more-or-less the philosopher’s sense. Is that use diffusing into the world outside philosophy?
The historian’s attitude toward his or her sources, when attempting to establish matters of fact, is one of tempered but universal skepticism. The same applies to the history of the present. For example:
Don’t depend on popularizations for your knowledge of neuroscience (see also the previous item in this blog for a similar issue concerning the biology of sex). A recent headline in several newspapers and online sources reads something like this: “Magnetic Pulses To The Brain Make It Impossible To Lie”. Wow! That’s exciting! And scary too…
Jeffrey Zacks, in Psychology here at Washington University, and his collaborators have been studying human event perception for the last ten years. A recent paper, in press at the Journal of Cognitive Neuroscience and available at his website (pdf), argues that perceptual event boundaries occur in experience at points where prediction becomes difficult.
[…] working memory representations of the current event guide perceptual predictions about the immediate future [less than 10 sec]. These predictions are checked against what happens next in the perceptual stream; most of the time perceptual predictions about what happens next are accurate. From time to time, however, activity becomes less predictable, causing a spike in prediction errors. These spikes in prediction error are fed back to update working memory and reorient the organism to salient new features in the environment. According to this model, the increase in prediction error and consequent updating results in the subjective experience of an event boundary in perceptual experience.
The tenets of Zacks’s view are (i) that current experience consists in representations actively maintained in working memory; and (ii) that present experience consists partly in anticipations of future experiences. Memory, insofar as it enters the stream of experience, would be on this account proleptic, forward-looking; mere recall has no place.
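The prediction-error account of event boundaries can be caricatured in a few lines of code. Here a deliberately trivial predictor ("expect the last observation again" -- my simplification, not Zacks's actual model) tracks a signal, and moments where the prediction error spikes past an arbitrary threshold are marked as boundaries:

```python
def segment(signal, threshold=1.0):
    """Mark event boundaries where the one-step prediction error spikes.
    The predictor is deliberately naive: it predicts that the next
    observation will equal the current one."""
    boundaries = []
    for t in range(1, len(signal)):
        prediction_error = abs(signal[t] - signal[t - 1])
        if prediction_error > threshold:  # spike -> perceived boundary
            boundaries.append(t)
    return boundaries

# Three stable "events" separated by abrupt changes in the signal.
stream = [0.0] * 10 + [5.0] * 10 + [1.0] * 10
print(segment(stream))  # boundaries at the two transitions: [10, 20]
```

Within each stable stretch the predictor is accurate and no boundary is registered; only the transitions produce error spikes, mirroring the model's claim that boundaries arise where activity becomes less predictable.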
Aristotle says that animals don’t recollect: they don’t search their memories for information about the past (De memoria ii, 453a8, Hist. anim. 488b26; see Grote, Aristotle 476). On what grounds he said that I don’t know, but whether it was a shrewd surmise or a lucky guess he seems to have been right. Aristotle also put forward a version of what became the predominant philosophical picture of memory—that it consists in the registering of an “impression” which is subsequently to be recalled, as if the mind had a filing-card drawer or a mental museum (such as figured in Ancient and Renaissance artes memoriæ). That picture, attractive though it is, may well be fundamentally misleading. Modelling biological memory on the specifically human capacity that consists in voluntary recall of items subject to intersubjective standards of accuracy (e.g., the procedures of memorization employed by the reciters of epic poetry, to take an example Aristotle would have known) may turn out to be yet another case where intuition has led us astray.
A predominantly proleptic function for working memory, moreover, fits nicely with theories according to which perception requires activity on the part of the perceiver, so that the perception of red, for example, to use Mohan’s example (taken from Justin Broackes), is effectively the perception of a pattern of sensations that arises from the perceiver’s having regarded the red thing from several perspectives—a feat normally made possible only by moving. Event perception too may be governed, if not by activity itself, then by anticipations of activity.
Many NewAPPS bloggers (Helen, John, Mohan, myself) are favorably disposed towards analyses of human cognition which could be described as ‘naturalized’ in that data from empirical sciences (psychology, biology, cognitive science) play an important role.
Now, one crucial aspect in analyses of this sort in general is the issue of continuity and discontinuity between human and non-human animals. We are all familiar with Darwin’s idea that the difference between ‘us and them’ (Pink Floyd, anyone?) is “one of degree and not of kind”, and this seems to be the basic assumption underlying much of the work on non-human animal cognition that has the goal of producing a better understanding of human cognition. (Naturally, there is also the independent project of studying non-human animal cognition and behavior as a goal in and of itself.)
The two main camps are: those who marvel at the complexity of non-human animal cognition and deplore our tendency towards species chauvinism (fondly referred to as ‘monkey-huggers’ sometimes); and those who emphasize the abysmal distance between human and non-human cognition (whom I will refer to as ‘people-huggers’). (I’m using ‘cognition’ in a broad sense here, meant to include also work on e.g. sociability by someone like Frans de Waal.) And among people-huggers, at least some (but not all) end up defending a position that smacks of “We humans are so damn special and unique! There’s really nothing like us.” (also known as 'humaniqueness')
One aspect that is often (though not always) overlooked is the fact that there have been a bunch of closely-related cousins of ours roaming around the Earth at different times, but as it turns out they are all gone now: the missing hominids.
UPDATE: I've changed the term used to describe the fourth category in the taxonomy below from 'conceptual analysis' to 'conceptual reflection'. I hope the new term is better able to cover the many approaches suggested by commenters which did not seem to fit the original description in a straightforward way.
In light of the very interesting methodological discussions we’ve been having here at New APPS on the relations between physics and metaphysics, I’d like to put forward a tentative taxonomy of different strands within philosophical methodology. I suspect it can also be useful for discussions on the analytic vs. continental divide and its overcoming, which is also a recurrent theme in this blog.
Indeed, looking at past and present work in philosophy (and trying to be as encompassing as possible), it would seem that we can identify four main strands of methods used for philosophical analysis:
Formal methods – these correspond to applications of mathematical and logical tools for the investigation of philosophical issues. As examples one could cite the development of possible world semantics for the analysis of the concepts of necessity and possibility, applications of the Bayesian framework to issues in epistemology (giving rise to so-called formal epistemology), Carnapian explication, and many others.
Historical methods – they rely on the assumption that, to attain a better understanding of a given philosophical concept/problem, it is useful (or even indispensable) to trace its historical origins in philosophical theorizing. Of course, the study of the history of philosophy has intrinsic value as such (emphasis on ‘history’) but at this point I’m interested in what Eric Schliesser has once described as ‘instrumental history of philosophy’ (emphasis on ‘philosophy’).
Empirical methods – these are the methodological approaches that systematically bring in elements from empirical sciences, such as the sciences of the mind (particularly relevant for philosophy of mind, epistemology, but to my mind also for philosophy of logic and mathematics), physics (possibly relevant for metaphysics), biology (arguably relevant for ethics, and everywhere else where evolutionary concepts come into play) etc. Sometimes this approach is described as ‘naturalistic’, but as we know there are (too?) many variations of the concept of naturalistic philosophy (many self-described naturalistic approaches are not sufficiently empirically-informed to my taste).
Conceptual reflection – arguably the most traditional philosophical method, consisting in unpacking concepts and drawing implications, introducing new and hopefully useful concepts, problems, conceptual frameworks etc.
So we seem to have a plurality of methods actually being used for philosophical theorizing. Are they all equally legitimate and adequate, both in general and in specific cases? I submit that the correct response to this plurality is methodological pluralism.
Watch Naif Al-Mutawa explain the vision behind his comic The 99. Behind all the jokes and business promotion (not to mention cultural studies in action), Naif explains his way of promoting an evolving understanding of Islam within Islamic cultures (and outside of these). [UPDATE ADDED LATER: must have been cartoon day in philosophy blogland.]
Philosophy, since its inception, has been characterized by persistent disagreements. The situation in philosophy is perhaps worse than in other formalized disciplines, such as the sciences or mathematics. Peter van Inwagen argued that it would indeed be "hard to find an important philosophical thesis that, say, ninety-five percent of, say, American analytical philosophers born between 1930 and 1950 agreed about in, say, 1987."
I do not have a clear view of the situation in 1987, but the PhilPapers survey suggests that van Inwagen may be on the right track--the strongest inclinations are towards non-skeptical realism (81%), scientific realism (75%), and atheism (72.8%). To what extent is disagreement in philosophy cause for concern? Suppose, say, that 95% or even 100% of philosophers had been atheists or scientific realists--would this count as compelling proof against the existence of God or in favor of the existence of unobservable scientific entities? As long as we don't really have a good account of what philosophical intuitions are, it is hard to make sense of this.
An extensive part of the disagreement in philosophy stems from people having differing intuitions, for example, on whether or not free will is incompatible with determinism. Despite their variability, philosophical intuitions are often tremendously compelling to those who hold them: to explain his difference in opinion with Lewis on compatibilism, van Inwagen writes "I suppose my best guess is that I enjoy some sort of philosophical insight (I mean in relation to these three particular theses) that, for all his merits, is somehow denied to Lewis. And this would have to be an insight that is incommunicable--at least I don't know how to communicate it--for I have done all I can to communicate it to Lewis, and he has understood perfectly everything I have said, and he has not come to share my conclusions." Experimental psychologists suggest that philosophical intuitions not only show individual variation, but might also be correlated with factors like gender or ethnicity, cause for additional concern about their reliability.
There are several approaches to the problem of the instability of philosophical intuitions. To give a recent example, Jennifer Nagel has an interesting paper where she shows that the types of instability found in epistemic intuitions (e.g., Gettier cases) are also found in perceptual judgments, such as susceptibility to perceptual illusions. She also argues that some of the earlier studies on purported effects of ethnicity in intuitions about what knowledge is are methodologically faulty. She refers to an ongoing study by herself and others that indicates, pace the original studies on Gettier cases, that East Asians and westerners have similar intuitions.
What I find truly fascinating is that, despite extensive research on philosophical intuitions in experimental philosophy or metaphilosophy, we have little idea what the psychological basis of philosophical intuitions might be. Jennifer Nagel argues they are akin to perception. But whereas we have a good psychological account of perception, we lack a good psychological account of philosophical intuition. This makes philosophical disagreement all the more puzzling and hard to make sense of.
Alva Noë has a recent post on gender, commenting on some of the experimental results described in Cordelia Fine’s Delusions of Gender (some readers may recall that John Protevi and I are huge fans of her work, and of this book in particular). (Btw, Noë’s post even got linked by Leiter – it’s great to see Leiter drawing attention to gender issues.) I quote from Noë’s post:
Conjure before your mind the image of a physics professor. Imagine what his life is like. Now pretend, for a few moments, that you are that person. Try to get a feel for what it is like to be him.
Now let's start anew. This time think of a cheerleader. Picture her; imagine what her life is like. Now pretend to be her. Imagine what it is like to be her.