Over the last week, there have been quite a few blog posts prompted
by Tim Williamson’s recent critique of experimental philosophy in his review of
J. Alexander’s Experimental Philosophy.
In particular, at NewAPPS Eric Schliesser and Berit Brogaard shared some of their
views on the debate. Here, however, I want to discuss a post by Eric
Schwitzgebel at Splintered Mind, as I think he identifies an important and
overlooked component of the whole debate. Eric puts forward the distinction
between X-Phi in a narrow and in a wide sense. The narrow conception can be
characterized as the work canonically identified as "experimental philosophy": it surveys
ordinary people's judgments (or "intuitions") about philosophical
concepts, and it does so by soliciting people's responses to questions about
hypothetical cases. The wide conception is more difficult to define, and Eric
basically offers a definition by exclusion:
In this broad sense, philosophers who do empirical work aimed at addressing traditionally philosophical questions are also experimental philosophers, even if they don't survey people about their intuitions.
(I’ve been through a ridiculously busy period of work-related traveling and thus scarce blogging, and in the next four weeks I’m supposed to be on holiday, so again scarce blogging. But there is still one topic I really want to discuss before the summer break, so here it is.)
Here are a couple of brain-teasers for your amusement on this Monday morning/afternoon (depending on your time zone):
(1) A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? _____ cents
(2) If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets? _____ minutes
(3) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake? _____ days
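For those who want to check their answers (spoiler alert!), here is a quick worked computation, with the tempting intuitive responses noted for contrast; the snippet is my own sketch, not part of the original puzzle set:

```python
# Worked answers to the three teasers, next to the common
# intuitive-but-wrong responses.

# (1) ball + (ball + 1.00) = 1.10  =>  2*ball = 0.10  =>  ball = 0.05
ball = (1.10 - 1.00) / 2
print(f"(1) the ball costs {ball * 100:.0f} cents (intuitive answer: 10)")

# (2) 5 machines / 5 widgets / 5 minutes means each machine makes one
# widget in 5 minutes, so 100 machines make 100 widgets in 5 minutes.
print("(2) 5 minutes (intuitive answer: 100)")

# (3) The patch doubles daily, so it covered half the lake exactly one
# day before covering all of it: day 47.
print("(3) 47 days (intuitive answer: 24)")
```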
Just now on NPR, there was a discussion about toddlers and iPads that could have really used a Heideggerian intervention. The issue was, more or less, what is happening when you give a 2-year-old an iPad and they get completely absorbed for 5 hours straight? Is this good for them or not? And does it help them to learn what they need to learn in order to mature into smart, productive kids and adults? NPR seems to love this stuff; there’s a shorter article on the same topic here.
A range of experts was consulted, most of whom said that we don’t have enough (empirical) research to answer these questions yet, but that we shouldn’t panic – we just need to make sure that kids get a balance of screen time and face-to-face interaction with other people. But the question that started the whole discussion was a father’s question about what is going on for his son when he “zones out” in front of the iPad. This question remained unaddressed, as far as I could tell from my own zoning in and out of the radio discussion. But isn’t this basically a matter of Benommenheit, or captivation, literally “being taken”: being absorbed in an object to the point where everything else fades away?
I have written about our case study of a person with acquired synesthesia and savant syndrome in an earlier post on this blog. To make a long story short, JP was hit on the head in a mugging incident and acquired traumatic brain injury.
After the incident he started experiencing the world in terms of geometrical figures. He had also lost his ability to see smooth boundaries and smooth motion. He sees objects as separated from their surroundings in terms of tiny tangent and secant lines. He experiences motion in picture frames. When objects are moving relative to him or he is moving relative to objects, three-dimensional geometrical figures form before his eyes.
Right after the incident he started drawing some of these images by hand. They turned into beautiful pieces of art that have received several awards. After some elementary math training following the accident, JP also experienced automatic visual imagery in response to certain mathematical formulas.
Ingrid Robeyns, professor of practical philosophy at the Erasmus University in Rotterdam, is known among other things for her work on the capability approach (see her SEP entry on the topic, and her review of Martha Nussbaum's Creating Capabilities), and as a blogger at the interdisciplinary blog Crooked Timber. This week, she will be running a series of posts on autism at Crooked Timber -- the first one is here, the second one here. Ingrid is herself the mother of an autistic child, and the combination of philosophical insight with her first-person experience is bound to yield a very interesting perspective on the topic.
Autism is a topic with many important philosophical implications, ranging from theories of cognition and philosophy of mind to ethics. So I for one look forward to the upcoming posts, and I suspect that many NewAPPS readers will be equally interested. Go check it out; in fact, today is World Autism Awareness Day, so as good a day as any!
With the growth of controversies conducted through blogs, the really existing norms in various scientific disciplines can sometimes be revealed (perhaps unintentionally). In this blistering post, Yale psychologist John A. Bargh, Ph.D., criticizes a study that had not replicated his earlier results. Here I ignore the substance of his charges (for useful criticism see here). In his criticism he vehemently attacks the online journal PLoS ONE. But he follows with a most revealing, self-undermining comment: "If I'd been asked to review it (oddly for an article that purported to fail to replicate one of my past studies, I wasn't) I could have pointed out at that time the technical flaws." The parenthesis teaches us that the (once-standard?) norm among the peer-reviewed journals in his niche is that if one is targeted (and high status?) one can expect to be the referee. Perhaps the vehemence of the little spat is indicative that an old-boys-network is on the way out? [Hat-tips to Bryce Huebner and Antti Kauppinen on Facebook.]
Many readers will have already seen Jess Prinz’s recent blog post criticizing a psychological study defending the Male Warrior hypothesis, according to which men evolved to seek out violent conflicts in order to get women. He now has a reply to the objections raised by two other bloggers, one of them an author of the study (H/T Feminist Philosophers). I’m not sure this is appropriate language for blogging, but I just can’t help myself: Prinz is really kicking ass; there is no better way to describe it. Some excerpts:
One of the subjects I work with, JP, has acquired synesthesia and acquired savant syndrome. This happened as a result of a brutal assault in 2002, during which he was kicked and hit on the head. He was subsequently diagnosed with a bleeding kidney and an unspecified head injury. What the doctors didn't know was that JP no longer saw the world the way he used to. Objects suddenly did not have smooth boundaries. Things no longer moved smoothly. Motion took place in picture frames. It looked like someone paused and unpaused the flow of the world very rapidly. Even more amazing: JP was suddenly able to see vivid fractal images of objects with a fractal structure (such as broccoli).
JP's response to his new way of seeing the world was to withdraw from it. He spent the following three years in his apartment and refused to leave unless it was strictly necessary. After three years in complete isolation JP figured that he would try to draw what he saw, so he could make people understand him. He started drawing. And he continued. He drew and drew and drew, using only a pencil, a ruler and a compass. The results were beautiful hand-drawn fractal-like images. JP didn't know then that he was the first in the world to hand-draw mathematical fractals and that he would later win prizes for his drawings. He didn't even know what he was drawing, except that it was what he saw.
Almost a year ago I wrote a post on the dubious scientific status of psychoanalysis. One might think that this is an old and dated Popperian question, but in view of the influential position still occupied by psychoanalysis at least in some quarters, it remains a topical issue. Indeed, via the Feminist Philosophers I came across this NYT article on a documentary which heavily criticizes psychoanalytic approaches to autism in France.
According to the article, psychoanalysis remains the standard approach to autism there, but not for particularly good reasons. In fact, the results seem to be quite discouraging (for example, a much smaller percentage of children with an autism diagnosis are sufficiently autonomous to be able to attend school in France than in e.g. the UK), and yet the grip of psychoanalysis remains strong – needless to say, arguably to the disadvantage of the children in question and their caregivers.
In the Feminist Philosophers’ post there is also a link to the documentary; it is well worth watching, but also quite depressing.
Another well-worn example bites the dust? You remember that famous study in which the participants, primed with words connoting agedness, walked more slowly when leaving the lab?
A new study by the Belgian team of Stéphane Doyen, Olivier Klein, Cora-Lise Pichon, and Axel Cleeremans not only failed to replicate the effect, but also appeared to show that the effect observed in the original study was owing to the experimenters’ expectations.
This has been going around the internet over the last couple of days, but for those who have not seen it yet: The Nation has an excellent overview article of the Hauser affair, by distinguished psychology professor Charles Gross. Let me quote some of the concluding paragraphs, which discuss in particular the damaging effect of the affair on the whole field of animal cognition, and the secretive way in which the investigations have been handled.
As mentioned before, recently I read Cordelia Fine’s A Mind of Its Own, a highly informative and accessible account of some of the traits of human psychology, as documented by empirical research, indicating that our cognitive and emotional apparatus is highly unreliable. From the introduction:
[…] the truth of the matter – as revealed by the quite extraordinary and fascinating research described in this book – is that your unscrupulous brain is entirely undeserving of your confidence. It has some shifty habits that leave the truth distorted and disguised. (p. 2)
The rhetoric is quite (too?) strong, and one may raise an eyebrow or two at the conflation of brain with human cognition and psychology generally speaking. Nevertheless, the evidence presented by Fine is compelling and unsettling. The chapters have the following titles: the vain brain, the emotional brain, the immoral brain, the deluded brain, the pigheaded brain, the secretive brain, the weak-willed brain, the bigoted brain, and finally the vulnerable brain. (You get the picture…) I highly recommend the book, especially for philosophers who still hold on to the idea that human cognition is for the most part reliable and truth-conducive.
As many of you have probably already seen, Rebecca Kukla has an excellent post up at Leiter’s blog on the effects of implicit biases, specifically as affecting hiring practices. However, as she is done with her job of guest-blogger over there, the post is not open for comments, and with Rebecca’s agreement, I figured it might be useful to have a discussion here.
Rebecca is making very good points about the effects of implicit biases in hiring practices, and in particular how hard (in fact, nearly impossible) it is to shield yourself from them if you are on the decision-making side of things. Now, as it turns out, one of the books I read over my vacation last week was Cordelia Fine’s A Mind of Its Own (as mentioned before, co-blogger John Protevi and I are big fans of her work). One of the chapters of the book is ‘The Bigoted Brain’, and she discusses precisely some of the findings from experimental psychology (on the ways implicit biases operate) that Rebecca refers to. As she mentions, one of the surprising features of implicit biases is that, if you actively try to suppress them, they in fact re-emerge later on with additional strength. (In fact, it is not so surprising given that suppressing specific thoughts is likely to have a priming effect.) Here’s an excerpt from the book:
I’m just back from an extremely enjoyable family vacation in sunny Fuerteventura, which also means that I am swamped by a zillion work-related things that need to be attended to asap. I also want to resume blogging, and have a few posts already lined up in my head (in particular, one on the ‘climate for women’ discussion which has re-emerged), but where do I find time for all this? (One almost regrets going on holiday and forgetting about it all for a while, given the harsh conditions upon return!)
But anyway, today I came across two interesting links, via the New Scientist twitter feed, and thought it might be a good topic to resume blogging. As it turns out, Steven Pinker’s most recent interest is the history of violence, which he takes to be a privileged window for his long-standing interest in human nature (broadly construed). In his new book The Better Angels of our Nature, he claims that there has been a significant decrease in homicides and violent deaths over the centuries: ‘Humans are less violent than ever’. This becomes particularly clear if the death tolls of historical occurrences of horror are estimated on the basis of the human population at the time, and what the proportion would mean in terms of the current human population in the world. This was done by finding the per-capita death rate at the midpoint of the event's range of years, based on population estimates from McEvedy and Jones.
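The normalization itself is simple enough to sketch; here is a minimal illustration (the numbers are placeholders of my own, not figures from Pinker or from McEvedy and Jones):

```python
# Scale a historical death toll to its present-day equivalent: the toll
# that would kill the same *proportion* of humanity today.
def equivalent_toll(deaths, world_pop_then, world_pop_now=7.0e9):
    return deaths / world_pop_then * world_pop_now

# E.g., an event killing 1 million people when the world held 400 million
# corresponds, proportionally, to 17.5 million deaths today.
print(f"{equivalent_toll(1e6, 4e8):,.0f}")  # 17,500,000
```

By this measure, a premodern massacre can outrank a twentieth-century war even if its absolute toll was far smaller.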
A few weeks ago, Helen reported on a wonderful conversation she had with her 7-year-old daughter on the ontological status of numbers. Helen also remarked that the children of scientists and researchers are often the subject of all kinds of ‘experiments’ unbeknownst to them. I must confess that I’ve performed a wide range of cognitive ‘tests’ on my kids, but before social services are called I can assure you all that they greatly enjoyed it and saw it all as a fun game. I have in particular done the false belief task with both, at different ages, and can report that they fall squarely within the expected results!
Now, as some readers may recall, I am working quite extensively on reasoning, deductive reasoning in particular, both from a philosophical and a psychological perspective. So I’ve been through most of the voluminous literature on the psychology of reasoning (my own account of the findings can be found in chapter 4 of my forthcoming book, draft available here), and as is well known, in experiments with deductive tasks, participants overwhelmingly fail to give the ‘right’ response from the point of view of the canons of deduction as traditionally construed. And yet, these studies were almost all conducted with participants with a fairly homogeneous educational background, namely undergraduates of North American and Western European universities. My hypothesis is that even the modicum of ‘logical competence’ that does emerge from the experiments is by and large a product of the formal education they received. To test this hypothesis, one would have to isolate the education component and thus undertake the same or similar experiments with participants with a very different educational background, in particular unschooled subjects. Unfortunately, very few studies of this kind have been conducted, but the ones that have do suggest that unschooled participants tend to engage with the task materials in *very* different ways.
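To make the deductive benchmark concrete, here is a small illustration of the norm participants are measured against; the example is of my own choosing, not one of the experimental tasks:

```python
# A brute-force validity checker: an argument is valid iff no truth
# assignment makes all premises true and the conclusion false.
from itertools import product

def valid(premises, conclusion):
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda a, b: (not a) or b

# Modus ponens: from (p -> q) and p, infer q. Valid.
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))  # True

# Affirming the consequent: from (p -> q) and q, infer p. Invalid,
# yet a fallacy that participants frequently endorse.
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))  # False
```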
Every reasonably neurologically healthy person has some fear of public speaking. How much varies hugely from individual to individual. But I suspect that a strong fear of it is very common among philosophers. Why? Because the majority of people who enjoy receiving a lousy salary in return for an insane amount of work have got to have some very good reasons. One good reason, I believe, is that they enjoy working in the comfort of their own home and enjoy the solitude and the control they have over their own time and the direction of their work. They are good old-fashioned introverts, who don't truly enjoy large assemblies of people but who may have adjusted to them and who may even come across as extroverts on a good day. Do introverts fear public speaking more than extroverts? I don't know. But I believe that they do. If you dislike large groups of people or prefer your own company to that of other people, it is not likely that you are by nature super-comfortable speaking to a large group of people. That said, I don't want to rule out that some people went into the profession because of the possibility of fame and attention.
As for my own case, I started out with an extreme fear of public speaking. I recall taking a large lecture class in molecular biology in my first year of college. Despite it being a large lecture class, we were all expected to do a presentation. I hadn't spoken in front of a lot of people before, so I had no idea that I had a fear of public speaking. I was assigned a topic, and over-prepared. I made about 50 slides. This was before the age of PowerPoint. So my slides were the old-fashioned transparent kind that you put on an overhead projector. They were all lying in my lap in the correct order when I was sitting in the lecture hall waiting for the professor to call my name. I felt my heart pump very fast and hard even before he called my name. When he called on me, I stumbled down the steps to the front of the lecture hall. My hands were shaking. My legs felt like rubber. Then, as I was about to put the first slide on the overhead projector, I dropped all the slides on the floor. The 200 students in the lecture hall were not making a single noise. It was so quiet that I could hear my heart pound. I had no idea what to do. Like an idiot, I hadn't numbered the slides, and now they were all lying in a big mess on the floor. No one said anything, not even the professor. I collected the slides from the floor in a big messy pile in my arms, mumbled that I just couldn't do this, and then went back to my seat. No one said anything. The professor started lecturing like nothing had happened. I felt terrible.
In several of my posts, I mentioned the book on formal languages that I've been working on for the last few years. I now have a draft of the book ready for (moderate!) public consumption, available here. The two final chapters are still missing, but the draft is already something of a coherent whole, or so I hope.
Many people have kindly expressed their interest in checking out the material, hence my decision to make it available online at this point, despite the fact that it is still a somewhat rough draft (references are still a mess). Needless to say, comments are always welcome :)
A new paper by Nieuwenhuis, Forstmann, and Wagenmakers in Nature Neuroscience argues that roughly half of all papers in five top neuroscience journals assert differences between the effects of interventions when the most they are entitled to assert is that one intervention has had a statistically significant effect and another has not. Their argument is explained very well in a Guardian article by Ben Goldacre. The authors write in their introduction:
Are all these articles wrong about their main conclusions? We do not think so. First, we counted any paper containing at least one erroneous analysis of an interaction. For a given paper, the main conclusions may not depend on the erroneous analysis. Second, in roughly one third of the error cases, we were convinced that the critical, but missing, interaction effect would have been statistically significant (consistent with the researchers’ claim), either because there was an enormous difference between the two effect sizes or because the reported methodological information allowed us to determine the approximate significance level. Nonetheless, in roughly two thirds of the error cases, the error may have had serious consequences.
So the headline should not be: “Half of Neuroscience Papers are Wrong”, but rather “Half of Neuroscience Papers are Insufficiently Well Argued/One-Third Need Fixing”. We’ll see what the headline-writers do…
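To see the fallacy in miniature, consider a toy computation; this is my own sketch, with invented numbers, not an analysis from the paper:

```python
# The "difference of significance" fallacy vs. the direct test of the
# difference. All numbers below are invented for illustration.
import math
from scipy import stats

# Hypothetical summary statistics: an intervention's effect in two groups.
effect_a, se_a = 0.8, 0.3   # significant on its own
effect_b, se_b = 0.4, 0.3   # not significant on its own

# WRONG: "A is significant, B is not, therefore A and B differ."
p_a = 2 * stats.norm.sf(abs(effect_a / se_a))
p_b = 2 * stats.norm.sf(abs(effect_b / se_b))
print(f"p(A) = {p_a:.3f}, p(B) = {p_b:.3f}")   # ~0.008 vs ~0.182

# RIGHT: test the difference between the two effects directly.
z_diff = (effect_a - effect_b) / math.sqrt(se_a**2 + se_b**2)
p_diff = 2 * stats.norm.sf(abs(z_diff))
print(f"p(A - B) = {p_diff:.3f}")   # ~0.346: no demonstrated difference
```

The point is that "A is significant and B is not" does not license "A differs from B"; the interaction itself must be tested.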
A sidelight: the authors, whose affiliations are Dutch, use “intuition” in more-or-less the philosopher’s sense. Is that use diffusing into the world outside philosophy?
The historian’s attitude toward his or her sources, when attempting to establish matters of fact, is one of tempered but universal skepticism. The same applies to the history of the present. For example:
Don’t depend on popularizations for your knowledge of neuroscience (see also the previous item in this blog for a similar issue concerning the biology of sex). A recent headline in several newspapers and online sources reads something like this: “Magnetic Pulses To The Brain Make It Impossible To Lie”. Wow! That’s exciting! And scary too…
Jeffrey Zacks, in Psychology here at Washington University, and his collaborators have been studying human event perception for the last ten years. A recent paper, in press at the Journal of Cognitive Neuroscience and available at his website (pdf), argues that perceptual event boundaries occur in experience at points where prediction becomes difficult.
[…] working memory representations of the current event guide perceptual predictions about the immediate future [less than 10 sec]. These predictions are checked against what happens next in the perceptual stream; most of the time perceptual predictions about what happens next are accurate. From time to time, however, activity becomes less predictable, causing a spike in prediction errors. These spikes in prediction error are fed back to update working memory and reorient the organism to salient new features in the environment. According to this model, the increase in prediction error and consequent updating results in the subjective experience of an event boundary in perceptual experience.
The tenets of Zacks’s view are (i) that current experience consists in representations actively maintained in working memory; and (ii) that present experience consists partly in anticipations of future experiences. Memory, insofar as it enters the stream of experience, would on this account be proleptic, forward-looking; mere recall has no place.
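As a rough illustration of the prediction-error mechanism, here is a toy sketch of my own (not Zacks’s actual model): predict the next value in a stream from a simple running estimate, and mark an event boundary wherever the error spikes.

```python
# Toy event segmentation: boundaries fall where prediction breaks down.
def segment(stream, threshold=2.0):
    boundaries, prediction = [], stream[0]
    for t, observed in enumerate(stream[1:], start=1):
        error = abs(observed - prediction)
        if error > threshold:
            boundaries.append(t)       # prediction-error spike: new event
            prediction = observed      # reset the working-memory model
        else:
            prediction = 0.8 * prediction + 0.2 * observed  # slow update
    return boundaries

# Two "events": a stable low regime, then a sudden shift to a high one.
stream = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]
print(segment(stream))  # [4]: the boundary where prediction failed
```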
Aristotle says that animals don’t recollect: they don’t search their memories for information about the past (De memoria ii, 453a8, Hist. anim. 488b26; see Grote, Aristotle 476). On what grounds he said that I don’t know, but whether it was a shrewd surmise or a lucky guess he seems to have been right. Aristotle also put forward a version of what became the predominant philosophical picture of memory—that it consists in the registering of an “impression” which is subsequently to be recalled, as if the mind had a filing-card drawer or a mental museum (such as figured in Ancient and Renaissance artes memoriæ). That picture, attractive though it is, may well be fundamentally misleading. Modelling biological memory on the specifically human capacity that consists in voluntary recall of items subject to intersubjective standards of accuracy (e.g., the procedures of memorization employed by the reciters of epic poetry, to take an example Aristotle would have known) may turn out to be yet another case where intuition has led us astray.
A predominantly proleptic function for working memory, moreover, fits nicely with theories according to which perception requires activity on the part of the perceiver, so that the perception of red, for example, to use Mohan’s example (taken from Justin Broackes), is effectively the perception of a pattern of sensations that arises from the perceiver’s having regarded the red thing from several perspectives—a feat normally made possible only by moving. Event perception too may be governed, if not by activity itself, then by anticipations of activity.
Many NewAPPS bloggers (Helen, John, Mohan, myself) are favorably disposed towards analyses of human cognition which could be described as ‘naturalized’ in that data from empirical sciences (psychology, biology, cognitive science) play an important role.
Now, one crucial aspect in analyses of this sort in general is the issue of continuity and discontinuity between human and non-human animals. We are all familiar with Darwin’s idea that the difference between ‘us and them’ (Pink Floyd, anyone?) is “one of degree and not of kind”, and this seems to be the basic assumption underlying much of the work on non-human animal cognition that has the goal of producing a better understanding of human cognition. (Naturally, there is also the independent project of studying non-human animal cognition and behavior as a goal in and of itself.)
The two main camps are: those who marvel at the complexity of non-human animal cognition and deplore our tendency towards species chauvinism (fondly referred to as ‘monkey-huggers’ sometimes); and those who emphasize the abysmal distance between human and non-human cognition (whom I will refer to as ‘people-huggers’). (I’m using ‘cognition’ in a broad sense here, meant to include also work on e.g. sociability by someone like Frans de Waal.) And among people-huggers, at least some (but not all) end up defending a position that smacks of “We humans are so damn special and unique! There’s really nothing like us.” (also known as 'humaniqueness')
One aspect that is often (though not always) overlooked is the fact that there have been a bunch of closely-related cousins of ours roaming around the Earth at different times, but as it turns out they are all gone now: the missing hominids.
UPDATE: I've changed the term used to describe the fourth category in the taxonomy below from 'conceptual analysis' to 'conceptual reflection'. I hope the new term is better able to cover the many approaches suggested by commenters which did not seem to fit the original description in a straightforward way.
In light of the very interesting methodological discussions we’ve been having here at New APPS on the relations between physics and metaphysics, I’d like to put forward a tentative taxonomy of different strands within philosophical methodology. I suspect it can also be useful for discussions on the analytic vs. continental divide and its overcoming, which is also a recurrent theme in this blog.
Indeed, looking at past and present work in philosophy (and trying to be as encompassing as possible), it would seem that we can identify four main strands of methods used for philosophical analysis:
Formal methods – these correspond to applications of mathematical and logical tools for the investigation of philosophical issues. As examples one could cite the development of possible world semantics for the analysis of the concepts of necessity and possibility, applications of the Bayesian framework to issues in epistemology (giving rise to so-called formal epistemology), Carnapian explication, and many others.
Historical methods – they rely on the assumption that, to attain a better understanding of a given philosophical concept/problem, it is useful (or even indispensable) to trace its historical origins in philosophical theorizing. Of course, the study of the history of philosophy has intrinsic value as such (emphasis on ‘history’) but at this point I’m interested in what Eric Schliesser has once described as ‘instrumental history of philosophy’ (emphasis on ‘philosophy’).
Empirical methods – these are the methodological approaches that systematically bring in elements from empirical sciences, such as the sciences of the mind (particularly relevant for philosophy of mind, epistemology, but to my mind also for philosophy of logic and mathematics), physics (possibly relevant for metaphysics), biology (arguably relevant for ethics, and everywhere else where evolutionary concepts come into play) etc. Sometimes this approach is described as ‘naturalistic’, but as we know there are (too?) many variations of the concept of naturalistic philosophy (many self-described naturalistic approaches are not sufficiently empirically-informed to my taste).
Conceptual reflection – arguably the most traditional philosophical method, consisting in unpacking concepts and drawing implications, introducing new and hopefully useful concepts, problems, conceptual frameworks etc.
So we seem to have a plurality of methods actually being used for philosophical theorizing. Are they all equally legitimate and adequate, both in general and in specific cases? I submit that the correct response to this plurality is methodological pluralism.
Watch Naif Al-Mutawa explain the vision behind his comic The 99. Behind all the jokes and business promotion (not to mention cultural studies in action), Naif explains his way of promoting an evolving understanding of Islam within Islamic cultures (and outside of these). [UPDATE ADDED LATER: must have been cartoon day in philosophy blogland.]
Philosophy, since its inception, has been characterized by persistent disagreements. The situation in philosophy is perhaps worse than in other formalized disciplines, such as scientific or mathematical practice. Peter van Inwagen argued that it would indeed be "hard to find an important philosophical thesis that, say, ninety-five percent of, say, American analytical philosophers born between 1930 and 1950 agreed about in, say, 1987."
I do not have a clear view of the situation in 1987, but the PhilPapers survey suggests that van Inwagen may be on the right track: the strongest inclinations are towards non-skeptical realism (81%), scientific realism (75%), and atheism (72.8%). To what extent is disagreement in philosophy cause for concern? Suppose, say, that 95% or even 100% of philosophers had been atheists or scientific realists; would this count as compelling proof against the existence of God or in favor of the existence of unobservable scientific entities? As long as we don't really have a good account of what philosophical intuitions are, it is hard to make sense of this.
An extensive part of the disagreement in philosophy stems from people having differing intuitions, for example, on whether or not free will is incompatible with determinism. Despite their variability, philosophical intuitions are often tremendously compelling to those who hold them: to explain his difference in opinion on compatibilism with Lewis, van Inwagen writes "I suppose my best guess is that I enjoy some sort of philosophical insight (I mean in relation to these three particular theses) that, for all his merits, is somehow denied to Lewis. And this would have to be an insight that is incommunicable -- at least I don't know how to communicate it -- for I have done all I can to communicate it to Lewis, and he has understood perfectly everything I have said, and he has not come to share my conclusions." Experimental psychologists suggest that philosophical intuitions not only show individual variation, but might also be correlated with factors like gender or ethnicity, cause for additional concern about their reliability.
There are several approaches to the problem of the instability of philosophical intuitions. To give a recent example, Jennifer Nagel has an interesting paper where she shows that the types of instability found in epistemic intuitions (e.g., Gettier cases) are also found in perceptual judgments, such as susceptibility to perceptual illusions. She also argues that some of the earlier studies on purported effects of ethnicity on intuitions about what knowledge is are methodologically faulty. She refers to an ongoing study by herself and others that indicates, pace the original studies on Gettier cases, that East Asians and Westerners have similar intuitions.
What I find truly fascinating is that, despite extensive research on philosophical intuitions in experimental philosophy or metaphilosophy, we have little idea what the psychological basis of philosophical intuitions might be. Jennifer Nagel argues they are akin to perception. But whereas we have a good psychological account of perception, we lack a good psychological account of philosophical intuition. This makes philosophical disagreement all the more puzzling and hard to make sense of.
Alva Noë has a recent post on gender, commenting on some of the experimental results described in Cordelia Fine’s Delusions of Gender (some readers may recall that John Protevi and I are huge fans of her work, and of this book in particular). (Btw, Noë’s post even got linked by Leiter – it’s great to see Leiter drawing attention to gender issues.) I quote from Noë’s post:
Conjure before your mind the image of a physics professor. Imagine what his life is like. Now pretend, for a few moments, that you are that person. Try to get a feel for what it is like to be him.
Now let's start anew. This time think of a cheerleader. Picture her; imagine what her life is like. Now pretend to be her. Imagine what it is like to be her.
Here is a short report on the Extended Cognition Workshop, which just took place over the last days in Amsterdam. The general goal of the workshop was to bring together people interested in the concept of extended cognition in the spirit of ‘second wave EM’ (EM: extended mind) (Sutton) and ‘cognitive integration’ (Menary). The main characteristic of this general approach is emphasis on what is often described as the complementarity principle:
In extended cognitive systems, external states and processes need not mimic or replicate the formats, dynamics, or functions of inner states and processes. Rather, different components of the overall (enduring or temporary) system can play quite different roles and have different properties while coupling in collective and complementary contributions to flexible thinking and acting. (Sutton, ‘Exograms and Interdisciplinarity’)
The contrast is with versions of EM which emphasize the parity principle:
Cognitive states and processes extend beyond the brain and into the (external) world when the relevant parts of the world function in the same way as do unquestionably cognitive processes in the head. (Sutton, ‘Exograms and Interdisciplinarity’)
One of the good things that giving in to Twitter has brought me is following the Twitter stream of Massimo Pigliucci (@mpigliucci), who is professor of philosophy at Lehman College at CUNY, and an activist for many 'rationalist' causes (science education, critical thinking etc.). He runs the site Rationally Speaking, which is full of interesting material; Massimo does the kind of 'empirically-informed philosophy' that I am so keen on, and brings in a wide range of empirical data relevant for philosophical discussion. (He also posts excellent quotes on his Twitter stream, such as: "Philosophy is to the real world as masturbation is to sex." -K. Marx. "Don't knock masturbation, it's sex with someone I love." -W. Allen.)
His latest podcast is on the science and philosophy of happiness, and I've just had the pleasure to listen to it (pun intended or not intended, whatever...). In the podcast he and his co-host Julia Galef discuss the concept of happiness from a philosophical point of view, drawing mostly from the familiar ancient sources (Plato, Aristotle, Epicurus, the concept of eudaimonia etc.), complemented by an array of data coming from the recent field of 'happiness studies'. They do a very good job of outlining how the two perspectives can complement and enrich each other, while also having an impact on very tangible aspects of human life. Naturally, they also discuss rankings of the happiest countries in the world, the latest version of which received quite some attention (see here for Berit Brogaard's analysis, herself a national of the country that came out on top of the list, Denmark). It turns out that your typical 'happy' country is a Northern European country with high levels of social equality, a strong welfare system and widely available healthcare and education (but countries such as Canada and Australia also do very well, for similar reasons).
This is not so surprising, but Massimo and Julia also discuss some unexpected results of happiness studies, such as that having children seems to correlate negatively with short-term happiness (it's a tough job!). I highly recommend the podcast if you have some time to spare; after all, thinking about happiness is a worthy time-investment, one would think.
The Society for Women in Philosophy (SWIP) has just announced that the Distinguished Woman Philosopher Award of 2011 goes to Jennifer Saul (University of Sheffield). Previous winners include Sally Haslanger (2010), Ruth Millikan (2006) and Sara Ruddick (2002) (for a list of past recipients, see here). Here is the full announcement, from which I would like to single out the following statement (my emphasis):
Jenny Saul has demonstrated courage and leadership, and she is leading feminists forward to new ways of thinking and connecting.
Indeed, these are perhaps two of the most admirable features of Jenny’s work. Her work on the role of implicit biases in sexism (and other –isms) has significantly contributed to a new recasting of the issues: the most subtle, most ubiquitous and arguably most dangerous expressions of sexism are in fact related to highly unconscious cognitive processes. So it is not (only) a matter of opposing sexism on explicit, ideological grounds; perhaps more importantly, what needs to be addressed are these underlying mechanisms which even those who do not see themselves as sexist fall prey to. Among other things, this approach sheds new and important light on the issue of the blameworthiness of sexist behavior. Another important aspect of her work is that it can be described as ‘empirically-informed feminism’, which is a great development. (On a personal note: it is no secret to anybody that I am a fan of pretty much anything that is ‘empirically-informed’, and the move away from purely ideological/conceptual discourse towards empirically-informed analysis is one of the reasons why I felt compelled to become a bit of a feminist myself.)
If I had to bring only one journal to a desert island, it would probably be Behavioral and Brain Sciences; I am continuously amazed by the high quality and innovative nature of its contents. It works with the very interesting format of one target article per issue (usually a long, controversial and ambitious piece), followed by short commentaries by peers. One of the signs that it is a truly exciting journal is that commentators often come from different disciplines; in particular, it is very common to see philosophers commenting. The current issue, for example, is dedicated to Sue Carey’s new book The Origin of Concepts, and has commentaries by people such as Tyler Burge, Christopher Gauker, Edouard Machery, and Eric Margolis (just to mention some of the philosophers).
But today I would like to focus on the April issue of BBS, with a controversial target article by Hugo Mercier and Dan Sperber: ‘Why do humans reason? Arguments for an argumentative theory’ (an open-source pdf of the paper can be found here). Let me quote its abstract in full:
Psychologists who examine numerical cognition in young children and people from nonnumerate cultures have found that our default, unlearned mode of representing cardinalities is not a linear mental number line, but a logarithmic one. In other words, our intuitive sense of numerosities roughly corresponds to the natural logarithm of those numbers. Linear numerical representations, such as the natural numbers and the way we place them on rulers and other linear scales, are cultural inventions. As Dehaene et al. put it: "The concept of a linear number line appears to be a cultural invention that fails to develop in the absence of formal education."
The research by developmental psychologist Robert Siegler indicates that children learn to make linear representations of number by prolonged cultural exposure, in school and in (more informal) home settings like playing board games. Siegler and Booth (2004) found that this linearity of number lines appears gradually. In one of their experiments, they gave five- to seven-year-olds an unscaled number line with 0 at the left side and 100 at the right, and asked them to place various numbers on this number line. Younger children typically placed small numbers too far to the right; for example, they tended to place the number 10 in the middle of the scale. Conversely, they tended to underestimate the distance between the higher numbers, placing 70 far too close to 100. The older children, on the other hand, made much more linear estimations. Intriguingly, this process is repeated as children learn to deal with cardinalities up to 1000. Siegler and Opfer gave children between 7 and 11 years of age lines from 0 to 1000, and again found that the younger children tended to have logarithmic representations, and the older ones linear representations.
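To see how well a logarithmic mapping reproduces these placements, here is a small computation (an illustration of my own, not the Siegler and Booth procedure): put each number n at 100 * log(n)/log(100) and compare with its linear position.

```python
# Logarithmic vs. linear placement of numbers on a 0-100 line.
import math

for n in [10, 30, 50, 70]:
    log_pos = 100 * math.log(n) / math.log(100)
    print(f"n = {n:2d}: logarithmic ~ {log_pos:4.1f}, linear = {n}")
# n = 10 lands at 50, the middle of the line, just as the younger
# children placed it; 70 lands near 92, crowded up against 100.
```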
I wonder whether we could generalize the observation that linear numerical representations need to be relearned. In other words, is it the case that we need to re-calibrate our mental representation of magnitudes each time we learn to deal with higher numbers? Leiter's blog recently drew my attention to the following site, which gives you a sense of how much one billion dollars can buy. The shopping list of expensive and useless items (including a private island, a plane, some yachts, etc.) is impressive, and yet the billion dollars is not even halfway spent. Most of us have no idea what a billion dollars is, nor any intuitive feeling for the difference between one billion and one million. It's all equally mind-boggling.
If the default mode of our mental representation of numerosities is logarithmic, most of us have no idea of how much some people are making, and what a huge difference it would make if they were fairly taxed. In other words, one reason that we do not protest against extreme wealth is that we simply do not have an intuitive grasp of how extreme it is.
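One way to force the linear picture on ourselves is a simple back-of-the-envelope calculation (the spending figure is my own, chosen for illustration):

```python
# How long do a million and a billion dollars last at $1,000 per day?
million, billion = 10**6, 10**9
daily = 1_000

print(f"$1M lasts {million / daily / 365.25:,.1f} years")  # ~2.7 years
print(f"$1B lasts {billion / daily / 365.25:,.1f} years")  # ~2,737.9 years
```

A thousand dollars a day exhausts a million in under three years; the same habit takes more than twenty-seven centuries to exhaust a billion.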