This has been going around the internet over the last couple of days, but for those who have not seen it yet: The Nation has an excellent overview article of the Hauser affair, by distinguished psychology professor Charles Gross. Let me quote some of the concluding paragraphs, which discuss the damaging effect of the affair on the whole field of animal cognition, and in particular the secretive way in which the investigations have been handled.
As mentioned before, recently I read Cordelia Fine’s A Mind of its Own, a highly informative and accessible account of some of the traits of human psychology, as documented by empirical research, indicating that our cognitive and emotional apparatus is highly unreliable. From the introduction:
[…] the truth of the matter – as revealed by the quite extraordinary and fascinating research described in this book – is that your unscrupulous brain is entirely undeserving of your confidence. It has some shifty habits that leave the truth distorted and disguised. (p. 2)
The rhetoric is quite (too?) strong, and one may raise an eyebrow or two at the conflation of brain with human cognition and psychology generally speaking. Nevertheless, the evidence presented by Fine is compelling and unsettling. The chapters have the following titles: the vain brain, the emotional brain, the immoral brain, the deluded brain, the pigheaded brain, the secretive brain, the weak-willed brain, the bigoted brain, and finally the vulnerable brain. (You get the picture…) I highly recommend the book, especially for philosophers who still hold on to the idea that human cognition is for the most part reliable and truth-conducive.
As many of you have probably already seen, Rebecca Kukla has an excellent post up at Leiter’s blog on the effects of implicit biases, specifically as affecting hiring practices. However, as she is done with her job of guest-blogger over there, the post is not open for comments, and with Rebecca’s agreement, I figured it might be useful to have a discussion here.
Rebecca is making very good points about the effects of implicit biases in hiring practices, and in particular how hard (in fact, nearly impossible) it is to shield yourself from them if you are on the decision-making side of things. Now, as it turns out, one of the books I read over my vacation last week was Cordelia Fine’s A Mind of its Own (as mentioned before, co-blogger John Protevi and I are big fans of her work). One of the chapters of the book is ‘The Bigoted Brain’, and she discusses precisely some of the findings from experimental psychology (on the ways implicit biases operate) that Rebecca refers to. As she mentions, one of the surprising features of implicit biases is that, if you actively try to suppress them, they in fact re-emerge later on with additional strength. (In fact, it is not so surprising given that suppressing specific thoughts is likely to have a priming effect.) Here’s an excerpt from the book:
I’m just back from an extremely enjoyable family vacation in sunny Fuerteventura, which also means that I am swamped by a zillion work-related things that need to be attended to asap. I also want to resume blogging, and have a few posts already lined up in my head (in particular, one on the ‘climate for women’ discussion which has re-emerged), but where do I find time for all this? (One almost regrets going on holiday and forgetting about it all for a while, given the harsh conditions upon return!)
But anyway, today I came across two interesting links, via the New Scientist twitter feed, and thought it might be a good topic to resume blogging. As it turns out, Steven Pinker’s most recent interest is the history of violence, which he takes to be a privileged window onto his long-standing interest in human nature (broadly construed). In his new book The Better Angels of our Nature, he claims that there has been a significant decrease in homicides and violent deaths over the centuries: ‘Humans are less violent than ever’. This becomes particularly clear if the death tolls of historical atrocities are estimated relative to the human population at the time, and then expressed as the equivalent proportion of the current world population. This was done by finding the per-capita death rate at the midpoint of each event's range of years, based on population estimates from McEvedy and Jones.
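The per-capita adjustment described above amounts to a simple rescaling; here is a minimal sketch, with purely illustrative figures rather than Pinker's actual numbers:

```python
# Scale a historical death toll to its equivalent share of today's
# world population. All figures below are illustrative assumptions.

def adjusted_toll(deaths, world_pop_then, world_pop_now=7_000_000_000):
    """Deaths as a fraction of the population at the time of the event,
    re-expressed against the current world population."""
    return deaths * world_pop_now / world_pop_then

# A hypothetical event killing 1 million people when the world held
# 200 million would correspond to 35 million deaths today:
print(adjusted_toll(1_000_000, 200_000_000))  # 35000000.0
```

The midpoint of the event's date range simply determines which population estimate (e.g. from McEvedy and Jones) supplies `world_pop_then`.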
A few weeks ago, Helen reported on a wonderful conversation she had with her 7 year-old daughter on the ontological status of numbers. Helen also remarked that the children of scientists and researchers are often the subject of all kinds of ‘experiments’ unbeknownst to them. I must confess that I’ve performed a wide range of cognitive ‘tests’ on my kids, but before social services are called I can assure you all that they greatly enjoyed it and saw it all as a fun game. I have in particular done the false belief task with both, at different ages, and can report that they fall squarely within the expected results!
Now, as some readers may recall, I am working quite extensively on reasoning, deductive reasoning in particular, both from a philosophical and a psychological perspective. So I’ve been through most of the voluminous literature on the psychology of reasoning (my own account of the findings can be found in chapter 4 of my forthcoming book, draft available here), and as is well known, in experiments with deductive tasks, participants overwhelmingly fail to give the ‘right’ response from the point of view of the canons of deduction as traditionally construed. And yet, these studies were almost all conducted with participants having a fairly homogeneous educational background, namely undergraduates of North-American and Western European universities. My hypothesis is that even the modicum of ‘logical competence’ that does emerge from the experiments is by and large a product of the formal education they received. To test this hypothesis, one would have to isolate the education component and thus undertake the same or similar experiments with participants with a very different educational background, in particular unschooled subjects. Unfortunately, very few studies of this kind have been conducted, but the ones which have do suggest that unschooled participants tend to engage with the task materials in *very* different ways.
Every reasonably neurologically healthy person has some fear of public speaking. How much varies hugely from individual to individual. But I suspect that it is very common among philosophers. Why? Because the majority of people who enjoy receiving a lousy salary in return for an insane amount of work have got to have some very good reasons. One good reason, I believe, is that they enjoy working in the comfort of their own home and enjoy the solitude and the control they have over their own time and direction of their work. They are good old-fashioned introverts, who don't really truly enjoy large assemblies of people but who may have adjusted to them and who may even come across as extroverts on a good day. Do introverts fear public speaking more than extroverts? I don't know. But I believe that they do. If you dislike large groups of people or prefer your own company to that of other people, it is not likely that you by nature are super-comfortable speaking to a large group of people. That said, I don't want to rule out that some people went into the profession because of the possibility of fame and attention.
As for my own case, I started out with an extreme fear of public speaking. I recall taking a large lecture class in molecular biology the first year of college. Despite it being a large lecture class, we were all expected to do a presentation. I hadn't spoken in front of a lot of people before, so I had no idea that I had a fear of public speaking. I was assigned a topic, and over-prepared. I made about 50 slides. This was before the age of Powerpoint. So my slides were the old-fashioned transparent kind that you put on an overhead projector. They were all lying in my lap in the correct order when I was sitting in the lecture hall waiting for the professor to call my name. I felt my heart pump very fast and hard even before he called my name. When he called on me, I stumbled down the steps to the front of the lecture hall. My hands were shaking. My legs felt like rubber. Then as I was about to put the first slide on the overhead projector, I dropped all the slides on the floor. The 200 students in the lecture hall were not making a single noise. It was so quiet that I could hear my heart pound. I had no idea what to do. Like an idiot, I hadn't numbered the slides and now they were all lying in a big mess on the floor. No one said anything, not even the professor. I collected the slides from the floor in a big messy pile in my arms, mumbled that I just couldn't do this and then went back to my seat. No one said anything. The professor started lecturing like nothing had happened. I felt terrible.
In several of my posts, I mentioned the book on formal languages that I've been working on for the last few years. I now have a draft of the book ready for (moderate!) public consumption, available here. The two final chapters are still missing, but the draft is already something of a coherent whole, or so I hope.
Many people have kindly expressed their interest in checking out the material, hence my decision to make it available online at this point, despite the fact that it is still a somewhat rough draft (references are still a mess). Needless to say, comments are always welcome :)
A new paper by Nieuwenhuis, Forstmann, & Wagenmakers in Nature argues that roughly half of all papers in five top neuroscience journals assert differences between the effects of interventions when the most they are entitled to assert is that an intervention has had a statistically significant effect. Their argument is explained very well in a Guardian article by Ben Goldacre. The authors write in their introduction:
Are all these articles wrong about their main conclusions? We do not think so. First, we counted any paper containing at least one erroneous analysis of an interaction. For a given paper, the main conclusions may not depend on the erroneous analysis. Second, in roughly one third of the error cases, we were convinced that the critical, but missing, interaction effect would have been statistically significant (consistent with the researchers’ claim), either because there was an enormous difference between the two effect sizes or because the reported methodological information allowed us to determine the approximate significance level. Nonetheless, in roughly two thirds of the error cases, the error may have had serious consequences.
So the headline should not be: “Half of Neuroscience Papers are Wrong”, but rather “Half of Neuroscience Papers are Insufficiently Well Argued/One-Third Need Fixing”. We’ll see what the headline-writers do…
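The error at issue is worth making concrete: finding that effect A is significant while effect B is not does not license the claim that A differs from B; that claim requires a direct test of the difference (the interaction). A minimal sketch with made-up summary statistics, using a normal approximation:

```python
# Illustrative numbers only. Effect A clears p < .05, effect B does not,
# yet the direct test of their difference is far from significant.
import math

def z_test(effect, se):
    """Two-sided p-value for effect/se under a normal approximation."""
    z = abs(effect) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

se = 1.0
p_a = z_test(2.1, se)   # effect A: z = 2.1, p ≈ .04  (significant)
p_b = z_test(1.3, se)   # effect B: z = 1.3, p ≈ .19  (not significant)
# The difference of two independent estimates has se = sqrt(se_a² + se_b²):
p_diff = z_test(2.1 - 1.3, math.sqrt(2) * se)   # z ≈ 0.57, p ≈ .57

print(round(p_a, 3), round(p_b, 3), round(p_diff, 3))
```

Reporting "A was significant, B was not, therefore A and B differ" is precisely the fallacy the authors count.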
A sidelight: the authors, whose affiliations are Dutch, use “intuition” in more-or-less the philosopher’s sense. Is that use diffusing into world outside philosophy?
The historian’s attitude toward his or her sources, when attempting to establish matters of fact, is one of tempered but universal skepticism. The same applies to the history of the present. For example:
Don’t depend on popularizations for your knowledge of neuroscience (see also the previous item in this blog for a similar issue concerning the biology of sex). A recent headline in several newspapers and online sources reads something like this: “Magnetic Pulses To The Brain Make It Impossible To Lie”. Wow! That’s exciting! And scary too…
Jeffrey Zacks, in Psychology here at Washington University, and his collaborators have been studying human event perception for the last ten years. A recent paper, in press at the Journal of Cognitive Neuroscience and available at his website (pdf), argues that perceptual event boundaries occur in experience at points where prediction becomes difficult.
[…] working memory representations of the current event guide perceptual predictions about the immediate future [less than 10 sec]. These predictions are checked against what happens next in the perceptual stream; most of the time perceptual predictions about what happens next are accurate. From time to time, however, activity becomes less predictable, causing a spike in prediction errors. These spikes in prediction error are fed back to update working memory and reorient the organism to salient new features in the environment. According to this model, the increase in prediction error and consequent updating results in the subjective experience of an event boundary in perceptual experience.
The tenets of Zacks’s view are (i) that current experience consists in representations actively maintained in working memory; (ii) present experience consists partly in anticipations of future experiences. Memory, insofar as it enters the stream of experience, would be on this account proleptic, forward-looking; mere recall has no place.
Aristotle says that animals don’t recollect: they don’t search their memories for information about the past (De memoria ii, 453a8, Hist. anim. 488b26; see Grote, Aristotle 476). On what grounds he said that I don’t know, but whether it was a shrewd surmise or a lucky guess he seems to have been right. Aristotle also put forward a version of what became the predominant philosophical picture of memory—that it consists in the registering of an “impression” which is subsequently to be recalled, as if the mind had a filing-card drawer or a mental museum (such as figured in Ancient and Renaissance arts memoriæ). That picture, attractive though it is, may well be fundamentally misleading. Modelling biological memory on the specifically human capacity that consists in voluntary recall of items subject to intersubjective standards of accuracy (e.g., the procedures of memorization employed by the reciters of epic poetry, to take an example Aristotle would have known) may turn out to be yet another case where intuition has led us astray.
A predominantly proleptic function for working memory, moreover, fits nicely with theories according to which perception requires activity on the part of the perceiver, so that the perception of red, for example, to use Mohan’s example (taken from Justin Broackes) is effectively the perception of a pattern of sensations that arises from the perceiver’s having regarded the red thing from several perspectives—a feat normally made possible only by moving. Event perception too may be governed, if not by activity itself, then by anticipations of activity.
Many NewAPPS bloggers (Helen, John, Mohan, myself) are favorably disposed towards analyses of human cognition which could be described as ‘naturalized’ in that data from empirical sciences (psychology, biology, cognitive science) play an important role.
Now, one crucial aspect in analyses of this sort in general is the issue of continuity and discontinuity between human and non-human animals. We are all familiar with Darwin’s idea that the difference between ‘us and them’ (Pink Floyd, anyone?) is “one of degree and not of kind”, and this seems to be the basic assumption underlying much of the work on non-human animal cognition that has the goal of producing a better understanding of human cognition. (Naturally, there is also the independent project of studying non-human animal cognition and behavior as a goal in and of itself.)
The two main camps are: those who marvel at the complexity of non-human animal cognition and deplore our tendency towards species chauvinism (fondly referred to as ‘monkey-huggers’ sometimes); and those who emphasize the abysmal distance between human and non-human cognition (whom I will refer to as ‘people-huggers’). (I’m using ‘cognition’ in a broad sense here, meant to include also work on e.g. sociability by someone like Frans de Waal.) And among people-huggers, at least some (but not all) end up defending a position that smacks of “We humans are so damn special and unique! There’s really nothing like us.” (also known as 'humaniqueness')
One aspect that is often (though not always) overlooked is the fact that there have been a bunch of closely-related cousins of ours roaming around the Earth at different times, but as it turns out they are all gone now: the missing hominids.
UPDATE: I've changed the term used to describe the fourth category in the taxonomy below from 'conceptual analysis' to 'conceptual reflection'. I hope the new term is better able to cover the many approaches suggested by commenters which did not seem to fit the original description in a straightforward way.
In light of the very interesting methodological discussions we’ve been having here at New APPS on the relations between physics and metaphysics, I’d like to put forward a tentative taxonomy of different strands within philosophical methodology. I suspect it can also be useful for discussions on the analytic vs. continental divide and its overcoming, which is also a recurrent theme in this blog.
Indeed, looking at past and present work in philosophy (and trying to be as encompassing as possible), it would seem that we can identify four main strands of methods used for philosophical analysis:
Formal methods – these correspond to applications of mathematical and logical tools for the investigation of philosophical issues. As examples one could cite the development of possible world semantics for the analysis of the concepts of necessity and possibility, applications of the Bayesian framework to issues in epistemology (giving rise to so-called formal epistemology), Carnapian explication, and many others.
Historical methods – they rely on the assumption that, to attain a better understanding of a given philosophical concept/problem, it is useful (or even indispensable) to trace its historical origins in philosophical theorizing. Of course, the study of the history of philosophy has intrinsic value as such (emphasis on ‘history’), but at this point I’m interested in what Eric Schliesser once described as ‘instrumental history of philosophy’ (emphasis on ‘philosophy’).
Empirical methods – these are the methodological approaches that systematically bring in elements from empirical sciences, such as the sciences of the mind (particularly relevant for philosophy of mind, epistemology, but to my mind also for philosophy of logic and mathematics), physics (possibly relevant for metaphysics), biology (arguably relevant for ethics, and everywhere else where evolutionary concepts come into play) etc. Sometimes this approach is described as ‘naturalistic’, but as we know there are (too?) many variations of the concept of naturalistic philosophy (many self-described naturalistic approaches are not sufficiently empirically-informed to my taste).
Conceptual reflection – arguably the most traditional philosophical method, consisting in unpacking concepts and drawing implications, introducing new and hopefully useful concepts, problems, conceptual frameworks etc.
So we seem to have a plurality of methods actually being used for philosophical theorizing. Are they all equally legitimate and adequate, both in general and in specific cases? I submit that the correct response to this plurality is methodological pluralism.
Watch Naif Al-Mutawa explain the vision behind his comic The 99. Behind all the jokes and business promotion (not to mention cultural studies in action), Naif explains his way of promoting an evolving understanding of Islam within Islamic cultures (and outside of these). [UPDATE ADDED LATER: must have been cartoon day in philosophy blogland.]
Philosophy, since its inception, has been characterized by persistent disagreements. The situation in philosophy is perhaps worse than in other formalized disciplines, such as scientific or mathematical practice. Peter van Inwagen argued that it would indeed be "hard to find an important philosophical thesis that, say, ninety-five percent of, say, American analytical philosophers born between 1930 and 1950 agreed about in, say, 1987."
I do not have a clear view of the situation in 1987, but the PhilPapers survey suggests that van Inwagen may be on the right track--the strongest inclinations are towards non-skeptical realism (81%), scientific realism (75%), and atheism (72.8%). To what extent is disagreement in philosophy cause for concern? Suppose, say, that 95% or even 100% of philosophers had been atheists or scientific realists: would this count as compelling proof against the existence of God or in favor of the existence of unobservable scientific entities? As long as we don't really have a good account of what philosophical intuitions are, it is hard to make sense of this.
An extensive part of the disagreement in philosophy stems from people having differing intuitions, for example, on whether or not free will is incompatible with determinism. Despite their variability, philosophical intuitions are often tremendously compelling to those who hold them: to explain his difference in opinion with Lewis on compatibilism, van Inwagen writes "I suppose my best guess is that I enjoy some sort of philosophical insight (I mean in relation to these three particular theses) that, for all his merits, is somehow denied to Lewis. And this would have to be an insight that is incommunicable--at least I don't know how to communicate it--for I have done all I can to communicate it to Lewis, and he has understood perfectly everything I have said, and he has not come to share my conclusions." Experimental psychologists suggest that philosophical intuitions not only show individual variation, but might also be correlated with factors like gender or ethnicity, cause for additional concern about their reliability.
There are several approaches to the problem of the instability of philosophical intuitions. To give a recent example, Jennifer Nagel has an interesting paper where she shows that the types of instability found in epistemic intuitions (e.g., Gettier cases) are also found in perceptual judgments, such as susceptibility to perceptual illusions. She also argues that some of the earlier studies on purported effects of ethnicity in intuitions about what knowledge is are methodologically faulty. She refers to an ongoing study by herself and others that indicates, pace the original studies on Gettier cases, that East Asians and westerners have similar intuitions.
What I find truly fascinating is that, despite extensive research on philosophical intuitions in experimental philosophy or metaphilosophy, we have little idea what the psychological basis of philosophical intuitions might be. Jennifer Nagel argues they are akin to perception. But whereas we have a good psychological account of perception, we lack a good psychological account of philosophical intuition. This makes philosophical disagreement all the more puzzling and hard to make sense of.
Alva Noë has a recent post on gender, commenting on some of the experimental results described in Cordelia Fine’s Delusions of Gender (some readers may recall that John Protevi and I are huge fans of her work, and of this book in particular). (Btw, Noë’s post even got linked by Leiter – it’s great to see Leiter drawing attention to gender issues.) I quote from Noë’s post:
Conjure before your mind the image of a physics professor. Imagine what his life is like. Now pretend, for a few moments, that you are that person. Try to get a feel for what it is like to be him.
Now let's start anew. This time think of a cheerleader. Picture her; imagine what her life is like. Now pretend to be her. Imagine what it is like to be her.
Here is a short report on the Extended Cognition Workshop, which just took place over the last days in Amsterdam. The general goal of the workshop was to bring together people interested in the concept of extended cognition in the spirit of ‘second wave EM’ (EM: extended mind) (Sutton) and ‘cognitive integration’ (Menary). The main characteristic of this general approach is emphasis on what is often described as the complementarity principle:
In extended cognitive systems, external states and processes need not mimic or replicate the formats, dynamics, or functions of inner states and processes. Rather, different components of the overall (enduring or temporary) system can play quite different roles and have different properties while coupling in collective and complementary contributions to flexible thinking and acting. (Sutton, ‘Exograms and Interdisciplinarity’)
The contrast is with versions of EM which emphasize the parity principle:
Cognitive states and processes extend beyond the brain and into the (external) world when the relevant parts of the world function in the same way as do unquestionably cognitive processes in the head.(Sutton, ‘Exograms and Interdisciplinarity’)
One of the good things that giving in to Twitter has brought me is following the Twitter stream of Massimo Pigliucci (@mpigliucci), who is professor of philosophy at Lehman College at CUNY, and an activist for many 'rationalist' causes (science education, critical thinking etc.). He runs the site Rationally Speaking, which is full of interesting material; Massimo does the kind of 'empirically-informed philosophy' that I am so keen on, and brings in a wide range of empirical data relevant for philosophical discussion. (He also posts excellent quotes on his Twitter stream, such as: "Philosophy is to the real world as masturbation is to sex." -K. Marx. "Don't knock masturbation, it's sex with someone I love." -W. Allen.)
His latest podcast is on the science and philosophy of happiness, and I've just had the pleasure to listen to it (pun intended or not intended, whatever...). In the podcast he and his co-host Julia Galef discuss the concept of happiness from a philosophical point of view, drawing mostly from the familiar ancient sources (Plato, Aristotle, Epicurus, the concept of eudaimonia etc.), complemented by an array of data coming from the recent field of 'happiness studies'. They do a very good job at outlining how the two perspectives can complement and enrich each other, while also having an impact on very tangible aspects of human life. Naturally, they also discuss rankings of the happiest countries in the world, the latest version of which received quite some attention (see here for Berit Brogaard's analysis, herself a national of the country that came on top of the list, Denmark). It turns out that your typical 'happy' country is a Northern European country with high levels of social equality, a strong welfare system and widely available healthcare and education (but countries such as Canada and Australia also do very well, for similar reasons).
This is not so surprising, but Massimo and Julia also discuss some unexpected results of happiness studies, such as that having children seems to correlate negatively with short-term happiness (it's a tough job!). I highly recommend the podcast if you have some time to spare; after all, thinking about happiness seems like a worthy time-investment, one would think.
The Society for Women in Philosophy (SWIP) has just announced that the Distinguished Woman Philosopher Award of 2011 goes to Jennifer Saul (University of Sheffield). Previous winners include Sally Haslanger (2010), Ruth Millikan (2006) and Sara Ruddick (2002) (for a list of past recipients, see here). Here is the full announcement, from which I would like to single out the following statement (my emphasis):
Jenny Saul has demonstrated courage and leadership, and she is leading feminists forward to new ways of thinking and connecting.
Indeed, these are perhaps two of the most admirable features of Jenny’s work. Her work on the role of implicit biases in sexism (and other –isms) has significantly contributed to a new recasting of the issues: the most subtle, most ubiquitous and arguably most dangerous expressions of sexism are in fact related to highly unconscious cognitive processes. So it is not (only) a matter of opposing sexism on explicit, ideological grounds; perhaps more importantly, what needs to be addressed are these underlying mechanisms which even those who do not see themselves as sexist fall prey to. Among other things, this approach sheds new and important light on the issue of the blameworthiness of sexist behavior. Another important aspect of her work is that it can be described as ‘empirically-informed feminism’, which is a great development. (On a personal note: it is no secret to anybody that I am a fan of pretty much anything that is ‘empirically-informed’, and the move away from purely ideological/conceptual discourse towards empirically-informed analysis is one of the reasons why I felt compelled to become a bit of a feminist myself.)
If I had to bring only one journal to a desert island, it would probably be Behavioral and Brain Sciences; I am continuously amazed by the high-quality and innovative nature of its contents. It works with the very interesting format of one target article per issue (usually a long, controversial and ambitious piece), and short commentaries by peers. One of the signs that it is a truly exciting journal is that commentators often come from different disciplines; in particular, it is very common to see philosophers commenting. The current issue, for example, is dedicated to Sue Carey’s new book The Origin of Concepts, and has commentaries by people such as Tyler Burge, Christopher Gauker, Edouard Machery, Eric Margolis (just to mention some of the philosophers).
But today I would like to focus on the April issue of BBS, with a controversial target article by Hugo Mercier and Dan Sperber: ‘Why do humans reason? Arguments for an argumentative theory’ (an open-source pdf of the paper can be found here). Let me quote its abstract in full:
Psychologists who examine numerical cognition in young children and people from nonnumerate cultures have found that our default, unlearned, mode to represent cardinalities is not according to a linear mental number line, but a logarithmic one. In other words, our intuitive sense of numerosities roughly corresponds to the natural logarithm of those numbers. Linear numerical representations, such as the natural numbers and the way we place them on rulers and other linear representations, are cultural inventions. As Dehaene et al. put it: "The concept of a linear number line appears to be a cultural invention that fails to develop in the absence of formal education."
The research by developmental psychologist Robert Siegler indicates that children learn to make linear representations of number by prolonged cultural exposure in school and in (more informal) home settings like playing board games. Siegler and Booth (2004) found that this linearity of number lines appears gradually. In one of their experiments, they gave five- to seven-year-olds an unscaled number line with 0 at the left side and 100 at the right, and asked them to place various numbers on this number line. Younger children typically placed small numbers too far to the right; for example, they tended to place the number 10 in the middle of the scale. Conversely, they tended to underestimate the distance between the higher numbers, placing 70 far too close to 100. The older children, on the other hand, made much more linear estimations. Intriguingly, this process is repeated as children learn to deal with cardinalities up to 1000. Siegler and Opfer gave children between 7 and 11 years of age lines from 0 to 1000, and again found that the younger children tended to have logarithmic representations, and the older ones linear representations.
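The children's misplacements fall out directly from a logarithmic mapping. Here is a minimal sketch with the 0-to-100 task parameters; the model is a common idealization, not Siegler and Booth's exact fit:

```python
# Where would a purely logarithmic mapper place n on a 0-to-100 line?
import math

def log_placement(n, line_max=100):
    """Position of n on a 0-to-line_max line under a log mapping."""
    return line_max * math.log(n) / math.log(line_max)

print(round(log_placement(10)))   # 50: 10 lands in the middle, as young children place it
print(round(log_placement(70)))   # 92: 70 is squeezed toward 100
```

On this idealization, 10 sits exactly at the midpoint of a 0-to-100 line (since log 10 is half of log 100), which is just what the younger children did.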
I wonder whether we could generalize the observation that linear numerical representations need to be relearned. In other words, is it the case that we need to re-calibrate our mental representation of magnitudes each time we learn to deal with higher numbers? Leiter's blog recently pointed my attention to the following site, which gives you a sense of how much one billion dollars can buy. The shopping list of expensive and useless items (including a private island, a plane, some yachts, etc.) is impressive, and yet the billion dollars are not even halfway spent. Most of us have no idea what a billion dollars is, nor any intuitive feeling of the difference between one billion and one million. It's all equally mind boggling.
If the default mode of our mental representation of numerosities is logarithmic, most of us have no idea how much some people are making, and what a huge difference it would make if they were fairly taxed. In other words, one reason we do not protest against extreme wealth is that we simply have no intuitive grasp of how extreme it is.
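A quick back-of-the-envelope calculation shows how brutally a logarithmic representation compresses large amounts (using the natural logarithm, as suggested above):

```python
import math

million, billion = 10**6, 10**9

# Linearly, a billion dwarfs a million:
linear_gap = billion - million          # 999,000,000

# But on a logarithmic scale the "felt" gap depends only on the ratio:
felt_gap = math.log(billion) - math.log(million)

# ...which is exactly the felt gap between 1 dollar and 1,000 dollars:
assert math.isclose(felt_gap, math.log(1000))
print(round(felt_gap, 2))   # 6.91
```

On a log scale, the step from a million to a billion feels no bigger than the step from one dollar to a thousand dollars, which may be part of why the difference fails to register intuitively.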
Some of us knew the empiricists were right all along. Did Bach-y-Rita help with that? I can't remember, but his last name stuck in my mind... Or was it the rewired ferrets, a memory from way back in 2000?
Oh, I know there are plenty of fans of Leibniz — especially in the cocktail 'Leibniz-Whitehead', like this — out there, but sorry: empiricism was right this time. Of course, one can respond to the news in different ways: just accept it — follow the science — or, like Diderot in the Letter on the Blind (for which he got sent to jail; here in a dusty translation), go on to assert a metaphysics in which each sense constitutes a world!
This article would be a good way to introduce the topic to students and/or non-specialist friends. The author is the excellent science journalist Chris Mooney. Excerpts:
... an array of new discoveries in psychology and neuroscience has further demonstrated how our preexisting beliefs, far more than any new facts, can skew our thoughts and even color what we consider our most dispassionate and logical conclusions. This tendency toward so-called "motivated reasoning" helps explain why we find groups so polarized over matters where the evidence is so unequivocal: climate change, vaccines, "death panels," the birthplace and religion of the president (PDF), and much else. It would seem that expecting people to be convinced by the facts flies in the face of, you know, the facts.
The theory of motivated reasoning builds on a key insight of modern neuroscience (PDF): Reasoning is actually suffused with emotion (or what researchers often call "affect"). Not only are the two inseparable, but our positive or negative feelings about people, things, and ideas arise much more rapidly than our conscious thoughts, in a matter of milliseconds—fast enough to detect with an EEG device, but long before we're aware of it. That shouldn't be surprising: Evolution required us to react very quickly to stimuli in our environment. It's a "basic human survival skill," explains political scientist Arthur Lupia of the University of Michigan. We push threatening information away; we pull friendly information close. We apply fight-or-flight reflexes not only to predators, but to data itself.
We're not driven only by emotions, of course—we also reason, deliberate. But reasoning comes later, works slower—and even then, it doesn't take place in an emotional vacuum. Rather, our quick-fire emotions can set us on a course of thinking that's highly biased, especially on topics we care a great deal about....
To mark the end of Catarina Dutilh Novaes’ VENI-project on formal languages and the new appointment of Julian Kiverstein at the philosophy department of the University of Amsterdam, a workshop on extended cognition will take place in Amsterdam on June 27th--28th (afternoon on the 27th, whole day on the 28th), in the Oudemanhuispoort building of the university (Room A 0.08). The focus will be on conceptions of extended cognition in the spirit of ‘second-wave EM’ (Sutton) or ‘cognitive integration’ (Menary).
Richard Menary (Wollongong), "Cognitive Transformations"
Julian Kiverstein (Edinburgh/Amsterdam), "A social externalist account of cognitive agency, or why cognition isn't organism centred"
Helen de Cruz (Leuven), "Extended cognition in mathematical practice: The case of Chinese algebra"
John Protevi (LSU), "Extended Cognition, extended responsibility: cyborgs in modern warfare"
Catarina Dutilh Novaes (Amsterdam/Groningen), "Formal languages in logic, and extended cognition"
Bryce Huebner (Georgetown), "Responsibility for socially scaffolded minds"
Joel Krueger (Copenhagen), "Extended cognition and shared emotions"
Erik Myin (Antwerp), "Bound by parity?"
Jurgis Skilters et al. (Latvia), "Extended selves in distributed social networks"
Mirko Farina (Edinburgh), "Finding my Mind: a Case for Extended Cognition"
Pierre Steiner (Louvain), "Unhappy coupling: extended cognition with representationalism"
REGISTRATION: Registration is now closed. For those who have registered, please don't forget to bring EUR 10 for the registration fee, to be paid on the spot.
PRELIMINARY PROGRAM (subject to change):
Monday June 27th
13.00 R. Menary, "Cognitive Transformations"
14.00 E. Myin, "Bound by parity?"
15.15 M. Farina, "Finding my Mind: a Case for Extended Cognition"
16.00 C. Dutilh Novaes, "Formal languages in logic, and extended cognition"
17.30 Reception (Café de Sluyswacht, Jodenbreestraat 1)
Tuesday June 28th
9.30 J. Protevi, "Extended Cognition, extended responsibility: cyborgs in modern warfare"
11.00 B. Huebner, "Responsibility for socially scaffolded minds"
11.45 J. Krueger, "Extended cognition and shared emotions"
12.30 lunch (own arrangements)
14.00 H. de Cruz, "Extended cognition in mathematical practice: The case of Chinese algebra"
15.00 J. Skilters, "Extended selves in distributed social networks"
16.15 P. Steiner, "Unhappy coupling: extended cognition with representationalism"
17.00 J. Kiverstein, "A social externalist account of cognitive agency, or why cognition isn't organism centred"
As previously reported, I will be in St. Andrews at the beginning of April to attend a workshop on paradox and logical revision, and to deliver a talk at Arché in the context of the Foundations of Logical Consequence project. The title of my talk there is the same as the title of this post, 'The myth of the pre-theoretical notion of logical consequence', and the talk is very much related to some of the things I was discussing in my post on proofs and dialogues this week. I've just written a tentative abstract for the talk, and I thought it might be interesting to share it with people here and see what you all think. So, as usual, comments are much appreciated!
UPDATE: I changed the title of the post, as the original one (which is the title of the article I discuss here) was perhaps not sufficiently clear.
(Thanks to Chris Fraser for the pointer, hidden in a comment he added to a 2009 post over at Leiter's blog! Quite amazing that I noticed this at all.)
I've just read a fascinating article on the positive effects of female role models in motivating young female students to pursue their interests in a given 'male' area. The article reports on research done by psychologists at the University of Massachusetts (Amherst), led by Nilanjana Dasgupta, on the effects of exposure to expert female role models. The research focused on mathematics and engineering, but the results certainly generalize beyond those fields, in particular to philosophy.
There has been a lot of blogosphere activity in the past week around John Tierney's piece in the New York Times, where he questions the idea that "female scientists [face] discrimination and various forms of unconscious bias." He refers to a recent paper by Ceci and Williams, which argues that women with the same resources as men are just as likely to get their papers, grants, and job applications accepted. (The key thing is of course the 'same resources' bit).
Then Alison Gopnik brilliantly rebutted Tierney's argument in Slate (see also the Feminist Philosophers on Gopnik's piece here and here). Her piece is not only brilliant because it argues for the position that I am sympathetic with (^_^) (i.e. that implicit biases do severely affect the position of women in science and academia), but also because it manages to explain very clearly some of the basics of scientific methodology as currently endorsed and practiced (of course, we can still discuss the merits of this methodology). What she says connects nicely to some of the posts I've been writing where the matter of scientific methodology comes up, in particular here and in the discussion ensuing here. So let me quote a few key passages.
Stephan Hartmann & co in Tilburg are organizing yet another exciting conference, Formal Epistemology Meets Experimental Philosophy (September 2011). Last year I thoroughly enjoyed the Future of Philosophy of Science conference and the Descartes Lectures event with Ian Hacking, which were both awesome (except for the poor gender balance in the keynote speakers' lineup, but for once this is not what I want to talk about!). The CFP for the upcoming conference has been widely circulated in several blogs (It's only a theory, Choice and Inference), so it is not exactly lacking publicity, but it points in the direction of interesting new developments, so I would like to say a few words on it here.
I just saw on Philos-l the announcement of an event to discuss the scientific status of psychoanalysis. It seems to be organized by the "Institute of Psychoanalysis", so I have no idea how impartial the discussion will be (I am not familiar with the work of any of the speakers either). It is of course an old Popperian question, but given that Popper's conception of science is no longer particularly endorsed, I was wondering if people have thoughts on the scientific status of psychoanalysis given current conceptions of science. I tend to think that the answer is simply NO, but it is worth debating.