It is always impressive when someone is willing to publicly state that they were wrong about a controversial topic. Such things happen rarely, but there have been a number of recent cases. For example, last July Richard Muller declared himself to be a "converted skeptic," saying that he now acknowledges that global warming is real and that humans are almost entirely the cause. Two days ago, another such example emerged when Mark Lynas publicly apologized for having helped to start the anti-GM movement in Europe, thus "demonising an important technological option which can be used to benefit the environment."
However laudable such recantations are, they can still be called into question, and indeed, I question the basis for Lynas's, at least as it is presented in the transcript linked to above. He begins by calling the anti-GMO movement "anti-science," a claim that I debunked here and here, at least with respect to the labeling of GMOs. Lynas subsequently states that "one by one [his] cherished beliefs about GM turned out to be little more than green urban myths," and lists six such purported myths. Below, I examine each of these, and show why they are not, in fact, myths.
This NYT article (h/t Greg Downey on FB; check out his Neuroanthropology blog) lays out research on the effects of social conditions (isolation vs integration) on PTSD. Greg excerpted this quote:
It turns out that most trauma victims — even survivors of combat, torture or concentration camps — rebound to live full, normal lives. That has given rise to a more nuanced view of trauma — less a poison than an infectious agent, a challenge that most people overcome but that may defeat those weakened by past traumas, genetics or other factors. Now, a significant body of work suggests that even this view is too narrow — that the environment just after the event, particularly other people’s responses, may be just as crucial as the event itself.
I thought this one about Nepalese ex-child soldiers provided a good concrete example:
But in villages that readily and happily reintegrated them (usually via rituals or conventions specifically designed to do so), they experienced no more mental distress than did peers who had never gone to war. The lasting harm of being a child soldier, it seemed, arose not from the war but from social isolation and conflict afterward.
At his blog Edward Feser has been responding to Thomas Nagel's
critics (no, not me (yet)!). In response to Sober's review he concludes
with the following sociological remark:
That, I think, is precisely what is going on -- the “presuppositions that Nagel is trying to transcend” run so deep in contemporary academic philosophy that it is difficult for most philosophers to get any critical distance on them. They lack, as Nietzsche might have said, the courage for an attack on their own convictions. And yet the evidence that there is something deeply wrong with the consensus is all around them, even in “mainstream” academic philosophy -- in the work of renegade naturalists like Nagel, Searle, Fodor, McGinn, et al.; of dualists like Chalmers, Brie Gertler, Howard Robinson, John Foster, et al.; and of others like the “new essentialist” metaphysicians and philosophers of science (Ellis, Martin, Heil, Mumford, et al.) and the analytical Thomists (Haldane, et al.). It’s psychologically
easy (even if philosophically sleazy) to dismiss one or two of these
as outliers who needn’t be taken seriously.
But as their ranks slowly grow, it will be, and ought to be, harder both
psychologically and philosophically to dismiss them.
Which is no
doubt why the more ideological naturalists would very dearly like to strangle
this growing challenge to the consensus while it is still in its crib -- hence
the un-philosophical nastiness with which Nagel’s views have been greeted in
some quarters. But Sober, to his credit,
is not an ideologue, and is sober enough to acknowledge at least the possibility that Nagel is on to something.--Edward Feser.
Thomas Nagel’s recent attack on Darwinism
raises important metaphysical questions about methodology, which Eric has
begun to explore. Here, I want to muse on a no doubt unintended effect of
Nagel’s argument—a rumoured small boost in the regard accorded to Fodor’s
earlier attack on Darwinism (aided by Massimo Piattelli-Palmarini, whose complicity in this is a mystery to me). True, Fodor's little dagger looks philosophically cautious by comparison to Nagel's WMD. My purpose here is simply to
remind you, dear reader, that like Generalissimo Francisco Franco, Fodor’s
negative critique is Still Dead. And it's feeling No Better.
Analytical philosophy has made great
progress over the last century. But its original, necessary biases did some
harm, too. In particular, detailed working knowledge of the history of
philosophy and metaphysics was banished for several generations. While
metaphysics is thriving again, we still lack (despite the brilliance of David Lewis' modular approach) complete systems of thought that can rival the past masters (say, Suarez, Leibniz, etc.) in depth and interlocking breadth. The damage has also been narrower in scope. For example, one of the most obvious
so-called ‘Kuhn Losses’ is our
relative ignorance of the nature and implications of the Principle of Sufficient Reason (PSR). This is no
surprise because analytical philosophy was founded in the act of rejecting PSR.
Our forefathers’ attempt to balance between common sense and the truths of
science meant -- as science and the PSR parted ways -- the willing submission to brute, ultimate facts (recall this post).
In Mind & Cosmos, Thomas
Nagel happily embraces “a form of the principle of sufficient reason” (17) in
support of his "common sense" (5, 7, etc.) and against the recent
“orthodox scientific consensus.” (10; 5) Rather than accepting this
"ideological consensus," (128) Nagel insists -- regularly using
language reminiscent of the great Feyerabend -- that "almost
everyone in our secular culture has been browbeaten into regarding the
reductive research program as sacrosanct." (7) While Nagel insists that
the champions of scientific enlightenment are bullies, he treats the
"defenders of intelligent design" with "gratitude" (Plantinga returns the gratitude),
even though Nagel clearly recognizes that once one embraces one's inner sensus
divinitatis one is also compelled in one's judgments. (12)
A classic statement of the PSR is Spinoza's
"For each thing there must be assigned a cause, or reason, both for
its existence and for its nonexistence." (Ethics 1p11d2) That is to
say, any PSR worth having imposes significant explanatory demands (especially
of non-arbitrariness) on any philosophical system in which it is deployed.
Below the fold I critically discuss Nagel's way of combining the PSR and his
attempted revisionary science, but here I just register the marvelousness
of Nagel's deployment of the PSR as an instrument in the service of common
sense! (cf. 91-2) This is certainly an original move in the history of
metaphysics--one that, in a single, magical stroke overturns Lovejoy's long narrative.
Today is Universal Children’s Day. The day was established by the UN in 1954, but it seems to me that it remains fairly unknown to the public at large. Now, why do we need a day to celebrate the children of the world and to promote their welfare, you may ask? Well, as it turns out, children remain one of the most oppressed groups among humans: scores of children around the world are neglected, abused, and fail to receive schooling, proper nutrition, love, and attention. Whenever things get rough, say during wars and in violent environments, children are typically the most vulnerable and thus the most likely to suffer. More generally, parents and caregivers experiencing hardship will have extreme difficulty in being adequate caregivers (it’s hard enough when things are fine!).
Why is it so? There is a rather simple biological explanation for why human youngsters are vulnerable: the human species is unique in terms of the number of years (relative to the expected lifespan for the species) that a youngster is deeply dependent on others, parents in particular (but not exclusively), for survival. The fancy word for this phenomenon is altriciality, and humans share this characteristic with many bird species, but not with our closest living cousins, the great apes. As with many birds, rearing a human child is such a tough job that humans have become a largely monogamous species, again unlike our ‘promiscuous’ cousins the chimpanzees; motherly care alone is not sufficient, so the involvement of the father in childrearing becomes vital (if nothing else, to provide food even if he is not a ‘hands-on’ kind of dad).
Why exactly are Alvin Plantinga and Tom
Nagel reviewing each other? And could we have expected a more dismal intellectual result
on Nagel’s Mind and Cosmos in the
New Republic? When two self-perceived
victims get together, you get a chorus of hurt: For recommending an Intelligent
Design manifesto as Book of the Year, Plantinga moans, “Nagel paid the
predictable price; he was said to be arrogant, dangerous to children, a
disgrace, hypocritical, ignorant, mind-polluting, reprehensible, stupid,
unscientific, and in general a less than wholly upstanding citizen of the
republic of letters.”
My heart goes out to anybody who utters such a wail,
knowing that he is himself held in precisely the same low esteem. My mind, however, remains steely and cold.
This NYT article on Occupy Sandy (h/t to Mark on FB) is noteworthy for highlighting a pressing problem for more-or-less anarchist strains of thought as they cross political affect.
"Occupy Wall Street has managed through its storm-related efforts not only to renew the impromptu passions of Zuccotti, but also to tap into an unfulfilled desire among the residents of the city to assist in the recovery. This altruistic urge was initially unmet by larger, more established charity groups, which seemed slow to deliver aid and turned away potential volunteers in droves during the early days of the disaster."
The question is how to institutionalize for efficiency without losing the face-to-face contact that helps drive empathy and altruism. I tend to take a modest evolutionary psychology angle here -- there really is something about faces for human beings, and that something is plausibly an evolved predisposition for emotional resonance. Of course, the big question here is the ontological status of "predisposition."
Before my son was born (nearly three years ago) I showed very little interest in children. I was not 'anti-', just indifferent in the way that I am still indifferent about, say, Nascar racing, clubbing, or (I apologize to my academic friends) wine-tasting. On a glorious late afternoon after I got my flu shot I was walking along the Brouwersgracht (see here for a picture, but imagine fewer leaves on the trees), grateful for the lack of rain and thinking about various deadlines. When I walked by a crowded playground (see here, but imagine lots of children), I stopped and looked at the kids absorbed in their play. Before I knew it I was filled with joy, fatigue, and innumerable other recently familiar feelings. After a moment's immersion in the scene and the accompanying feelings, it occurred to me that some kind of associative mechanism had done its work.
While I have an impossible-to-find publication that argues there are fatal problems in Hume's account of the associative mechanism, I never doubted that we do associate. But this may have been the first time I felt the mechanism's existence as a kind of brute force that could overwhelm a prior contrary disposition. Yet this realization did not please me; rather, I was reminded of another fact--I completely lack a vocabulary for the recently familiar feelings that accompany the recurring mixed joy-fatigue state. (Not all of these are joy.) Echoing Socratic midwifery, Hume famously describes the fate of his philosophical works with parent-child metaphors, but it occurred to me that many of the philosophers (e.g., Plato, Hobbes, Spinoza, Hume, Adam Smith, Nietzsche) that I have been thinking about during the last fifteen years or so died childless (or at least without acknowledging their children). A few of them had very intense experiences tutoring young men, but their accounts of the passions do not offer me the concepts to name the feelings. Reading Rawls or other more recent philosophers and economists hasn't helped. Has, say, feminist philosophy made a difference on this score? It ought to, so I would love to hear from informed readers.
"(A proximate mechanism is an immediate direct cause, while an ultimate explanation is the last in the long chain of factors leading up to that immediate cause. For example, the proximate cause of a marriage breakup may be a husband's discovery of his wife's extramarital affairs, but the ultimate explanation may be the husband's chronic insensitivity and the couple's basic incompatibility that drove the wife to affairs.) Physiologists and molecular biologists regularly fall into the trap of overlooking this distinction, which is fundamental to biology, history and human behavior. Physiology and molecular biology can do no more than identify proximate mechanisms; only evolutionary biology can provide ultimate causal explanations."--Jared Diamond Why is Sex Fun?, 1997.
Let's grant Diamond the coherence of the distinction between proximate mechanisms and ultimate explanations. Let's also grant it to him in the way he has articulated it, despite, perhaps, a lingering sense that being "last in the long chain of factors" does not quite grasp the fundamentality of an ultimate explanation. I was struck by (i) Diamond's insistence (without evidence) that his fellow scientists "regularly" overlook the distinction, and by (ii) his further claim that "only" evolutionary biology can get at ultimate causal explanations. Diamond has a nice sense of the hierarchy within the intellectual division of labor. (If you plug in "metaphysics" for evolutionary biology and "mechanics" for molecular biology, you get a standard 18th-century picture embraced by Berkeley and Leibniz, I think.)
Here I am not interested in the hankering after explanations even more fundamental than fundamental ones (you know, God, the Principle of Sufficient Reason, Final Causes, etc.). Rather, this morning I was gripped by this thought: why think that in the real world there is anything over and above the proximate causes and mechanisms? Why isn't the whole idea of "ultimate" explanations just our chasing after patterns? Now, what would persuade me otherwise is if the evidence for the ultimate causes can (a) systematically avoid relying on the evidence for the proximate causes and (b) also be more robust, of higher quality, etc. But given that it is so hard to do controlled experiments in the service of (b), (b) doesn't seem easy to achieve. I am too ignorant about the details to have any strong opinion on (a). Anyway, I bet there are standard answers to these questions that gripped me.
In Part 1, I discussed the accusation that proponents of Proposition 37 in California are anti-science, pointing out that such claims rest on a highly misleading picture of the genetically modified food industry as involving pure "value-free" science. (See, e.g., here, here, here, here, here, and here. Proposition 37 is a ballot measure that, if it passes, would require GM foods sold in California to be labeled as such.)
Here in Part 2, I take up a second prong of the issue. Even if one acknowledges that the production of genetically modified food is not a value-free endeavor, one still might think that proponents of labeling GMOs are anti-science because they (the proponents) refuse to accept the data that show that GMOs are not harmful to humans. However, there are three problems with this version of the anti-science accusation: 1) it falsely claims that there is nothing new about GMOs, 2) it overlooks the point that there is enough uncertainty about the studies of GMOs' effects on human health to make it reasonable for individuals to want to decide for themselves whether to eat GMOs or not, and 3) it assumes that human health is the only relevant scientifically-based objection to GMOs.
Nicholas McGinnis at the Rotman Institute has a fascinating follow-up post to Mohan's recent, critical post on Alvin Plantinga. First, McGinnis has an interesting argument, drawing on Buridan, that effectively shows that Plantinga assumes he "knows the will of God." Sometimes I wonder why Plantinga's cabal gives him a free pass on such stuff, but that's really none of my business.
Second, and more important, is McGinnis' "side-note" treatment of what can be best described as a colossal failure of judgment at the Stanford Encyclopedia of Philosophy (one of my favorite institutions in the whole of professional philosophy). For, "Plantinga’s SEP entry on 'Religion and Science' ...functions as a showcase for
Plantinga’s own views." It is nice to see that Plantinga shares my admiration for Newton, but it is odd to read in a SEP entry, without qualification, that "Indeed, the pursuit of
science is a clear example of the development and enhancement of the
image of God in human beings, both individually and collectively." One especially striking aspect of McGinnis' argument is that in the entry Plantinga favorably discusses Michael Behe's views on “irreducible complexity.” After an excited ("a Gargantuan challenge" to Darwinism) paragraph-long favorable summary of those views, Plantinga adds a single sentence -- and without irony -- that others "argue that Behe has not proved his case."
Thomas Nagel's Mind and Cosmos is drawing quick responses. (Can't wait to read Mohan's!) Both in the hostile review by Brian Leiter and Michael Weisberg and in the more cautious strategic pivot by Alva Noë (who doesn't engage critically with Nagel's book), a mythic history of the scientific revolution plays a significant rhetorical role.
Let's start with Noë:
If there is mind — and of course the great scientific revolutionaries
such as Descartes and Newton would not deny that there is mind — it
exists apart from and unconnected to the material world as this was
conceived of by the New Science.--Alva Noë (NPR)
Let's accept Noë's point about Descartes. But Newton
thought minds had to be somewhere in space and in time, extended but "indivisible." Incidentally, this is also Newton's doctrine about "the Maker and Lord of all things" who "cannot
be never and no where." (Principia, General
Scholium.) And at one point earlier in his career, Newton also flirted with the idea that an
extended body had to be the kind of thing that was capable of exciting various perceptions in the senses and imagination of minds (this is from a piece known as "De Gravitatione;" I am linking to a very nice treatment by Zvi Biener and Chris Smeenk.) [Note that I am not drawing on the infamous sensorium passage at all.]
Evolution is a lot more subtle than it is given credit for. In
1987, Patricia Churchland expressed a rather common take when she wrote “The principle chore of nervous systems is to get the body parts where they should be in order that the organism may survive.” (Whoa! Survival?! Isn’t reproduction the fundamental variable?) A
tale of solitary organisms fleeing predators, finding scarce food, and pouncing
on potential mates. No time for thought and reflection: “Truth, whatever that
is, definitely takes the hindmost.” Rarely is it noted (even by Churchland
herself) that in the short space between those two sentences, she wrote “a
fancier style of representing is advantageous so long as it is geared to the organism’s way of life and enhances the
organism’s chances of survival.” A salutary turn, but then a reversion to
that emphasis on survival. No thought
that believing the truth can lead to speaking the truth, which, given a social
“way of life” might be behaviour with positive evolutionary consequences.
In 1993, Alvin Plantinga seized on Churchland in an attempt
to show that the theory of evolution is incompatible with naturalism. That is
pretty cheeky, don’t you think? Could he be right?
In 2007, a study by Hamlin, Wynn and
Bloom was published in Nature claiming to show that preverbal babies had what
could be described as a ‘moral compass’ (not the authors’ own terms in the
article). From the abstract:
Here we show that 6- and 10-month-old infants take
into account an individual's actions towards others in evaluating that
individual as appealing or aversive: infants prefer an individual who helps
another to one who hinders another, prefer a helping individual to a neutral
individual, and prefer a neutral individual to a hindering individual. These
findings constitute evidence that preverbal infants assess individuals on the
basis of their behaviour towards others. This capacity may serve as the
foundation for moral thought and action, and its early developmental emergence
supports the view that social evaluation is a biological adaptation.
On November 6, 2012, Californians will vote to decide if genetically engineered foods, whether raw or processed, should be labelled as such (see details here). If it passes, it would be the first such law in the U.S., even though at least 50 countries worldwide, including all of the European Union, China, Japan, and Russia, already have GMO label laws. The ballot measure, Proposition 37, has generated a lot of heat on both sides.
Although the debate is complex, one meme has caught my eye in particular: those who advocate for "yes on 37" have been termed "anti-science" by members of the "no on 37" camp. Some have even likened pro-labelers (presumed to be anti-GMO, although that is not necessarily the case) to climate change deniers and evolution deniers.
A heated discussion ensued from my post on
circumcision last week, which in turn was essentially a plug to a
thought-provoking post by Brian D. Earp at the Oxford Practical Ethics blog.
The controversial point was whether circumcision is or is not to be compared to
female genital cutting.
I’ve learned a lot from the different
perspectives presented during the discussion; among other things, I’ve learned
the terms ‘genital alteration’ and ‘genital cutting’, which now seem to me to
be more adequate than either ‘circumcision’ or ‘genital mutilation’ to
formulate the issue in a non-question-begging way (as argued here). And yet, I am now even more
convinced that the analogy between male genital alteration and female genital
alteration is a legitimate one – which (and let me say this again!) does not
mean that there are no crucial differences to be kept in mind. That's what an analogy is, after all.
Gregory Dawes reviews Michael Ruse's The Philosophy of Human Evolution in NDPR today. It's a balanced and thoughtful review, but I would like to query one prominent point in it.
Dawes reports that according to Ruse, "our evolutionary history, as embodied in our genetic makeup, imposes constraints on the range of behaviours that human beings may successfully undertake." In particular, it may be that, as a result of our evolutionary history, "women want to spend time with their young children in ways that men do not" (p. 196). It follows that we "should be cautious about utopian proposals for complete sexual identity" (p. 196).
Now, first of all, we should be cautious about the term "constraint." Human nature is highly plastic, especially with regard to social behaviour. To say that men are constrained not to spend time with young children is far too strong. If Ruse says this—I haven't laid hands or eyes on the book, I am afraid, and so I don't know—then he goes too far. But suppose he had said something like: Men and women have genetic predispositions to behave in certain ways, but these predispositions are remediable by the right kind of education. Would Dawes object?
The idea that really disturbs Dawes is Ruse's plea that we "should be cautious about utopian proposals for complete sexual identity" (p.196).
Justin E. Smith has an interesting post today on the history of orgasm, or in any case the history of conceptualizations of orgasm, going back to the early modern period. The short story is: when orgasm was viewed as a not-particularly-venerable bodily reaction, comparable to sneezing, it was a 'female thing'; when it began to be viewed as something 'cool', it was promptly associated with maleness. Somehow, I'm not surprised... But do go check out the whole post!
Last week, I argued that art couldn’t be a spandrel, at any rate not a spandrel on norms of beauty. In this, my concluding post on art and beauty, I want to advance two theses. The first is that our sense of beauty comes from art, not the other way around. And the second is that the art capacity is adaptive.
Let’s start with the sense of beauty, or more generally, the sense that things have aesthetic value. Throughout this series of posts, I have been sympathetic to Kant’s notion of disinterested pleasure. I judge a thing to be beautiful because it gives me pleasure (or displeasure) in a way that is disinterested, i.e., which is independent of my desiring it, or feeling an aversion to it.
In this post, I want to begin to consider evolutionary explanations of the universality of art. Here, I make a simple point: evolutionary explanations should (but typically don’t) account for the essential role of form in art. Accounting for form has a surprising result.
Last week, I talked about the sense of beauty. I argued first that it would be very strange if there were human beings, or even human-like beings, who are incapable of making Kantian judgements of taste. Then, in a second post, I reviewed some attempts to account for universal norms of beauty in evolutionary terms, i.e. the standards by which we determine what things are beautiful. These attempts get the domain of beauty wrong. From their perspective, chocolates and sex should also be objects of beauty. But this is not the right category for sex. In sum, I think that the sense of beauty is universal–but that norms of beauty are parochial and hard to account for in evolutionary terms, at least if it’s important to account for them as beauty.
Today I want to reflect on the universality of art, which has been illuminatingly discussed by Ellen Dissanayake, Stephen Davies, Noël Carroll, and Denis Dutton. The phenomena are striking.
Art has extremely ancient, perhaps even pre-Homo sapiens, origins. Hand axes made by H. ergaster one and a half million years ago display a symmetry unrelated to function, and these were highly time-consuming to produce. Forty thousand years ago, at Enkapune Ya Muto in Kenya, people made delicate strings of beads out of ostrich eggshell, an extremely painstaking process. While these artifacts lack the individual style of their makers, and are hence better classified as craftworks, they nevertheless display a “disinterested”–this term will be explained later–regard for appearance and form that is the hallmark of art.
There I suggested that any (normal) human-like creature is capable of Kantian judgements of taste. Today, I want to consider the appreciation of beauty without restriction to the rigorous Kantian ideal. My concern here is the simple sense of beauty–the gobsmacked reaction that people have to gorgeous sunsets, magnificent mountain ranges, the starry sky, and also (my ultimate quarry) to great artistic creations. Obviously, SOB–I apologize for the rebarbative acronym, but perhaps some demystification is healthy–is universal among humans, and apparently not so among other animals. It must, therefore, be an evolved characteristic, something that sprang up in some hominin species and was inherited by us. (Million-year-old hand axes are meticulously symmetric, and dubiously functional; so it seems likely that SOB is pre-Homo sapiens.) The question is: why did it evolve?
I have been writing an entry on “Art and Evolution” for the Routledge Companion to Aesthetics (3rd edition), and I am going to try out some ideas in a series of posts. On the main points that you might expect to hear about, my positions are (in ascending order of logical strength):
Art is culturally universal.
Art is evolved.
Art is selected for.
I won’t get to 3, the most contested of these theses, for a while.
I want to start by considering the question: “Is the appreciation of beauty culturally universal?” I also want to touch on whether it is evolved and/or selected for. (To be clear: this is just a preliminary to the above questions about art.)
We had a vigorous discussion last week about the merits of Bernard Suits’s definition of game. Of course, we did not reach agreement. But Tom Hurka and I argued that there is a difference between defining concepts and defining words. Our position was that Suits’s definition focuses on a valuable concept that fits most games, even if it is not precisely coextensive with the vernacular use of the word ‘game’.
Today, I want to work with a Suits-like game-concept, setting aside worries both about correspondence with the vernacular and about precise details. The idea I want to explore is that games have a reflexive structure shared by art. I also want to suggest that they might share something significant at the base level of the reflexive structure.
In the context of the controversy sparked by the article published last week comparing infanticide (which it refers to as 'after-birth abortion') to abortion, I thought it might be useful to highlight Sarah Hrdy’s work on infanticide, both among humans and among other animals (see here for example). Some attempts to refute the analogy between abortion and infanticide presented in the polemical paper (see here for example) argued that infanticide is unnatural among humans, and hardly ever practiced.
Now, in her book Mother Nature Hrdy argues convincingly that infanticide is much more widespread among humans than we like to think. One need not agree with her attribution of a fitness-enhancing component to the practice of infanticide (criticized for example in this paper) to be convinced by the data she presents, indicating the ubiquity of the practice among humans. My point here is not to argue that, since humans practice it, it must be a legitimate practice (I did take ‘Is-ought fallacy 101’ after all), but simply to point out that there is a lot of misconception out there concerning actual occurrences of infanticide among humans (and other species). Everyone interested in the topic would benefit from a closer analysis of Hrdy’s work, even from a ‘purely philosophical’ point of view (i.e. the ‘ought’ side of the story).
By way of contrast, let me just mention a very sensible proposal I heard the other day: a vibrator should be offered to every young woman reaching a certain age (which age exactly is at this point still under debate), in a government-funded project in the interest of public health. The owner of this great idea does realize that even in a fairly liberal country such as the Netherlands, this might be a bit hard to sell, but he is confident that at least some political parties will see the merits of the proposal.
Many readers will have already seen Jesse Prinz’s recent blog post criticizing a psychological study defending the Male Warrior hypothesis, according to which men are evolved to seek out violent conflicts in order to get women. He now has a reply to the objections raised by two other bloggers, one of them one of the authors of the study (H/T Feminist Philosophers). I’m not sure this is appropriate language for blogging, but I just can’t help myself: Prinz is really kicking ass; there is no better way to describe it. Some excerpts:
In the early 1990s I was an undergraduate at Tufts, and took an exciting seminar with Dan Dennett on his manuscript that became Darwin's Dangerous Idea. After graduation and some travel I ended up alone on a Greek island, where, captivated by memetic theory, I wrote a very long (80 pages?) ms on how to turn memetics into a science. I sent it to Dennett and received a very kind, encouraging response. (All of this aided by the Greek mail.) I went to Chicago to work with Bill Wimsatt on the science of memetics in a weekly reading group with Bill and Betty Van Meer. Bill was skeptical, but open-minded. So, we decided to focus on technology as a species of cultural evolution and try to build a science of memes out of a collection of case studies. After relentless discussion and debate, I gave up on turning memes into a science just as the Journal of Memetics was founded (ca. 1997). Bill wrote a lovely, much-cited piece about memes, and that was the end of the memetic matter for me.
So, I read this review of a book bashing memetics with a touch of bemusement; the reviewer agrees with the book (and also bashes neo-classical economics). The following paragraph astonished me: