Aarøe Nissen is a 22-year-old math student at Aarhus University,
Denmark, with extraordinary memory abilities. He has competed in memory
sports for several years. He can recite the number Pi to more than
20,000 decimal places, recall thousands of names, faces and historical dates, and remember the order of a pack of cards.
Our perception of time varies greatly
depending on our age, mood, stress level and psychological health and
stability. Neurological and psychiatric disorders, such as Parkinson's disease, attention deficit hyperactivity disorder and schizophrenia, can disrupt the brain's timekeeping mechanism and warp our estimation of time. Patients suffering from these disorders are unable to properly coordinate events in time, over- or underestimating intervals ranging from several seconds to minutes.
How does this
happen? How does the brain manage to keep track of time and what goes
wrong in psychological disorders? Our senses (sight, hearing, smell,
taste and touch) use specialized sensory systems with task-specific
neurons to process sensory input. Yet there is no specific sensory
system for time. So how does our sense of time come about?
A review of a recent collection of essays on Davidson concludes with:
To conclude, there are some interesting and thought-provoking moments in
this collection. But the take-home message (no doubt unintended) is
that Davidson's insights and theorizing have far less currency in
current analytical philosophy than they did twenty or thirty years ago.
It is interesting to compare this volume with two very famous and
influential volumes: Truth and Interpretation: Perspectives on the Philosophy of Donald Davidson, edited by Lepore, and Actions and Events: Perspectives on the Philosophy of Donald Davidson, edited
by Lepore and Brian McClaughlin. Those two volumes show how central
Davidson was at the time (1985 and 1986) to most of the major areas of
philosophy (language, epistemology, metaphysics, and mind). In contrast,
reading the present volume brings home how much philosophy has moved
away (for better or for worse) from those Davidsonian themes that
captured the imagination of entire generations of analytic philosophers.--José Luis Bermúdez.
I rarely agree with José Bermúdez, but for once I share his sentiment. (Recall this post on how Anscombe's Intention is being unshackled from a Davidsonian interpretive frame.) Still, it would be interesting to see some careful data on this; this quick and dirty data suggests that the earlier "Davidsonic boom" may just be a being-at-Oxford-induced illusion--a known perceptual bias. Either way, José does not explain why "philosophy" moved "away" from Davidsonian themes. Is it just a consequence of changing fashions, or have fatal arguments been directed against the Davidsonian program? Is it too early to tell? Readers' insights much appreciated.
In the supernatural thriller Memory,
written by Bennett Joshua Davlin, Dr. Taylor Briggs, who is the leading
expert on memory, examines a patient found nearly dead in the Amazon.
While checking on the patient, Taylor is accidentally exposed to a
psychedelic drug that unlocks memories of a killer that committed
murders many years before Taylor was born. The killer turns out to be
his ancestor. Taylor’s memories, despite being of events Taylor never
experienced, are very detailed. They contain the point-of-view of his
ancestor and the full visual scenario experienced by the killer.
Though the story is supernatural, it raises an interesting question: is it possible to inherit our ancestors' memories? The answer is not black and white. It depends on what we mean by 'memory'. The story is
farfetched: there is no evidence or credible scientific theory
suggesting that we can inherit specific episodic memories of events that
our ancestors experienced. In other words, it’s highly unlikely that
you will suddenly remember your great-great-grandfather’s wedding day or
your great-great-grandmother’s struggle in childbirth.
We are conducting a study of color
discrimination and short-term color memory. I would be grateful if you
would participate in the study. You'll need to use the left and right
arrow keys to adjust the color of a square to match the color of a second image. It will only
take about 5-10 minutes. Click on the link below to begin. www.synesthesiaresearch.com/study
In 2008 two Princeton economists, Faruk Gul and Wolfgang
Pesendorfer, published an increasingly influential methodological statement, "The Case for Mindless Economics" (hereafter "GP08"). Professors Gul and Pesendorfer publish regularly together, and they also happen to be among the tightly knit group of core gatekeepers in the economics profession. So, for example, if you look at the submission guidelines of Theoretical Economics [TE], co-edited by F. Gul, you can read: "If you have previously submitted your paper to Econometrica, you have
the option of requesting that the referees' reports and covering letters
and the editor's decision letter be transferred to the coeditor
assigned to handle your paper at TE." Of course, until very recently Pesendorfer was one of the co-editors at Econometrica. (It would be impolite, of course, to view these journals as rent-seeking instruments, but how else to interpret economically this policy: "a paper judged to be unlikely to be acceptable by a
second round will be rejected, either without consultation with
referees or in response to referee reports. In either case, the
submission fee will not be refunded.") Econometrica does have an important "conflict of interest policy," but that does not prevent group-think. Either way, we can safely treat GP08 as a proxy for (recent) establishment views in economics.
The main and (almost) only target of GP08 is what they call "neuro-economics," which they conflate with (experimental) research on the brain. (They also frequently use the term "philosophy" to refer to an enterprise completely irrelevant to "economics" now and always.) GP08 systematically ignores experimental research conducted by, say, economists (e.g. Vernon Smith and his various collaborators) that also focuses on what GP08 calls "economic data." This is important to keep in mind when we evaluate the main thesis of GP08, which is that economics is mainly about rational choice theory (and its natural extensions). The thesis is offered as a descriptive account of "common practice" among economists (1), although we also learn that given the economic "evidence" available to economists this approach has also rightly earned a "central role in economics." (43-44) Here's a statement of the main thesis:
Whether "Lucy in the Sky with Diamonds" was a product of the Beatles'
experimentation with psychedelic drugs is still a subject of great
debate among Beatles fans and music experts. But it was no secret that
the lyrics of many of the pop legends' famous tracks were inspired by
LSD, including "I Am the Walrus," "Tomorrow Never Knows," and "What's
the New Mary Jane." The Beatles' creating music during a hallucinogenic trip is
not a rare case of acid-driven creation, invention or discovery. The
double helix structure of DNA reportedly occurred to molecular biologist Francis Crick
while he was tripping on the Lucy drug, and biochemist Kary Mullis
hit on the idea behind Polymerase Chain Reaction (PCR), a now
widely-used technique for amplifying a single piece of DNA by a factor
of 100 billion, while cruising along the Pacific Coast Highway one night
in his car on LSD.
In his famous paper entitled "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information"
cognitive psychologist George A. Miller of Princeton University argued
that our working memory, our ability to hold information in our minds
for a few seconds, is limited to about seven items, plus or minus two. That's fewer than the ten digits of an out-of-town American phone number. In light of this
you might wonder what to say about cases of people with extreme memory
abilities. Chao Lu
holds the Guinness world record in reciting Pi, a record dating back to
2005. Lu recalled 67,890 digits of Pi in 24 hours and 4 minutes with an
error at the 67,891st digit, saying it was a 5, when it was actually a
0. How is it possible to retrieve this quantity of information
accurately through working memory? Is it magic? After talking to several
people working in memory sports, we found out that it's not.
You are preparing for your upcoming exam, reading through thousands of pages. Suddenly you realize that you forgot to pay attention to what you actually read. You were reading along but your thoughts were elsewhere. "Good God," you think. "Hours of wasted time." You turn back the pages and start over. This time you make sure you pay close attention.
Recent research, to appear in the journal PNAS, suggests that you may be wasting even more time by doing that. You don't need attention to comprehend what you read or to do math. In fact, you may not even need consciousness. The researchers, located at Hebrew University, used a technique known as Continuous Flash Suppression (CFS) to suppress stimuli from conscious awareness in some 300 research participants for a short period of time. In CFS a series of rapidly changing images is presented to one eye, whereas a constant image is presented to the other. When using this technique, the constant image is supposedly not consciously perceived until after about 2 seconds.
At our lab in St. Louis we are working with several people with superhuman abilities, also known as “savant skills.” My research assistant Kristian Marlow and I are also currently finishing a book entitled The Superhuman Mind (under contract with an agency, see updates here). We are blogging about these cases almost daily over at Psychology Today. The following are four brief stories about some of the individuals we are working with.
In the late 1970s, Benjamin Libet showed
that motor cortex activity preparing for an action occurs before the conscious
act of willing that action. (Here is a nice demonstration of the experiment by Patrick Haggard.)
Libet's result has been replicated countless times (as above), and though it is perhaps rash to generalize too broadly, let's just say we have strong evidence for:
1. Conscious acts of "willing" an action occur after the brain activity that causes the action, and so
2. Conscious acts of willing do not cause action.
As a philosopher, which of the following conclusions can I legitimately draw?
Mind-body identity is really a very big deal in contemporary
philosophy of mind. Should it be? Do materialists and naturalists need to commit to the identity of mind and body? I don't think so. In the context of mind-body debates, the debate about identity turns out to be about possibility, and this should not be much of a concern for naturalists.
First, a word about evidence.
Suppose that I have evidence that strongly supports a proposition, p. Suppose further that q implies p, and that the only evidence that I have in support of q is that which supports p. Should I accept q?
In certain cases, obviously no. For example, suppose that I
have strong evidence that Peter (who happens to be a banker) is a thief.
Suppose, further, that my only evidence that lots of bankers (including Peter) are thieves is my evidence concerning Peter. Here, I would clearly be wrong to believe the stronger proposition. I have no reason to think that Peter is a bellwether for the banking industry.
In certain other cases, yes. Suppose that I have strong
evidence that a certain food gave Mary a lot of pleasure. Suppose, further,
that I have no evidence in favour of this food except what it did for Mary. Even so, I would be right to
believe that this food would give lots of people (including Mary) lots of pleasure. I have no reason to think that Mary is unique.
The general principle goes something like this, put in terms of possible worlds:
We have witnessed a growing interest in experimental philosophy in recent years. The
field that has commonly been referred to as "experimental philosophy"
has so far been taken to include empirical tests of intuitions
concerning philosophical concepts, such as knowledge and intentional
action. There are lots of other areas of philosophy that rely on
empirical data and even studies and experiments. Several years ago, it
became slightly uncool to do philosophy of language in complete
isolation from the empirical findings in linguistics. Many philosophers of
language decided to conduct their own tests of certain word groups or
word constructions. There are also several folks working in philosophy
of mind who take empirical data in neuroscience and psychology seriously. Some of us
also conduct our own studies in these areas.
It used to be called Asperger's Syndrome. A new suggestion for the upcoming Diagnostic and Statistical Manual of Mental Disorders (DSM-5), published by the American Psychiatric Association and expected to appear in May 2013, is to get rid of that term and include the condition under the label 'high-functioning autism'. Now take a look at some of the characteristics of high-functioning autism:
Obsession with a particular subject matter
Extremely good memory for details that seem quite irrelevant or uninteresting to others
Excellent theory of mind but inability to apply it to their own social encounters, unless trained
Exaggerated eye contact or lack of eye contact, unless trained
Poor face recognition skills
Difficulties remembering names
Show unusual attachment to objects or locations
Prefer to spend time alone and are often lost in their own thoughts
Hypersensitivity in sight, hearing, touch/body sensation, smell or taste (super-sensers)
Show unusual distress when routines are changed
A good number of these ten criteria apply to about 90 percent of philosophers I know... Go figure!
Inspired by this post and subsequent discussion on Facebook, I have started thinking about the philosophy of lucid dreaming again. As I have mentioned on earlier occasions, I have been practicing lucid dreaming for a few years. There are at least four levels of lucid dreaming:
1. Knowing that you are dreaming
2. Being able to control your own dream actions in a wake-like fashion
3. Being able to manipulate your dream surroundings
4. Being able to manipulate the dream actions of other people in your dreams
On earlier occasions, I was convinced that dreaming that you know that you are dreaming would suffice for knowing that you are dreaming. "I dreamed that" clearly is not a factive operator. If I dream that my cats can fly, it doesn't follow that my cats can fly. However, I thought that "I dream that I know that I dream" was a special case. "I dream that I know that I dream," I thought, entailed that I know that I dream.
This is a very critical review of a recent book by Paul Churchland. I can't say if it is fair or not, although to me it read as if the reviewers were rightly annoyed at the lack of attention to opposing views ("While his book has many virtues, it is unfortunate that he repeatedly fails to do justice to his opponents' views."). But I was struck by the last paragraph:
A final striking fact about Churchland's book is that it seems almost wholly divorced from empirical psychology. Remarkably, indeed, in a book that advances a theory of the mind that is supposed to be empirically supported, Churchland provides only around thirty scientific references, just a third of which date from the twenty-first century, and many of which are computational rather than experimental in nature. One would like to think that he chose to provide only a judicious selection so as not to overwhelm his audience with references. But since he ignores a great many results that appear inconsistent with his main theses, we fear that the paucity of references requires a different explanation. Indeed, Churchland ignores almost entirely the extensive work in developmental and experimental psychology, in neuroscience, and in studies of comparative cognition that have been conducted by cognitive scientists, especially over the last twenty years. And it is precisely once we examine the theories supported by empirical phenomena of these psychological sorts that past and present arguments for nativism and for LOT (appropriately understood) begin to emerge.
Now I doubt I have a firm grip on the computational vs experimental distinction here. (Is the former a simulation with Bayesian networks and the latter work on real humans and other animals?) Either way, this paragraph made me wonder: does a book entitled How the Physical Brain Captures a Landscape of Abstract Universals have to cite experimental literature?
(I’ve been through a ridiculously busy period of work-related traveling and thus scarce blogging, and in the next four weeks I’m supposed to be on holiday, so again scarce blogging. But there is still one topic I really want to discuss before the summer break, so here it is.)
Here are a couple of brain-teasers for your amusement on this Monday morning/afternoon (depending on your time zone):
(1) A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? _____ cents
(2) If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets? _____ minutes
(3) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake? _____ days
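For readers who want to check their answers against the arithmetic afterwards, here is a quick sketch of all three solutions (skip it if you'd rather work the puzzles out yourself first):

```python
# (1) Bat and ball: ball + (ball + 1.00) = 1.10, so 2*ball = 0.10.
ball_cents = round((1.10 - 1.00) / 2 * 100)
print(ball_cents)  # 5 cents, not the intuitive 10

# (2) Each machine makes one widget in 5 minutes, so 100 machines
# working in parallel make 100 widgets in those same 5 minutes.
minutes = 5
print(minutes)  # 5 minutes, not 100

# (3) The patch doubles daily and covers the lake on day 48,
# so it covered half the lake exactly one doubling earlier.
half_cover_day = 48 - 1
print(half_cover_day)  # day 47, not day 24
```

In each case the tempting answer (10, 100, 24) is wrong, which is precisely what makes these good brain-teasers.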
A few months ago, I had a post on a philosophy of mind conference in Bochum, criticizing their 100% male lineup of speakers (an otherwise very impressive lineup!). Today an announcement for the same conference was sent around, but now with two female speakers added to the program. I do not want to presume that the change was in any way prompted by my post, but whatever the cause, this is a good outcome!
(I also think that conference organizers who work towards redressing the gender imbalance of their conferences at a later stage deserve credit for their courage to do so; they are not afraid of being seen as ‘caving in’.)
An acquaintance of mine in Oxford, who is a Catholic priest, is learning the psalter by heart. There are 150 psalms altogether. This has proven to be a challenging task (note that he also has an academic day job, alongside his duties as a priest, so he can't devote himself to the task full-time). Some psalms are catchy or short, others have interesting and striking metaphors, express recognizable emotions ranging from deep despair to hopeful optimism, and the very best (or most well-known) combine these elements, such as psalm 23 and psalm 42. But other psalms are harder to remember (think of psalm 119, for instance [this psalm has an acrostic structure that helps recall a bit, but still, it's very long...]). In all, my acquaintance expects that the memorization process will take him years. When he has finally mastered all psalms "I will know as much as a 15-year-old boy in 1st century Israel". It seems that we are today a lot worse at accurate recall of verbal information (stories, songs, etc) than people in the past. An exquisite verbal memory was required to orally transmit long narrations like the Ramayana and the Mahabharata. Actors in Elizabethan times needed to learn about 70 different roles in one year's time. If it is true that we are worse at verbal recall, why would this be so?
Just now on NPR, there was a discussion about toddlers and iPads that could have really used a Heideggerian intervention. The issue was, more or less, what is happening when you give a 2 year old an iPad and they get completely absorbed for 5 hours straight? Is this good for them or not? And does it help them to learn what they need to learn in order to mature into smart, productive kids and adults? NPR seems to love this stuff; there’s a shorter article on the same topic here.
A range of experts was consulted, most of whom said that we don’t have enough (empirical) research to answer these questions yet, but that we shouldn’t panic – we just need to make sure that kids get a balance of screen time and face-to-face interaction with other people. But the question that started the whole discussion was a father’s question about what is going on for his son when he “zones out” in front of the iPad. This question remained unaddressed, as far as I could tell from my own zoning in and out of the radio discussion. But isn’t this basically a matter of Benommenheit, or captivation, literally “being taken,” being absorbed in an object to the point where everything else fades away?
I have written about our case study of a person with acquired synesthesia and savant syndrome in an earlier post on this blog. To make a long story short, JP was hit on the head in a mugging incident and acquired traumatic brain injury.
After the incident he started experiencing the world in terms of geometrical figures. He had also lost his ability to see smooth boundaries and smooth motion. He sees objects as separated from their surroundings by tiny tangent and secant lines. He experiences motion in picture frames. When objects are moving relative to him or he is moving relative to objects, three-dimensional geometrical figures form before his eyes.
Right after the incident he started drawing some of these images by hand. They turned into beautiful pieces of art that have received several awards. After some elementary math training following the accident, JP also experienced automatic visual imagery in response to certain mathematical formulas.
(This post is dedicated to Eric and Sarit, who are getting married this weekend, and to my husband and myself, who are celebrating 10 years of marriage today!)
This past week, Ronald de Sousa (Toronto) was delivering a series of lectures on the philosophy of love at Leiden University. The material presented corresponds to the building blocks of the monograph on love he is currently working on. As is well known, Ronnie has made decisive contributions to the philosophy of emotions, in particular in his books The Rationality of Emotions (1987, MIT) and Emotional Truth (2011, OUP). He is now focusing specifically on romantic love – that many-splendored thing – combining elements from philosophy, literature and the neuroscience of love; so there are many reasons to look forward to the end-result, given that this plural, integrative perspective seems to be exactly what is required to make sense of such a rich and complex topic.
Due to other commitments, I could only attend one of the four lectures he delivered; but as many of the points he raised connect nicely with some of my previous posts here at NewAPPS, I figured I might as well write a post on the lecture. The overall thesis of the lecture is that most of our mainstream conceptions of love are ultimately ideologies of the pernicious kind. Ronnie started with the mother of all love ideologies: Aristophanes’ myth in Plato’s Symposium. According to the myth, human beings used to be somewhat round creatures with eight limbs and two faces, and came in three kinds: male, female and androgynous (which allows the myth to offer an account of sexual orientation as well). They were then chopped in two by a furious Zeus, and thus started to wander around the Earth missing ‘the other half’ of themselves. Aristophanes claims that when two people who were separated from each other find each other again, they never want to be separated and feel ‘whole’ again (192c).
Now that Eric has (duly) highlighted a conference on extended cognition with a very bad gender balance (11 men, 0 women), for the sake of fairness I’d like to draw everyone’s attention to another conference in the same area/topic which does have a much better gender balance, the ‘Distributed cognition and distributed agency’ workshop, taking place at Macquarie University pretty much as we speak.
My colleague Fred Keijzer and I are organizing a small workshop on 'The mark of the cognitive'. Here is the 'official' announcement, which might interest the many philosophy of mind-minded readers of NewAPPS.
'The mark of the cognitive' workshop
DATE: May 11th 2012, 9.30 to 18.00
PLACE: Faculty of Philosophy, University of Groningen
ATTENDANCE: All welcome, but please send a message to cdutilhnovaes at yahoo dot com if you intend to come.
One of the subjects I work with, JP, has acquired synesthesia and acquired savant syndrome. This happened as a result of a brutal assault in 2002, during which he was kicked and hit on the head. He was subsequently diagnosed with a bleeding kidney and an unspecified head injury. What the doctors didn't know was that JP no longer saw the world the way he used to. Objects suddenly did not have smooth boundaries. Things no longer moved smoothly. Motion took place in picture frames. It looked like someone paused and unpaused the flow of the world very rapidly. Even more amazing: JP was suddenly able to see vivid fractal images of objects with a fractal structure (such as broccoli).
JP's response to his new way of seeing the world was to withdraw from it. He spent the following three years in his apartment and refused to leave unless it was strictly necessary. After three years in complete isolation JP figured that he would try to draw what he saw, so he could make people understand him. He started drawing. And he continued. He drew and drew and drew, using only a pencil, a ruler and a compass. The results were beautiful hand-drawn fractal-like images. JP didn't know then that he was the first in the world to hand-draw mathematical fractals and that he would later win prizes for his drawings. He didn't even know what he was drawing, except that it was what he saw.
"[W]e can see the philosopher at work here. He lets himself be led into various corners by the authors he is considering; he then finds his way out of these corners and into his own conceptual space. He arrives, perhaps, where he might not have arrived without this working through the other's thoughts. In this respect, what he says about the problem of intersubjectivity and language applies equally to the philosophical process. Philosophy is in some way a kind of language acquisition -- not in the simple sense of picking up a vocabulary, but in the sense of being guided along by the language that others have used, and then formulating an expression of something that goes beyond that. As Merleau-Ponty sometimes puts it, the child does not acquire language so much as language acquires the child. The same can be said of the philosopher and ideas."--Shaun Gallagher commenting on Merleau-Ponty.
Another well-worn example bites the dust? You remember that famous study in which the participants, if primed with words connoting agedness, walked more slowly when leaving the lab.
A new study by the Belgian team of Stéphane Doyen, Olivier Klein, Cora-Lise Pichon, and Axel Cleeremans not only failed to replicate the effect, but also appeared to show that the effect observed in the original study was owing to the experimenters’ expectations.