Philosophy is almost always interdisciplinary. But there is a strand of philosophy of mind—empirically informed speculation about mind, perception, and cognition—that stands out because it involves genuine partnership with the empirical sciences.
A couple of weeks ago, I remarked here on the method of imaginative counter-example in philosophy. Thanks to Helen De Cruz, the discussion morphed into a lively exchange of views about the value of intuition. On this topic, it is instructive to note how intuition has interacted with science in philosophy of mind. On a positive note for philosophers’ use of intuition, David Chalmers’ completely a priori zombie example played a palpable role (though not all by itself, of course) in the revival of scientific interest in consciousness. On the other side of the ledger, Tom Nagel used certain cognitive dissociations in “brain bisected” patients to challenge long-standing philosophical intuitions concerning the unity of consciousness. Nagel has since been vindicated by the discovery of numerous cognitive dissociations that challenge classic statements of the unity of consciousness. So a productive conversation between cognitive neuroscience and philosophy of mind has been going on for a long time, and between philosophers and psychologists for an even longer time.
What I want to highlight today is a genuine back and forth between scientific experiment and philosophical intuition on one question, with Ned Block holding up the philosophy end. The question is this: Can a mental state be conscious, and yet be completely inaccessible to the subject? Many philosophers think that the idea is absurd by definition. They think that a state is conscious if there is “something it’s like” to be in that state. They ask: How can there be something it’s like to be in a state if you have no access to that state? Many neuroscientists share this sense of absurdity: they think that it is “metaphysical” (in a bad sense of the term) to suppose that a subject could be incapable of being aware of a state that is (nonetheless) conscious. How, then, would one ever know that the state is conscious?
Block (Behavioral and Brain Sciences 2007) seeks to untangle this conceptual knot by analysing a number of empirical findings. I’ll recount just one here, in simplified form and without various important wrinkles. The example is not, in fact, one on which Block rests his case, but it has the virtue of bringing out rather clearly the nature of the methodology used.
There is a part of your brain—the fusiform face area—that is active when you look at faces. This part of the brain is not active when you process other kinds of objects, such as artefacts and houses. Now, this doesn’t show anything about consciousness yet—it just shows that the face area is active in the recognition of faces—but some neuroscientists devised a clever experiment that made the link to consciousness compelling. They exploited a phenomenon known as “binocular rivalry”. In the binocular rivalry experiments, subjects are simultaneously presented with a face to one eye and a house to the other. (Neither eye can see what is presented to the other.) Now, when your two eyes are presented in this way with different scenes that the brain cannot fuse, you will experience the two scenes alternately. Accordingly, the subjects experienced the house and the face alternately—first one, then the other; never both together. They found, rather amazingly, that when subjects experienced seeing a face, their fusiform face area was active (as revealed by an fMRI scan), and when they experienced seeing a house, it was not active. So it seems that activity in the fusiform face area is correlated not just with recognition, but with awareness of faces. The link to consciousness has now been made.
Here comes the crucial point. There is a patient, GK, who has a very unfortunate condition. When presented with objects on one or the other side of his visual field, GK identifies and sees them. But when he is presented with objects on both sides, he reports that he sees only the object on the right, and that he simply cannot see anything on the left. In the crucial experiment, he was presented with a scene in which there was a face on the left and another object on the right (not rivalrously). He denied that he could see the face at all—but his fusiform face area was active, as if he could.
I won’t go into the arguments back and forth: obviously there are lots of ways of interpreting this experiment, at least prima facie, and Block discusses a number of other cases in the attempt to nail down his analysis. The point is this: Block proposes that, counter to the intuitions reported earlier, we should distinguish between phenomenology and access. The activity of GK’s fusiform face area shows that he has face-phenomenology; yet he cannot access this conscious state.
Block proposes, then, to reform the concepts that yield the intuition that consciousness implies cognitive access. He argues that conceptual reform is required by the best explanation of the phenomena—the GK dissociation, the evidence about the activation of the face area, etc. (As I have emphasized, Block does not rely on this example alone, and acknowledges that taken by itself it can be interpreted differently.) This proposal is extremely challenging—I confess that I found it mind-boggling at first acquaintance. Block is proposing that certain mental states have a phenomenology even though the subject is incapable of becoming aware either of them or of their phenomenology. The idea has gained some traction, and is the subject of much discussion both in science and in philosophy. It is an illustration of how philosophers and scientists can cooperate on core ideas.