Enclose the sun inside a layered nest of thin spherical computers. Have the inmost sphere harvest the sun's radiation to drive computational processes, emitting waste heat out its backside. Use this waste heat as the energy input for the computational processes of a second, larger and cooler sphere that encloses the first. Use the waste heat of the second sphere to drive the computational processes of a third. Keep adding spheres until you have an outmost sphere that operates near the background temperature of interstellar space.
Congratulations, you've built a Matrioshka Brain! It consumes the entire power output of its star and produces many orders of magnitude more computation per microsecond than all of the current computers on Earth do per year.
Here's a picture:
(Yes, it's black. Maybe not if you shine a flashlight on it, though.)
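To get a rough feel for the temperature cascade, here is a minimal back-of-the-envelope sketch in Python. It assumes each thin shell is an ideal blackbody that must re-radiate the full solar luminosity outward; the shell radii (1 to 32 AU) are illustrative choices of mine, not figures from the description above, and inward radiation exchange between layers is ignored.

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
AU = 1.496e11            # astronomical unit, m

# Illustrative shell radii (1, 2, 4, ... 32 AU). Each shell, treated as a
# blackbody, must re-radiate the full solar luminosity outward from an
# area of 4*pi*r^2 in steady state.
for r in (AU * 2 ** k for k in range(6)):
    T = (L_SUN / (4 * math.pi * r ** 2 * SIGMA)) ** 0.25
    print(f"r = {r / AU:5.1f} AU  ->  T = {T:6.1f} K")
```

Under these simplifying assumptions an innermost shell at 1 AU runs at roughly 400 K, while a shell out at 32 AU is already below about 70 K, which is the sense in which the outer layers approach the background temperature of interstellar space.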
I live very close to Port Meadow, one of the largest meadows of open common land in the UK, already in existence in the 10th century and mentioned in the Domesday Book in 1086. I saw my first-ever live, wild oriole there. The land has never been ploughed, so it is possible to discern outlines of older archaeological remains, some going back to the Bronze Age. The consistent management of the land makes the changes predictable: it turns into a lake in winter, is sprinkled with buttercups this time of year (see pictures below the fold - both are taken at about the same place, but one in May and the other in November), and looks mysterious and misty in the fall. Whenever I walk on Port Meadow I take my camera, anxious to capture any beautiful view that falls on my retina and preserve it for future memories. And, like many other parents, I take dozens of pictures of my growing children. Recently, I saw an NPR piece (no author given) that took issue with this tendency to want to preserve pictures for future memory.
The article launches a two-pronged attack against pictures. First, by worrying about capturing the moment, we lose the transience and beauty of the moment and enjoy it less. Second, the article cites psychological evidence that shows that people actually remember fewer objects during a museum visit if they were allowed to take photos of them, compared to when they were only allowed to observe them. The phenomenon is known as the photo-taking-impairment effect. Linda Henkel, who discovered the effect, says: "Any time…we count on these external memory devices, we're taking away from the kind of mental cognitive processing that might help us actually remember that stuff on our own."
Eric Schwitzgebel recently took up the question of whether an infinitely extended life must be boring. The discussion ended (when I looked at it) with Eric’s fruitfully suggesting that we look at various cognitive architectures and their capacities for boredom over the long run.
No doubt there are many kinds of minds. Let’s radically simplify the problem, in hopes of arriving at a precise answer for at least one case. (After all, if a mind without much to think about can escape boredom, then presumably a more amply stocked mind can too.) The mind I want to consider thinks only of natural numbers and number theory (algebraic and analytic). Its “perceptions” consist in presentations of random natural numbers. Will it be bored?
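Just to fix ideas, here is a toy sketch of my own of what such a mind's stream of "perceptions" might look like: random natural numbers arrive, each prompting a few elementary number-theoretic observations. The particular properties checked are arbitrary illustrative choices, not part of the thought experiment.

```python
import random

# A toy "mind" whose only perceptions are random natural numbers and whose
# only thoughts are elementary number-theoretic observations about them.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def observe(n):
    facts = []
    if is_prime(n):
        facts.append("prime")
    if int(n ** 0.5) ** 2 == n:
        facts.append("perfect square")
    if n & (n - 1) == 0:
        facts.append("power of two")
    return facts or ["nothing special noticed"]

for _ in range(5):                     # five random "perceptions"
    n = random.randrange(1, 10 ** 6)
    print(n, observe(n))
```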
Readers of the Brains blog might know about a symposium there concerning a paper by Philipp Koralus. In his commentary on the paper, Felipe de Brigard mentions the problem of captured attention:
"I have a hard time understanding how ETA may account for involuntary attention. Suppose you are focused on your task—reading a book at the library, say—and you hear a ‘bang’ behind you. A natural way of describing the event is to say that one’s attention has been involuntarily captured by the sound. Now, how does ETA explain this phenomenon?"
"So, you might have been asking, as part of your task of reading the blog, 'What does the blog say?' Now, you are getting the incongruent and irrelevant answer 'There’s a loud noise behind you.' There are now two possibilities, similar to what happens in the equivalent case in a conversation. One possibility is that you accommodate the answer, adopting a new question (and thereby a new task) to which 'There’s a loud noise behind you' would be a congruent answer, maybe, 'what sort of thing going on behind me?...You could also refuse to be distracted and then exercise some top-down control on your focus assignment to bring it back to something that’s relevant to your task.'
When I coined "the problem of captured attention" in my 2012 Synthese paper, "The Subject of Attention" (not cited by Koralus/de Brigard), I took a similar line, but focused on the activity of the subject, rather than on questions and answers:
Weird Tales, one of the best and oldest horror and dark fantasy magazines, has just launched a new series of ultra-short flash fiction (under 500 words), Flashes of Weirdness. To inaugurate the series, they've chosen a piece of mine -- which is now my second publication in speculative fiction.
My philosophical aim in the story -- What Kelp Remembers -- is to suggest that on a creationist or simulationist cosmology, the world might serve a very different purpose than we're normally inclined to think.
At some point, I want to think more about the merit of science fiction as a means of exploring metaphysical and cosmological issues of this sort. I suspect that fiction has some advantages over standard expository prose as a philosophical tool in this area, but I'm not satisfied that I really understand why.
In much of the philosophy of language and mind coming out of the late Wittgenstein and/or early Heidegger, a distinction is made between merely following a norm versus also being able to correctly assess whether others are following that norm. Note that the Brandom of "Dasein, the Being that Thematizes" (in Tales of the Mighty Dead) and the Mark Okrent of "On Layer Cakes" both mark this distinction, though they disagree on whether the latter ability requires language. Okrent (who objects that Brandom's view entails that human aphasics and non-linguistic deaf adults have no minds) writes:
Because all tool use is embedded in a context of instrumental rationality, there is more to using a hammer correctly than using it as others do. Sometimes it is possible to use a hammer better than the others do, even if no one else has ever done it in that way, and no one else recognizes that one is doing so, because the norm that defines this use as ‘better’ is independent of what is actually recognized within the community. That norm is the norm of instrumental rationality: it is good to do that which would achieve one’s ends most completely and most efficiently, were anyone to do it in that way. For the same reason, it is sometimes possible for a member of a society to improve a hammer, or repair it, by giving it a structure that no hammer has previously had in that society.
We might soon be creating monsters, so we'd better figure out our duties to them.
Robert Nozick's Utility Monster derives 100 units of pleasure from each cookie she eats. Normal people derive only 1 unit of pleasure. So if our aim is to maximize world happiness, we should give all our cookies to the monster. Lots of people would lose out on a little bit of pleasure, but the Utility Monster would be really happy!
Of course this argument generalizes beyond cookies. If there were a being in the world vastly more capable of pleasure and pain than are ordinary human beings, then on simple versions of happiness-maximizing utilitarian ethics, the rest of us ought to immiserate ourselves to push it up to superhuman pinnacles of joy.
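A toy calculation makes the worry concrete. In the sketch below, only the 100-to-1 utility ratio comes from the example above; the ten cookies and nine ordinary people are invented for illustration. A simple happiness-maximizer always hands everything to the monster.

```python
# Toy happiness maximization. The 100-to-1 utility ratio is from the
# example above; the 10 cookies and 9 ordinary people are invented here.
agents = {"monster": 100, **{f"person_{i}": 1 for i in range(9)}}
cookies = 10

def total_happiness(allocation):
    """Sum of utility-per-cookie times cookies received."""
    return sum(agents[a] * n for a, n in allocation.items())

equal_split = {a: cookies / len(agents) for a in agents}   # one cookie each
all_to_monster = {"monster": cookies}

print(total_happiness(equal_split))     # 109.0
print(total_happiness(all_to_monster))  # 1000
```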
Now, if artificial consciousness is possible, then maybe it will turn out that we can create Utility Monsters on our hard drives. (Maybe this is what happens in R. Scott Bakker's and my story Reinstalling Eden.)
[H]e is proclaiming his new project, the Wolfram Language, to be the biggest computer language of all time. It has been in the works for more than 20 years, and, while in development, formed the underlying basis of Wolfram’s popular Mathematica software. In the words of Wolfram, now 54, his new language “knows about the world” and makes the world computable.
From the point of view of the philosophical debates on artificial intelligence, the crucial bit is the claim that his new language, unlike all other computer languages, “knows about the world”. Could it be that this language does indeed constitute a convincing reply to Searle’s Chinese Room argument?
To be clear, I take Searle’s argument to be problematic in a number of ways (some of which are very aptly discussed in M. Boden’s classic paper), but the challenge posed by the Chinese Room seems to me to still stand; it still is one of the main questions in the philosophy of artificial intelligence. So if Wolfram’s new language does indeed differ from the other computer languages thus far developed specifically in this respect, it may offer us reasons to revisit the whole debate (which for now seems to have reached a stalemate).
Article in Science Daily here, which claims that a lot of new evidence supports Roger Penrose's old conjectures about the way that quantum physics is implicated in consciousness. If any philosophers of mind feel like explaining this to the rest of us, that would be very cool.
I'm thinking (again) about beeping people during aesthetic experiences. The idea is this. Someone is reading a story, or watching a play, or listening to music. She has been told in advance that a beep will sound at some unexpected time, and when the beep sounds, she is to immediately stop attending to the book, play, or whatever, and note what was in her stream of experience at the last undisturbed moment before the beep, as best she can tell. (See Hurlburt 2011 for extensive discussion of such "experience sampling" methods.)
I have been reading Daniel Hutto and Erik Myin’s book Radicalizing Enactivism for a critical notice in the Canadian Journal of Philosophy. Enactivism is the view that cognition consists of a dynamic interaction between the subject and her environment, and not in any kind of contentful representation of that environment. I am struck by H&M’s reliance on a famous 1991 paper by the MIT roboticist Rodney Brooks, “Intelligence Without Representation.” Brooks’s paper is quite a romp—it has attracted the attention of a number of philosophers, including Andy Clark in his terrific book, Being There (1997). It’s worth a quick revisit today.
To soften his readers up for his main thesis, Brooks starts out his paper with an argument so daft that it cannot have been intended seriously, but which encapsulates an important strand of enactivist thinking. Here it is: Biological evolution has been going for a very long time, but “Man arrived in his present form [only] 2.5 million years ago.” (Actually, that’s a considerable over-estimate: Homo sapiens is not more than half a million years old, if that.)
He invented agriculture a mere 19,000 years ago, writing less than 5000 years ago and “expert” knowledge only over the last few hundred years.
This suggests that problem solving behaviour, language, expert knowledge and application, and reason are all pretty simple once the essence of being and reacting are available. That essence is the ability to move around in a dynamic environment, sensing the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction. This part of intelligence is where evolution has concentrated its time—it is much harder. (141)
If Elisabeth Lloyd’s take on the female orgasm is correct—i.e. if it is homologous to the male orgasm—then FEMALE ORGASM is not a proper evolutionary category. Homology is sameness. Hence, male and female orgasms belong to the same category. The orgasm is an adaptation, whether male or female (and Lloyd should agree). It is not a spandrel or by-product.
I’ll get back to this in a moment, but first some background. There are five NewAPPSers who have a particular interest in the philosophy of biology: Roberta Millstein, Helen De Cruz, Catarina Dutilh Novaes, John Protevi, and myself. Aside from Roberta, each of us comes at it from a related area in which biological insight is important. For me, that area is perception. I have written quite a bit about biology, but my mind has always been at least half on the eye (and the ear, and the nose, and the tongue, . . .).
There is a divide among us with respect to a leading controversy in the field. Catarina is strongly anti-adaptationist and I am strongly adaptationist (perhaps because of my motivating interest in perception, which is exquisitely adaptive). Roberta, Helen, and John are somewhere in between, but likely closer to Catarina than to me. You can gauge where I stand when I tell you that in my view, Gould and Lewontin’s 1979 anti-adaptationist manifesto, “The Spandrels of San Marco and the Panglossian Paradigm,” is one of the worst, and certainly one of the most mendacious, papers I have ever read in any field. Among the five of us, I am sure I am alone in this. Given all of this, my take on adaptationism with regard to the orgasm may get a hotly negative response from my co-bloggers. Nevertheless, I’ll get on with it.
When I studied philosophy in graduate school [in the 1990s--ES], my peers and I went to classes where we were made to read Kripke and Davidson and Quine and Putnam. Then, duty done, we met together at a coffee shop and discussed the latest paper from Millikan, pens in hand, arguing passionately. I cannot even recall how we found her work and knew we had to study it, but somehow there was consensus among us that she was producing the most exciting philosophy happening right then. Sometimes we were convinced that Millikan got a problem wrong... more often we felt she had offered a solution to some problem that other philosophers had mostly just obscured. But that was not what made us study her work so eagerly. The important thing was that Millikan gave us tools. Her theory of proper functions was something we could actually use. It had wide and general utility...And, as we contrasted her work with what our instructors considered the contemporary canon, we felt certain that Millikan represented the vanguard.
I mention all this because the second striking feature of Millikan's responses to these thirteen criticisms is that she still seems the radical maverick. If it is fair to consider her critics in this volume as representative of current philosophy, then one gets the impression that most of us are still catching up with Millikan....To see her respond to this pressure, however, is very helpful to understanding the details and applications -- and, ultimately, the novelty -- of her approach.--Craig DeLancey, reviewing Millikan and Her Critics [the volume includes a chapter by our very own Mohan--ES]
I sometimes wonder how common DeLancey's experience is of graduate students discovering and debating exciting work unrelated to one's instructors' sense of significance. I often have the disheartening sense that it is more common that graduates recycle the shared and undoubtedly sophisticated commitments of their graduate instructors (despite the now relatively easy access to other people's works). This recycling is often itself very sophisticated with accompanying mini-narratives that bolster the priority claims of privileged participants (see, for example, this interesting review). There is nothing dishonest about this kind of recycling and it allows the generation of progress, but one wonders if more frequent intellectual parricide/matricide wouldn't be healthier for the discipline.
Last week I had a post up on metaphorical language in cognitive science, which generated a very interesting discussion in comments. I don’t think I’ve sufficiently made the case for the ‘too much’ claim, and the post was mostly intended to raise the question and foster some debate. (It succeeded in that respect!)
There is one aspect of it, though, which I would like to follow up on. One commenter (Yan) pointed out that it’s not so surprising that digital computers ‘think’ like us, given that they are based on a conception of computation – the Turing machine – which was originally proposed as a formal explanans for some cognitive activities that humans in fact perform: calculations/computations. It is important to keep in mind that before Turing, Post, Church and others working on the concept of computability in the 1930s, computation/effective calculation was an informal concept, with no precise mathematical definition (something that has been noted by e.g. Wilfried Sieg in his ‘Gödel on computability’). To provide a mathematically precise account of this concept, which in turn corresponds to cognitive tasks that humans do engage in, was precisely the goal of these pioneers. So from this point of view, to say that digital computers are (a bit) like human minds gets the order of things right; but to say that human minds are like digital computers goes the wrong way round.
Synesthesia is a condition in which attributes, such as color, shape, sound, smell and taste, bind together in unusual ways, giving rise to atypical experiences, mental images or thoughts. For example, a synesthete may experience numbers and letters printed in black as having their own unique colors, or spoken words as having specific tastes normally only associated with food and drinks. People who have the condition usually have had it since early childhood, though there are also cases in which people acquire it after brain injury or disease later in life.
One hypothesis about how synesthesia develops in early childhood suggests that sometimes the brain fails to get rid of structural connections between neural regions that do not normally project to each other. In early childhood the brain develops many more neural connections than it ends up using. During development, pruning processes eliminate a large number of these structural connections. We don't know much about the principles underlying neural pruning, though some of the connections that the brain prunes away appear to be pathways that are not needed. So, one possibility is that the pruning processes in synesthetes are less effective compared to those in non-synesthetes, and that some pathways that are pruned away in most people remain active in synesthetes.
Last week, Neil Sinhababu had a great post here at New APPS picking up on an attempted explanation for why members of the so-called Generation Y seem so dissatisfied with their lives (if indeed they are). This latter post has been receiving a fair share of attention at the usual places (Facebook, Twitter), and though admittedly funny, it seems to suffer precisely from the limitation pointed out by Neil; it treats the problem mostly as a psychological problem pertaining to the individual sphere (including of course the parents component, as any good Freudian would have it), thus disregarding the significant economic changes that took place in recent decades. However, I do want to disagree with Neil’s quick dismissal of the non-negligible role that the article claims for new technologies such as Facebook and social media in general in the phenomenon. Neil says:
And I'm suspicious of explanations in terms of the special properties of social media -- mostly it gives you a new way to do kinds of social interaction that have been around forever.
Let's distinguish between Mythical history (Myth) and Mistaken history (Mish).
Myth uses narratives about the past to indicate conceptual linkages among and within (various) natural and social kinds.
Mish contains factual errors about the past.
It's possible that Myth = Mish; but Myth need not be Mish (nor does Mish always need to be Myth).
In reflecting on the public and private responses I have received to my criticisms of Thomas Nagel's abuse of history (here and here), I realize I need some such distinction. (In particular, I thank Mazviita Chirimuuta for making me see what's at stake here!)
Myth and Mish are both compatible with (i) messy history, that is, one that suggests the past is (always more) complex and ambiguous (etc.), and (ii) clean history, that is, one that extracts some determinate claim about the way it was (other than being messy). In practice, Myth tends to be clean (but, say, Foucault practices the genre, in part, by being very messy). Mythical history (be it Mish, clean, messy, or not) is philosophically interesting because it can structure how we think about the world and the way we conceive of the nature of the problems at hand (or overlooked).
"Perhaps you read about the case a few year's ago involving Target. The
store began sending coupons for pregnancy related products to a teenage
girl. The girl's father was incensed. Were they trying to get her to
have a kid in high school? Target's management was embarrassed and
apologized profusely. It turned out the girl was pregnant. Target — no one
exactly, a computerized pattern detecter — surmised this before she had
told her father, based on information about her shopping behavior. It
is possible...to know what someone knows before
she has made the information public to anyone at all...
Actually, the point is more far-reaching still. What it is to be thinking this or that, what it is to intend this or that,
is precisely for one to be integrated, in the right sort of way, in a
complex causal or informational network. This is controversial, but it
is remarkably well established. Indeed, it is the very
foundation of the theory of computation. Computers aren't smart because
they have, inside them, clever thoughts. No. What makes the
micro-electronic states of a computer intelligent, or just contentful
— for example, what makes it the case that a computer is performing
this or that task — is the way those internal states are hooked up,
causally, and systematically, to the right kinds of inputs and outputs.
Computers don't need to understand what's going on inside of them to
solve problems."--From NPR, 13.7.
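For what it's worth, here is a deliberately crude sketch of the kind of "computerized pattern detector" the quote gestures at: a hand-written scoring rule over a purchase history. The item weights and the threshold are invented for illustration; Target's actual model has never been made public.

```python
# A crude stand-in for a purchase-based pattern detector. The item
# weights and threshold are invented for illustration only.
PREGNANCY_SIGNALS = {
    "unscented lotion": 2.0,
    "prenatal vitamins": 5.0,
    "large tote bag": 1.0,
    "cotton balls": 1.5,
}

def pregnancy_score(purchases):
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in purchases)

history = ["unscented lotion", "cotton balls", "prenatal vitamins"]
if pregnancy_score(history) > 6.0:
    print("send pregnancy-related coupons")   # 8.5 > 6.0, so this fires
```

The point of the sketch is only that the detector's "knowledge" consists entirely in how inputs (purchases) are systematically hooked up to outputs (coupons), which is the connection the quoted passage draws to the theory of computation.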
[A scholar submitted the following data and accompanying analysis; I have made minor edits only.--ES]
Lycan and Prinz, Mind and Cognition: An Anthology (3rd ed., 2008) has three texts by women (one of them co-written with a man) among 56 chapters.
Heil, Philosophy of Mind: A Guide and Anthology (2004) has five texts by women among 50 chapters.
Chalmers, Philosophy of Mind: Classical and Contemporary Readings (2002) has two texts by women among 63 chapters.
Morton, A Historical Introduction to the Philosophy of Mind: Readings with Commentary (2nd ed., 2010) has three texts by women among 40 chapters.
O'Connor and Robb, Philosophy of Mind: Contemporary Readings (2003) has zero texts by women among 28 chapters.
Bermudez, Philosophy of Psychology: Contemporary Readings (2006) has three texts by women (two of them co-written with men) among 30 chapters.
Beakley and Ludlow, The Philosophy of Mind: Classical Problems/Contemporary Issues (2nd ed., 2006) has two texts by women (one co-written) among 83 chapters.
Noe and Thompson, Vision and Mind: Selected Readings in the Philosophy of Perception (2002) has one text by a woman among 21 chapters.
Possibly I am missing one or two relevant anthologies, but I am confident that these are representative. Note also that I've not included a few older volumes -- including Block, Flanagan, and Guzeldere's The Nature of Consciousness: Philosophical Debates (1997), Goldman's Readings in Philosophy and Cognitive Science (1993), and Block's Readings in Philosophy of Psychology (2 vols., 1980-81) -- but the breakdown in these is no different from the ones listed above. Finally, note that things do seem to be much better in the various Oxford handbooks, but these consist of contributed articles instead of anthologized texts.
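Summing the counts listed above gives a quick aggregate picture; the minimal tally below uses exactly the numbers reported by the scholar (texts by women, total chapters).

```python
# Tallying the counts reported above: (texts by women, total chapters).
anthologies = {
    "Lycan & Prinz 2008":    (3, 56),
    "Heil 2004":             (5, 50),
    "Chalmers 2002":         (2, 63),
    "Morton 2010":           (3, 40),
    "O'Connor & Robb 2003":  (0, 28),
    "Bermudez 2006":         (3, 30),
    "Beakley & Ludlow 2006": (2, 83),
    "Noe & Thompson 2002":   (1, 21),
}
women = sum(w for w, _ in anthologies.values())
total = sum(t for _, t in anthologies.values())
print(f"{women}/{total} chapters, about {100 * women / total:.1f}%")  # 19/371, about 5.1%
```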
Aarøe Nissen is a 22-year-old math student at Aarhus University, Denmark, with extraordinary memory abilities. He has competed in memory sports for several years. He can recite the number Pi to more than 20,000 decimal places, recall thousands of names, faces and historical dates, and remember the order of a pack of cards.
Our perception of time varies greatly depending on our age, mood, stress level and psychological health and stability. Disorders such as Parkinson's disease, attention deficit hyperactivity disorder and schizophrenia can mess with the brain's time-keeping mechanism and warp our estimation of time. Patients suffering from these disorders are unable to properly coordinate events in time; they over- or underestimate time intervals ranging from several seconds to minutes.
How does this happen? How does the brain manage to keep track of time, and what goes wrong in these disorders? Our senses (sight, hearing, smell, taste and touch) use specialized sensory systems with task-specific neurons to process sensory input. Yet there is no specific sensory system for time. So how does our sense of time come about?
A review of a recent collection of essays on Davidson concludes with:
To conclude, there are some interesting and thought-provoking moments in this collection. But the take-home message (no doubt unintended) is that Davidson's insights and theorizing have far less currency in current analytical philosophy than they did twenty or thirty years ago. It is interesting to compare this volume with two very famous and influential volumes: Truth and Interpretation: Perspectives on the Philosophy of Donald Davidson, edited by Lepore, and Actions and Events: Perspectives on the Philosophy of Donald Davidson, edited by Lepore and Brian McLaughlin. Those two volumes show how central Davidson was at the time (1985 and 1986) to most of the major areas of philosophy (language, epistemology, metaphysics, and mind). In contrast, reading the present volume brings home how much philosophy has moved away (for better or for worse) from those Davidsonian themes that captured the imagination of entire generations of analytic philosophers.--José Luis Bermúdez.
I rarely agree with José Bermúdez, but for once I share his sentiment. (Recall this post on how Anscombe's Intention is being unshackled from a Davidsonian interpretive frame.) Still, it would be interesting to see some careful data on this; this quick and dirty data suggests that the earlier "Davidsonic boom" may just be a being-at-Oxford-induced illusion--a known perceptual bias. Either way, José does not explain why "philosophy" moved "away" from Davidsonian themes. Is it just a consequence of changing fashions, or have fatal arguments been directed against the Davidsonian program? Is it too early to tell? Readers' insights much appreciated.
In the supernatural thriller Memory, written by Bennett Joshua Davlin, Dr. Taylor Briggs, who is the leading expert on memory, examines a patient found nearly dead in the Amazon. While checking on the patient, Taylor is accidentally exposed to a psychedelic drug that unlocks memories of a killer who committed murders many years before Taylor was born. The killer turns out to be his ancestor. Taylor’s memories, despite being of events Taylor never experienced, are very detailed. They contain the point-of-view of his ancestor and the full visual scenario experienced by the killer.
Although the movie is supernatural, it brings up an interesting question. Is it possible to inherit our ancestors’ memories? The answer is not black and white. It depends on what we mean by ‘memory’. The story of the movie is far-fetched: there is no evidence or credible scientific theory suggesting that we can inherit specific episodic memories of events that our ancestors experienced. In other words, it’s highly unlikely that you will suddenly remember your great-great-grandfather’s wedding day or your great-great-grandmother’s struggle in childbirth.
We are conducting a study of color discrimination and short-term color memory. I would be grateful if you would participate in the study. You'll need to use the left and right arrow keys to adjust the color of a square to fit the color of a second image. It will only take about 5-10 minutes. Click on the link below to begin. www.synesthesiaresearch.com/study
In 2008 two Princeton economists, Faruk Gul and Wolfgang Pesendorfer, published an increasingly influential methodological statement, "The Case for Mindless Economics" (hereafter "GP08"). Professors Gul and Pesendorfer publish regularly together, and they also happen to be among the tightly-knit group of core gatekeepers in the economics profession. So, for example, if you look at the submission guidelines of Theoretical Economics [TE], co-edited by F. Gul, you can read: "If you have previously submitted your paper to Econometrica, you have the option of requesting that the referees' reports and covering letters and the editor's decision letter be transferred to the coeditor assigned to handle your paper at TE." Of course, until very recently Pesendorfer was one of the co-editors at Econometrica. (It would be impolite, of course, to view these journals as rent-seeking instruments, but how else to interpret economically this policy: "a paper judged to be unlikely to be acceptable by a second round will be rejected, either without consultation with referees or in response to referee reports. In either case, the submission fee will not be refunded.") Econometrica does have an important "conflict of interest policy," but that does not prevent group-think. Either way, we can safely treat GP08 as a proxy for (recent) establishment views in economics.
The main and (almost) only target of GP08 is what they call "neuro-economics," which they conflate with (experimental) research on the brain. (They also frequently use the term "philosophy" to refer to an enterprise completely irrelevant to "economics" now and always.) GP08 systematically ignores experimental research conducted by, say, economists (e.g. Vernon Smith and his various collaborators) that also focuses on what GP08 calls "economic data." This is important to keep in mind when we evaluate the main thesis of GP08, which is that economics is mainly about rational choice theory (and its natural extension). The thesis is offered as a descriptive account of "common practice" among economists (1), although we also learn that given the economic "evidence" available to economists this approach has also rightly earned a "central role in economics." (43-44) Here's a statement of the main thesis:
"Lucy in the Sky with Diamonds" was a product of the Beatles'
experimentation with psychedelic drugs is still a subject of great
debate among Beatles fans and music experts. But it was no secret that
the lyrics of many of the pop legend's famous tracks was inspired by
LSD, including "I am a Walrus," "Tomorrow Never Knows," and "What's The
Real Mary Jane." The Beatles' creating during a hallucinogenic trip is
not a rare case of acid-driven creation, invention or discovery. The
double helix structure of DNA occurred to geneticist and neuroscientist Francis Crick
while he was tripping on the Lucy drug and low-level tech Kari Mullis
hit on the idea behind Polymerase Chain Reaction (PCR), a now
widely-used technique for amplifying a single piece of DNA by a factor
of 100 billion, while cruising along the Pacific Coast Highway one night
in his car on LSD.
In his famous paper entitled "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information," cognitive psychologist George A. Miller of Princeton University argued that our working memory, our ability to hold information in our minds for a few seconds, is limited to about seven items, plus or minus two. That's fewer than the ten digits of an out-of-town American phone number. In light of this you might wonder what to say about cases of people with extreme memory abilities. Chao Lu holds the Guinness world record in reciting Pi, a record dating back to 2005. Lu recalled 67,890 digits of Pi in 24 hours and 4 minutes, with an error at the 67,891st digit, saying it was a 5 when it was actually a 0. How is it possible to retrieve this quantity of information accurately through working memory? Is it magic? After talking to several people working in memory sports, we found out that it's not.
You are preparing for your upcoming exam, reading through thousands of pages. Suddenly you realize that you forgot to pay attention to what you actually read. You were reading along but your thoughts were elsewhere. "Good God," you think. "Hours of wasted time." You turn back the pages and start over. This time you make sure you pay close attention.
Recent research, to appear in the journal PNAS, suggests that you may be wasting even more time by doing that. You don't need attention to comprehend what you read or to do math. In fact, you may not even need consciousness. The researchers, located at Hebrew University, used a technique known as Continuous Flash Suppression (CFS) to suppress conscious perception of stimuli in some 300 research participants for a short period of time. In CFS a series of rapidly changing images is presented to one eye, whereas a constant image is presented to the other. When using this technique, the constant image supposedly is not consciously perceived until after about 2 seconds.
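For readers unfamiliar with the method, here is a schematic sketch of the structure of a single CFS trial. The frame rate, the 10 Hz mask refresh, the Mondrian image names, and the arithmetic expression shown to the suppressed eye are all illustrative assumptions of mine, not the parameters of the PNAS study.

```python
import itertools

# Schematic structure of one CFS trial: rapidly changing masks to one eye,
# a constant (suppressed) stimulus to the other. Parameters are illustrative.
FRAME_RATE = 60       # display frames per second
MASK_HZ = 10          # how often the dynamic mask changes
TRIAL_SECONDS = 2.0   # roughly when suppressed stimuli tend to break through

def cfs_trial():
    mask_images = itertools.cycle(["mondrian_1", "mondrian_2", "mondrian_3"])
    mask = next(mask_images)
    for frame in range(int(FRAME_RATE * TRIAL_SECONDS)):
        if frame % (FRAME_RATE // MASK_HZ) == 0:
            mask = next(mask_images)          # flash a new mask to one eye
        yield {"dominant_eye": mask,          # rapidly changing images
               "other_eye": "9 - 3 - 4 = "}   # constant, suppressed stimulus

for frame in itertools.islice(cfs_trial(), 3):
    print(frame)
```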
At our lab in St. Louis we are working with several people with superhuman abilities, also known as “savant skills.” My research assistant Kristian Marlow and I are also currently finishing a book entitled The Superhuman Mind (under contract with an agency, see updates here). We are blogging about these cases almost daily over at Psychology Today. The following are four brief stories about some of the individuals we are working with.
In the late 1970s, Benjamin Libet showed that motor cortex activity preparing for an action occurs before the conscious act of willing that action. (Here is a nice demonstration of the experiment by Patrick Haggard.)
Libet's result has been replicated countless times (as above), and though it is perhaps rash to generalize too broadly, let's just say we have strong evidence for:
(1) Conscious acts of "willing" an action occur after the brain activity that causes the action, and so
(2) Conscious acts of willing do not cause action.
As a philosopher, which of the following conclusions can I legitimately draw?