Sometimes, empirical findings about the mind have surprising consequences for ethics and conduct. The empirical results may provoke little resistance, but the implied consequences for conduct run counter to intuition. Two of my posts to this blog ran into this kind of dissonance. An earlier post about an empirical result was mostly approbated. A later post about its consequences for conduct was emphatically contested. (Admittedly, my application of the ethical principle was controversial--some thought it was sleight of hand.)
Let me put the two together in a single post.
First, the empirical bit. How do we arrive at a belief or decide on a course of action? The intuitive model is single-track reasoning. In any given situation, a subject sums up her reasons, and arrives at a considered reason. (It may be poorly considered, but it's a summary that purports, at the moment of choice, to be complete.) Rationally choosing a belief or action is acting on the considered reason. Of course, you might act irrationally: that is, some other kind of motivation—emotion, desire—may win out. Or you may just behave automatically. But let's consider rational action here.
Consider a petty lie, for instance to a Customs agent who asks the value of the goods you purchased while out of the country. . .
The single-track reason model holds that only the considered reason has an effect on action. Of course, there are many competing considerations as regards most actions. These are taken into account and balanced as you arrive at the considered reason. As a philosopher, for instance, you might entertain a complicated Kantian discourse that tells against all lies. But you may reject this. Why spend half an hour in a long line just in order to give the government a dollar or two?
I reported some weeks ago that the single-track reason model is empirically threatened. The considered reason is not the only one causally influencing the conclusion. All reasons have an influence on action. Belief and action arise, rather, through a process in which the weaker reasons are suppressed or blocked. The methodology by which this is established is that of reaction times. In a remarkable experiment, researchers at the Central European University in Budapest established that even when you see something plainly—say, that a ball is in box A—but you also know that another human being mistakenly believes that the ball is in box B, you react more slowly to the evidence of your own senses. Why? Because your companion’s belief has reason-giving force, and even though that force is completely null in the face of your own observation, it has to be suppressed within your own reasoning. This is the suppression model, which sees all reasons as having causal efficacy. All reasons try to get their way, and to silence the others. (This has an interesting application to implicit attitudes, by the way.)
Now, the bit about consequences for conduct. Two lemmas from the CEU study. First, whenever agent A knows that agent B thinks that she, i.e. A, should do (or believe) X, A will thereby form a reason to do/believe X. Second, if A thinks that s/he should in fact do Y, the other-derived reason to do X has to be suppressed.
Last week, I drew the following consequence from the foregoing. Let us suppose that Y is an action that lies within A’s realm of autonomous conduct, and that, as before, B thinks that A should not do Y. Then, if B makes her belief known to A, B is exerting a causal influence on A within the realm of A’s autonomous conduct. Let us suppose, for instance, that undervaluing her purchases lies within A’s realm of autonomy. Then telling A that she should submit an honest valuation exerts a causal influence on A with regard to something that lies within A’s autonomy.
Now here is my point. It is prima facie wrong to exert a causal influence on anybody’s conduct within their realm of autonomy. Therefore, it is prima facie wrong to make suggestions about what somebody else should do or believe, even by stating your own beliefs or motivations.
Prima facie wrong does not, of course, imply wrong as such. It is obviously not always wrong as such to tell another what you believe. If it were, all conversation would be wrong. (Which it obviously isn’t!) The prima facie wrongness of stating your mind is often, perhaps most often, perhaps almost always, overridden. I don’t want to deny that. My aim here is simply to introduce the prima facie wrongness of all assertion. When assertion is not wrong as such, there must be other reasons that override this prima facie wrongness. What are these overriding reasons?
The prima facie wrongness of stating your mind comes down to this: there is always a reason for not speaking what you believe. Therefore, it is not right by default to speak what you believe. And as my post argued, this can play into interesting discussions of presumption and arrogance. For example, it suggests an account of why it is offensive to be bossy.