Imagine a familiar object: perhaps the house you grew up in. You have an image of it that is quite complex. You can recall a snapshot image of it, an image of how it looks from one point of view in one kind of circumstance (early in the morning, for instance, on a sunny day). But it’s easy to do much more. What did it look like to walk around it? How did the smell of baking strike you in your bedroom and then when you came down the stairs? How did the rain sound on the roof and in the downspouts? You can imagine what it would have been like to have a dancing party there, even if such a thing never occurred. You can project how it would look if it received a brand new coat of paint in a heritage colour (instead of the off-white it always really was). These things, and countless others, unite to form a complex image of your house.
How are such complex images built up?
But mechanisms are not my primary concern here. Rather, I want to say something about how complex images and Hebbian assemblies are the stuff of an objective view of the world, a view that abstracts away from and is independent of a viewer’s perspective.
It is said, notably by Christopher Peacocke, that all perception is perspectival. And in one sense, this is true. You cannot visualize the corridor that leads down to the kitchen without visualizing it from one end or the other, i.e., from somewhere. This is true of every visual image of the corridor. A photograph, a painting, an architectural diagram—all of these, just as much as a visual mental image, present the corridor looking toward or away from the kitchen, or from above or below.
This is true, but there is another important sense in which perception is functionally non-perspectival: together with Hebbian learning, it produces a perspective-free model that undergirds perceptual knowledge. When we generate a mental image from the model—especially when we generate a dynamic, temporally extended image, such as that of walking down the stairs and turning down the corridor—we employ a model without perspective. The image that we generate has perspective; it must, because all visual images must be from a perspective. But the model embraces many perspectives.
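The binding at work here can be given a toy illustration. What follows is only a minimal sketch, not the author’s model or anything like a serious theory of perception: it assumes that each perspectival view is just a set of co-visible features, and applies the bare Hebbian rule—features active together become linked—so that, across views, links accumulate into one assembly that no single perspective contains. The feature names and the `hebbian_update` helper are hypothetical.

```python
# Toy sketch (illustrative assumptions only): plain Hebbian association.
# Each "view" is a perspectival snapshot; pairwise links accumulate
# across views into a single assembly with no privileged viewpoint.
from itertools import combinations
from collections import defaultdict

ETA = 1.0  # learning rate (arbitrary for this sketch)

def hebbian_update(weights, active_features):
    """Strengthen the link between every pair of co-active features."""
    for a, b in combinations(sorted(active_features), 2):
        weights[(a, b)] += ETA

# Hypothetical perspectival views of one house:
views = [
    {"front_door", "porch", "bay_window"},    # from the street
    {"bay_window", "garden", "back_steps"},   # from the garden
    {"back_steps", "kitchen_door", "porch"},  # walking around
]

w = defaultdict(float)
for view in views:
    hebbian_update(w, view)

# The resulting assembly spans features that no single view
# presents together -- the "model" is not from anywhere.
assembly = {feature for pair in w for feature in pair}
```

Nothing here turns on the details; the point is only that the stored structure—the weights—records which features belong together, not where any of them was seen from.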
Incidentally, non-visual images are not always perspectival. Auditory images are sometimes perspectival; they are always so when they occur in auditory perception. For instance, when you actually listen to an orchestra playing a symphony, all the sounds come from somewhere. If you moved to another seat, or backstage, you would hear it from another angle, and all the same sounds would come from different directions. However, consider non-perceptual auditory images. When you recall the symphony—that is, when you listen to it in your head—the sounds do not, or at least need not, come from anywhere.
Similarly, the actual smell of bread baking has a definite spatial profile: it gets stronger or weaker and changes character as you move. But not when you recall it or imagine it. Subjective touch, particularly, is non-perspectival: a touch on your finger is a touch on your finger; there is no particular perspective from which this touch is presented. Things are different, of course, when you touch external things; these things are felt to your left or your right, in front or behind you. The necessity of perspective is primarily a feature of images in vision and external touch.
Thomas Nagel has a theory, in The View from Nowhere, of how objective appearance is generated. He thinks that one must move from a subjective presentation to a view in which the presented object occupies a position relative to oneself. A cold drink gives you pleasure on a hot day: that the drink is pleasant is subjective. Now, ascend to a view that encompasses the drink, the feeling of pleasure, and your relation to these things. Now, you are in the realm of objective fact.
Objectivity is achieved, Nagel proposes, by a process of accretion in which simpler subjective situations are rendered objective by adding an observer and his relation to a subjectively presented thing. It’s an elegant theory, but I don’t think it fits perception. In perception, variation between two perspectives allows the elimination of idiosyncrasies of each. I see something from one side, and the other side is hidden; the hidden side is represented “amodally”, as there but without visual features. That the hidden side is unknown is an artifact of my perspective. But when I see the hidden side, I generate a model in which the features of each side are fixed in space relative to each other. Now I have a representation in which the contingently hidden side of the object is filled in.
When I have generated and stored a non-perspectival model of something, I can visualize it from any position. I can visualize the house of my childhood as it looks from Cunningham Road, or as it looks from the garden in the back. To generate these perspectives, I add the perceiver. The process is the exact opposite of Nagel’s. I generate a subjective viewpoint from the objective by computing somebody's view from somewhere.
ADDENDUM: We usually form allocentric images by active perception, as I wrote last week. That piece supplements this one.