Wednesday, 26 March 2014

Photographic Absence: a fuss about nothing



'There is no such thing as an empty space or an empty time. There is always something to see, something to hear. In fact, try as we may to make a silence, we cannot.' ― John Cage

Philosopher Mikael Pettersson explores the causal status of shadows and absences in photography. Despite the apparent absurdity of the issues he raises, the implications are profound.

In a conference presentation at the Institute of Advanced Study at Durham University, Pettersson began by referring to the widely held view that photography is a prototypical causal medium, wholly dependent upon the known regularities of the universe for its efficacy. If it so happened that photography were an unreliable medium, if every exposure were a struggle with uncertainty and every image a tapestry of the indeterminate, there would be little value in these objects that so conveniently find their way into our lives. Indeed, if the world lacked the causal regularity that photography depends upon, it seems unlikely that vision itself would be possible.

But if photography is a causal medium, Pettersson asks, how can non-entities like shadows and other kinds of absence commonly depicted in photographs have causal influence and, moreover, what is their causal foundation?

It is tempting at this stage to throw our hands in the air and declare the enquiry preposterous. Absences, and this necessarily includes the complete absence of illumination, lack causal influence because there is literally nothing to do any causing. The box of photographic paper that sits on my desk, awaiting exposure to light, gains its value by virtue of its complete and unalterable insensitivity to the absence of light. We discern shadows in images thanks to the causal presence of light at varying intensities, precisely because the absence of light has no influence whatsoever. Photographic traces that depict shadows are the result of photochemical processes that deposit light-absorbing material in the corresponding portions of the image, thus simulating darkness.

But to pursue this line of response to Pettersson's enquiries would be to overlook one of the most important enigmas of human, and to an indeterminate degree also creaturely, concern. Possibly the most profound event in any life is the encounter with the death of another. Absences cause—or at least seem to cause—some of the most intense experiences we are ever likely to face. So the question over the possible causal influence of absence is by no means a trivial one. It is arguably the most significant question of all.

Another conference delegate, Vivian Mizrahi, questioned Pettersson's emphasis on photography. Why, she asked, does the same not apply to other forms of image making? For Pettersson, the widely accepted causal directness of photography, the fact that it is not mediated by the brain (as drawing or painting is), seems to provide stronger grounds for attributing a causal role to absence itself rather than to any mind-dependent intermediary. Nonetheless, minds do play an inescapable role in the reception and interpretation (or 'seeing-in') of images. This was the concluding point to which Pettersson turned his attention, though not without giving due consideration to several of the more plausible but ultimately unsatisfactory acausal theories of influence in absentia.

Is there any evidence of the causal influence of absence? It might be argued that the absence of food in our stomachs causes us to seek sustenance; or that a lack of time causes lateness; or that a hole on a beach causes sand to infill the void; or that the absence of aerodynamic lift causes an airplane to fall from the sky. All of these instances, and many more that we could enumerate, are seemingly plausible examples of the causal influence of absence. However, on inspection, many turn out to be cases of commonplace causality: a decrease in blood glucose triggers hunger; lateness is caused by an overabundance of things to be done; and falling is caused by the release of potential energy.

However, the last example mentioned above is far from clear-cut. Some theorists would claim that the release of potential energy is due to an absence of physical support. There seems to be growing theoretical uncertainty over whether such instances are the result of an oversimplification in description or whether there genuinely are cases of negative causation. Personally, I'm not sure, though I suspect that the fault is linguistic in origin. If exhaustive detail were necessary to describe the causal history of even the most basic state of affairs, the chain of causes would lead inexorably to the Big Bang. Abstraction in description is therefore both necessary and inevitable; at some point even the most exhaustive account falls silent. The concept of absence, then, like the concept of zero, is a tool that has proven enormously effective for us. Yet unlike the concept of zero, which is only ever acquired through education, awareness of absence seems to come early: there is evidence to suggest that human infants display it from a very early age, and many animals also behave in ways that suggest a capacity to register absence.

Do these glimmerings constitute rudimentary evidence of conceptual capacities, of proto-linguistic thought? Perhaps, but we should be wary. If we wish to provide a coherent causal theory of absence, we will first need to explain the causal relations that lead to attributions of absence.

I have written previously of another philosopher, Anya Farennikova, who takes the view that we literally perceive absence. Farennikova contends that we possess mental states in which absences are represented. There is a serious problem with this theory though, and it is a difficulty reflected in the issues Pettersson brings to light. To assume that an inner representation represents absence requires this absence to have causal influence, thus violating the laws of causality.

The alternative is to view absence not as a causal entity of any kind but as the name we give to the common mismatch we find between what we expect and what we actually perceive. So, when we say that an infant or animal exhibits awareness of absence, what we really mean is that it is capable of forming expectations and of being surprised when those expectations are not met.
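
To put the proposal in more concrete terms, here is a minimal sketch of the expectation-mismatch view in code. The scenario and every name in it are hypothetical illustrations of the idea, not a model anyone has proposed:

```python
# A minimal sketch: 'absence' as expectation violated, not entity perceived.
# The scenario and all names are hypothetical illustrations.

def noticed_absences(expected, observed):
    """Return whatever was expected but not observed.

    Nothing here detects a 'negative entity'; the function only
    compares two collections of perfectly ordinary positive items.
    """
    return expected - observed

# An agent expects its usual scene...
expected_scene = {"bowl", "food", "water"}
# ...but perceives only some of it.
observed_scene = {"bowl", "water"}

# The 'absence of food' is just this mismatch.
print(noticed_absences(expected_scene, observed_scene))  # {'food'}
```

The point of the sketch is that 'absence' figures only in the comparison between two collections of ordinary, positive items; at no stage is a negative entity detected.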

Absence is an enormously powerful and convenient concept, but the fact that we can treat such abstractions as concrete entities is no reason to invoke causality where none is possible. As far as images go, even the most explicit absence can only be recognised as such if we are capable of a contrasting expectation: of describing, delineating or selecting a substitute for that which is absent, the actual presence envisaged.
Or, as Carl Sagan said (admittedly in a completely different context): "Absence of evidence is not evidence of absence."


Wednesday, 19 March 2014

Triggers, Signals and Intentionality


Ruth Millikan's work on biological intentionality was recently recommended to me. Intentionality is widely thought to be the result of anticipatory capacities of one sort or another, but quite how these capacities manifest themselves at the neurological level is a fiercely debated subject, upon which Millikan takes a radical but, I think, entirely mistaken line. Millikan holds not only that brain states are representational, but that they are essentially semantic: ‘Biosemantic.’

In the next few paragraphs I aim to show why biosemantics fails to achieve its goal of naturalising representational content, and I aim to do this by way of a discussion of the evolutionary emergence of the most basic form of signalling commonly observed in nature (e.g. rabbits thumping the ground, beavers slapping their tails on the water and numerous other rudimentary alarm signals).

Organisms need to be sensitive to changes in their environment and to be differentially responsive to available stimuli, i.e. capable of adjusting their responsiveness according to prevailing circumstances. If a certain advantageous or disadvantageous circumstance is commonly preceded by other regularly occurring stimuli, then it is of significant advantage for organisms to be capable of responding effectively to these antecedent environmental triggers.

Environmental triggers, of whatever kind, should never be confused with signals. The scent that leads an organism to a source of food is not a literal signal; nonetheless, it is a potentially detectable property of the environment propagated by the food. Strictly speaking, a passing shadow, a rustling in the undergrowth and the like are not signals of impending danger, because they are not deliberately produced, i.e. they are not intentional. They are simply detectable characteristics of the environment that occasionally precede threatening events. As we will see, this fact about the necessary intentionality of signalling is one of the principal weaknesses in mainstream theories of intentionality like Millikan’s. If intentionality is dependent upon representations and representations are dependent upon intention, then there is no way to break into the circle.

So, what do we know about the emergence of signalling in a biological context? Or, more to the point, how can a publicly perceptible behavioural trigger (a startle response, say) evolve into an intentional signalling behaviour of the rabbit ground-thumping variety?

In order to answer this question we first need to recognise that vulnerable organisms benefit from social coexistence because this provides a safer environment in which the probability of attack is greatly reduced. Additionally, when any one individual is attacked, the ensuing commotion has the potential to trigger evasive responses on the part of neighbouring individuals. Any individuals failing to detect and respond to such disturbances will be vulnerable to further attacks and will consequently be less likely to survive.

In order for a behavioural trigger to become a signal then, the following conditions need to be met:
  1. A group of organisms must be under selective pressure.
  2. These organisms must behave in regular and conspicuous ways when attacked, thereby producing a stimulus with the potential to be used as a signal.
  3. Consumers (the other organisms in the vicinity) must be capable of detecting the stimulus.
  4. Behaviours triggered by the signal must be advantageous to both producer and consumers in the majority of instances*. 
  5. Producers must become capable of producing the signal independent of its standard causes if the signal is to have efficacy over and above that provided by ordinary behaviour.
As can be seen, the steps necessary for even the most basic form of signalling are complex and demand very different circumstances from those that obtain amongst the cell structures of the brain. Quite how an analogous form of signalling, of the kind that Millikan and others impute, could evolve through the interaction of brain cells awaits even the most basic demonstration.
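
To make the first four conditions vivid, here is a deliberately crude toy simulation in Python. Every parameter is arbitrary and the scenario is a caricature with no biological authority; the point is only that a conspicuous by-product of being attacked can come to function as an alarm cue once responders out-survive non-responders. Condition 5, the decoupling of signal production from its standard causes, is omitted for brevity:

```python
# A toy illustration of conditions 1-4 above; all parameters are arbitrary.
import random

POP, GENERATIONS, MUTATION = 200, 300, 0.05

# Each individual carries one heritable trait: the probability that it
# flees on detecting a neighbour's startle.
population = [random.random() * 0.1 for _ in range(POP)]  # trait initially rare

for _ in range(GENERATIONS):
    survivors = []
    for responsiveness in population:
        # Conditions 1-2: the group is preyed upon, and victims startle
        # conspicuously, producing a detectable stimulus.
        startle_nearby = random.random() < 0.3
        # Conditions 3-4: individuals that detect the startle and flee
        # escape the follow-up attack; non-responders risk death.
        if startle_nearby and random.random() > responsiveness:
            if random.random() < 0.5:
                continue  # killed; leaves no offspring
        survivors.append(responsiveness)
    # Survivors repopulate the group, with a little mutation.
    population = [
        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, MUTATION)))
        for _ in range(POP)
    ]

print(f"mean responsiveness after {GENERATIONS} generations: "
      f"{sum(population) / len(population):.2f}")
```

Run it a few times: the mean responsiveness climbs towards 1.0, and the startle, though never produced in order to inform, has acquired consumers.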

What needs no demonstration, though, is the fact that many creatures produce publicly perceptible signals. All that needs to be recognised is that it is the capacity to produce such public representations that is instrumental in cases of intentionality, not some alleged but so far entirely undetectable representations flitting around inside our heads.

*Individuals may benefit by “crying wolf” in certain circumstances, but the efficacy of this strategy will be limited in the long term. Similarly, withholding a signal may have short-term advantages, which would explain why many animals respond to alarm calls with increased attention instead of shelter-seeking behaviour.

Wednesday, 12 March 2014

Realism and Reality



Despite the fact that more distant objects are projected at a smaller scale onto the retina than closer objects, it is crucial that we perceive the size of distant sources of food, predators etc. as accurately as possible. If our ancestors had perceived objects at the scale of their retinal projections, seeing distant things as tiny and nearby things as huge, their chances of survival would have been severely limited. One of the major evolutionary challenges for the development of visual processing, therefore, must have been to overcome the fact that distant objects are projected onto the eye in this way. What we see when we look at distant fruit is distant fruit, not tiny little tidbits. Nonetheless, when we draw distant fruit we have to render it at a smaller scale than closer fruit. It is nothing less than extraordinary that this technique works at all, let alone that it leads us to say that pictorial images look realistic. And it is no wonder, also, that it took us so long to discover one of the most important strategies for producing such images: perspective.
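
To make the geometry concrete, consider the textbook small-angle approximation (an idealisation used here purely for illustration, not a claim about what the visual system actually computes). An object of height \(h\) at distance \(d\) subtends a visual angle of roughly

\[
\theta \approx \frac{h}{d}\ \text{radians},
\]

so a two-metre predator at a hundred metres and a two-centimetre insect at one metre both subtend about 0.02 radians. Their retinal projections are, to a first approximation, the same size; telling them apart is precisely the achievement that size constancy names.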

According to the influential developmental psychologist Jean Piaget (1896-1980), young children do not draw “what they see”; they draw what they know. In Piaget’s view the child is not capable of “pure observation” but instead “he sees the world as if he had previously constructed it with his own mind.” Piaget’s theory has its origins in the philosophical doctrine of Subjective Idealism, a view that conceives of perception as a mental construct, an inner analogue of the external world - a world to which we have no direct access. Traces of this doctrine inform the work of many theorists and educationalists, both past and present, including John Ruskin and Georges-Henri Luquet, whose work greatly influenced Piaget. Both Luquet and Piaget claimed that children’s drawings develop from “intellectual realism” (i.e. what children know) towards “visual realism” (i.e. what they see).

So, according to this theory, when we draw with visual realism, we draw what we see. But does this really stand up to scrutiny? Visual realism and reality (what we actually perceive) are by no means the same thing, so it does not follow that there is a simple correlation between what we see and what is realistic. If there were, we probably wouldn’t need to distinguish between the real and the unreal, between the real and the realistic, between reality and realism.

We take it as given that everything we see is real and we reserve words like ‘realism’ and ‘realistic’ for representations. So, to say that photographs are realistic is to say very little about what we actually see.

It should be obvious that we have evolved to see the world – as far as is humanly possible – exactly as it is. And whilst pictures look like the world, the two are not interchangeable. Our terminology has evolved in its own ways to reflect this fact.

No matter how realistic an image might be, and however liable we may be, in certain rare circumstances, to mistake images for reality, there is no point at which pictorial realism gradually or suddenly becomes full-blown reality. There is no special lens, no magical painterly potion, no mystical technique that will ever transform a depiction into reality. Realism is forever barred entry into the kingdom of the real.

For a child learning to draw, it is a significant challenge to acquire the many skills necessary to convert three-dimensional experience into two-dimensional depictions. Fortunately children are surrounded by examples (pictures) that show them that the feat is possible. But one of the most significant things that stands in their way is a formidable evolutionary background in which the perception of a distant predator has always been the perception of a distant predator, not the perception of a cute little predator looking at us with hunger in its eyes.

Wednesday, 5 March 2014

Difficulties for the Philosophy of Illusion


Illusion:
1. a deceptive or misleading appearance
2. a false or misleading impression, idea, belief or understanding
3. a false perception of an object or experience due to the mind misinterpreting the evidence relayed to it by the senses.

The concept of illusion is an ancient one, yet its venerable pedigree and widespread popularity are no guarantee of its value as a tool for uncovering the nature of perception. It is a handy concept for sure, but perhaps we should be wary of convenience, especially as a route to insight.

Examples of illusion are not difficult to come by - many are the stock-in-trade of conjurers and ‘illusionists’ - but what distinguishes some illusions from the tricks and entertaining feats of stage artists is the degree to which they might be regarded as having something to reveal about the workings of perception. When a conjurer uses sleight of hand to “deceive the eye”, we do not suppose that this has anything useful to tell us about our sensory capacities. The idea that, with sufficient skill and dexterity, the hand can move more expertly than is easily perceived is unsurprising. Optical illusions, on the other hand, produce puzzling responses or anomalous visual artefacts that call for more sophisticated explanations.

Philosophers – knowing that their theories often stand or fall on the evidence of scientific enquiry – frequently refer to optical illusions in order to substantiate their claims. However, during the 1960s, an important body of evidence emerged that cast significant doubt on many of these claims, yet it has gone largely unacknowledged in philosophical circles.

Müller-Lyer Illusion
In a 1966 study undertaken by Segall et al. into cross-cultural variations in susceptibility to optical illusions, the researchers found significant variance between differing communities and age groups across the globe. Some groups, for instance, reported little or no difference between the apparent lengths of the lines of the famous Müller-Lyer diagram. An earlier study by Hudson (1960), of culturally isolated South African children, produced very similar findings. Both studies attributed their results to a lack of habitual exposure to pictures amongst the communities studied. Hudson dubbed this lack of familiarity ‘pictorial illiteracy’. In fact, even children well schooled in language and arithmetic skills (but lacking pictorial literacy) were not susceptible to what is commonly described as the ‘pictorial illusion of depth’ and were therefore unsusceptible to the depth cues that many optical illusions exploit.
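
For readers without the diagram to hand, the following minimal sketch (assuming Python with matplotlib is available) draws the standard Müller-Lyer figure: two shafts of exactly the same length, one with outward-turned fins, one with inward-turned arrowheads:

```python
# A minimal sketch of the Müller-Lyer figure. Both shafts run from
# x=0 to x=10; only the fins at their ends differ.
import matplotlib.pyplot as plt

def shaft_with_fins(ax, y, fin_dx):
    """Draw a horizontal shaft at height y with a pair of fins at each end.

    fin_dx > 0: fins turn outwards (this shaft usually looks longer);
    fin_dx < 0: fins turn inwards, forming arrowheads (usually looks shorter).
    """
    ax.plot([0, 10], [y, y], color="black")
    for x, outward in ((0, -1), (10, 1)):
        for dy in (1, -1):
            ax.plot([x, x + outward * fin_dx], [y, y + dy], color="black")

fig, ax = plt.subplots()
shaft_with_fins(ax, 3, 1.0)    # outward fins
shaft_with_fins(ax, 0, -1.0)   # inward arrowheads
ax.set_aspect("equal")
ax.axis("off")
plt.show()  # the two shafts are, of course, identical in length
```

Observers habituated to pictures typically judge the upper (outward-finned) shaft to be longer; the studies above describe groups for whom the two shafts looked much the same.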

In 2006, Robert N. McCauley and Joseph Henrich wrote:

For those who experience it, the illusion may persist, but susceptibility to the Müller-Lyer illusion is neither uniform nor universal. Moreover, a plausible argument can be made that through most of our species’ history most human beings were probably not susceptible to the illusion.

If this is correct - and corroborating evidence from the art-historical record can be provided to support it - then there is good cause to doubt the relevance of illusion in the explanation of perception. Moreover, if the following evidence is anything to go by, these doubts are more pressing than ever.

In numerous well-documented studies, it has been shown that when people reach to grasp three-dimensional versions of optical illusions, their grip aperture (the distance between finger and thumb) is unaffected by the illusion. So, whilst we may be inclined to say that one part of an optical illusion appears larger than another, our ability to physically interact with these illusions is unaffected.

From an evolutionary point of view, it is of the utmost importance that we do not mistake a distant object for a small object, especially if the distant object has significance for our potential to survive. It is extraordinarily fortunate, in fact, that the capacity to recognise and use perspectival images (in which distant objects are depicted at a smaller scale than nearby objects) has not been entirely overridden by the evolution of our perceptual skills. If the research of Hudson and Segall et al. is correct, then it would seem that this capacity to derive depth cues from perspectival images is a learnt skill and not an immediately available part of our genetically acquired perceptual repertoire. And McCauley and Henrich are surely right when they speculate that our susceptibility to such illusions must be a relatively recent consequence of the increasingly widespread use of pictorial imagery. What better explanation do we have for the widespread indifference amongst animals to our attempts to interest them in images?

"The Innocent Eye Test", Mark Tansey, 1981

I hope the evidence presented here makes it clear that the standard view of illusion - as deceptive, misleading or false - is thoroughly inadequate as a tool for the investigation of the nature of perception. If our theories don’t fit the evidence then it is time to change our theories. I suggest that we start with the theory of illusion.