Wednesday 26 June 2013

Imagining Itself (Part VI: Noë’s Arc)


If what we call “mental imagery” does not consist of images, and if unconscious mental processes are not conducted through the use of representations, then what alternative theory might explain the workings of our most poorly understood organ? How can it be the case that we can remember huge tracts of information – faces, names, places, flavours, smells, etc. – unless the brain stores some kind of record, albeit a fallible and partial one, of the things we encounter? From where does our prodigious ability to remember, describe, calculate, think and respond to the world arise? And how is it possible that some individuals possess inordinate capacities to recall sequences of numbers, recite epic stories verbatim or draw photorealistic images from memory if they are not using some form of mental inventory or image? Surely representationalism (to use the jargon) is inevitable.

In setting out his “Enactive Theory of Perception”, contemporary philosopher of perception Alva Noë writes:
“A second implication of the enactive approach is that we ought to reject the idea - widespread in both philosophy and science - that perception is a process in the brain whereby the perceptual system constructs an internal representation of the world. No doubt perception depends on what takes place in the brain, and very likely there are internal representations in the brain (e.g., content-bearing internal states).” (2004)
Noë is attempting here to establish perception as wholly independent of mental representations – of images projected onto a notional neural screen carefully concealed somewhere inside our heads. Effectively he is rejecting the homunculus fallacy, and with very good reason. However, despite his reassuring rejection of representationalism in perception, Noë ends up resurrecting the very homunculus he has attempted to expunge. In allowing that “content-bearing internal states” are “very likely”, Noë makes it clear that he still conceives of the brain as a kind of elaborate container in which representational content is housed.

Once again we face a difficulty posed by the commonplace linguistic metaphors used to describe the mind, and we must be wary that these do not obscure or exclude other models of mind and mental processing that might offer equally plausible, if not superior, routes to an understanding of the workings of the brain. We are all familiar with metaphors that describe the brain as some form of container – some highly sophisticated but lossy neural bucket into which we dump all of our knowledge and experience; a repository, library, archive, hard drive, black box or recording device in which memories and information are stored for future use. But are these container metaphors the only possible way of conceiving of how the brain enables us to do all of the astounding things of which we are capable?

In order to answer this question it might be helpful to return momentarily to Eric Kandel’s sea snails, mentioned previously. In his Nobel Prize-winning work, Kandel demonstrated that sea snails respond to repeated negative environmental stimuli by synthesising proteins in their neural structures, which then stabilise and reinforce their responses to further stimuli of a similar kind. In this way they become disposed to behave in certain ways in response to certain environmental triggers. The neural changes do not constitute a representation, and no representation is necessary for the snails to respond. All that is necessary is an acquired disposition to respond in a certain way to a certain stimulus.
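To make the point about dispositions concrete, here is a minimal toy sketch in Python (the class and names are invented for illustration; this is not a model of Kandel’s actual findings) of a system whose response to a stimulus changes purely because a connection has been strengthened. At no point does it store anything that depicts or describes the stimulus.

```python
# A deliberately crude sketch of an acquired disposition: repeated adverse
# stimulation strengthens a stimulus-response connection, with nothing stored
# that represents the stimulus itself.

class Withdrawer:
    """An agent whose only 'memory' is the strength of a reflex connection."""

    def __init__(self):
        self.gain = 1.0  # strength of the stimulus-response connection

    def sensitise(self):
        # Repeated adverse stimulation strengthens the connection,
        # loosely analogous to protein synthesis stabilising a neural pathway.
        self.gain *= 1.5

    def respond(self, stimulus_intensity):
        # The response is simply intensity scaled by the acquired gain;
        # no description, image or model of the stimulus is consulted.
        return stimulus_intensity * self.gain


snail = Withdrawer()
print(snail.respond(1.0))   # naive response: 1.0
for _ in range(3):          # three adverse encounters
    snail.sensitise()
print(snail.respond(1.0))   # heightened response: 3.375
```

The same stimulus now produces a markedly stronger response, yet nowhere in the system is there anything that could be called a representation of that stimulus – only a disposition to react.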

This notion of a non-representational dispositional theory of mind is most closely associated with the work of the British philosopher Gilbert Ryle who, in his 1949 book “The Concept of Mind”, put forward what has since become known as Analytical Behaviourism. Analytical Behaviourism argues that complex behaviours (those most commonly associated with the mind and mental states) are a result of the acquisition and implementation of dispositions to act. Just as sea snails acquire dispositions to behave in certain ways in response to environmental stimuli, so too do more complex organisms (human beings included). Obviously, as organisms become more sophisticated and the factors affecting them become more varied they must develop increasingly sophisticated mechanisms for dealing with the complex choices they face, yet the underlying processes of stimulus and response demand no functional assistance from representations or “mental content” (at least not in the way that mental content is most often conceived).

When asked earlier this year about the difference between organisms that merely react and organisms that perceive and purposefully act, Alva Noë responded:
“That is the anxiety that people have. You see that in Jason Stanley’s book, that I mentioned earlier. He seems to be worried that if propositional knowledge [i.e. knowing that as opposed to knowing how] doesn’t govern, then there’s no difference between us and mere reflex systems. I don’t share the anxiety however, partly because I’m much less disturbed at the thought that the bacterium has a primitive mind. It seems to me that we are on a spectrum with the bacterium.” (source 42mins in)
Can it be true that we are on a spectrum with the bacterium and the sea snail? Are perceptions and purposeful actions really just sophisticated forms of response or is there a fundamental conceptual difference that marks a clear evolutionary divide between responses on the one hand and perceptions on the other?

In his 2001 entry-level book “An Introduction to the Philosophy of Mind”, K.T. Maslin devotes a chapter to a discussion of the Analytical Behaviourism of Gilbert Ryle and others. In it Maslin asks students to consider the difference between the following two statements:
(a) Martin raised his arm. 
(b) Martin’s arm went up.
He goes on to distinguish between what he calls “agential descriptions” (i.e. purposeful actions and intentional deeds) and “colourless bodily movements.” Implicit in Maslin’s comparison is a recognition of a fundamental difference in kind between mere responses and intentional actions, a difference that makes little sense when explained as a spectrum with bacteria at one end and human perception at the other.

Responding to causal influences is straightforward – we can create robots to do this. We can even build computers capable of defeating the most expert of chess players. But to act purposefully, to anticipate outcomes and to form intentions requires something quite different, something that the engineers of Artificial Intelligence continue to struggle to achieve, something that eludes colourless bodily movements and even the most sophisticated reflex systems. What is this special something? Imagination.

Imagination anticipates the future and uses these anticipations to guide actions in the present. But how could such a sophisticated predictive capacity have evolved? The evolution of human imagination will be the focus of the next three posts.

Wednesday 19 June 2013

Imagining Itself (part V: Sea Snails and Homunculi)



Whilst the detail of exactly what occurs at a cognitive level as we imagine, perceive or remember may still be relatively unclear, it is widely accepted amongst cognitive scientists and philosophers that memory, perception and imagination are closely related processes.

In his Nobel Prize-winning (2000) studies of the neurobiology of sea snails, Eric Kandel found that repeated adverse stimuli trigger similar patterns of neuronal activity. Reinforcement led these constellations of activity (“neural pathways”) to become increasingly focused and stable, through protein synthesis, in what is believed to be an elementary form of learning. Some theorists go so far as to claim that this is a form of memory, whilst others remain unconvinced that such changes – which we also observe in the skin’s response to ultraviolet light, for example – amount to memory at all.

Richard Gregory, a prominent British neuropsychologist, estimates that human perception may comprise as much as 90% memory. In order to back up this claim Gregory points to the fact that 80% of the nerve fibres reaching the visual cortex originate from regions of the brain associated with memory functions, whereas only 20% issue from the retina.

In the 1970s, the Cornell University psychologist Ulric Neisser put forward the widely acclaimed but implausible notion that perception – of which he considered imagination to be a closely related component – is an evolved form of anticipation in which perceptions are formed as the result of preparations to see, hear, feel etc. For Neisser, the perceptual apparatus is led, through the contribution of memories, into an expectation so vivid that it becomes a cognitive substitute for reality. According to Neisser, perception is thus a constructive process in which an internal map or “schema”, as he called it, constantly loops through anticipation, exploration and modification in an interweaving of sensory inputs, memories and imagination. In this sense both imagination and perception are conceived by Neisser in terms of a kind of mental theatre in which our perceptions play themselves out.
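Purely as an aid to grasping the shape of Neisser’s proposal, the sketch below renders his cycle as a loop in Python. The function names and data are invented for illustration; nothing here is offered as a claim about how cognition actually works.

```python
# A loose schematic of Neisser's "perceptual cycle" as described above:
# a schema anticipates, the anticipation guides exploration, and what
# exploration turns up modifies the schema for the next pass.

def anticipate(schema):
    """The schema generates an expectation of what will be encountered."""
    return schema.get("expected")

def explore(expectation, environment):
    """Exploration, guided by the expectation, samples the environment;
    if nothing new is found, the expectation simply stands."""
    return environment.get("available", expectation)

def modify(schema, sample):
    """The sample feeds back into the schema, adjusting future expectations."""
    schema["expected"] = sample
    return schema

schema = {"expected": "a doorway"}
environment = {"available": "an open door"}

for _ in range(2):                         # the cycle runs continuously
    expectation = anticipate(schema)
    sample = explore(expectation, environment)
    schema = modify(schema, sample)

print(schema)   # {'expected': 'an open door'}
```

Whatever one makes of Neisser’s theory, the looping structure – anticipation feeding exploration feeding modification – is the part that distinguishes it from a simple one-way flow from stimulus to percept.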

Daniel Dennett argues that this metaphor of the mind as a theatre, in which cognitive images are somehow projected onto a neural screen, is a flawed throwback to the 17th Century Dualistic philosophy of René Descartes. Dualism claims that many mental phenomena are non-physical in nature and are thus clearly distinct from the material body and brain. Dennett rejects this notion in favour of a Physicalist position that treats all mental phenomena as purely physical in origin. For Dennett the theatre metaphor, or what he terms “The Cartesian Theatre”, presupposes a diminutive observer – a tiny homunculus – at the heart of consciousness who sits separate from the performance, taking everything in. Dennett describes the nature of this misconception (sometimes called the Homunculus Fallacy) in the following video:


Interestingly, 1 minute and 24 seconds into the above video there is a cutaway shot to the audience. Sitting in the foreground of the auditorium, wearing a bright pink shirt, is an audience member taking everything in, not unlike a real-world version of the metaphor Dennett takes to task. The pink-shirted individual is none other than Vilayanur Ramachandran, Director of the Center for Brain and Cognition at the University of California, San Diego. In the following video Ramachandran explains his colourful theory of how consciousness emerges as the result of the interrelations between a variety of brain functions. Ramachandran also introduces the concept of “meta-representation” in order to make a case for how he believes his theory might sidestep the homunculus fallacy.

"I'm saying that at some stage in evolution, instead of just sensory representations, you started creating what are called meta-representations: a representation of the representation, unlike the fruit-fly, which allows you to manipulate symbols internally in your head.”
Quite how Ramachandran thinks meta-representations avoid the homunculus fallacy is difficult to determine. Placing mental representations one inside the other simply adds yet more homunculi to what is already an infinite regress.

As previously mentioned, many philosophers and cognitive scientists make reference to meta-representations in their research. However, there is significant disagreement across different fields over the exact meaning of meta-representation (as discussed by Sam Scott here). Sometimes the term is used to describe a pictorial representation of a representation, as Daniel Dennett uses it; sometimes it describes one’s mental representations of one’s other mental representations, as Ramachandran uses it above; sometimes it describes a mental representation of an object as something that it is not (e.g. the child’s imaginary substitution of a banana for a telephone); and sometimes it describes mental representations of other people’s mental representations (e.g. Mary’s mental representation of Peter’s mental state). Regardless of the representational form, the underlying assumption of all meta-representational theories of mind (all except perhaps Dennett’s) is that thought involves representations of one kind or another. As mentioned in parts III and IV, there are considerable grounds to doubt the common scientific, but as yet unsubstantiated, assumption that mental processes are representational in nature, let alone meta-representational.

Any theory that posits representation, of any kind, as a mental process must be founded upon an unshakeable understanding of the nature of representation. Moreover, it will need to explain how these processes of mental representation might have evolved from more rudimentary ancestral origins. Without fulfilling these two fundamental criteria, any proposed theory of imagination is likely to provide little in the way of insight.

But how could the mind function if not through the use of representations? This will be the subject of my next post.

Wednesday 12 June 2013

Imagining Itself (part IV: Describing Mental Imagery)



There is a long history to the belief that imagination is conducted through the use of mental representations, and this can be traced back at least to the 4th Century BC Greek philosopher Aristotle, who wrote: “whenever one contemplates, one necessarily at the same time contemplates in images.” This view, which treats mental imagery as the straightforward equivalent of perceptual experience, persisted largely unchallenged for more than two millennia.

Perhaps the first person to record significant divergences in people’s accounts of mental imagery was Charles Darwin’s cousin, Francis Galton. Credited as the inventor of that infamous form of data-gathering known as the questionnaire, Galton conducted the first ever survey of mental imagery in 1880. He notes that significant numbers of respondents make claims to the effect of: “If I could draw, I am sure I could draw perfectly from my mental image.” He continues:

I have little doubt that there is an unconscious exaggeration in these returns. My reason for saying so is that I have also returns from artists, who say as follows: ‘My imagery is so clear, that if I had been unable to draw I should have unhesitatingly said that I could draw from it.’

Sadly, instead of recognising these varied accounts as evidence of an underlying problem with the application of perceptual terminology to mental states, Galton draws the unremarkable conclusion that different people possess differing levels of mental-image-forming ability. In a later section he goes on to write:

There exists a power which is rare naturally, but can, I believe, be acquired without much difficulty, of projecting a mental picture upon a piece of paper, and of holding it fast there, so that it can be outlined with a pencil.

This is an extraordinary claim, and it is difficult to fathom how Galton could believe such an ability to be “acquired without much difficulty” when it is patently obvious that artists are universally unable to exercise such prodigious skill. If it were easily acquired it would, no doubt, be taught in schools at an early age, and years of disciplined observational training would be unnecessary – not to mention the paper, teachers’ salaries and students’ fees that such training consumes.

What continues to cause great confusion and significant variance in reports is the difficulty we encounter when attempting to investigate or describe cognitive processes such as mental imagery via the terminology of perceptual experience. Mental imagery ‘feels’ like perceptual experience but, for the majority of people, it is nonetheless clearly distinguishable from it, at least in practical terms. However, as soon as we attempt to describe this difference we find ourselves unavoidably drawn to the use of such words as vividness, vagueness, haziness, realism, veridicality, verisimilitude, clarity, brightness etc., all of which derive from descriptions of phenomena available to our senses. This has led to the repeated error, amongst scientists and philosophers especially, not only of proceeding as if we were somehow possessed of sensory faculties capable of observing our internal states but also of treating mental states, and mental imagery in particular, as perceptible experiences.

“Ahh,” you say, “but I can describe my mental imagery. I have mental images and I can see them clearly. How can this richness be explained if not by mental images?” Few people deny what we call mental images altogether, but what is under serious question – especially since Ryle – is any similarity between what we describe as mental images and their public equivalents: pictorial representations. Theorists differ considerably in the degree to which they are prepared to entertain the possibility that thoughts (of which mental images are a form) are non-representational in nature. Many seem to agree that mental images are significantly unlike pictorial representations but hold that they nonetheless serve a similar functional role, and a representational one at that. Others accept that mental images are functional but deny that they are in any way representational.

Why should it matter? There are several reasons, with potentially profound consequences for both philosophy and science. Firstly, it is important that scientists conduct their investigations on the basis of correct premises, otherwise their interpretations of the available evidence are, at best, likely to be overcomplicated and, at worst, entirely false. If Copernicus had not cast doubt upon the geocentric view of the universe then the progress of scientific understanding would surely have been greatly impeded, and the explanations and calculations necessary to sustain the misconception would no doubt have simply compounded the initial errors. If mental states do not utilise representations – contrary to what the vast majority of scientists believe – then science will very likely be struggling needlessly to make sense of the data it gathers, in the same way that pre-Copernican astronomers once struggled to explain the bizarre trajectories of the planets across the heavens. The scientific investment in the assumption that our mental life is conducted through mental representations (“mentalese”) is so widespread that it is difficult to comprehend the repercussions if the underlying hypothesis turns out to be incorrect. Recent developments in the theory of embodied cognition suggest that the edifice of mental representations may well come crashing down sooner than its supporters might like to think.

Part V of this series of posts on the imagination looks at some of the confusion that has arisen through what is described as “meta-representation” and introduces a few theories of the functioning of human imagination and its intimate interrelationships with perception, memory and consciousness.

Wednesday 5 June 2013

Imagining Itself (part III: The Metaphors of Mental Imagery)



Wherever imagination is discussed, whether during casual conversation or high-level academic debate, you will almost always encounter mention of some form of mental imagery, be it mental representations, visual perceptions, visualisations, visual experiences, representationalism or even, as we have seen, meta-representations. As well as its more recent acquisitions, our language has become littered with terminology inherited from centuries-old conceptions of our mental states, and none more so than the application of visual metaphors to the “images” of our imagination. When we are uncertain we might ask “Do you see what I mean?” When we want someone to “envisage” something that we have in our “mind’s eye” we say “picture this.” The prevalence of these terms suggests that vision and visual metaphors play an indispensable role, not just in describing our mental states but in conceiving of the ways in which thought operates.

It is no doubt true that metaphors expand the ways in which we are able to describe the world, but it might also be argued that there are instances where the observations we make of ourselves and the world around us are subtly, and sometimes radically, influenced by the common-sense language and concepts we use to describe them. Some philosophers, like the Eliminative Materialists Patricia and Paul Churchland, argue that commonplace terms for mental states, even such seemingly incontestable examples as “beliefs” and “desires”, significantly misrepresent our mental life. The Churchlands propose that cognitive science should reject the terminology of “folk psychology” (sometimes known as “common-sense psychology”) as a means to understand and describe the complexities of our cognitive makeup. Others, like the contemporary American philosopher and cognitive scientist Daniel Dennett, take a less radical line which accepts that folk psychology has its uses, so long as we understand its limitations. Dennett introduces the term “Intentional Stance” as a way of clarifying how different attributions of intention may or may not be appropriate depending on how, and in what way, we apply them. For Dennett, folk psychological terms may help to describe the intentions and actions of other human beings and, to a lesser extent, animals; however, to apply the same concepts (desire or belief, for example) to the interpretation of chemical reactions or the behaviour of subatomic particles would be to completely misconstrue the processes at work.

As a PhD student, Daniel Dennett studied under the English analytic philosopher Gilbert Ryle, whose influential book “The Concept of Mind” (1949) devotes an entire chapter to a discussion of imagination and mental representations. Ryle mounts a formidable critique of the common-sense notion of mental imagery, a critique that has its earliest origins in the advent of behaviourism at the turn of the 20th Century. The behaviourist psychologists, along with several prominent philosophers – notably Jean-Paul Sartre, Ludwig Wittgenstein and Ryle himself – contributed to a widespread rejection of mental representations as quasi-perceptual phenomena throughout the first half of the 20th century. Only in the 1960s did the discussion of mental representations begin to resurface, particularly in psychology and then later with the publication of influential studies in linguistics and cognitive science by Noam Chomsky and Jerry Fodor that argued for a reconceptualisation of unconscious thought processes, casting them as essentially representational in nature. This in turn paved the way for a resurgence of interest in naïve conceptions of mental imagery as a cognitive equivalent of perceptual experience.

The pendulum has swung so far back in recent years that it now seems almost entirely beyond dispute that we do indeed employ mental representations on a regular basis in our thinking, both conscious and unconscious. If I asked you to picture an apple in your mind, for instance, I’m sure you could quite happily conjure up what would be described as a mental image. Yet there are compelling reasons for doubting this apparently incontrovertible (yet subjective) evidence. Sometimes evidence, even of the most public kind, is easily misinterpreted. For example, the fact that the sun appears to rise in the morning and to descend below the horizon in the evening constitutes no proof whatsoever that our world is a motionless sphere at the centre of the universe, even though it was long believed to be so. What appears to be the case and what actually is the case can sometimes be radically different, yet it can be extremely difficult to demonstrate the error of false interpretations on the basis of the available evidence. Consider for a moment: how would you go about proving that the earth revolves around the sun? Although Copernicus was the first in the modern era to set out a detailed theory of heliocentrism, it wasn’t until the much later work of Johannes Kepler that the errors in Copernicus’ theory could be fully explained and the fact of our spinning elliptical orbit around the sun finally apprehended.

Unlike the strange trajectories of the planets across the heavens, which – prior to Kepler – had proved extremely taxing to calculate and therefore to predict accurately, mental imagery has no publicly available features at which we can point our sophisticated instruments of observation and measurement. All we have access to are the many reports that have accumulated throughout culture and history, as well as the evidence of our own experiences – if indeed we can even call them experiences, given the fact that they involve no sensory organs with which to ‘observe’ them.

As we will see in the next post, reports of mental imagery often create as much confusion as insight, not least because they frequently lead to claims that, once tested, fail to deliver any of the richness or accuracy that people commonly attribute to them.