Wednesday, 26 June 2013

Imagining Itself (Part VI: Noë’s Arc)

If what we call “mental imagery” does not consist of images, and if unconscious mental processes are not conducted through the use of representations, then what alternative theory might explain the workings of our most poorly understood organ? How can it be the case that we can remember huge tracts of information, faces, names, places, flavours, smells etc. unless the brain stores some kind of record, albeit a fallible and partial one, of the things we encounter? From where does our prodigious ability to remember, describe, calculate, think and respond to the world arise? And how is it possible that some individuals possess inordinate capacities to recall sequences of numbers, recite epic stories verbatim or to draw photorealistic images from memory if they are not using some form of mental inventory or image? Surely representationalism (to use the jargon) is inevitable.

In setting out his “Enactive Theory of Perception” contemporary philosopher of perception Alva Noë writes:
“A second implication of the enactive approach is that we ought to reject the idea - widespread in both philosophy and science - that perception is a process in the brain whereby the perceptual system constructs an internal representation of the world. No doubt perception depends on what takes place in the brain, and very likely there are internal representations in the brain (e.g., content-bearing internal states).” (2004)
Noë is attempting here to establish perception as wholly independent of mental representations: of images projected onto a notional neural screen carefully concealed somewhere inside our heads. Effectively he is rejecting the homunculus fallacy, and with very good reason. However, despite his reassuring rejection of representationalism in perception, Noë ends up resurrecting the very homunculus he has attempted to expunge. In stating that “content-bearing internal states” are “very likely”, Noë makes it clear that he conceives of the brain as a kind of elaborate container in which representational content is housed.

Once again we face a difficulty posed by the commonplace linguistic metaphors used to describe the mind, and we must be wary that these do not obscure or exclude other models of mind and mental processing that might offer equally plausible, if not superior, routes to an understanding of the workings of the brain. We are all familiar with metaphors that describe the brain as some form of container, some highly sophisticated but lossy neural bucket in which we dump all of our knowledge and experience, a repository, library, archive, hard drive, black box, recording device, etc. in which memories and information are stored for future use. But are these container metaphors the only possible way of conceiving of how the brain enables us to do all of the astounding things of which we are capable?

In order to answer this question it might be helpful to return momentarily to Eric Kandel’s sea snails mentioned previously. In his Nobel prize winning work, Kandel demonstrated that sea snails respond to repeated negative environmental stimuli by synthesising proteins in their neural structures which then stabilise and reinforce their responses to further stimuli of a similar kind. In this way they become disposed to behave in certain ways in response to certain environmental triggers. The neural changes do not constitute a representation and no representation is necessary for them to respond. All that is necessary is an acquired disposition to respond in a certain way to a certain stimulus.
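The dispositional picture sketched above can be illustrated with a toy model. The following is a minimal, hypothetical sketch (the class name, numbers and reinforcement rule are all invented for illustration): the organism stores no record or image of past stimuli, only a response strength that repeated stimulation reinforces.

```python
# A hypothetical sketch of dispositional (non-representational) learning,
# loosely modelled on sensitisation in Kandel's sea snails. Nothing here
# stores a record of any past stimulus -- only a numeric "disposition"
# (response strength) that repeated noxious stimulation reinforces.

class SeaSnail:
    def __init__(self):
        self.disposition = 1.0  # baseline withdrawal strength

    def stimulate(self, noxious=True):
        """A stimulus triggers a response; repetition strengthens the disposition."""
        response = self.disposition
        if noxious:
            # Analogue of protein synthesis stabilising the reflex pathway:
            # the disposition itself changes, not any stored representation.
            self.disposition *= 1.5
        return response

snail = SeaSnail()
responses = [snail.stimulate() for _ in range(4)]
print(responses)  # each withdrawal is stronger than the last
```

The point of the sketch is simply that acquired dispositions suffice: the snail behaves differently after training, yet at no stage does it consult anything that stands for or depicts its past encounters.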

This notion of a non-representational dispositional theory of mind is most closely associated with the work of the British philosopher Gilbert Ryle who, in his 1949 book “The Concept of Mind”, put forward what has since become known as Analytical Behaviourism. Analytical Behaviourism argues that complex behaviours (those most commonly associated with the mind and mental states) are a result of the acquisition and implementation of dispositions to act. Just as sea snails acquire dispositions to behave in certain ways in response to environmental stimuli, so too do more complex organisms (human beings included). Obviously, as organisms become more sophisticated and the factors affecting them become more varied they must develop increasingly sophisticated mechanisms for dealing with the complex choices they face, yet the underlying processes of stimulus and response demand no functional assistance from representations or “mental content” (at least not in the way that mental content is most often conceived).

When asked earlier this year about the difference between organisms that merely react and organisms that perceive and purposefully act, Alva Noë responded:
“That is the anxiety that people have. You see that in Jason Stanley’s book, that I mentioned earlier. He seems to be worried that if propositional knowledge [i.e. knowing that as opposed to knowing how] doesn’t govern, then there’s no difference between us and mere reflex systems. I don’t share the anxiety however, partly because I’m much less disturbed at the thought that the bacterium has a primitive mind. It seems to me that we are on a spectrum with the bacterium.” (source 42mins in)
Can it be true that we are on a spectrum with the bacterium and the sea snail? Are perceptions and purposeful actions really just sophisticated forms of response or is there a fundamental conceptual difference that marks a clear evolutionary divide between responses on the one hand and perceptions on the other?

In his 2001 entry-level book “An Introduction to the Philosophy of Mind”, K.T. Maslin devotes a chapter to a discussion of the Analytical Behaviourism of Gilbert Ryle and others. In it Maslin asks students to consider the difference between the following two statements:
(a) Martin raised his arm. 
(b) Martin’s arm went up.
He goes on to distinguish between what he calls “agential descriptions” (i.e. purposeful actions and intentional deeds) and “colourless bodily movements.” Implicit in Maslin’s comparison is a recognition of a fundamental difference in kind between mere responses and intentional actions, a difference that makes little sense when explained as a spectrum with bacteria at one end and human perception at the other.

Responding to causal influences is straightforward – we can create robots to do this. We can even build computers capable of defeating the most expert of chess players. But to act purposefully, to anticipate outcomes and to form intentions requires something quite different, something that the engineers of Artificial Intelligence continue to struggle to achieve, something that eludes even the most colourless of bodily movements or sophisticated reflex systems. What is this special something? Imagination.

Imagination anticipates the future and uses this to guide actions in the present. But how could such a sophisticated predictive capacity have evolved? The evolution of human imagination will be the focus of the next three posts.


Alex said...

Thanks Jim,

Great posts on the imagination. You seem to be putting a lot of work into writing up your thoughts for this blog, especially when surprisingly few people are responding. I’ll spread the word about a bit and perhaps we can get a bit of discussion going.

One observation about this post. It’s interesting that you titled this “Noë’s Arc” because you seem to have made a somewhat circular argument yourself. You start by suggesting an alternative to representationalism via the dispositional account of Ryle but once you move on to the distinction between colourless movements and intentional actions you leave open the question of whether intentional actions might be representational, of whether what might actually distinguish basic reflexes from intentional actions might in fact be a form of mental representation. Can the dispositional conception of mind account for purposeful action and imagination?

Jim Hamlyn said...

Thanks Alex, that’s an excellent question and it gets right to the heart of the matter. I think it’s important to really give the dispositional account a run for its money because as soon as we start to assume representationalism we have to explain how it could have evolved. In a later post I’ll look more closely at representation and the degree to which representational practices are predicated upon systematic perceptual limitations. If they are, then the advocates of representationalism face a serious challenge in explaining the evolutionary story of mental representation.

In 1998 Daniel Dennett wrote: “Is just seeing one's prey pastel representing? Is it representing at all? One has a mental or perceptual state that is surely about the prey (qua prey, one might add), and if that perceptual state plays an apposite role in guiding one to the prey, this state should count as a representation in use, or a representation that exists for the predator in question.”

In Dennett’s most recent book he warns against the use of “surely” arguments so perhaps we should take this quote with a pinch of salt. Is a distant horizon viewed as a goal a “representation in use”? Perhaps, but I’m not sure how far that takes us in explaining the processes at work.

Actions aren’t instantaneous; they take time to execute. Presumably they also trigger other actions in causal chains, so I don’t see any real reason why these processes couldn’t form a very rudimentary but non-representational basis of what we call imagination.

Alex said...

Thanks Jim,

The quote from Dennett is very relevant I think because it indicates how a complex set of brain states might "count" as a kind of functional representation, as a "representation in use". You seem to find this doubtful?

Jim Hamlyn said...

Hi again Alex,

Yes I am a little doubtful about it. Probably the best way to exemplify what I'm thinking is to quote a passage from a paper that Dennett cites by W.J. Freeman and C.A. Skarda:

"These considerations give an answer to our question about representations. Who needs them? Functionalist philosophers, computer scientists, and cognitive psychologists need them, often desperately, but physiologists do not, and those who wish to find and use biological brain algorithms should also avoid them. They are unnecessary for describing and understanding brain dynamics. They mislead by contributing the illusion that they add anything significant to our understanding of the brain. They impede further advances toward our goal of understanding brain function, because they deflect us from the hard problems of determining what neurons do and seduce us into concentrating instead on the relatively easy problems of determining what our computers can or might do. In a word, representations are better left outside the laboratory when physiologists attempt to study the brain. Physiologists should welcome the ideas, concepts, and technologies brought to them by brain theorists and connectionists, but they should be aware that representation is like a dose of lithium chloride; it tastes good going down but it doesn't digest very well."

Alex said...

You still seem to be avoiding the question of whether representations might serve a functional use. There is no doubt at all that conscious thought is conducted using representations: language. Is it not possible that the underlying processes could be representational too?

Jim Hamlyn said...

If I seem to be avoiding anything it’s probably because I’m trying to maintain some balance between the two sides of the argument (even though I think representationalism makes little sense).

The view I take is this: there is a mistaken but widespread assumption that mental processes are representational. This assumption is founded upon a general lack of understanding about what it is that makes representations possible and, most especially, upon an equally mistaken belief that language must underpin all forms of representational practice.

I agree that it is very tempting to conclude that conscious mental processes (thoughts) are representational but it is well known that these inner monologues, as Ryle called them, form themselves through an entirely unconscious process to which we have no introspective access. I can only speculate about what those unconscious processes are but I'm not in the least convinced that they must be representational, as Jerry Fodor and others have famously argued.

As to there being “no doubt at all” that conscious thoughts are representational, I think this might be exactly where much of the difficulty lies. Let me put it this way. Imagine that someone was unable to ‘hear’ their inner voice and could only consciously think by speaking everything aloud. Such people actually already exist. Pre-school children commonly engage in “private speech” and it is only around school age that this becomes internalised. So, strange as it may sound, conscious thoughts are not where the mind-work is being done. The bit we call consciousness is just the bit where we privately enact (i.e. inhibit from public performance) everything but the public representations themselves (but NOT any inner representations of any kind).

Alex said...

I fail to see how the fact that this is "everything but the public representation" entails it not being a private representation. A voice in an empty room is still a representation so why can't an inner voice be a representation?

Alex said...

...And are you suggesting that pre-schoolers are not conscious?

Jim Hamlyn said...

Firstly, yes I am indeed suggesting that consciousness is largely, if not wholly, constituted by the ability to imagine, by the ability to publicly inhibit yet privately execute dispositions to represent. Most commonly these dispositions to represent are verbal. However, congenitally deaf people brought up to sign do not have inner voices, they have inner signing. If that doesn’t lend very strong support to the claim that what we call conscious thought is inhibited representationally oriented action then I don’t know what does.

In answer to the question of whether an inner voice qualifies as a representation I’d say no. Representations are public entities. But unless you’re still doubtful let’s consider the following. If, instead of saying a word in an empty room, we physically gestured the word - would that be a representation? Then try to imagine writing the word in the air in front of you. I think you’ll find that whatever semblance the word might have had as an inner representation begins to evaporate somewhere along the line and all you’re left with is the intention to communicate the word via the strategy you have internalised most fully. That’s the amazing but confounding illusion at work in all of this: these dispositions to represent are so thoroughly embedded that we can barely distinguish between the intention to represent something and the publicly perceptible representations we are capable of producing.
