Monday 15 June 2015

Indivisible Atoms of Intention


We are purposeful creatures. That would seem to be a fact beyond all doubt. Aside from the odd twitch, sneeze, hiccup, yawn or blush, and a whole variety of other, far more common autonomic processes, our purposeful — as distinct from merely efficacious — behaviours are invariably of the intentional sort. They are actions.

We are intentionally directed creatures not because we think of every action in anticipation but because we are prepared to communicate our goals when appropriately prompted. But how can a preparedness, readiness or disposition to communicate a goal be sufficient to justify purposeful behaviour? This post is intended to offer an explanation.

When you arose from your bed this morning no doubt you did it intentionally. My aim is to show that the intention to do so was not initiated by a special pattern of neural activity but by a special constitution of you as an organism — including your brain of course. Certainly there was activity in your brain prior to your getting up, but it would be mistaken to suppose that this activity constituted a nascent intention. There is no intender in your brain pulling any strings to lift you from your slumber or to remove you from your bed.

Intention is not an activity of brain cells so much as a state of you as an organism in which certain causal influences lead to certain patterns of behaviour. In other words, when you get up, you simply act out of habit. This is not to say that habits are unintentional. What I am suggesting is that many—perhaps all—of our actions are a consequence of the way we are configured; of our dispositions to respond to certain causal influences in certain ways. This is surely what we mean when we say that someone “acts in character.” What we need to examine in order to understand intention is these dispositions to act — in particular our dispositions to represent.

If at any point in time I were to interrupt you and ask what you are doing, I’m pretty sure that you would have an answer at the ready. It would certainly be disconcerting for us both if you didn’t. Now it cannot be true that you are covertly narrating your life as you live it just in case someone asks you to explain yourself. You are simply adept—as we all are—in the skill of offering representations of your causal engagements on demand. So I am saying that this preparedness to produce representations is necessary to explain intentional behaviour because the preparedness in itself is causally influential upon behaviour.

Certainly there are times when we are thrown into unfamiliar circumstances where we need to consider the possible consequences of our actions. But it is important to bear in mind that there is nothing about these forms of activity that makes them any more intentional than our more ordinary acts of habit. Some of our actions are the result of explicitly contemplating reasons or anticipating future states of affairs. Others are the product of images, objects, gestures and enactments that we are capable of producing, performing or describing.

Representations are intentional artefacts and behaviours. It is for this reason that we need to distinguish sharply the objects and behaviours that representations are from the indivisibly intentional creatures — the individuals — that produce them. The only way a creature can contain an intentional artefact or behaviour is by an intentional act. There are no intentional agents or behaviours within us. If we assume that the agency capable of producing representations exists within us — as opposed to being a state of us — then we undermine the whole project of examining and explaining what it is to have agency in the first place — to be an intentionally directed creature and to have a mind.

There has to be a cutoff. Intention doesn’t reach all the way back. I am arguing that the cutoff is at the level of the organism, not its organs, its cells or its atoms.


151 comments:

Thomas said...

In his reply to Bennett & Hacker (http://ase.tufts.edu/cogstud/dennett/papers/naive.pdf), who claim pretty much all of neuroscience is mired in confusion as a result of failing to heed a single passage of St Ludwig, Dennett says:
"In conclusion, what I am telling my colleagues in the neurosciences is that there is no case to answer here. The authors claim that just about everybody in cognitive neuroscience is committing a rather simple conceptual howler. I say dismiss all the charges until the authors come through with some details worth considering [...] positive theories or models or suggestions about how such theories or models might be constructed"

Jim Hamlyn said...

I don't much like that line of reasoning of Dennett's, Thomas. It isn't possible to know in advance what insights are occluded by the casual use of terminology -- by shoddy theorisation. There may well be many insights that are not. In this case the terminological inconvenience avoided may be justified. But if the terminology leads to sufficient conceptual confusion to outweigh these gains and if it obscures deeper or more far reaching insights then it is clearly not justified. How can we know what we don't know? And moreover, how can we know that our convenient theorisation isn't the cause of otherwise avoidable blind spots? Clearly talk of encoding amongst geneticists has led to countless major insights. Can we say the same of representation talk in brain science?

Thomas said...

Jim: "It isn't possible to know in advance what insights are occluded by the casual use of terminology"
Indeed, one would actually have to analyse the theorisation. Not just conclude in advance that casual use of terminology leads to "shoddy theorisation". A challenge I believe I have offered to you before.
Or as Dennett says:
"Not once do they attempt to show that because of making this presumably terrible mistake the author in question is led astray into some actual error or contradiction. Who knew philosophy of neuroscience would be so easy?"

Jim Hamlyn said...

"Not once do they attempt to show that because of making this presumably terrible mistake the author in question is led astray into some actual error or contradiction." -Dennett

Sure Thomas, that's a fair point. But we are looking from the wrong vantage point to be able to make a fair assessment of whether someone has been led astray because we don't know the correct course yet (although some of us might be more likely to be on the right course if we didn't disregard the lessons of parsimony and clarity). That is one objection that might be raised.

Nonetheless I find it really puzzling that you don't acknowledge that some theorists (including several on this forum) have already been led astray into the mistaken assumption that brains actually contain, detect, produce and use actual representations.

For my own part I find the commonplace ascription of active locutions to non-agents extremely unhelpful to the necessary elucidation of the distinction between intentional and merely responsive behaviour. It is an easy mistake to make but I think it really is obscuring something important.

Thomas said...

"Nonetheless I find it really puzzling that you don't acknowledge that some theorists (including several on this forum) have already been led astray into the mistaken assumption that brains actually contain, detect, produce and use actual representations."
The only person I'm aware of who has definitely been led astray here is you, Jim. For you seem continually unable to recognise the distinction between the 'neural representations' that cognitive scientists speak of and your notion of "actual representations", imputing the latter notion to the cognitive scientists without evidence. Show me some evidence that cognitive scientists make this same mistake or that these terms impede the science.
What is your evidence of people being led astray? Talk of neural representations? Sorry that is just evidence of you being led astray unless you can find evidence of them conflating neural representations and "actual representations" (not that I'm entirely convinced by your account here).

Jim Hamlyn said...

Daniele P, sorry to bring you into this discussion, but perhaps you could clarify your position on neural representations. Am I correct that you are not of the opinion that the term is just an expedient way of describing the causal mechanisms of the brain? Is it true that you hold that there are actual symbolic representations in the brains of humans but not of other animals?

Daniele said...

wait a minute. we must be very precise in defining what we mean by 'symbolic', 'representation', 'neural' etc. otherwise we end up in a terrible confusion.

I take the term 'representation' to refer to some informational structure that is part of a mind. Of course that must be implemented in real neurons, but this is a very hard task and we do not yet understand what the exact match between representations and neurons is.

If you define 'symbolic representation' in logic or informatics-like terms, namely a format of information that can undergo mathematical/logical operations (i.e. Turing-machine-like behavior), then yes, I would agree that only humans possess this kind of information.

where this is implemented in brain structures is a mystery. Probably in those parts of the brain that we do not have in common with chimps (high order associative areas), but this is just a well motivated guess.

of course, if you do not believe in the problem of the mind/brain distinction, then you might just substitute the term 'mind' with the term 'brain' above, and things wouldn't change.

Jim Hamlyn said...

Thanks Daniele, I have to say that I take a completely different approach. The reason chimps and bonobos are not as skilled in the public use of symbols as humans is that they haven't had the several million years of evolutionary development in nonverbal tokening that our ancestors have (some of it documented through stone tool use). Over time, these and other proto-symbolic exchanges have developed the organisms that practice them to such a degree that we are now capable of extremely elaborate practices of exchange: namely language.
But I see no reason to suppose that anything like a symbolic system could evolve _within_ the human organism. Certainly the *capacities* to engage in public tokening have evolved within us but I'm not at all convinced that a skill in the use of symbols necessitates a correlated capacity on the part of the brain to generate and use symbols.

Daniele said...

well, I see a few problems here.

1) one basic fact we know about monkeys is that they cannot learn language. Every effort to teach them even a simplified version of natural language was a total failure. Thus, humans must possess a faculty X that monkeys do not possess. Even the most anti-linguistic approach, such as Tomasello's, is obliged to claim that this faculty X exists and that it is innate. He claims that it is a high-level pragmatic capacity (something similar to what you mentioned above). Chomsky claims that it is something more similar to what I mentioned before, i.e. the capacity to 'merge' symbols.

2) from the archaeological record it is clear that our ancestors possessed a brain that is extremely similar, if not entirely identical, to ours. So the following reasoning cannot hold:

pragmatic skills -> millions of years practicing social communication -> development of faculty X -> development of language

That is, the faculty X, whatever it is, was already present in those Homo sapiens (and probably even in Neanderthals and less evolved species) that did not have complex social interactions, agriculture, fine tool-making, clothes etc.

If monkeys or chimps had the faculty X, they could learn language.
If language is the cause, instead of the effect, of the faculty X, then we fall into a vicious circle.

does this make sense?

Jim Hamlyn said...

Yes Daniele, it does. And thank you.

Like all adaptations, the evolution of the human brain has been forged in the crucible of environmental and creaturely interaction. I take a realist view which means that I cannot accept the idea that the brain has evolved to model the world. Brains have evolved as integrated components of responsive organisms and as such I think we need to examine the responses of organisms to their environmental engagements in order to understand the development of the brain. In particular I believe that we need to examine the procedures of representation that creatures engage in. Those procedures are embodied, not just embrained.

You are quite right that apes have limited skills in symbol manipulation (although their abilities are nonetheless impressive when compared with other non-human animals) and their skills in pointing (admittedly not a symbolic skill) are also remarkably poor, whereas dogs learn the knack quite readily. Wolves and foxes, on the other hand, do not. Domestication seems to play a major role in the case of our canine friends. For example, domesticated foxes can learn to respond to pointing too. Dogs are also surprisingly good at learning to respond to abstract representations and even some simple abstract concepts. But what dogs lack in comparison with primates is manipulative skills.

Infants learn to point very early and are already born producing representations in the form of mimicry (even their first crying imitates the vocal characteristics of their mothers).

But what cannot be the case is that the brain evolved to produce imitations for its own use (grid cells and place cells notwithstanding), and nor is it plausible that brains perform inner pointing. If that is right, then any symbolic skills that developed in the publicly perceptible world under highly particular circumstances are simply impossible in the brain, because the necessary antecedents are unavailable in the confines of the brain.

So when you say that it is a mystery where the mathematical/logical operations occur in the brain, this does not surprise me because I don't think anybody will ever find such operations there. Our skills are only performed within the mind in the most incipient sense. And they can only be learned, practiced and challenged through social intercourse because they are fundamentally procedural artefacts of culture. Only once we have learned to perform a skill such that we are capable of demonstrating it to other perceivers can we then properly entertain it in mind. Language is a skill of social organisms, not of lonely organs.

If you are of the opinion that mathematical/logical operations are merely powerful ways of modelling what happens in the brain then I have no issue with your theorisation. But if you regard the brain as a locus of mathematical/logical operations then I think you have been led astray. Perhaps that is in part due to your commitment to idealism. I can only repeat that I take an incompatible view.

John S said...

It is the human mind that knows things, not the brain. It is like the difference between software and hardware

Jim Hamlyn said...

It is the person that knows things, not the mind. Mind is merely a name we give to skills of communication.

Joe said...

Whoa Jim. I think that's the first time I've actually seen a concise statement of what it is you espouse. I usually just catch the fireworks. I'll grant you the fact that it's the most counter-intuitive account of mentality I've ever encountered within legit philosophy, which is saying a lot. I can't fathom it. [And I spent a year of my life reading "Finnegans [sic] Wake".]

Jim Hamlyn said...

Joe, I'm assuming that you are referring to my statement: "Mind is merely a name we give to skills of communication." It surprises me that you can't fathom it because you seem to be conversant with quite a lot of challenging ideas and if you know Joyce then you clearly aren't averse to doing a little spadework in the peat of sense and nonsense.

I take mentality to be a manifestation of our skills of communication -- both nonverbal and verbal. If we were incapable of communication -- of learning how to communicate -- I argue that we would be incapable of mentality.

My view owes a hell of a lot to the work of Donald Brook and Derek Melser. Working independently, both theorists regard thinking as an action. Melser places a lot of emphasis upon language whereas Brook expands the explananda to nonverbal skills too. Some of their theorisation isn't a million miles away from the Abbreviationists, of whom Margaret Floy Washburn was a major proponent. But instead of focussing on movement, as Washburn did, they focus upon communicative skills. Vygotsky and Piaget also had somewhat similar views regarding the "internalisation" of "egocentric speech" in young children. In other words (and very roughly) our communicative acts become internalised as mind.

Does that help you to fathom my position a little better or are you still at sea from swerve of shore to bend of bay?

Joe said...

I shall have a look. Thanks for the context.

Mark said...

Daniele re <... If you define 'symbolic representation' in logic or informatics-like terms, namely a format of information that can undergo mathematical/logical operations (i.e. Turing-machine-like behavior) ...>

I have a question about this.

It is my understanding that within the cerebrum, if not other regions of the central nervous system, the cortex is a layer of unmyelinated neurons which are functionally grouped into cylindrically shaped associations which stretch the several-millimetre distance from the outer surface to the inner surface. These have been called mini-columns. Neurons within each mini-column interact with each other in response to inputs from elsewhere, such as sensory detectors, or from other cortical regions. The mini-columns interact with local neighbouring columns either to inhibit them or to cooperate in certain stereotypical ways. The output from each mini-column may go to any number of other cortical areas or off into pathways which innervate muscles and make them move.

The point here, and the question, is that the mini-columns in many regions provide response and registration relating to characteristic input types - for example a particular hue [of colour], a particular direction of motion, an object, or some other more abstract relationship or event. This is so whatever the external object is.

Is that not symbolic representation under your criterion?

I mean: *if* the redness of some part of a flower, or car, or sunset may be reliably and characteristically embodied, in the particular context
*and*, as far as the rest of the brain is concerned,
activation of that particular mini-column *means* 'red of this particular hue or related variations',
*then* that meaning, or contribution to the overall process of depiction/embodiment, is an embodiment of part of a fact or belief about the world. It is available to be incorporated into or juxtaposed to other such facts.

Indeed such activations may be effectively the 'atoms' of our subjective experience, and the 'atoms' of symbolic meanings and relationships.

Jim Hamlyn said...

Mark, nice try but I think you will find that Daniele limits his symbolic representations to language users only.

Daniele said...

Mark, I'm afraid Jim is right. You can implement the meaning of 'redness' or 'happiness' with embodied representations, but you will never manage to implement the meaning of logical functors like 'or', 'not', 'any' and so on. Barsalou tried, but his attempt doesn't work (as I think he admitted himself).

I guess language and reasoning are the only faculties clearly exploiting symbolic/logic operations. Unfortunately only the former can be studied, as we can rely on solid intuitions; the latter is way more difficult to study.

Thomas said...

Daniele: does that mean you would (unlike Jim) grant the term 'representation' to this particular neural activity (in some particular non-symbolic sense of 'representation', being clear we should distinguish the various uses as I think you have)?

Daniele said...

I'm not sure we are on the same track. Representation is a very abstract and general term. In its very essence I think a representation is a bunch of information organised in a particular way, geared to solve a specific task or towards a specific goal. Generally, this goal or task is to simplify, organise and memorise another bunch of information, generally of greater dimension, so that it can be retrieved and used at some later stage.

Examples:
a painting is a representation of a real scene.
an analog track on an old cassette is a representation of the sound that it can reproduce
a jpg file is a representation of a real picture, and it can be converted into a pattern of activation of LCD pixels

the way we represent reality in our brain is rather complex. For instance in our vision areas we have multiple representations that encode a single feature (e.g. color, shape, movement etc.) and more complex representations that bind into a single map all these features together.
When we understand language we must use representations that are radically different from those that we use in visual perception. One of their key features is amodality, that is, there is no analog relation between the perceived signal and the informational coding. It's like the difference between analog coding on a music cassette and digital coding on a modern HD. Digital coding is arbitrary, it's made of 0s and 1s, and it needs an interpreter to decode such information.

Another critical feature is being 'symbolic'. You cannot apply formal logic operations to modal representations, but you can - in fact you do - apply mathematical and logical operations (e.g. first-order logic: AND - OR - NOT, and functional application) to symbolic representations. If you deny their existence, there's no way you can explain how language works, and probably even how reasoning works.
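[Ed.: the contrast Daniele draws here can be sketched in a few lines of code. This is purely illustrative; the variable names and encodings are invented, and nothing here is a claim about how brains actually implement either kind of representation.]

```python
# Toy contrast between an analog and a symbolic representation.

# Analog: a sequence of samples whose magnitudes co-vary with a signal.
# There is no determinate way to apply NOT or OR to these values.
analog_track = [0.0, 0.3, 0.7, 1.0, 0.7, 0.3]

# Symbolic: arbitrary tokens standing for propositions, interpreted by
# rules external to the tokens themselves (the "interpreter").
facts = {"red": True, "moving": False}

def NOT(p): return not p
def AND(p, q): return p and q
def OR(p, q): return p or q

# First-order-style combination is trivial over symbols...
print(AND(facts["red"], NOT(facts["moving"])))  # True

# ...but undefined over the analog track: NOT(0.7) has no determinate
# meaning until a symbolic reading (e.g. a threshold) is imposed on it.
```

The point of the sketch is only that logical functors get a grip on discrete, arbitrarily coded tokens and not on graded analog magnitudes, which is the distinction at issue in the thread.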

Jim Hamlyn said...

Daniele, both Thomas and I are trying to establish whether you do or do not hold a particular view regarding neural representation. Thomas assumes that you do not believe that there are actually symbolic stand-ins in the brains of language users whereas I suspect that you do. Could you clarify?

Daniele said...

well, if we show some specific behavior (in this case our linguistic competence witnesses the ability to apply formal operations to symbolic representations), there must be brain functions and networks in which such behavior is implemented.
Therefore, yes, there must be something in our brain that stands for a symbolic representation. Where or how this is implemented is still a mystery. Some people (e.g. Pulvermüller) believe that it is a single neuron that has the capacity for symbolic functions. Otherwise it could be a function emerging from populations of neurons, networks, areas, who knows. Given the plasticity of the brain, that is, the fact that linguistic and reasoning skills can be recovered after serious damage to specialized areas, I guess this capacity is solidly hard-wired in the whole brain. But this is just a guess, we don't really know.

Jim Hamlyn said...

Thanks Daniele, that is exactly what I thought.

So Thomas, you may think that I have been leading the witness, but it seems clear to me that I am not the only person in the world who believes that some theorists are of the opinion that there really are symbolic representations in the brains of some creatures. Admittedly it is only a small triumph on my part, but at least you can no longer claim that there is no possibility of confusion over these issues.

Thomas said...

"Thomas assumes that you do not believe that there are actually symbolic stand-ins in the brains of language users"
No, I think Daniele does think there are symbolic representations (ignoring your diversion to "stand-ins") in the brain (or I think he would prefer mind). He just doesn't mean 'symbolic representations' in the very peculiar sense you do. But I'm not comfortable continuing to debate what Daniele thinks because I don't know that (though I will look at his articles to explore this as his views look interesting, thanks Daniele).

Just to try to remove Daniele from our debate, I take it that Daniele is saying that:
Minds comprise many symbolic and non-symbolic representations.
These representations are in some, as yet not fully understood way, realised in the brain.
Some of these representations are _symbolic_ representations, these are peculiar to language users and are necessary to explain many of the abilities of language-using minds.
Non-symbolic representations are present in both language-using and non-language-using minds (with of course many differences among them in both kind and sophistication).

Daniele, if you disagree with this synopsis of your views please note this so I can better understand your views.
Jim, if you disagree with these statements please address any concerns to me, as I don't want to continue to impute views to Daniele and certainly prefer you don't impute views on my behalf. I don't claim Daniele denies symbolic representations in the brains of language users, just that he doesn't mean by that the very peculiar sense you use. In my view none of your attempted arguments against neural representations apply to the understanding of neural representations used in the cognitive science literature, which I take Daniele to be invoking.

Thomas said...

"So Thomas, you may think that I have been leading the witness, but it seems clear to me that I am not the only person in the world who believes that some theorists are of the opinion that there really are symbolic representations in the brains of some creatures. Admittedly it is only a small triumph on my part, but at least you can no longer claim that there is no possibility of confusion over these issues."
My point wasn't that no theorist thinks there are representations (symbolic or otherwise) in the brain. It was rather that you resolutely fail to understand the sense of representation being imputed. I'm sure many theorists think there are representations in the brain, meaning the sort of sub-personal level representations that are common to a great many cognitive science explanations, but not the sort of personal level representations you claim are the only valid sort. Again I claim the only person here conflating these sub-personal and personal level accounts of 'representation' is you.
Where is your evidence that it was the personal level kind being imputed? Surely the mere fact that these representations were imputed to the brain (or better, the mind), not the person, shows it was not this personal level account that was being discussed.

Daniele said...

perfect synopsis.
I'd like to add, just for the sake of clarity, that:
a) in the history of cognitive science it is common to find approaches that reject the very concept of representation. Just look at the famous debate between behaviourism (only the relation between stimulus and response matters; all the rest is a black box not worth investigating) and cognitivism (the mind and its representations should be the object of cognitive science). Although some exponents of the embodied approach (e.g. mirror neuron theorists, embodied mind, embodied language, situated communication etc.) do not like this, many people consider them neo-behaviourists. There is, in principle, nothing bad about this, except that for some people (like me) it represents a leap backwards of about half a century in science.

b) the definition of 'symbol', as well as of 'symbolic representation', is also very fuzzy and liable to create confusion. In general, a symbol can just be "something that stands for something else". Just to make things clear, my definition of 'symbolic representation' is based on the logic/mathematical properties of symbols (e.g. an entity or individual, a set, a property in first-order logic). This of course does not imply that I reject or dislike other definitions of 'symbol', nor that I don't think they are relevant for cognitive science or philosophy.

Jim Hamlyn said...

\\ Just to make things clear, my definition of 'symbolic representation' is based on logic/mathematical properties of symbols (e.g. an entity or individual, a set, a property in first order logic).//

Perhaps this is the key to my confusion (as Thomas sees it), Daniele. Do you believe that neurological symbols function by way of properties? Because in ordinary language, symbols do not function by way of their unique causal characteristics; they function by way of their meaning.

Monkeys will swap a piece of fruit for an equivalent piece of food, but only language users will swap a piece of fruit for a word. The properties of symbols are _values_ within a system of exchange. This is what Wittgenstein meant when he famously declared that "meaning is use".

In other words you seem not only to be using the word "symbol" in a technical sense but also the word "properties". Is that right?

Jim Hamlyn said...

Thomas writes:
\\Jim, if you disagree with these [the following] statements please address any concerns to me//

The following are addressed to you then, Thomas.

\\Minds comprise many symbolic and non-symbolic representations.//

In ordinary language, representations are always perceptible entities. But this technical usage is pure Cartesianism. If a mind is “comprised” of these putative mindreps then they cannot be perceptible entities because a mind is not a measurable thing. The best evidence you can offer for these alleged mindreps is:

\\These representations are in some, as yet not fully understood way, realised in the brain.//

That is not evidence, it is naïve supposition. The reason they are not fully understood is because there aren’t any. They are figments of misguided supposition.

\\Some of these representations are _symbolic_ representations, these are peculiar to language users and are necessary to explain many of the abilities of language-using minds.//

Yes, it is _supposed_ by Daniele that these logic-operable mindreps are peculiar to language users and are necessary to explain many of the abilities of language-using minds. But considering the paucity of evidence, Daniele has no reason to make such an extravagant supposition. It does not follow from the fact that we use logic that we have any neural form of logical operations going on within us. The mere suggestion is preposterously circular.

Logical operations allow us to model and predict the world. We can use logic to accurately model the behaviour of cells, but this does not mean that cells contain logical operators or even a cellular equivalent. They certainly contain many chemical and atomic _relations_ that _conform_ to logic (which is why logic is such a powerful tool), but correlation is not causation. Nor are the products of cultural evolution – tools – to be found within the structures of biological evolution.

Jim Hamlyn said...

\\Non-symbolic representations are present in both language-using and non-language-using minds (with of course many differences among them in both kind and sophistication).//

Yes, again, Daniele makes this extravagant supposition without sufficient evidence. And by doing so he faces a major and easily illustrated challenge. If the brains of nonverbals do not contain any neurosymbolic structures or relations then they must contain non-neurosymbolic structures or relations that enable animals to discriminate distances, speeds, trajectories etc. without inner logical operations. Or are these perhaps inner logical operations upon non-neurosymbolic representations? Is it not blindingly obvious that the challenge of explaining such a thing is even more formidable than the challenge of explaining our presumably more sophisticated language skills? Something is fundamentally awry.

If the brains of nonverbals are involved in a more rudimentary form of neurorepresentation than neurosymbolisation, then once again where is the evidence of this presumably more basic form of neurorepresentation? If these nonsymbolic neurorepresentations are not manipulated via logical operations then how are they manipulated? And if they are manipulated with logical operations then how does a system perform logic upon something that isn’t in any way symbolic or even neurosymbolic?

At least when we deal with ordinary language representations we can make a clear distinction between verbal and nonverbal representations. We can even examine the correlative role of grid cells and place cells and decide whether they qualify as ordinary language representations or not. We can also debate whether it is logical to suppose that the brain could evolve a capacity to conduct logical operations upon these causally correlated structures and what further forms of transformation would be necessary (or impossible) to enable this.

Daniele claims that his neurosymbols have "properties". Are these properties distinct from their meaning? Do they even have meaning? If they don't have meaning then what is the nature of the symbolisation and how is it different from OL symbolisation? These are not trivial questions. They require clear and unequivocal answers that do not fall into the trap of private language, a trap in which I claim Daniele is already fully entangled.

John R said...

I am very curious to learn why an organism could not evolve to have logical processes, or correlations to logical processes (though I don't immediately see any difference). I hope Daniele and others will respond. I find it very interesting to try to sort out what Jim is after. And I think we are getting very close to the heart of important issues here.

Peter said...

Jim

People assume that minds must use representations because they can't see how they could do certain things, such as planning future actions, otherwise. There is an analogy with how computers do things. Of course, that isn't using a naive definition of representation as something visible to the naked eye, but thinking more in terms of a manipulable informational model.

If we do not have any neural logical operations going on within us, how do we do logic? I can only guess that Jim has a beef with the idea of logic being intrinsic or fundamental.

A CPU does logic by shunting electrons around...which is to say, it just shunts electrons around, and that looks like logic from outside the black box. Why should a brain be different?

Jim Hamlyn said...

\\I am very curious to learn why an organism could not evolve to have logical processes, or correlations to logical processes (though I don't immediately see any difference).//
John, the behaviour of an amoeba is logical but that proves nothing about whether its behaviour is driven by logic operations. Your question is well worded because you don't impute logic operations to organisms, but it could be misread as supporting such a view. For example, if you had said "I am very curious to learn why a falling brick could not have logical processes, or correlations to logical processes..." then I would also agree that a falling brick does indeed conform to logical processes. But that obviously doesn't mean that a falling brick is performing any internal logical processes or that the circumstances in which it is falling are doing any logic operations. Do you see?
Another way of thinking about it is to think of an electronic logic gate. Does a logic gate perform logic operations? Does it parse "if this, then that" statements or whatever? No, it's just a specialised filter.
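The gate-as-filter point can be sketched in a few lines (an illustrative sketch, not anything any commenter wrote): a NAND gate is just a fixed mapping from inputs to an output, yet composing such mappings reproduces the truth tables that we *describe* as logical operations. Whether that composition counts as "doing logic" is exactly what is at issue.

```python
def nand(a: int, b: int) -> int:
    # A "gate" is just a fixed filter: two inputs in, one output out.
    return 0 if (a and b) else 1

# Composing such filters reproduces the familiar truth tables:
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))
```

Nothing in these definitions parses "if this, then that"; each is a lookup-like mapping, and "logic" is our description of the composite behaviour from outside.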

Jim Hamlyn said...

\\People assume that minds must use representations because they can't see how it could do certain things, such as planning future actions otherwise.//

That is precisely right Peter, they assume that, but what I am arguing is that an ability to point to the horizon is not an ability of the brain to perform inner acts of pointing. Representations are publicly negotiated, they are not private affairs.

\\There is an analogy with how computers do things.//

Exactly right again, there is — an *analogy*.

\\that isnt using a naive definition of representation as something visible to the naked eye, but thinking more in terms of a manipulable informational model.//

Yes again. The problems arise when these neuroreps — which differ (in ways that are often not explicitly spelled out) from "naive representations" — are supposed to be manipulated as if they have meaning rather than merely causal properties.

Meaning is socially negotiated.

Do you see where I am going?

Jim Hamlyn said...

\\If we do not have any neural logical operations going within us, how do we do logic?//

If we don't have pointing going on within us, how do we do pointing?

\\A CPU does logic by shunting electrons around...which is to say, it just shunts electrons around, and that looks like logic from outside the black box. Why should a brain be different?//

Damn good question Peter. But does the CPU do logic or does it conform to logical processes?

A hammer does hammering, so why should a brain be different? The missing ingredient in all of these analogies is the agent. As I keep trying to emphasise, agency is an attribute of representation (in the OL sense) users, not of their parts.

Thomas said...

Jim:
//a mind is not a measurable thing//
Sure it is, cognitive psychology measures aspects of the mind all the time. Representations and the mental computations performed on them are one of the key resources used to understand these measurements.

//The best evidence you can offer for these alleged mindreps is:
"These representations are in some, as yet not fully understood way, realised in the brain."//
We know much about the sorts of mental representations there are through experiments establishing the cognitive skills we are capable of and the sorts of information the mind (and so brain) must be able to deal in to realise these skills. There are many open questions here of course. We are also starting to learn a lot about the sorts of neural representations there are, through the many techniques of neurophysiology (again, many open questions of course). The "as yet not fully understood" bit I was alluding to is the linking together of these two levels of explanation, which is often very much less clear.

Also, that was not evidence. The best evidence is the mountains of psychological evidence as to the types of representations found in the mind and brain (embodied and situated as they of course are) and the successful theorising done on the back of this. The best you can offer against this mountain is a few vague statements from your St. Ludwig and endless linguistic contortions.
The basic nature of neural representations is taken as clear in the literature. They are patterns of neural activity that transmit information between brain regions which perform computation on them (or something like this; the details of the definition are rarely debated as this is science not philosophy — as Dennett noted, if you ask a scientist what exactly they mean by their terms they'll generally just tell you a story about how they make their models work). E.g. http://www.cnbc.cmu.edu/~tai/readings/nature/zador_code.pdf starts with the question "The concept of neural representation is central to neurophysiology, but what does it actually mean to suggest that a neuronal signal is a representation?" But it never goes beyond a cursory definition of neural representation without consideration of any debate on this. The closest to a singular definition is: "Although a neural code is a system of rules and mechanisms, a representation is a message that uses these rules to carry information, and thereby it has meaning and performs a function." Rather than these definitional issues they focus on the kinds of neural representation there are, letting the phenomena speak for themselves.

Thomas said...

//If these nonsymbolic neurorepresentations are not manipulated via logical operations then how are they manipulated?//
By various causally instantiated formal operations (i.e. computations), just not ones that adhere to formal symbolic logic.

//It does not follow from the fact that we use logic that we have any neural form of logical operations going on within us.//
As Peter noted, sure it does, in some sense. Where else do these acts of logic come from if not some corresponding neural operations (in what exact sense the underlying operations are logical is another question, one Daniele seeks to address)? If we deny some corresponding neural operation, what are we left with? Some mythical soul that floats free and communicates with the brain through the pineal gland? Some mythical person floating free of the physical parts of which they are constituted? Some shallow personal-level account floating free of the sub-personal-level accounts that underpin it? It seems to be you here who veers towards Cartesianism.
//Nor are the products of cultural evolution – tools – to be found within the structures of biological evolution.//
No, but the ability to engage in cultural evolution, e.g. to produce tools (not reducing all cultural evolution to this as you seem to), is realised by corresponding neural operations which were shaped by biological evolution (spurred on by previous cultural evolution). Cultural evolution doesn't float free of biological evolution or cognitive processing, though it does of course radically extend them. Even more powerfully, a complex interplay among them transforms all three, each iteratively feeding into the others to open new realms of possibility. To give just one example: cultural evolution and enhanced cognitive processing transform biological evolution through altered mate selection and now biological engineering.

Thomas said...

//As I keep trying to emphasise, agency is an attribute of representation (in the OL sense) users, not of their parts.//
But, for a physicalist, the causal powers of representation users just are a complex product of the causal powers of their parts, nothing more. Of course this does not mean explanation should take place solely in terms of the causal powers of the parts, but it must adhere to them. But I think we should aim for something like the synoptic unity of the manifest and scientific image Sellars pushes, bringing the two into stereoscopic unity, not stab out one eye as you seem to want to do. Or better yet aim at a synoptic view taking in a whole range of different explanations at different levels (while recognising, tentatively as in all theorising, the ontological primacy of the basic physics image) as approaches like Ontic Structural Realism seem to aim at.

Again my arguments against you seem nicely captured in critiques of works you cite as influential, here from Fred Adams' (quite scathing) critique of Melser's The Act of Thinking that you linked above, which says:
//Melser paints as mutually exclusive the ideas that what goes on in one’s mind are information processing events that go on in one’s brain (on one hand) versus that thinking is a kind of activity that a person does (on the other hand). He (as Ryle and Wittgenstein before him) seems to take it as an unargued presupposition that it cannot be both. He takes it as given that the person cannot be a composite of sub-personal events or agencies. Here is where Melser seems to see a category mistake ... mistaking thinking for something that happens inside of one instead of something one does. But any materialist about persons must take the person to be a composite of sub-personal events. Even if my mental doings are actions of mine, actions at a personal level of description, when I do something, my doing (my causing) something cannot be more than a collection of the proper parts of my brain’s causing something too. My doing is the relevant portions of my brain’s causing the relevant events. The personal level of description may employ mental terms and actional language to describe it, but the very same physical events that make my actions possible have to be able to be described in sub-personal terms. Indeed on a token-identity theory, they are the very same events variously described at different levels (something any enlightened materialist accepts as true).//

Daniele said...

Sorry, I'll try to comment in more detail in the next days.
I just gave a very quick read to your posts.
But yes, we can imagine animals and minds that don't do logical operations. Actually they are all the animals except humans.

if you can implement the neural system of an ant in a computer it does not mean that the ant is doing logical operations.

Logic is not pointing, is not action, is not embodied, is not modal or multimodal, is not goal-oriented. Logic is symbolic, it's recursive, it's compositional, it's a Turing machine. C'm on guys I'm sure you know how to define it way more than I do smile emoticon

"People assume that minds must use representations because they can't see how it could do certain things, such as planning future actions otherwise. "
That's a common misconception in modern cognitive science, borrowed directly from behaviourism.

A representation presupposes the presence of a model. The representation of an object presupposes that we hold a model of how that objects moves, behaves, has a volume, consistence, temperature etc. Even if you claim that this information is implemented in a network of neurons that fire together and work together in a way that is totally different from that we would conceptualise with our diagrams, this wouldn't make one bit of difference. It is still a representation.

Daniele said...

To see why we must entertain logical representations, you must look closely at language. Again, look at the interaction of very simple and high frequency words like "or" and "not".

I drank beer <--- event of drinking beer
I ate pizza <--- event of eating pizza

I drank beer and ate pizza <--- conjunction puts the events together
I drank beer or ate pizza <--- disjunction is true if at least one disjunct is true

you can implement these operators via some Barsalou-like perceptual symbols that do not require any logic. The problem is when you combine operators together.

I did not drink beer or eat pizza <--- I did not do BOTH things
I did not drink beer and eat pizza <--- I only did one of the two things (actually this is ambiguous depending on where negation takes scope)

this is what you cannot implement in any way, unless you have a system that:

a) has the same properties as first-order logic
b) importantly: is recursive and compositional
c) has a symbolic format of information to work with (i.e. a Turing machine)
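The scope point can be made concrete in plain Boolean terms (an illustrative sketch, not Daniele's own code; the question at issue is precisely whether any non-symbolic system could implement this kind of compositionality):

```python
# Two atomic events
drank_beer = False
ate_pizza = True

# "I did not (drink beer or eat pizza)": negation takes wide scope over the
# disjunction, so it is true only if NEITHER event happened.
# De Morgan: not (A or B) == (not A) and (not B)
neg_disj = not (drank_beer or ate_pizza)

# "I did not (drink beer and eat pizza)": true unless BOTH events happened.
# De Morgan: not (A and B) == (not A) or (not B)
neg_conj = not (drank_beer and ate_pizza)

print(neg_disj, neg_conj)  # False True: I did eat pizza, but not both things
```

The output differs because the operators compose: negation applied to a disjunction yields a different truth condition from negation applied to a conjunction, exactly the behaviour Daniele says a Barsalou-style perceptual-symbol system cannot reproduce.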

Did this capacity evolve from motor skills? probably yes, it must have.
Is this capacity inborn? yes
can it be explained in terms of embodied cognition? no.

hope this helps smile emoticon

Joe said...

"C'm on guys I'm sure you know how to define it way more than I do"

That's actually a darned fine account. As a pseudo-aside, Kripke has recently proposed that if you accept the notion of effective calculability as a species of mathematical deduction (not unreasonable, given that all known algorithms, both human and machine, are first order), then the informal notion of a calculable procedure can be translated into first-order logic and the Church-Turing thesis is effectively proved, via Gödel's completeness theorem. I don't even remotely understand Jim's position, so I can't comment. It does seem to me, however, that when Jim makes existential assertions such as "[Representations] are figments of misguided supposition" that he's gone more empirically AWOL than his opponents.

Jim Hamlyn said...

Thomas,
I wrote: "a mind is not a measurable thing" to which you responded:
\\Sure it is, cognitive psychology measures aspects of the mind all the time.//

That is not true.
1) Mind is not a thing.
2) Mind has no measurable properties.
3) Mind has no measurable magnitudes.
4) In order to measure something you first need to establish a standard. Check out the work of Chang (2009) or the more recent research of Liz Irvine or Eric Schwitzgebel for clear elucidations of the problems of mind measurement.

\\Representations and the mental computations performed on them are one of the key resources used to understand these measurements.//

They are not measurements and it is misleading to suggest that they are. First person accounts and fMRI scans etc. are not measurements of mind.

\\We know much about the sorts of mental representations there are through experiments establishing the cognitive skills we are capable of and the sorts of information the mind (and so brain) must be able to deal in to realise these skills.//

Our skills are evaluated in public, not in the brain. You do not realise skills in the brain, you realise them where they can be evaluated, measured, challenged and refined: in public.

\\We also are starting to learn a lot about the sorts of neural representations there are//

No, theorists are making a lot of unfounded suppositions based upon a lack of clarity about the differences between ordinary representations and neuroreps.

\\The best evidence is the mountains of psychological evidence as to the types of representations found in the mind and brain//

There is not a single representation that has ever been found in the mind. The mere idea is preposterous, Thomas. The mind is not something we can go representation hunting in. All your claim amounts to is an assertion that scientists have found patterns of activity in the brain that they have decided to call representations.

Jim Hamlyn said...

\\...neural representations are patterns of neural activity that transmit information between brain regions which perform computation on them (or something like this...//
We can choose to call regions of activity whatever we like, but this illuminates nothing and provides no evidence that computation is performed in the brain. Is this a technical usage of computation, by the way? Because the brain clearly isn't a computer.

\\"Although a neural code is a system of rules and mechanisms, a representation is a message that uses these rules to carry information, and thereby it has meaning and performs a function."//
Do you really buy this junk, Thomas? Meaning is never a property of things, it is a way of using things as if they had properties that they do not possess. That is why we need to follow socially negotiated rules to communicate via symbols. Can you explain to me whether you take meaning to be a property of things and if so can we measure it?

\\Where else do these acts of logic come from if not some corresponding neural operations//
That is a good question that isn't addressed by supposing that our most sophisticated techniques and tools or some virtual equivalent are the answer. They will be the _means_ to find the answer but they will not be the answer. Locke made that mistake — we don't need to.

\\If we deny some corresponding neural operation what are we left with.//
Nobody is denying that the explanation we are looking for is a causal one, but what we need to ensure is that we do not impute agency to any of the processes concerned at any point whatsoever other than that of the perceiver as a whole. We are not agents full of agents. If we are not extremely careful with the explanatory tools we use we are extremely likely to undermine the whole operation. Currently we are rummaging around in the most complex organic structure known to us with some of the most inappropriate explanatory methods and worst of all, most philosophers see no problem.

I am largely in agreement with your comments concerning cultural evolution. Just to clarify, I reduce all cultural evolution to the accumulation of new ways and means to exploit the world through learned repeatable actions.

Jim Hamlyn said...

This is what I believe:
Mind is inhibited representing.
Mind is our skills as representers.
Mind is our capacities to represent.
Mind is our dispositions to communicate.
Mind is learned abilities in social exchange.
Mind is internalised communicative actions.
Mind supervenes upon our numerous abilities to offer stand-ins.
Mind is our preparedness to extend efficacious tokens to our conspecifics.
Mind is a behaviourally incipient or abbreviated set of actions of representation.
Mind is comprised of a range of socially negotiated capacities to successfully deploy memes.
Mind is our competences in the exploitation of genetically equivalent creatures through techniques of substitution.
Mind is a readiness to substitute one thing for another in ways that will be accepted by other similarly endowed creatures.
Mind is a repertoire of skills in the production of artefacts and behaviours that have advantageous causal influence upon our relationships with biologically equivalent creatures embedded in the same culture.
Mind is composed of triggered (but not enacted because inhibited) competences in the broadcast of socially viable nonverbal — and in the human case verbal — devices.
Mind is a range of genetically inherited and culturally acquired procedural facilities in the mediation of relationships between genetically similar organisms through the offering and acceptance of X's for X's or, in more sophisticated forms, X's for Y's.

Take your pick — I stand by them all (more or less).

Peter said...

Scientists have indeed found patterns of neural activation, and have called them representations, Jim. The problem is what? You don't like that usage of representation? You are the dictionary god, and any departure from your preferred usages is therefore preposterous?

Jim Hamlyn said...

Scientists can use whatever language they like — in just the same way that you or anyone else can talk of Scotsmen in whatever way you like. But when scientists are insufficiently precise such that they make incorrect generalisations, or extrapolate from the evidence based upon inappropriate models, or confuse the model for the thing modelled etc., it should be the job of philosophers to point out the flaws in their reasoning. When Monty Hall (an English journalist) pulled a boat up a mountain with his newfound Scottish friends — all of them, including Monty, wearing kilts — he was surprised to find that none of his friends had been so stupid as to suffer the agonising chafing caused by the avoidance of wearing underwear. True Scotsmen wear whatever they wish under their kilts, as your previous comment seems to confirm you appreciate.
The reason philosophers frequently define their terms is because they know that sharp tools cut deep and true. Sloppy craftsmanship doesn't help in the advancement of the understanding of our most complex organ and its indivisible incorporation into the autonomic behaviours and purposeful actions of the organism in which it is embodied.

Jim Hamlyn said...

Peter, I'm not trying to be a language nazi. I'm just pointing out that a clear conceptualisation and disambiguation of representation is absolutely fundamental to the examination and elucidation of mindedness. One of our foundation stones in the investigation of mindedness should be the acknowledgement that purposeful agency is the exclusive province of the organism as a whole. Once we establish this foundation stone we are absolutely obliged to be unwaveringly clear that there are no agents within the agent. Any suggestion that agency exists within the organism undermines the whole project. My strict, stringent, uncompromising, dogmatic or whatever you want to call it determination to urge for clarity and to criticise contradiction is born of this fundamental belief.

Peter said...

Jim, you have written many words, but they don't contain a substantive point.

Things like multiple personality disorder are evidence that a subsection of an organism can have agency.

Jim Hamlyn said...

So you think that there are agents and forms of agency within agents then Peter?

When I say that I am in half a mind to do something, does this mean that I am 25% less involved than when I say that I am in two minds about the same thing?

Surely it should be obvious that from my perspective a multiple personality disorder is a conflict of dispositions to represent. This does not give credence to the supposition that sufferers of such conditions contain multiple personalities vying for expression.

Jim Hamlyn said...

Peter, from the article you cite:

\\For our purposes, an agent is an entity capable of autonomous, intelligent, goal-directed behaviour.//

Exactly right.

\\People are agents, clearly. So are corporations and governments, insofar as they pursue goals (like 'maximizing shareholder value' or 'defending territory'). //

Yes.

\\Even a plant can be said to have agency, since it 'wants' to grow toward the sun.//

Exactly wrong. Plants cannot produce representations of future states of affairs and are therefore disqualified from consideration as agents.

Bruce said...

Being too wedded to words like "agent" and "representation" may be more a vice than a virtue.

Jim Hamlyn said...

"Perceiver" and "token" will do as excellent substitutes Bruce.

Jim Hamlyn said...

Mind is a procession of causally triggered dispositions to represent, enabled by an ontogenetic and culturally informed repertoire of skills embodied by an organism as a whole and in large part influenced by the constitution of the brain.

Bruce said...

"Mind is" is not a great way to approach such a difficult problem as the self and its brain. But dispositions to represent would empower an organism to not only problem solve but to be able to solve virtual problems and to share and gain feedback from others. This "minding" can be regarded as emergent from the brain thus it is unlikely that mere neurological descriptions are adequate to encompass its complexity.

Jim Hamlyn said...

How would the nonverbal capacity to represent one thing with another of the same kind enable a creature to solve problems, Bruce? I agree that it does, but I suspect that you are extrapolating to skills that do not follow from this nonverbal skill, namely language.

Bruce said...

All life is problem solving, Jim.

Jim Hamlyn said...

All life can be represented as problem solving, Bruce. That doesn't mean that all life IS problem solving.

Bruce said...

The integrity of the single cell is challenged perpetually.

Jim Hamlyn said...

To solve a problem you first need to know what the problem is. Only language users can do this. The very notion of a problem is a linguistic one. It doesn't convert, and your being wedded to it is more of a vice than a virtue. ;)

Bruce said...

"To solve a problem you first need to know what the problem is. " - this is ridiculous.

Jim Hamlyn said...

When a simple organism adapts to an obstacle it is not solving a problem. It is changing its morphology or behaviour by natural selection. There is no problem solving going on.

When one strain of bacteria survives whilst another strain dies the surviving strain hasn't solved a problem. When I live and another differently configured human dies because their biological constitution is less robust I haven't solved a problem. It is your theory that is ridiculous, Bruce.

Bruce said...

I think a touch of inductivism is polluting this discussion - in contrast I see activity all the way down to the first self-producing organisms. Trial and error.

Jim Hamlyn said...

Only representation users can TRY, Bruce, because only representation users can be said to have goals. Your theory doesn't stand up to scrutiny.

Furthermore it seems to me that you make no distinction between finding and inventing, between the discovery of exploitable resources and actions and the deliberate variation and recombination of actions in order to invent exploitable techniques and contrivances.

Are the folks at NASA solving the problem of travelling to Mars, Bruce? Are we solving the question of what consciousness is here on this forum? NASA are certainly trying to solve the problem of how to travel to Mars, just as we are trying to solve problems here (mostly of misunderstanding). But if you accidentally found yourself on Mars by walking through your front door whilst whistling Auld Lang Syne, does that mean that you were trying to solve the problem of travelling to Mars? If nobody was trying to get to Mars and you found yourself there by walking through your front door whilst whistling Auld Lang Syne, would that mean that you had solved the problem of travelling to Mars?

So can you clarify how my assertion is preposterous, because to me the reverse seems to be the case?

Peter said...

Jim,

The authors give a broad definition of agency in terms of a broad definition of goal. They go on to explain why plants are agents within their definitions. What's the problem? Yet again you seem to be insisting that you alone know how to use words correctly.

There is a perfectly reasonable sense of problem, in which an organism can be said to face the problems of feeding and reproduction. Usual coda ....

Jim Hamlyn said...

Wrong. I already made it clear that a problem cannot be known to a creature incapable of representing a future state of affairs. Simple organisms just vary -- some survive and some don't. That is not solving problems.

Peter said...

@ Jim

You're confusing trying as opposed to succeeding with consciously trying as opposed to unconsciously trying.

Jim Hamlyn said...

You can't unconsciously try to do something, Peter. The mere idea is nonsensical.

Thomas said...

Really. How exactly does a cat publicly represent the future state of affairs of catching the mouse to justify your notion of it trying to catch the mouse? Or is it not really (according to language god Jim, thanks Peter for that) trying? Should I look forward to some entirely idiosyncratic notion of trying now. Exactly how much of ordinary (not to mention scientific) language do you wish to warp while somehow claiming your bizarre conceptions are just the obvious meanings of terms?

Jim Hamlyn said...

We can try and fail and we can try and succeed but we cannot unconsciously try and nor can we unconsciously fail. We can discover that we have found something despite not trying but the discovery is not a success if nothing has been invested. If you inherit a fortune it won't be because you were unconsciously trying and nor will anybody think you are a success as a consequence.

Peter said...

Sure you can try unconsciously: attempt and fail at a goal-directed action. A moth that circles a light bulb is trying to navigate by the sun.

Jim Hamlyn said...

Do cats TRY to catch mice? Did I say that cats "try" to catch mice? If they do try to catch mice then they would need to be able to anticipate a future state of affairs that mediates their behaviour but I haven't claimed that cats try to do anything. They might. For all I know they are causally influenced to pounce on small moving objects without trying.
But to answer the irritably put question more fully: why is it not obvious to you, Thomas, that the capacity to "perform" the action of catching a mouse is sufficient to qualify as a disposition to represent? Is a display behaviour not a form of representation? Do cats never communicate? Are they not intelligent?
And what is your concept of trying Thomas? Are we always trying? Are you trying to be condescending or is it just part of your nature?

Jim Hamlyn said...

\\A moth that circles a light bulb is trying to navigate by the sun.//
No it isn't. Moths do not "navigate". Navigation requires maps, knowledge of maps or a capacity to produce a map or at least a capacity to guide someone across the relevant terrain, building etc. Someone lost (like a moth) is not doing navigation.

Ian said...

Do you think, Jim (and you might be right, but you would need to give reasons why), that the notion of overdetermination in respect of actions is dubious? I am sure the influence of Freud has encouraged the thought that actions ostensibly serving one purpose can serve unconscious purposes, e.g. unconsciously trying to emulate your mother or father in some way, or unconsciously trying to live up to demands made upon you by your parents etc. You know the kind of story. Is such talk really nonsensical?

Peter said...

Moreover, no cat has ever hunted a mouse, because hunting is an activity performed by humans on horseback, wearing red jackets and tooting horns.

Thomas said...

//Why is it not obvious to you Thomas that the capacity to "perform" the action of catching a mouse is sufficient to qualify as a disposition to represent?//
So you think the capacity of the amoeba to perform the action of following a nutrient gradient is "sufficient to qualify as a disposition to represent"?

Jim Hamlyn said...

Thomas,
That is not a million miles away from what Bruce would have us believe. He argues that all life has knowledge. I disagree. I think there is a sharp cutoff between the autonomic behaviours of amoebae (or digestion, or iris contractions etc.) and the learned and purposeful behaviours (actions) of agents. I don't think an amoeba can learn, although I'm pretty sure simple organisms can develop dispositions to respond (vide Eric Kandel's work with sea snails), but I do not think they can develop dispositions to act.

So how do I distinguish between dispositions to respond and dispositions to act? For a candidate to qualify as a perceiver it must be capable of producing representations of its causal encounters that would be accepted as viable by other members of the same species. Ants and bees clearly qualify, so it is not difficult to suppose that other more complex organisms might be capable of producing representations of their causal encounters. We observe a huge amount of mimicry in nature especially among young animals and I consider this to be very strong evidence of capacities to represent.

Jim Hamlyn said...

You might find this useful; a snippet from Gadamer on the subject of animal play:

\\But does this mean that it is only in human culture that the act of play is objectified with the specificity of "intended" behaviour? Play and seriousness seem to be interwoven in a still deeper sense. It is immediately apparent that any form of serious activity is shadowed by the possibility of playful behaviour. “Acting as if" seems a particular possibility wherever the activity in question is not simply a case of instinctual behaviour, but one that ‘intends’ something. This ‘as if’ modification seems animated by a touch of freedom, especially when animals playfully pretend to attack, to start back in fear, to bite, and so on. And what is the significance of those gestures of submission that can be considered the conclusive end of contests between animals? Here too, in all probability, it is a matter of observing the rules of the game. It is a remarkable fact that no victorious animal will actually continue the attack once the gesture of submission has been made. The execution of the action is here replaced by a symbolic [I would question whether this is genuinely symbolic] one. How does this fit in with the claim that in the animal world, all behaviour obeys instinctual imperatives, while in the case of man, everything follows from a freely made decision?
If we wish to avoid the interpretive framework of the dogmatic Cartesian philosophy of self-consciousness, it seems to me methodologically advisable to seek out just such transitional phenomena between human and animal life.// Hans-Georg Gadamer (The Play of Art, in: The Relevance of the Beautiful and Other Essays p.125).

You might also know of Roger Sperry's work in which he turned the eyes of tadpoles upside down. The frogs that developed would always project their tongues in the opposite direction from the prey, and this responsiveness was found to be fixed. Humans, on the other hand, wearing inverting glasses do learn to adjust. I'm not sure about cats. I suspect they would learn to adjust too.

So, where does this leave us? I would argue that my Brookian theory of dispositions to represent is far more plausible than the alternative conjecture, namely that the brain has capacities to generate pseudo-representations ex nihilo.

Jim Hamlyn said...

Ian,

I don't believe it is coherent to speak of unconscious action, just as I don't think it is coherent to speak of unconscious perception. Unconscious sensory responsiveness, yes, obviously, but not perception. Actions are perceptually mediated, and to say that actions (intentional behaviours) can be conducted without intention is simply nonsensical, yes.

Nonetheless, I do think organisms can and do develop dispositions to act whilst busy doing other things or during sleep etc. For this reason it is wholly within the scope of my theorisation that numerous disorders can and do arise as a consequence of the vulnerability of our system of dispositions to represent.

Jim Hamlyn said...

Peter, pack animals do hunt yes. Cats? I'm not sure. I suspect that they do and I certainly wouldn't try to correct someone for claiming that they do. Do viruses? I don't think so. Do ants? Maybe.

John S said...

Jim Hamlyn, I do think that actions that we routinely perform can become habitual and at times be performed unconsciously. Locking a door comes to mind here. What do you think about such cases?

Jim Hamlyn said...

I have been thinking and writing about exactly that quite a lot in the last week or so, John. I wouldn't call it unconscious but I know what you mean. When I drive a long distance and find that I have been "on autopilot," as you might say (or in a "flow" state as Csikszentmihalyi puts it), it is tempting to say that I was unconscious of my actions but that is a retrospective assessment. If at any moment during my drive, someone were to ask me what I was doing in the car, I wouldn't be bewildered to find myself in the driving seat of a speeding lump of metal and plastic. I would be ready and able to explain.

Consciousness is an ongoing preparedness but it doesn't make sense to say that this preparedness is unconscious. It is not triggered but it is immediately triggerable.

But yes, we can learn skills such that they become automatic and allow us to concentrate on other more demanding skills instead. Perception is one of those automatic skills that is learned, or at least massively refined, very early in life and is hugely influenced by culture even before we begin to speak.

Bruce said...

I agree that virtuality gives those beings that are so equipped a great leap in their potential repertoires. Gombrich's Popper-inspired point that making comes before matching is not, however, so easily superseded by Brook's elegant division. I alluded to inductivism. The point I make, somewhat as a criticisable dogma, is that knowledge grows only by refutation (followed by fresh conjecture), not by accumulation of existing knowledge. To recognise a representation is the correction of an anticipation, conscious or not. Accumulation only stacks dogma. Yes, one drifts into anthropomorphism when seeking homologies with lower life. I understand the quest for linguistic purity, but I would also caution against the opposite: it is easy to overrate the persistence of human conscious states. Matching does not come before making. All perception, and indeed all knowledge growth, is modified programs. Knowledge, in the view I currently accept, emerges with life; it could even be said that life is defined by knowledge. This knowledge consists of varyingly robust complexes of information that have adaptive functions, at least in potential, for the self-regulating and self-producing entity. Mutation (making) precedes selection, which very often in lower stages of life is elimination of the whole organism. The evolution of representational or descriptive capacities is a huge step in conjectural capacity which allows for the virtual elimination of error.

Jim Hamlyn said...

Shoals of fish match the behaviour of the fish in front of them. Does this mean that fish are makers beforehand?

Bruce said...

Yes, exactly.

Jim Hamlyn said...

But what are they making beforehand?

Bruce said...

“The problem 'Which comes first, the hypothesis (H) or the observation (O)?' is soluble; as is the problem, 'Which comes first, the hen (H) or the egg (O)?'. The reply to the latter is, 'An earlier kind of egg'; to the former, 'An earlier kind of hypothesis'. It is quite true that any particular hypothesis we choose will have been preceded by observations - the observations, for example, which it is designed to explain. But these observations, in their turn, presupposed the adoption of a frame of reference: a frame of expectations: a frame of theories. If they were significant, if they created a need for explanations and thus gave rise to the invention of a hypothesis, it was because they could not be explained within the old theoretical framework, the old horizon of expectations. There is no danger here of infinite regress. Going back to more and more primitive theories and myths we shall in the end find unconscious, inborn expectations.”
Karl Popper, Conjectures and Refutations (1963), p. 47.

Bruce said...

One can retrieve value in a diverse range of thinkers; however, a problem I have with the schools of phenomenology, for instance, is encapsulated in the title of Maurice Merleau-Ponty's book "The Primacy of Perception". I do not understand how perception can be described as primary. Perhaps the faculty of vision is so efficient that it has led both phenomenologists and Lockeans astray, for although it seems that sensations flow through our eyes like water into buckets, it is more correct to say that the retinal cells act as flickering searchlights for mental preconceptions. The self is not a blank slate and cannot become one. It guesses its way through the world, and these guesses are tested, modified and retested, continually, through experience. This does not imply that some sort of tacit knowledge or justified knowledge or certain knowledge is primary, but rather that we have propensities, or archetypes if one likes, positioning our interactions with the world. A major mistake in epistemology is to conflate feelings of belief or certainty with truth, or to treat knowledge only as personal, e.g. "I know", "I believe", "I see".

Ian said...

Sorry to be obtuse, Jim, but I am still unclear whether you would rule out as unintelligible some locution along the lines that an intention manifested in a given action, whether he realised it or not, was X,Y, whatever.

Jim Hamlyn said...

Thanks Ian. Were you being obtuse? Did you do it purposefully, or could you merely be accused of being obtuse because you seemed to be obtuse when in fact you were trying to be pointed? If you were trying to be obtuse, and several of us agreed that your behaviour was consistent with the concept of obtuseness, then I would be justified in saying that you were indeed obtuse. But if you were not trying to be obtuse, yet you agreed that your behaviour was consistent with the concept of obtuseness, then I would say that you were unintentionally obtuse.
Does that help? Because I am attempting to be very pointed.

Thomas said...

"Does the cat TRY to catch the mouse? We can only confirm that it did if we can trigger a public representation of its intended goal. "
Why only public, personal-level representations? I see no reason to exclude the plethora of private, causally realised, sub-personal level representations and computations that I think clearly underlie such processes.
As a very vague sketch of these:
What if we find a neural representation (a part of its brain activity) in the cat that tracks this goal, e.g. by isolating certain kinds of neural activity that only occur when the cat is confronted with stimuli suggestive of the nearby presence of a mouse (e.g. a visual sighting or more indirect evidence such as certain sounds)? What if we go on to find that all sorts of neurocomputational processes are entrained by this initial neural representation that are efficacious in the cat catching the mouse? That the computations performed on incoming stimuli are now of the sorts that might indicate the presence of a mouse. That other neural representations arise, shaped by all sorts of complex neurocomputation, to track the likely position, over time, of this possible mouse. That these neural representations of the potential location of the mouse, combined with other neural representations of the cat's own position, and either representations or, more probably, ingrained neurocomputational 'knowledge' of the possible ways the cat may alter its own position over time, are combined to yield neural representations of potential actions of the cat to bring these predicted future positions into collision. That this neural representation of an efficacious action is then used to enact the action which leads to catching the mouse.

Thomas said...

This seems to be at least a valid alternative story to tell about what constitutes intentional, goal-directed action. What invalidates it? Jim seems to admit his account still leaves him entirely speculating on whether the cat actually 'tries' in what he takes as the only relevant sense. Sketches like the above at least hint at a useful scientific research programme. The fruitfulness of which I think is evidenced by the progress in cognitive psychology and neuroscience.
See Andy Clark's http://www2.econ.iastate.edu/.../WhateverNext... for a recent, far more detailed and properly scientifically informed, sketch of intention, perception and action, utilising the notions of neural representation and neurocomputation.

Of course Jim will likely dismiss this all off-hand as manifestly false talk of 'representations' and 'computation' in the brain. But I think I was generally clear to distinguish these sub-personal level notions from personal level ones (with the difficulty, as again noted below, of discussing sub-personal level things in my personal level terms). I don't think the cat tries to catch the mouse in the same way I am here trying to refute Jim's argument. I think there is a whole spectrum (with many discontinuities along the way) from candles 'trying' to stay alight (see http://www.lehigh.edu/~mhb0/physicalemergence.pdf for candles as primitive self-maintaining far-from-equilibrium systems), to unicellular organisms 'trying' to follow nutrient gradients, to a moth 'trying' to find the flame, to a cat 'trying' to catch the mouse, to me 'trying' to breathe, to me 'trying' to express these views. I struggle to see how Jim's "dispositions to publicly represent" give much traction in understanding this spectrum and its many discontinuities, though I happily acknowledge a place here for that concept. But I think sub-personal analyses of neural representations/computation are also a great tool here, coupled with various personal level analyses such as Jim's (though not limited to his). I, like Sellars, am open to a synoptic view that unites the manifest and scientific images, the personal and the sub-personal, and provides both unification and a clear explanation of many discontinuities. I completely fail to see how dismissing out of hand the utility of such sub-personal notions as neural representation gets us anywhere near a complete explanation.

Thomas said...

Adding a couple of footnotes omitted for the sake of brevity and clarity in the story above:
1) Better would be a far more complex, dynamic plethora of neural representations and computations. It isn't a straight perceive-conceive-act path but rather a continual, situated cycle (even ignoring the larger-scale organismic, biological and in some cases social evolution of these representations and computations).
2) Of course, throughout I am using my concepts; I don't impute them to the cat. For instance, in saying "presence of a mouse" I doubt cats, or their neural processing, care about the biological species of prey, just that it is prey not predator, and the quality of the prey. But of course this is to further dress the cat's, or its brain's, activities in my own conceptual garb. Something we must do to understand these processes in any way, but of course exactly what concepts the cat has are determined by the content of its neural and public representations, not the labels I apply to communicate them to fellow humans.

Thomas said...

//No, theorists are making a lot of unfounded suppositions based upon a lack of clarity about the differences between ordinary representations and neuroreps.//
Name one theorist misled by these differences apart from you? Answer my repeated challenges to provide evidence of this lack of clarity in the literature. Answer John Ragin's challenge to show where Danielle's reasoning that used the term 'representation' went astray.
You claim "I'm not trying to be a language nazi. I'm just pointing out that a clear conceptualisation and disambiguation of representation is absolutely fundamental to the examination and elucidation of mindedness" and that "Scientists can use whatever language they like".
Yet all you do is jump on scientists for not using terms like 'representation' in your chosen way while completely failing to disambiguate the ways in which they actually use these terms or provide any evidence it is them not you that fails in your conceptualisation and disambiguation of the different uses of representation.
For instance when you said:
//All your claim amounts to is an assertion that scientists have found patterns of activity in the brain that they have decided to call representations.//
Despite the fact I had explicitly said neural representations are defined as "_patterns of neural activity_ that transmit information between brain regions which perform computation on them" (emphasis added). Yes, that's all my claim amounted to. That was my point, that is what they mean by representation and they seem to do pretty well with it. I was quite clear to disambiguate my use of the terms as I think the scientists are (though often implicitly through knowing their audience, certainly a naive outsider could be misled if they didn't actually read the works to uncover the concepts). I saw no counter to my claims, no evidence that this conceptualisation was wrong, just a bald assertion that it was preposterous because it didn't agree with your chosen terms. How is that not empty language policing?

Thomas said...

//
Thomas: "Although a neural code is a system of rules and mechanisms, a representation is a message that uses these rules to carry information, and thereby it has meaning and performs a function."
Do you really buy this junk, Thomas? Meaning is never a property of things, it is a way of using things as if they had properties that they do not possess. That is why we need to follow socially negotiated rules to communicate via symbols.//
Yes, meaning as a "way of using things as if they had properties that they do not possess" is basically what the researchers are talking about. That is why they claim that the electrochemical signals between parts of the brain have meaning. Because the brain uses these signals as if they had information which the signals themselves don't possess. The parts of the brain follow evolutionarily/developmentally negotiated rules (the neural code) in order to use the electrochemical signals to carry information when this information isn't a direct property of the signals. No these aren't the socially negotiated rules of human language, they are different rules, differently negotiated, hence the exact notion of 'meaning' is distinct. Still there seem to be pretty clear similarities in these two accounts that I think license the use of the terms 'representation' and 'meaning' in this context.
//Can you explain to me whether you take meaning to be a property of things and if so can we measure it?//
This meaning isn't an inherent property of these things, but rather of the system of sender-message-receiver and the neural code shared (through developmental shaping) by the sending/receiving brain areas. Identifying the meaning of these signals (within the context of the systems that enable them) is still highly useful. Just like we have dictionaries which identify the meanings of words, within our system. Or are dictionaries fundamentally misguided to you because they impute meaning to words?

I see little evidence of uniform misunderstanding among scientists as to the meanings of their terms (but of course many subtle differences among them). Certainly you haven't offered any evidence of this when I have repeatedly asked. It seems to be you who is unclear on the meaning of these terms yet you still wish to police them.
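The sender-message-receiver picture invoked above can be made concrete with a toy sketch (Python, purely illustrative; the codebook and categories are invented for the example and are not drawn from any neuroscience): the signal passed between sender and receiver is just a bare number, and it carries information only relative to a code the two ends share.

```python
# Toy sender/receiver with a shared code. The signal itself is a bare
# integer; "meaning" exists only relative to the shared codebook.

code = {"prey": 0, "predator": 1}         # hypothetical shared codebook
decode = {v: k for k, v in code.items()}  # inverse mapping for the receiver

def send(category: str) -> int:
    """Encode a category into a bare signal."""
    return code[category]

def receive(signal: int) -> str:
    """Decode the bare signal using the shared code."""
    return decode[signal]

signal = send("prey")
assert signal == 0                 # the signal has no intrinsic meaning
assert receive(signal) == "prey"   # meaning is recovered only via the code
```

Whether this licenses calling the signal a 'representation' is, of course, exactly what the thread is disputing; the sketch only shows what the sender-message-receiver account claims, not that brains satisfy it.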

Jim Hamlyn said...

Ian, it is true that our capacity to envisage a goal may be incomplete, sketchy, hazy, uncertain or vague etc., but what we obviously cannot have is a capacity to produce an accurate representation of a goal that turns out not to be our goal, or to be only a vague representation of our goal.

John R said...

Damnit Jim! I am in the grocery store and it suddenly came to me that (due to your comments in this thread) I think I see it! I see what you mean. I also think it is probably very much in the right direction. I see possibility of adjustments here and there. But wow, what an interesting vision! Congrats.

Ian said...

Why on earth would a capacity to produce an accurate representation of a goal guarantee such accuracy and precision every time a representation was proffered? I have a capacity to play great golf; regrettably, my golf too often falls short of greatness.

Peter said...

Jim,
\\Do cats TRY to catch mice? Did I say that cats "try" to catch mice? If they do try to catch mice then they would need to be able to anticipate a future state of affairs that mediates their behaviour but I haven't claimed that cats try to do anything.//

You were arguing that cats don't try to catch mice, because trying involves picturing a future state of affairs, an argument that hinges on using the word "try" in a certain way.

I was arguing that they do try, using my own definition of "try". My argument is surely as good as yours.

Jim Hamlyn said...

You misunderstand me, Ian. I can have a capacity to produce an accurate representation of the cup of tea I want by saying "I want a cup of tea" when I want a cup of tea. But if I say "I want a cup of tea" when in fact I want a cup of coffee, then the representation does not guarantee accuracy. A conflict is possible because we are capable of inadvertently saying one thing and meaning another (usually when we are not concentrating very hard). Sometimes when we call the name of a close family member the wrong disposition to represent gets triggered. Nothing unusual about that.

Peter, Your argument is certainly as good as mine if it has the same explanatory power and scope, yes.
I suspect that many people would agree with me though, that you need a reason (a goal) in order to try. Trying without a reason is literally pointless (lacking in a goal).

John R,
Maybe you could help me out by clarifying whether you think that a logic gate does logic operations? Your encouragement is great, but I hope I don't have to wait until Pearl Harbour.

Jim Hamlyn said...

Thomas writes:
\\Yes, meaning as a "way of using things as if they had properties that they do not possess" is basically what the researchers are talking about.//

And this is precisely, explicitly and fundamentally where they go astray, right at the very foundations of their thinking. I don't need to provide any further evidence than arguments for the severe incoherence of this view. Yours isn't a technical usage of "meaning." This is precisely the strategy by which all symbols work. Using things as if they had properties that they do not possess is one of the most sophisticated feats of intentional action ever to have taken place upon this planet. It categorically did not emerge twice; once inside our heads and then once again outside of them. That is the profound underlying fallacy of Danielle's argument and I will not accept it as in any way coherent or convincing.

You treat the capacity to exchange an X for a Y as if it were an evolutionary triviality that need not require already intelligent agents to pull it off. You are wrong. Profoundly wrong. Symbolisation is the exclusive province of agents because it requires knowledge, skills and prodigious capacities to exchange various things of various sorts for various things of various other sorts. These are prerequisites of symbol use that cannot be flouted.

John R said...

Jim, I don't know how my answer could be of help, and I can't put words together as well as you and the others have done in this thread. But, since you ask, I might say that a mixture of metal oxide and sand (CMOS) does not do or perform or operate anything at all - no more than a round, flat pebble does or performs skipping operations after being thrown out across the surface of a pond.

But there seems to be a magical mystery here. As soon as we consider the physical mixture as a "logic gate", we may unwittingly embody it into our extended, goal-oriented personhood. After being personified, it may be appropriate to go ahead and say that a "logic gate" performs "logic operations".

For fun, here is an excerpt I found from Wikipedia: <>
Was Peirce wrong or right? Does Andy Clark's pencil participate in his mindful, purposeful writing operations? When he puts the pencil back in the drawer, does the "pencil" turn back into a mindless, purposeless physical mixture of wood and graphite? Pretty weird stuff. Please ask me an easier question next time. :)
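John R's point about the "logic gate" can be put in code (a minimal sketch; the voltages and threshold are invented for illustration): described physically, the device is just a voltage-in, voltage-out function, and the "logic operation" appears only once we impose an interpretation mapping voltages onto truth values.

```python
# Physical description: output voltage as a function of two input
# voltages. Nothing here mentions truth, logic or NAND.
def output_voltage(v_a: float, v_b: float) -> float:
    threshold = 2.5  # hypothetical switching threshold (volts)
    # When both inputs are high the output is pulled low; otherwise high.
    return 0.0 if (v_a > threshold and v_b > threshold) else 5.0

# Interpretation: WE map voltages onto truth values...
def as_bit(v: float) -> bool:
    return v > 2.5

# ...and only under that mapping does the device "perform" NAND.
def nand(a: bool, b: bool) -> bool:
    v = output_voltage(5.0 if a else 0.0, 5.0 if b else 0.0)
    return as_bit(v)

assert nand(True, True) is False
assert nand(True, False) is True
```

The physics is the same whether or not anyone reads the output as a truth value, which is just John R's point about personification; whether that settles who or what "performs" the operation is left to the thread.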

Jim Hamlyn said...

I won't ask you easy questions John because I think you are one of the most nimble minded people in this group and your willingness to entertain ideas and extrapolate from them is remarkable. I'm sure I am not the only person here to have noticed this.

Your analogy about skipping operations is inspired. Can a cluster of neurons skip? Obviously not. So why do people think that clusters of neurons can calculate or do logic operations or communicate with one another?

Skipping, calculating, communicating and performing logic operations are all things that we have to TRY to do. They do not happen without trying. Clusters of neurons cannot TRY to do anything.

The reason logic operations are impossible for sub-personal clusters of cells is that sub-personal clusters of cells cannot aim their activity towards a currently empty goal in an abstract future. Brains cannot conceive of absence, which is the great undoing of representationalism. The concept of absence is impossible without the capacity to expect presence. Clusters of brain cells cannot expect presence because they cannot represent presence as a physical entity. Only organisms can do that.

Peter said...

Jim, there's a thing called temporal logic, which has a bit to do with the future; apart from that, logic has nothing to do with the future. Whatever you are denoting by "logic" isn't the logic of modern logicians, computer scientists, cognitive scientists, etc. They believe that the most basic logic operations can be performed by a few atoms, let alone a few neurones. Of course, you might then be talking about something else entirely, rather than disagreeing with them... who can tell?

Jim, I have already defined trying in terms of goals. I am not detaching trying from goals, I am detaching it from the ability to represent future states.

Jim Hamlyn said...

Peter, could you clarify how a goal could be other than a future state?

Peter said...

To perform logic operations towards some envisaged goal requires the ability to envisage goals, right enough, but "towards some envisaged goal" isn't part of the definition of "logic operation".

Jim Hamlyn said...

An operation is a procedure, Peter, which means that it is temporal. Operations have results or outcomes. They either eventuate in an expected result, outcome or goal, or else they do not. A result can only thwart an expectation to the extent that the expectation predicts (i.e. is capable of representing) a result.

You cannot be disappointed if you didn't expect a certain state of affairs to be the case. Nor can you be surprised by an event if you expect it. Only if something is unexpected can it be surprising. And only an expectation thwarted can be disappointing.

Ian said...

Just a couple of simple-minded questions, Jim. Talk of unconsciously trying is commonplace in psychology, psychiatry and psychoanalysis. Is this way of talking to be explained away, or to be reinterpreted by your analyses demanding that trying must be conscious? And to what extent does your insistence that trying must be conscious turn upon supposing that trying is something we do, itself an action of sorts accompanying the substantive action? Do actions really involve tryings and the object of trying? So does typing involve trying to type and actually typing? Or whistling, trying to whistle and whistling?

John S said...

I don't understand why some of you are so afraid of admitting the distinction between voluntary conscious actions and programmed responses. After all, all this really means is that some actions are caused by the character of the individual and some are caused by that individual's propensities. This is not a difference in kind but one of degree. I have no problem with the possibility of a computer becoming conscious. My chess program learns from its mistakes and changes its play with the same sequence of moves after a loss. The input from a former loss becomes part of its new programming. Also, after playing against the chess program I begin to learn about its style of play. You can call this its character. When the computer acts in character I have no problem assigning it consciousness, just as I would with a fellow human being. Anyone who disagrees with this is also performing a conscious decision. They are typing on this thread for a reason and not as a mere automated response to external stimulation. This is what free action means.

Peter said...

Jim

Now you are conflating temporal, as in taking time to perform, with temporal, as in purposively directed to a future state.

Jim Hamlyn said...

I'm not conflating them, Peter, I am suggesting that they are intertwined.

Peter said...

What evidence supports the suggestion?

Jim Hamlyn said...

Ian, \\Talk of unconsciously trying is commonplace in psychology, psychiatry, psychoanalysis. Is this way of talking to be explained away or to be reinterpreted by your analyses demanding that trying must be conscious?//

Does the insomniac suffer for want of unconscious trying to sleep? Is the comatose child just not unconsciously trying hard enough to come out of it? Is stress caused by high levels of unconscious trying? And what does this trying achieve? And who should we commend on their consummate inattention?

Must I conform to orthodoxy (or are we actually talking about a technical usage of "trying" here) because my thinking leads to the conclusion that unconscious effort is an oxymoron? Sure, people learn skills without trying but that doesn't mean that there is some unconscious realm where they are actually exerting themselves. It is simply incoherent to say that we can learn to ride a bicycle by not trying.

\\And to what extent does your insistence trying must be conscious turn upon supposing that trying is something we do, itself an action of sorts accompanying the substantive action. Do actions really involve tryings and the object of trying. So does typing involve trying to type and actually typing. Or whistling, trying to whistle and whistling?//

Trying to ride a bike is not simultaneously riding a bike. Climbing a mountain is not simultaneously having climbed a mountain. An attempt is not a concurrent achievement. Achievements enable attempts, but such attempts are not infinitely divisible clusters of striving.

Jim Hamlyn said...

Peter, purposes by their very nature involve ends and means — the ends are the goal and the means are the procedures necessary to achieve the goal. I don't see that I need to provide any evidence to support the obvious.

Thomas said...

Jim: "purposes by their very nature involve ends and means — the ends are the goal and the means are the procedures necessary to achieve the goal. I don't see that I need to provide any evidence to support the obvious."
Perhaps. But what do 'purposes'/'goals'/'ends'/ 'means' have to do with the 'procedures' of formal logic? That connection seems far from obvious and equating(/conflating) these seems deserving of some pretty good evidence.

Jim Hamlyn said...

Thomas, what logic is there in a technique, procedure or operation without an end, goal or outcome? Is it not true that the operations of logic are intent upon an outcome (because driven by agents) for the very reason that the outcome is empty, blank, absent or otherwise unknown at the outset? What possible motive could there be to engage in logic — other than practice or instruction — if the outcome were already known? What further evidence do you need that this is the case?

Jim Hamlyn said...

I'm aware that you have not addressed, responded to, or acknowledged my comment regarding your views on systems of meaning in the brain. It is a profoundly important issue upon which we clearly have a lot resting. I hope your silence is not the result of dismissiveness. I take your challenges extremely seriously. Nobody here has invested more in engaging with me than you have and I appreciate your input enormously.

\\Why only public, personal-level representations? I see no reason to exclude the plethora of private, causally realised, sub-personal level representations and computations that I think clearly underlie such processes.//

Because only personal level behaviours are the actions of agents — that is what makes them "personal level" behaviours.

You ask what invalidates your sketch of a cat hunting a mouse. What invalidates it is the tapestry of personal level terms intended to describe a sub-personal process. To be blunt, it is so poorly crafted that any correspondence to what actually happens simply slips out of the picture as soon as you start. You really might as well be trying to achieve an explanation using pulleys and wires, as Locke did.

Sketches like the one you offer do not "hint at a useful scientific research programme." They pull the wool over the eyes of people who don't know any better, and most worryingly of all: scientists. That is a manifest wrong in my view, and it is the job of philosophers who care about logical coherence, parsimony and clarity to point this out.

You cite Andy Clark as evidence of the progress of neuroscience (even though he is a philosopher). The only progress we see in these theories is in elaborate models that correspond with what brains do, not in what is actually happening in brains. One of the principal reasons that this theoretical work is so popular is because of AI, not because it is unlocking the secrets of the brain.

\\I think there is a whole spectrum (with many discontinuities along the way) from candles 'trying' to stay alight (as primitive self-maintaining far-from-equilibrium systems), to unicellular organisms 'trying' to follow nutrient gradients, to a moth 'trying' to find the flame, to a cat 'trying' to catch the mouse, to me 'trying' to breathe, to me 'trying' to express these views.//

That is a major failing in your reasoning in my view. You are arguing that white can be found along the spectrum somewhere and that science will someday prove you right.

\\I, like Sellars, am open to a synoptic view that unites the manifest and scientific images, the personal and the sub-personal, and provides both unification and a clear explanation of many discontinuities.//

Great, but don't make the mistake of supposing that we are examining a spectrum before we have even put our prisms to work, Thomas. I am arguing that the distinction between the personal and the sub-personal, or better; between purposeful (because driven by dispositions to represent) actions and mere causally predictable responses is the very prism we need to be using in this quest. As I have repeatedly emphasised, you tend to obliterate this distinction even though you have never expressed a desire to refute it.

\\I completely fail to see how dismissing out of hand the utility of such sub-personal notions as neural representation gets us anywhere near a complete explanation.//

Some of the research can probably be salvaged but that will become much clearer when we get our foundations straight and true. Nonetheless, it is not merely our foundations that need to be unshakeable but the tools and resources we use to build. Personal level ascriptions to sub-personal entities and processes are the hallmarks of theory gone awry.

Jim Hamlyn said...

Peter, could you clarify what kind of evidence you are intent upon? If you cannot be clear then I cannot be expected to try to provide it.

Peter said...

Jim Hamlyn, the claim that needed support was the claim, contra various specialists, that no subpersonal group of neurons can perform a logic operation. You then diverted to a claim about purposive logical operations, and then, subsequently, forgot about the logical operations entirely, and started discussing purposive action in general.

Jim Hamlyn said...

Peter, the weight of evidence is not upon me, it is upon you to show how the agency of an INDIVIDUAL is divisible into smaller particles of agency. Your attempt to shift the burden to me is nothing but a ploy. If you want evidence, then you better be prepared to provide it. I don't demand it because I don't need it in this instance. What you offer as evidence is nothing but the opinions of specialists. You are making an argument from authority, which I need not accept, whereas I am making an argument from coherence which you have failed to refute.

Peter said...

The opinions of specialists are what is otherwise known as expert testimony, and are admissible as evidence in courts of law.

If you are arguing that agency arrives all at once, out of nowhere, then you are claiming magical emergence, and that is an extraordinary claim.

Your opponents, by contrast, are claiming that personal level agency is made of subpersonal agencies. If they were claiming these subpersonal agencies were fully agentive that would be a problem, but they are not. If they were claiming that agency goes all the way down, that would be a problem, but they are not... Dennett says it only goes to the neuronal level.

Explaining person level agency in terms of lesser degrees and kinds of agency of subunits is totally in line with scientific explanation. The people who were saying that heliotropic plants exhibit goal-directed behaviour were explicit that this was only "in a sense".

Jim Hamlyn said...

Probabilistically speaking, siding with scientists is generally a wise policy, but it secures nothing Peter, because scientists are not necessarily equipped with the conceptual tools necessary to interpret the evidence correctly. There was a time when everyone, not only the experts, thought that the sun revolved around the Earth. The shift I am advocating is a lot less radical.

Nobody is claiming magical emergence but I am claiming that the disposition to represent is instrumental in the attribution of agency.

Agency is not on a spectrum with crystals and candles at one end and countries at the other. Countries do not envisage the future, only INDIVIDUALS do, including those individuals we choose to _represent_ us. The giveaway is in the etymology:

\\From Medieval Latin individualis, from Latin individuum (“an indivisible thing”), neuter of individuus (“indivisible, undivided”), from in + dividuus (“divisible”), from divido (“divide”).//

Peter said...

It might well be possible for someone, somewhere to do better than scientists, but that person is not you Jim, because you still have not supported your claims.

Jim Hamlyn said...

That's another resoundingly silly piece of reasoning you have just made there, Peter. So, if I support my claims, then I might be the person capable of doing better than scientists? How does that work? Clearly when I support my claims with arguments it makes no difference. Presumably you mean that I have to support my claims not only with arguments but with empirical evidence because arguments alone — even though they are better than your incoherent arguments — are insufficient. As it happens I do have a little evidence but it wouldn't make the slightest difference to you because you aren't actually interested in evidence. You are just calling my bluff because you can't defeat my arguments.

I'll provide you with evidence once you provide me with some evidence that you find my arguments persuasive, or at least compelling.

Jim Hamlyn said...

“A search for a neurological explanation for a faculty of language must ultimately explain what it is that carries meaning in a brain. The unit of signaling currency in a brain is the action potential or spike output of an individual neuron.” https://www.ucl.ac.uk/jonathan-edwards/neuronsandlanguage

What distinguishes the above from the following: A search for a neurological explanation for a faculty of firelighting must ultimately explain what it is that carries fire in a brain. The unit of ignition in a brain is the action potential or spike output of an individual neuron.

Rob said...

We are capable both of lighting fires and of talking. Our brains aren't.

Meaning and language are not related as fire and firelighting are, but both sets of statements are nonsensical. As a matter of fact, we could not say anything meaningful or understand anything if we didn't have brains, but that does not tell us what the meaning of the phrase 'carries meaning in the brain' is. Things we say are meaningful or nonsensical, but no sense can be made of a 'unit of signalling currency in the brain'. There is something like the mereological fallacy in Jonathan Edwards' article - ascribing things to brains that can only properly be ascribed to human beings - but I want to say that there is another layer of nonsense involved as well (if it makes sense to speak of another layer of nonsense!).

Jim Hamlyn said...

I agree with every word you say Rob, and I'm delighted to read such incisiveness here. Could I ask you to tease out that extra layer, dimension or aspect of nonsense?

Gottlob said...

Jim, if the brain is seen as an idea or type of pattern, then language may reduce to that idea or pattern.

I think Jim Hamlyn is saying something very important. How do we functionalise how the brain works without reducing the brain to a function?

Peter said...

The sense that can be made of the phrase "carries meaning in the brain" is whatever features in a reductive explanation of language, an explanation of how linguistic performance is assembled out of smaller units. It may be philosophically satisfying to Rob and Jim to assume that the brain is an incomprehensible black box that just does stuff all in a flash, but neuroscientists don't have that luxury.

Jim and Rob might also care to reflect on the fact that, if they understood the above, they succeeded in assembling its meaning out of small units called words and letters.

Rob said...

Peter and Jim - I'll say a bit more in a little while. I'm just working on something else at the moment. I'll just say very briefly that I don't think there is such a thing as a reductive explanation of language which explains the meanings of words according to the letters they are composed of. But just baldly asserting that isn't very satisfying! -- I'll come back in a bit.

Ben said...

There are so many things to consider when thinking about language that it's impossible to even begin to describe them here in depth. In order to satisfactorily explain why human languages are the way they are, you have to be able to answer why all humans seem to have a language, why all these languages are hierarchical, and why there are apparent variations in syntax, morphology and phonology. You have to answer why you get languages like English which don't have lots of prefixes and suffixes, and languages like Mohawk which can use one word to express the meaning of an English sentence.

Jim Hamlyn said...

Peter, The fact that we can run does not mean that we have lots of little walkings and tiny footfalls in our brains. Yes, many of our competences are comprised of other previously developed sub-skills but none of these are instantiated in the brain. They are instantiated in our interaction with the world and with other perceivers.

Ben, That's right. A full explanation of language would indeed be a mammoth undertaking and what you outline there would be an account of the proper kind. By "proper" I mean appropriate to the array of procedural tools of which language is comprised. These tools are not second order discoveries of techniques that our brains have already developed through biological evolution. They are tools and techniques that have been developed through cultural innovation and cannot have arisen in any other way.

Rob said...

An explanation of meaning is something like this: The word 'coati' means 'a raccoon-like animal found mainly in Central and South America, with a long flexible snout and a ringed tail'. It does not add anything, in terms of explaining the meaning, if I add "...and the word coati is made up of the letters 'c', 'o', 'a', 't' and 'i'," and I certainly don't add anything if I start talking about things going on in someone's brain when they utter the word 'coati' or write the word 'coati'.

Rob said...

....That's not to say that nothing could be explained by looking at people's brains or by saying things about letters. I'm not doubting that there is legitimate work to be done in neurophysiology - there is, a lot. But results of brain scans do not tell people what a word means and nor do lists of the letters that words are made up of. You can tell someone how to spell 'coati' by saying that 'it's 'c', 'o', 'a', 't', 'i'" but that isn't explaining the meaning of 'coati'.

Peter said...

Rob,

Can you show that there is no reductive relationship between a sentence and its constituent *words*?

The idea that talk of neural spike trains could substitute for dictionary definitions among lay people is both blatantly false and nothing to do with what the article was claiming.

Rob said...

I'm not sure what you have in mind Peter but one thing to think about is the circumstances in which sentences are uttered. To pinch an example from Wittgenstein: it is doubtful whether it makes sense to say "I don't know if there's a hand here" if the sentence is uttered by someone holding their hand in front of their face but we can imagine circumstances in which it would clearly make sense - somebody searching rubble for human survivors sees something that looks a bit like a hand. In those cases you have the same string of words but the sense of the sentence depends on the circumstances. That suggests that sentence meaning isn't just a matter of piecing together the meanings of words.

Peter said...

Rob,

But again, no one is actually arguing that context etc. don't matter. Jim is saying that meaning and language have no sub-personal components, but his opponents aren't putting forward their own cut-off point.

Rob said...

You're shifting the goalposts Peter. My point about context was meant to address your question about sentence meaning being reducible to word meaning - not to say anything about what is in the article. I'll have to look at the article again at some point and respond to your comments about that as well as respond to Jim about nonsense. I'll have to think a bit and I'm trying to do other things at the same time so I might be a little while!

Jim - re: layers of nonsense. I didn't have a very clear idea of what I meant when I said it - I was just clear that there were things going on in the article other than the mereological fallacy as far as nonsense goes. One example of that is the talk of the 'spike output of an individual neuron' carrying meaning. I'm not sure what it is for anything to 'carry meaning', let alone neuron spikes. It isn't a simple case of the mereological fallacy. It isn't clear what would be meant by saying that a human being carries meaning. You could perhaps say that a word carries a meaning.

Jim Hamlyn said...

Excellent points Rob. I think you are also right to doubt the possibility of words carrying meaning. We commonly say that words convey meaning but that is not to suggest that meaning is a property or perceptible characteristic of words. If words carried meaning, then "snow" would carry the same meaning no matter the context of presentation or reception. Clearly that isn't the case. Words are attributed with meaning by mutual consent amongst a community of users. Meanings are socially negotiated. This is one of the obvious reasons why it is incoherent to suppose that meaning might be instantiated in the brain.

Ben said...

Unless you are going for a radically anti-biological account, I think that you have to have at least some psychological account of building concepts, and some neurological explanation for why we can do that.

Jim Hamlyn said...

Why Ben? What if a more parsimonious alternative were on offer?

Rob said...

Ben - I think accounts of meaning can do without neurophysiological explanation altogether. What you're talking about is something different. You might ask what it is about humans that means that they are able to acquire language while other animals cannot. But that's a different sort of question.

Jim Hamlyn said...

Millions of years experience in the exchange of X's for Y's. That is what makes the difference in the human case.

Rob said...

There you go. That one's covered Ben.

Bruce said...

I am indebted to Karl Popper for just regarding objectivity as pertaining to an object. A scientific statement, a photograph, a sculpture, a musical notation are all objects. They can be observed and tested by multiple people. They are inter-subjectively criticisable, unlike nebulous mental states of subjects.

The problem is not whether our view of a photograph is objective but whether our interpretations of it can be justified and agreed on as representing some aspect of reality (truth) by all intelligent observers. Possibly or not, but if we relinquish the quest for epistemic justification the problem evaporates. If we see all of our interpretations as conjectures then we can compare conjectures, probe them, and apply the results to whatever problem situation we are facing. Theories about the reality of the interpretations we make of photographs have their genesis in problems, not protocol statements.

Photographs, like evidence statements, are valuable not because we cannot argue about them but because the jury members can generally easily agree about them sufficiently for the problem at hand.

Jim Hamlyn said...

Agreement is indeed crucial Bruce. But what we need to be clear about (i.e. agree on) is that some of our representations are more objective than others. You may remember Donald Brook's distinction between "object accounts" and "picture accounts" in his 1969 essay "Perception and the Critical Appraisal of Sculptures." In the essay he recognised that we have two equally efficacious ways of describing the world. We can say that a large distant thing "looks small" and we can also say that the same thing "looks like a large distant thing." The first of these locutions is a picture account whereas the second is an object account. It should be obvious which is the more objective. Efficacy, however, trumps objectivity every time.

If you look into the work of another favourite source of mine — the psychologist Jan Deregowski — you will find that people from different cultures often have very different responses to pictures that we find straightforward, whereas they have no difficulty with models.

Bruce said...

The sculpture is an object, the photograph is an object. If the problem is which of them best represents the subject for, say, the problem of a witness identifying a criminal, then we can make a truth judgment. The problem comes first.

Jim Hamlyn said...

That is right. The blind witness will choose the sculpture every time and their testimony will be believed.

Bruce said...

I hope their testimony will not just be believed but be true.

Jim Hamlyn said...

Yes, but you are missing the point. The value of photographs lies in their efficacy not in their objectivity.

Bruce said...

I dare say that we value photographs very highly because we at a lay level have an adequate understanding of the biases implicit in the technology. They are amenable to critical analysis and agreement for particular purposes by a very wide jury. This value is perhaps being somewhat diluted by digital manipulation, which can involve skilful deceit.

Jim Hamlyn said...

Bruce, Yes. here's a relevant passage from a forthcoming text by Deregowski:
\\The duality of awareness which ensures that the same face is seen in a passport photograph presented at various angles also ensures that a picture is hardly ever treated as the depicted object, even when such treatment is blatantly appropriate. Thus it has been shown (Klapper & Birch, 1963) that actions demonstrating how one would use a tool are more vigorous when the tool is presented than when its picture is, and that learning to which location on the table each of various household objects belongs is faster when objects are used than when their photographs are used (Deręgowski & Jahoda, 1975). There is also evidence that such object/picture differences vary with cultural origin of the responders. For example; whilst no difference was found between performances of Zambian and Scottish children asked to sort out toys, a difference appeared when photographs of these toys were used, Zambian children performing less well (Deręgowski & Serpell, 1971).//
