Tuesday, 30 June 2015

The Price Of Intention

"Don't for heaven's sake, be afraid of talking nonsense! But you must pay attention to your nonsense." —Ludwig Wittgenstein, "Culture and Value" p.59.
I am sure I have often said and have certainly often heard others say: "I didn't realise until afterwards what I was trying to do." This post is an attempt to pay close attention to the nonsense buried in this utterance and to tentatively suggest that the associated notion of unconscious striving — of unconscious desire even — is incoherent. It may already be clear that such a view runs contrary to one or two foundational ideas within psychoanalysis, some of which continue to garner significant recognition within the arts. If we shouldn't be afraid of talking nonsense, then we probably shouldn't be afraid of identifying nonsense either.

To realise something after the fact is to have learned something new; to have become aware of something that was previously unclear or inaccessible. The sentence "I didn't realise until afterwards what I was trying to do" is commonly used as an acknowledgement that our goals are often vague, fragmentary or imprecise, and that only through the gradual, or sometimes sudden, accumulation of understanding do we become capable of clearly articulating this more developed knowledge. But whilst our goals can be sketchy, a vague intention is not an intention for a vague outcome. A sketchy idea for a diagram is rarely a desire for a sketchy diagram.

There is nothing confused or paradoxical about such thoughts. Where the confusion arises is in the suggestion that some part of us, some inner and inaccessible intelligence, seeks to express itself through our actions; that an alleged unconscious or subconscious self is trying to tell us things that may only dawn upon our conscious awareness later. Certainly there are times when we recognise patterns or significances in our past actions. But is this sufficient grounds for the supposition that we are host to unconscious intentions that are striving to articulate themselves? I hope to show that it is not.

Trying, striving, endeavouring, pursuing, envisaging, seeking, aiming, etc. are intentional, goal-oriented behaviours. Without goals there can be no striving, because there can be nothing to strive towards. If we were never capable of communicating an outcome of our actions in any shape or form, then we could not be said to try to achieve something either. "All 'willing' is willing something," as the neurologist Oliver Sacks puts it.

If someone asks us what we are doing and we have no answer, then we cannot be said to be acting purposefully. It is for this reason that goals are fundamentally reliant upon our powers of communication; upon our ability to offer some sort of token, word or gesture that would be acceptable to others as a representation of our intention.

In a 2006 paper, Jack Glaser and John Kihlstrom argue that "Unconscious Volition Is Not an Oxymoron." They write:
...the unconscious, in addition to being a passive categorizer, evaluator, and semantic processor, has processing goals (for example, accuracy, egalitarianism) of its own, can be vigilant for threats to the attainment of these goals, and will proactively compensate for such threats.
Clearly Glaser and Kihlstrom recognise that volition demands goals, but it is extraordinary that they are prepared to suggest that unconscious behaviour is fundamentally intentional: that it has ulterior motives, even if these are as seemingly benign and basic as accuracy and egalitarianism. In its most extreme form, such a view opens the door to any number of unwitting intentions and renders us nothing more than witnesses to motives beyond our control or ken.

If goals can be pursued without our conscious awareness or control, then we are puppets in a theatre not of our own making, and all we can do is observe our actions like passive audience members in the hope of gleaning some comprehension of the hidden goals that actually drive us. Consciousness never looked more wretched.

The alternative is to reject the notion of unconscious volition altogether and to seek a less extravagant explanation.

When in 1974 Oliver Sacks broke his leg whilst fleeing from a bull, the trauma of the injury left him with a temporary inability to properly sense or move his leg. In essence, the episode had rid him of all know-how in the use of his leg. A closely related condition is sometimes experienced by people who become temporarily blind in response to the traumatic loss of a loved one or some other major upset. These sorts of psychological responses to trauma are known as "Conversion Disorders" and it is interesting to note that the term was first coined by Freud as an alternative to "hysteria" or "hysterical blindness."

In a paper on the subject of conversion disorders, Harvey et al. (2006) point out that: "One difficulty facing research in this field is the complexity of the conceptual issues and variable ways in which terminology has been used." The authors helpfully include a table of definitions and explanations of key terminology, and they explicitly state that conversion disorders are "not intentionally produced" and cannot be feigned. It should be made clear that they make no suggestion that conversion disorders are the result of unconscious intention, striving or trying.

When overtired drivers are overcome by sleep, their unconscious is not striving to take control. If you attempt to kill yourself by holding your breath, it is not inner volition that will rob you of consciousness before the job is done. These are simply highly evolved autonomic responses that have no goals and do not have to strive, seek or endeavour to impose themselves. They have no more volition than the iris of the eye. No doubt conversion disorders are similarly rooted in complex autonomic processes.

Can we aim for one thing only to find that we were actually aiming for something else? If goals are necessarily communicable, then it follows that we cannot be oblivious to those we are pursuing. We can certainly aim for inappropriate goals or be confused, uncertain or vague about our goals, as I have already mentioned. But I don't think we can be mistaken that the thing we are intent upon is actually the thing we are intent upon. That would come at an extremely high price: the price of intention itself.

Monday, 15 June 2015

Indivisible Atoms of Intention


We are purposeful creatures. That would seem to be a fact beyond all doubt. Aside from the odd twitch, sneeze, hiccup, yawn, blush and a whole variety of other much more common autonomic processes, our purposeful — as distinct from merely efficacious — behaviours are invariably of the intentional sort. They are actions.

We are intentionally directed creatures not because we think of every action in anticipation but because we are prepared to communicate our goals when appropriately prompted. But how can a preparedness, readiness or disposition to communicate a goal be sufficient to justify purposeful behaviour? This post is intended to offer an explanation.

When you arose from your bed this morning no doubt you did it intentionally. My aim is to show that the intention to do so was not initiated by a special pattern of neural activity but by a special constitution of you as an organism — including your brain of course. Certainly there was activity in your brain prior to your getting up, but it would be mistaken to suppose that this activity constituted a nascent intention. There is no intender in your brain pulling any strings to lift you from your slumber or to remove you from your bed.

Intention is not an activity of brain cells so much as a state of you as an organism in which certain causal influences lead to certain patterns of behaviour. In other words, when you get up, you simply act out of habit. This is not to say that habits are unintentional. What I am suggesting is that many—perhaps all—of our actions are a consequence of the way we are configured; of our dispositions to respond to certain causal influences in certain ways. This is surely what we mean when we say that someone "acts in character." What we need to examine in order to understand intention are these dispositions to act — in particular our dispositions to represent.

If at any point in time I were to interrupt you and ask what you are doing, I'm pretty sure that you would have an answer at the ready. It would certainly be disconcerting for us both if you didn't. Now it cannot be true that you are covertly narrating your life as you live it just in case someone asks you to explain yourself. You are simply adept—as we all are—in the skill of offering representations of your causal engagements on demand. So I am saying that this preparedness to produce representations is necessary to explain intentional behaviour, because the preparedness in itself is causally influential upon behaviour.

Certainly there are times when we are thrown into unfamiliar circumstances where we need to consider the possible consequences of our actions. But it is important to bear in mind that there is nothing about these forms of activity that makes them any more intentional than our more ordinary acts of habit. Some of our actions are the result of explicitly contemplating reasons or anticipating future states of affairs. Others are the product of images, objects, gestures and enactments that we are capable of producing, performing or describing.

Representations are intentional artefacts and behaviours. It is for this reason that we need to distinguish sharply between the objects and behaviours that representations are and the indivisibly intentional creatures — the individuals — that produce them. The only way a creature can contain an intentional artefact or behaviour is by an intentional act. There are no intentional agents or behaviours within us. If we assume that the agency capable of producing representations exists within us — as opposed to being a state of us — then we undermine the whole project of examining and explaining what it is to have agency in the first place — to be an intentionally directed creature and to have a mind.

There has to be a cutoff. Intention doesn't reach all the way back. I am arguing that the cutoff is at the level of the organism, not its organs, its cells or its atoms.


Tuesday, 9 June 2015

Animal Minds?


The American essayist and poet Ralph Waldo Emerson once wrote: "The ancestor of every action is a thought." Superficially this idea seems plausible enough, but on closer examination it turns out to conceal a vicious paradox. If every purposeful action necessitated a prior thought, then every act of thinking – every thought – would itself have to be initiated by a further motivating thought. This inextinguishable spiral of antecedent thoughts is sometimes known as a "Rylean regress," named after the British philosopher Gilbert Ryle, who took the view that "intelligent practice is not a step-child of theory."

At the core of Ryle's philosophy was the conviction that intelligent behaviours are not the result of practical knowledge but are instead instances of practical knowledge. Confusion arises because we tend to conceive of knowledge as an independent "thing" that leads to, results in or produces actions. Ryle exposed the "category mistake" implicit in this reified conception. Knowledge for Ryle is neither an entity nor a neurological region that we can point to on an fMRI scan – it is a repertoire of aptitudes, skills and dispositions. For Ryle, skillful action is a form of thinking. And if thought and action are indivisible in this way, then there need be no prior "thought processes" driving intelligent behaviour. Actions are already integrated processes of intelligent engagement with the world.

So, do animals think before they act? A Rylean analysis would suggest not. But this is not to support the view that only language users are capable of contemplating the future or of making plans. Many plans are diagrammatic objects after all. Nonetheless, what it does strongly suggest is that the skills involved in planning and other sophisticated forms of future-directed activity rely upon techniques that must be learned and practised through trial and error. In the human case this is achieved through publicly negotiated forms of communication, both verbal and nonverbal. Without such public exchanges it is extremely doubtful that any creature could ever develop the capacity to ponder with any degree of complexity or proficiency. Skills are demonstrated and tested in the unforgiving crucible of actuality, not in the cosseted ether of thought or imagination.

Wednesday, 20 May 2015

Misunderstanding Media: the medium is not the message*



In his famous essay "The Medium Is The Message" (1964), the media guru Marshall McLuhan uses the analogy of electric light to illustrate his view of the relation between a medium and what he sees as its wider message. He writes: "The electric light is pure information. It is a medium without a message."

McLuhan makes no attempt to clarify what he sees as the difference — if any — between this notion of "pure information" and, say, pure form, pure matter or pure energy. Nor does he provide any guidance regarding the question of what quantity or kind of information might be left over once a message has been stripped of its content. If electric light is all information and no message then, according to McLuhan's logic, it is possible to rid a medium of its message whilst retaining pure information. If information exists independently of content, then it follows that information must inhere in or adhere to its medium in some form — presumably a detectable form. But if this information is detectable, then what extra ingredients does electric light possess beyond its raw properties?

Further questions arise. Does only electric light qualify as pure information, or might gaslight also make the grade? And what of candlelight or sunlight? What is information after all? Are flowers informed by the light that falls upon them? Does the light of Springtime inform trees that it is now the moment to blossom?

If light transmits information and this information informs things, then what is the difference between information thus regarded and messages conventionally regarded? And what are we actually left with once we remove all messages from information? What information could there possibly be in an uncrackable code? Is it not the case that an unreadable message is devoid of content precisely by being devoid of information? What information is to be had from a language that cannot be understood?

Or are we to say that an unintelligible message is pure information to the extent that we recognise it as a message; as a communicative tool? That seems fair, but it still fails to explain how ordinary electric light constitutes pure information.

A further puzzle emerges. If electric light is pure information, then it follows that the electric light in a fibre optic cable is pure information also, even when it carries no encoded information. Likewise, when information is encoded and sent along a fibre optic cable it must be encoded information further comprised of pure information: an informational wheel within a wheel.

Something has evidently gone badly awry in McLuhan's thinking. Electric light is no more "pure information" than gaslight, candlelight, paint in a jar, or a stick. Almost anything can be used as a medium so long as we can control it sufficiently to produce representations of one kind or another. A medium without representation is a material without a function. In other words it is just a potentially manipulable resource. 

Media are not things that we attach messages to like clothes on a washing line. Strictly speaking, a medium doesn't actually exist as a medium unless it is used to represent something. Media are techniques in the use of objects and materials for the purposes of communication. There is nothing intrinsic to objects and materials that confers anything upon them other than the properties they already possess. Information is not an inherent property of matter: it is a culturally negotiated attribution. To interpret something as information is to be an informed member of a culture, and to be an informed member of a culture is to be possessed of skills in the use of materials and resources for the purposes of communication.

A resource is no more a medium than a stick is intrinsically a tool.


* For Tom

Wednesday, 13 May 2015

Language In A Petri Dish: the scientific misunderstanding of signals



No sane person deliberately seeks to misinterpret messages or to incorporate falsehoods into their reasoning. We pay attention to symbols because the consequences of ignorance or misunderstanding can be disastrous. Driving through a red traffic signal is an act that thankfully almost all drivers wisely avoid. If we weren't careful about the ways that we use symbols — their meanings — then communication would quickly descend into incomprehensible babble. It really does matter how we use signs and for the most part we stick to the rules. But sometimes even scientists are sloppy. This post is about a very specific but widespread form of scientific sloppiness: the misattribution of symbol-use to cells and simple organisms. 

Symbol users act in extremely strange ways. On the basis of a simple sign — a word, a coloured light or an abstract scrawl — we can be led to engage in some of the most elaborate, sophisticated, and sometimes the most bizarre, behaviours. And perhaps the most bizarre thing of all is that the sign itself can be formed from absolutely anything. That is the extraordinary power of symbols: we can use anything to symbolise anything else, so long as the people we are communicating with know the rule we are using.

Rule following is perhaps the most fundamental requirement of symbol use. If we do not know the rule, we cannot know how to respond. This is why only the most intelligent creatures are capable of using symbols — because only the most intelligent creatures are capable of using tools; of putting raw materials to uses for which they were never designed.
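The arbitrariness of the rule can be made vivid with a toy sketch (entirely my own illustration, with invented tokens and codebooks, not anything drawn from the research discussed in this post): the very same marks decode to different messages under different rules, and communication succeeds only when sender and receiver share the rule.

```python
# A toy model of symbol use as rule following. The marks "*" and "#"
# are arbitrary; what they mean depends entirely on which shared
# codebook (rule) is in force.

rule_a = {"*": "stop", "#": "go"}
rule_b = {"*": "go", "#": "stop"}   # the same marks under a different rule

def interpret(tokens, rule):
    """Decode a sequence of marks with a shared rule; unknown marks stay opaque."""
    return [rule.get(token, "?") for token in tokens]

message = ["*", "#"]
# Under rule_a the message reads ["stop", "go"]; under rule_b, ["go", "stop"].
# Without knowing which rule the sender used, the marks settle nothing.
```

Nothing about the marks themselves fixes their meaning; only the shared rule does, which is precisely why a creature that cannot follow rules cannot be said to use symbols.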

A very brief scan of current research within the biological sciences will be sufficient to demonstrate that talk of chemical "signalling" between organisms (and even between cells) is extremely common. And neuroscience is almost entirely committed to the conviction that neurons produce signals. In my experience the merest suggestion that such talk is mistaken is often regarded as tantamount to heresy, not because there are particularly compelling reasons for supporting such a view, but because there seem to be so few reasons against it. In other words, talk of biological signalling is simply a terminological habit or convenience that adds just a little glamour to terms that would otherwise be restricted to "stimuli" and "causal triggers".

I am sometimes asked whether I think it does any harm to talk of biological or neurological signalling. My usual response is to say that I’m not in a position to know. But a better response would be this: what good does it do to suggest that we can observe the rudiments of language in a petri dish? Since when was it wise for scientists to get it manifestly wrong about their philosophical foundations, and since when was it wise for philosophy to follow suit?

Monday, 13 April 2015

Judgement And The Handover Of Language



Some forms of amoebae can discriminate between different strains of their own species. Quite how they sense the difference isn't altogether clear, but what should be obvious is that no judgements are involved. To make a judgement requires reasons, and it would therefore be absurd to suppose that single-celled creatures have reasons for their behaviour. If microorganisms could reason, then perhaps we too would have reason to replace some of the highest-ranking members of our legal system with bewigged protozoa.

Philosophers and psychologists quite commonly talk of "perceptual judgements". Presumably they do not mean to suggest that the capacity to see the difference between ultramarine and cerulean blue relies on the exercise of reason. So one could surely be forgiven for questioning the value of their tacit support of such a misconception.

Judgements are evaluative, and value is a thoroughly abstract notion for which only humans are sometimes prepared to die. It might be argued that many creatures value things — their lives and those of their kin, for example, or the food they eat. Many animals are certainly prepared to do a great deal to ensure their own survival and that of their offspring. But is this because they know the value of life or is it merely instinct? Are instincts judgements? Judgement must surely be the very opposite of instinct.

Judgements are deliberative, and to this extent all judgements require rational grounds. Judges make assessments and take careful (usually) account of evidence. Judgement is a disciplined practice that only the well trained can perform. Nobody is born a judge, but babies are certainly born perceivers — not highly skilled perceivers, granted, but perceivers nonetheless, and some of their perceptual skills are remarkable. In a study of the crying "melodies" of French and German newborns it was discovered that babies mimic the speech patterns of their mothers. And a 2009 study found that newborns develop expectations of rhythmic beats that bring about a measurable neurological response when the continuity of the beat is violated.

From where does the capacity to make judgements derive, and moreover, how did it emerge in the first place? The first answer is straightforward. In order to make judgements — to reason — we need the categories and concepts, but most especially the logical structures, of language. Without these sophisticated procedural techniques, reasoning is impossible. Nonverbal creatures have highly sophisticated perceptual skills but they do not manipulate concepts. We know this because we know how demanding it is to learn concept manipulation: language. Every parent knows that the judgements of infants stand in stark contrast to their perceptual skills.

The emergence of judgement — of the concepts of good and bad, for example — is less clear. But here is a speculative hypothesis. Our ancestors were making stone tools as long ago as 2.5 million years. Stone tools are time-consuming to produce, they require extremely sophisticated skills and their materials can only be sourced in certain places. Unlike tools, behaviours are ephemeral — they disappear in the moment of production. An utterance can be produced but it cannot be exchanged. Tools, on the other hand, are artifactual tokens that endure, and as such they are fungible: they can be exchanged.

There is nothing about exchange itself that requires anything more than perceptual skills. To be a perceiver is to be capable of offering or accepting one thing that is indiscriminable from another. What makes tools different is that they have a dual identity. A stick used as a weapon is no longer merely a stick. But when a stick is not being used as a weapon it doesn't necessarily revert back to being a stick. If it is kept, it remains as a potential weapon: a tool.

I contend that this aspect of possession of useful artefacts is the very basis of language. It is the instrumental root of all of our skills of symbolisation (as well as numerous handy metaphors). Many animals use symbolic communication, but only humans exchange artefacts and only humans attribute value to them. To value something is not simply to be disposed to protect it. To value something is to know what one would (or would not) be prepared to exchange for it. Language emerged through our practices of exchange, of transactions and betokenings. I suggest that language is a product of the hand.

Saturday, 28 March 2015

The Mismeasure of Minds

"In order to tell whether something is fixed, one needs something else that is known to be fixed and can serve as a criterion of judgement. But how can one find that first fixed point? We would like some nails in the wall to hang things from, but there is actually no wall there yet. We would like to lay the foundations of the building, but there is no firm ground to put it in." —Hasok Chang (2009)
When we perform a skilful action like catching a ball, is it necessarily the case that the brain also performs a series of skilful measurements and calculations? Are our skills of perception and intelligent action evaluative? Many theorists confidently assert that they are. I aim to explain why I think such an assumption is both explanatorily extravagant and a hindrance to enquiry.

The word "calculate" derives from the Latin word "calculus", which once referred to the stones used as counters in abacuses. In standard usage, to calculate something is to determine a value by the use of various mathematical operations applied to quantities represented by symbols. It is possible to perform many calculations without the use of symbols, but the demands of doing so (i.e. the quantities involved) make non-symbolic calculation extremely unwieldy. Certainly we have no evidence of non-symbolic neural calculation even in the brains of simple creatures. So if it is the case that brains perform calculations, then they must be using a symbolic system to encode information. Although there is an enormous quantity of literature on the subject of neural encoding, as yet no code has been identified or unravelled. This alone should make us wary of neurocalculation.

Symbols are highly sophisticated entities. The Roman symbol "IX" bears no resemblance to the quantity it represents. Nor does the equivalent linguistic symbol "nine". Many people in the world who recognise "9" and "IX" do not recognise "nine". Turn "9" and "IX" upside down and their corresponding quantities change. And if you say "nine" in Germany the meaning is not a number at all. It should be obvious even from these trivial examples that symbolic representation is far from straightforward. Neurosymbolic communication thus carries an extremely heavy explanatory burden. Why, for instance, would brains need symbols if they were already so advanced that they could develop their own symbolic system? As brains evolved, how did the parts that didn't know the code understand the parts that did? Why do we have no evidence of basic symbols amongst simple brains? And why are we denied access to the computational power of our own brains to such an extent that we have to sit through hours of instruction and practice to learn just a tiny fraction of what our brains are allegedly capable of? None of these issues refutes neurocalculation, but together they suggest that we have little reason to be confident about its credibility.

And what of neuromeasurement? 

In 2009 Hasok Chang published a book entitled "Inventing Temperature: Measurement and Scientific Progress." In it he provides a detailed history of the many complexities involved in arriving at a system for the measurement of temperature. It seems obvious to us now that water freezes at 0 degrees Celsius and boils at 100. But in fact the variables involved, like chemical impurities and atmospheric pressure, as well as the fact that thermometers themselves had no standard measure, made the whole process an extremely challenging one, requiring numerous iterative improvements, insights and innovations.

Chang raises a significant obstacle for advocates of neuromeasurement. In order to measure an unknown distance, for example, a standard would first be needed. But in order to establish a standard, a unit of measure would also be required. According to Chang: "This circularity is probably the most crippling form of the theory-ladenness of observation" and is the very problem that has caused well-documented difficulties in every region of the science of magnitude.

One thing is certain: brains did not evolve their own inner form of science and technology. But if it took science and technology to enable the invention and refinement of our skills of measurement, then it seems extremely unlikely that brains could evolve similar techniques independently.

Could there be such a thing as a set of evolved biological standards equivalent to those developed by culture? Are such standards actually necessary, or is there a simpler and more plausible explanation for the many sophisticated skills we possess?

Imagine a single-celled organism that consistently moves towards one sort of thing rather than another similar sort of thing. In that case we would say that the organism is capable of discriminating between these two similar things. Such differential responsiveness is commonplace amongst organisms and makes up by far the greater proportion of all organismic behaviour. When the iris of the eye contracts in response to bright light as opposed to dim light, or when the liver produces bile in response to fatty food as opposed to carbohydrate, we do not suppose that any calculation is going on, nor any measurement. The organism is simply behaving in response to the presence of one sort of thing rather than another. No choice is being exercised, no calculation, no judgement, no theorisation, no measurement, merely evolved responsiveness.
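The contrast between differential responsiveness and measurement can be caricatured in a few lines of code (my own illustrative sketch, with invented stimulus and response names): the "organism" below behaves differently towards different kinds of stimulus simply because it is configured to, without representing or computing any quantity along the way.

```python
# A caricature of differential responsiveness: behaviour differs with the
# stimulus because the organism's configuration differs, not because any
# magnitude is measured or calculated anywhere in the process.

RESPONSES = {
    "bright_light": "constrict_pupil",
    "dim_light": "dilate_pupil",
    "fatty_food": "release_bile",
}

def respond(stimulus):
    """Return the evolved response to a recognised kind of stimulus."""
    return RESPONSES.get(stimulus, "no_response")
```

The mapping is fixed in advance, just as the iris's responsiveness is fixed by evolution: nothing is weighed, counted or compared against a standard.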

Certainly the processes involved in more sophisticated forms of responsiveness are of a higher order of complexity. But the fact of differential responsiveness gives us good reason to first explore the potential of this more fundamental form of sensory discrimination in the actions of sophisticated creatures long before we go attributing skills of neurocalculation and neuromeasurement to brains.

When a child discovers by chance that a stick can be balanced across their hand, is sensory discrimination sufficient to explain their capacity to balance the stick? I don't see why not. To be disposed to move the stick in one direction as opposed to the other in order to maintain its horizontal position would seem to be a far more straightforward answer than neurocalculation and neuromeasurement.

Skills like balancing sticks are comparative. They trade both on sensory discrimination and — crucially — its lack. To be disposed to respond to two things (or two parts of one thing, in this case) in the same way because we fail to discriminate between them in one or more respects has significant efficacy. To know how to make a stick balance across your hand is to know how to make one side of the stick behave in the same way as the other side. In essence the skill is based upon an ability to make one side of the stick match the other in respect of the forces acting upon it: to make both sides of the stick indiscriminable from one another in respect of their tendency to fall. From the point of view of artificial intelligence this is an extremely complex skill, but from a haptic point of view it is child's play.
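A minimal simulation (my own toy model, with arbitrary parameter values, not anything proposed by the sources discussed here) suggests how far sign-only discrimination can go: the corrective rule below never consults how far the stick has tilted, only which way it is falling, and nudges a fixed amount the other way.

```python
# Toy balancing by direction-only discrimination. Each step, gravity
# amplifies the tilt angle; the corrective nudge opposes only the
# DIRECTION of the tilt, never its magnitude.

def balance(theta, steps=50, gain=1.1, nudge=0.05):
    """Simulate the tilt angle of a balanced stick under sign-only correction."""
    for _ in range(steps):
        theta *= gain          # left unchecked, any tilt grows exponentially
        if theta > 0:
            theta -= nudge     # falling one way: nudge the other way
        elif theta < 0:
            theta += nudge
    return theta
```

Starting from a modest tilt, the angle settles into a narrow band around zero, whereas with the nudge switched off it grows without bound. The controller exercises no measurement: all the work is done by discriminating one direction of fall from the other.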

Thanks to Jason Streitfeld for the FB discussion that led to this post.

Thursday, 19 March 2015

Acquisitions of Brain and Body


Like most adults I have a fair quantity of scars on my body, mostly on my hands. The majority of these marks and disfigurements are minor; nonetheless, it often strikes me as surprising that so many of us manage to retain all of our fingers through life. Considering the many dangerous things we do with our hands, it is a testament to our skills and foresight (and no small amount of first aid and general hygiene) that our hands are often in such good shape.

We commonly speak of scars as acquisitions, as things we obtain through mishap and misadventure. Scars are additions, and sometimes, in the more extreme cases, evidence of subtractions from the body – from what we would otherwise have.

It is common also to speak of skills as acquisitions – as abilities we gain through practice and experience. Knowledge, too, is a capacity we tend to think of as an acquisition.

To acquire something is to gain, or to form, a certain kind of possession, typically of an object or else a demeanour, attitude, disposition or tendency. In these latter cases the term “acquisition” is used in a technical sense that could just as easily be replaced with adopting an attitude, forming a disposition or developing a tendency.

In ordinary usage, acquisition most commonly pertains to objects or other forms of material wealth. To acquire a trophy is usually – trophy-scars notwithstanding – both to acquire an object and to acquire the admiration, acclaim or appreciation for which it stands. But to gain recognition is not actually to acquire anything so much as it is to become more likely to be treated preferentially by people in the know. It is very common for such social achievements to be recognised through the use of material tokens: trophies, epaulettes, titles etc.

So, like our less extreme scars, acquisitions are most commonly additions to what we already possess and whilst such possessions may take up little space, they do nonetheless need to be accommodated. Even digital information needs to be stored.

It is no surprise therefore that we tend to describe skills and knowledge in terms of content, as things we bundle away in our heads ready for later use. It’s as if our brains were vast repositories of information which we routinely access in the same way we retrieve books from a library or artefacts from a museum. If we acquire stuff, then it follows that we need somewhere to store it. And what better place than the brain? But what seems obviously the case is not necessarily the case.

Consider our hands again. When we learn to play the piano we do not speak of acquiring new additions to our hands. Our hands are not repositories. They do not store their capacities, even though they clearly have capacities, or at least they participate to a very large degree in the capacities of the person as a whole. When we learn a new technique requiring dexterity, we may develop the musculature of our digits etc., but there are no new hand acquisitions as such – not, of course, unless we inherit another scar or two. And if we are unfortunate enough to develop carpal tunnel syndrome, arthritis or ganglion cysts, these are not strictly speaking acquired – they develop or arise. The capacity was already there.

Many critics of the notion of mental content continue to speak of “skill acquisition” or “knowledge acquisition” as if there were no ground to be lost as a consequence. I’m not so sure the term helps us. In fact I think it may be a hindrance. Perhaps it would make more sense to speak of knowledge and skills in the same way that we regard the changes that occur in our hands when we learn a new technique. Perhaps we should make a point of regarding skills and knowledge as developmental changes rather than as acquisitions. You cannot acquire a development but you can develop a skill and you can develop your knowledge. Organisms and organs are things that develop. There is no room in a brain for any acquisitions. All the space is already taken. Knowledge and skills are develop-mental.


Saturday, 21 February 2015

Tools in the Workshop of Language



Do nonhuman animals form and use concepts? Is their negotiation of the world informed by abstract ideas of causality, agency, dominance, submission, absence, etc? Do animals theorise? I intend to use this post to explore the view that conceptualisation is possible only in direct proportion to the capacity of creatures to use symbolic communication and to the degree that they have an evolutionary history in which valuable objects – tools in particular – are commonly manufactured and exchanged.

Language is, in large part, a highly sophisticated form of symbolic communication. By "symbolic" I mean the ability to exchange thing A for thing B despite the fact that thing A and thing B need share no properties in common. For example, we commonly exchange material goods and services for mere pieces of paper despite the fact that the pieces of paper concerned have practically no intrinsic value of their own. We can only do this because we collectively agree to abide by the rules of monetary exchange.

Now, it might be claimed that I am simply setting the bar unreasonably high for qualification as a concept-user and moreover that concepts must surely be prerequisites of the skills that enable tool-use. I hope to show otherwise. Firstly, it should be clear that language is by no means the only thing that sets us apart from other animals. Our skills in the manufacture and use of tools and composite tools outstrip those of other animals by many orders of magnitude and we have clear evidence that our ancestors worked with skillfully crafted stone tools for at least 2 million years with untold years of prior use of sticks, bones, leaves and other raw materials. It would be strange in the extreme if this long history of cultural and biological co-evolution had only a minor influence upon the development and acquisition of our skills as language-users.

Secondly, the assumption that certain sophisticated behaviours can only be explained by concept-possession is only justified if every other explanatory alternative has been ruled out. It might be argued, indeed I would support the view, that nonverbal capacities in particular deserve to be examined much more extensively regarding their potential to explain intelligent behaviour. Elisabeth Camp (2009) contends that some very plausible explanations of complex behaviours can be provided by nonlinguistic compositional capacities, and on the basis of this research she rejects the claim (Cheney and Seyfarth 2007) that baboon behaviour can only be explained by a “language of thought” (Fodor 1975). Nonetheless Camp’s research, whilst important, is a mere synapse in a vast cortex of research that directs its primary focus towards what animals might plausibly think rather than what animals intelligently do.

There are two problems that seriously impede progress in this area. The first is the overriding reliance upon representational theories of mind and the second is an almost universal paucity of understanding of the nature and role of nonverbal representational capacities in the story of intelligent behaviour. Theorists like Jerry Fodor clearly realise that there are certain kinds of representation that are simply impossible in the brain. So inner displays and inner models of the perceived world are straightforwardly ruled out. (Grid cells and place cells are held by many to be evidence of inner representation, but other theorists – Hutto and Myin (2011) for example – convincingly argue that correlation and/or covariation do not amount to representation.) Similarly, inner pictures have so far evaded the probes, electrodes and scanners of neuroscience. Only inner symbols seem safe from the scrutiny of fMRI and for this reason Fodor’s Language of Thought continues its merry march across the pages of books, conference papers and PhD theses (although its weakening stranglehold seems to be allowing a little more blood to reach the neurons of some thinkers).

Like Davidson (1982) and Chater and Heyes (1994), I hold that language and concepts are inextricably intertwined, and I know of no instance of claimed animal concept-possession that cannot be more plausibly explained by the possession of nonverbal (i.e. non-conceptual) skills. However, unlike Davidson, I think we can reasonably ascribe the capacity for surprise to animals, and once again the explanation derives from an account of the nonverbal capacities involved. To be a conscious creature and to act deliberately is to have expectations, but these expectations are not concepts. To expect a ripe apple to be sweet is not predicated upon a concept of sweetness. To expect the ground to be solid under one’s feet is not a theorisation. We do not need concepts to be surprised when the song we are listening to on the radio suddenly stops. Expectations are skills, but they are not conceptual skills. They are practical, not ratiocinative. They are nonverbal.

For advocates of inner representation, the act of perception involves the production of internal representations that correlate with the world. I agree that representation plays a vital part in perception, but I do not agree that brains manufacture representations in any shape or form. A much simpler, and I think more plausible, explanation conceives of perception as a cluster of skills in the production of public representations, as a readiness or preparedness to substitute the thing seen, touched or heard etc. for its duplicate in one or more respects.

Experience leads to the development of expectations about the regularities of the universe and of unfolding events. I propose that we first attempt to rule out all possible nonverbal explanations before we attempt to ascribe capacities of conceptualisation to nonverbal creatures.

One further point. Nonverbal expectations cannot be conceptual because, as Gareth Evans pointed out in 1982, perception is fine-grained in a way that concepts simply are not: “Do we really believe the suggestion that we have as many colour concepts as there are colours in the rainbow?”

Many animals use symbolic communication, from dolphins and prairie dogs to bats, birds and honey bees. We observe such forms of communication in a great variety of quantities and degrees. But what we do not observe is anything like the degree of tool-use that we find in human culture. Nonetheless, if it is true that conceptualisation is in part enabled by the skills of symbolic communication then it should be possible for symbol-using animals to acquire one or two very basic components of concept use. It should be noted, though, that a capacity to acquire one or two basic components of a broader set of skills and the acquisition or possession of the rudiments of that skillset are by no means the same. A fistful of clay does not a sculptor make. Nor is it necessarily the case that the mere acquisition, even of a large number of proper or common nouns for example, enables these to be manipulated and recombined in intelligible ways. And the question of how the skills of combining and contrasting concepts – of conceptual reasoning – could be practised, refined and evaluated in the absence of social feedback would seem to present an insurmountable explanatory obstacle for advocates of private concept formation.

Concepts are tools in the workshop of language. But without the techniques that enable the skillful use of these tools, concepts are as purposeless as mere sticks, stones, leaves and bones.

Sunday, 11 January 2015

The Certainty of Good and Evil



Bertrand Russell writes: "Most of the greatest evils that man has inflicted upon man have come through people feeling quite certain about something which, in fact, was false." But, to be consistent, the reverse must also be true: most of the greatest goods that man has bestowed upon man have come through people feeling quite certain about something which, in fact, was true. What otherwise is true love? So, it would be mistaken to conclude from Russell's remark that certainty is intrinsically malign or contradictory.

To say that there are no absolutes is itself a straightforward truth claim — an absolute. The argument should really end there, but clearly this has not convinced the doubters. So let me try to explain why I think relativism is an error of reasoning.

What is it to be certain of something? Is a gibbon certain that the branch is really there? What advantage would doubt confer on the gibbon in such a case?

Is this breath I take an instance of certainty? Is the evolutionary exploitation of universal regularities driven by certainty? Or is it more true to say that certainty is merely a characterisation; one of a catalogue of what we take (often mistakenly) to be our commitments?

It seems to me that the relativist begins with the possibility of doubt and extrapolates into barely plausible fantasy and then takes this as sufficient cause for radical doubt. 

But doubt, like certainty, is a concept. Does the gibbon doubt? Does a fly? Nonverbals do not reason. They have no propositional faculties. That is largely why they are called "nonverbals" after all.

Sure, a gibbon might hesitate (not a fly). But the gibbon's hesitation is not due to intellection. The supposition that facts (knowing-that) underlie all knowledge is, incidentally, the ruin of intellectualists. No, the gibbon hesitates because it has insufficient knowhow.

The fact that we can conceive of doubtful things, that we can be ambivalent, equivocal and uncertain is no ground for rejecting all certainty. Likewise, the fact that we can take some things for granted is no justification for supposing that our extrapolations are also true. That, I propose, is the root of all evil.

Nonverbal creatures are never evil. Evil is the product of language users. It is a cultural contrivance – of ideas and ideologies – that requires systematic planning and, in its worst manifestations, the recruitment of others. Nonverbal creatures can be malign of course, but they cannot plan systematically. Nor can they recruit others because they cannot persuade.


Thanks to Brian and John for questioning my unshakable faith in realism and to Ashok for the quote from Russell.