Showing posts with label Knowledge. Show all posts

Thursday, 16 June 2016

Dretske’s Dreadful Theory Of What We See



In a presentation from 2008, the late Stanford philosophy professor Fred Dretske (who died in 2013) argues that seeing includes the perception of objects (including relations between objects), properties (shape, size and colour etc.) and facts. For Dretske, the facts we perceive are “things we come to know by seeing”. He claims that there is a danger that we take a failure to notice or detect objects and properties as a failure to actually see them. He presents the following two images in order to explain this claim.
Image A
Image B

Image B contains an additional shape. In his presentation, Dretske presents the images one after the other and inserts a transition between the two that makes it difficult to see the difference between them. In Dretske’s view: “One should not conclude from the fact that you didn’t see that there was a difference to the conclusion that you didn’t see the object that made the difference.” He remarks:
Even if you don’t detect it, even if you don’t notice it, even if you don’t know that you are seeing different things. Even if you don’t see the fact that there is a difference, you still might see the objects and properties that make the difference. You just don’t realise you do.
It should be obvious that if you do not detect, realise or notice that the traffic lights are red, then you cannot be said to see them (at least in respect of their being red). Nonetheless, Dretske holds that seeing is independent of conscious awareness. In his view, we can genuinely see objects and properties without detecting, noticing or knowing that we do so. Dretske uses this questionable conclusion to promote his theory of conscious experience. He claims that: “Your experience of an object is conscious if it gives you knowledge of that object.”
For Dretske, all knowledge is knowledge of facts, and in order for an experience to be conscious, it must provide such knowledge. But this is absurd. If we look at a familiar object, we do not lose consciousness of it because it ceases to give us knowledge. If consciousness depends on the acquisition of knowledge, then all lapses in attention must be accompanied by lapses in consciousness. This is quite evidently not the case.
We perceive the phonemes, morphemes and sentences in which facts are typically stated, but it is debatable whether we actually perceive facts at all. Earlier today I asked my partner “When did you last see a fact?” She looked at me quizzically and remarked: “You don’t see facts.” Some people might contend that texts, graphs and diagrams etc. can be seen and thus are visible facts, but it is important to note that representations of facts are not the facts they are used to represent. Many facts can be demonstrated, but it does not follow that a demonstration of a fact can be reduced to the fact it is intended to demonstrate.
Earlier in his presentation, Dretske states: “The facts we see—the things we come to know by seeing—we come to know them by seeing objects and their properties.” Clearly then, Dretske realises that perception of things is not the same as the “perception” (his use, not mine) of facts. Facts must be derived in some way from perception, but quite how they are derived Dretske neglects to mention.
The absurdity of Dretske’s position becomes more pronounced towards the end of his presentation. He argues that when we look at a wall of 350 bricks for a few seconds, we acquire 350 “distinct pieces of knowledge—one for each brick in the wall.” Not only this, but during the Q&A session it becomes clear that he thinks we acquire “infinite” knowledge “for free” when we look at things. He believes this because he is committed to the idea that the perception of facts involves the acquisition of what he calls “tacit knowledge”.
You don’t have to actively think something to know it. There are a great many things you know tacitly, not because when you acquired the knowledge you are actually thinking about it or believing it but because you have the kind of experience of it which, if you do later think about it, will tell you what you need to know.
There is no such thing as a memory “telling” us anything, so Dretske can only mean that we have to interpret an experience ("later think about it") to derive factual knowledge from it. If we do not think about it—if we do not interpret the memory by means of our conceptually enabled inferential skills—then we cannot be said to have yet formed any factual knowledge on the basis of the memory. An analogy will help. If we have a workshop full of materials, we may be capable of using these to create some tools, but unless we actually manufacture these tools, we cannot claim to possess them. Tacit knowledge, and knowledge in general in fact, is not something that we possess like a scar or a souvenir, it is something we are capable of doing, something we are capable of bringing about by the application of skill.
When we fail to see a difference between two different things, at least two factors need to be taken into consideration. Firstly, our sensory organs may be limited by evolutionary constraints that give rise to regularly occurring fallibilities in certain circumstances. Secondly, the circumstances of encounter (the illumination, position, angle of view, delay between images etc.) may limit discrimination more than might otherwise be the case. Certainly any actual differences within our visual field may have a causal influence upon our sensory system, but this does not mean that these influences are dealt with at some non-conscious, unconscious or “sub-personal” (Dennett 1987) level of seeing that we "just don't realise". It just means that there are various complex processes involved in perception that are not conscious and that some of these have insufficient influence to rise to the level of purposeful action: consciousness.

At the 25-minute mark of the presentation, Dretske discusses an image of a square divided into nine coloured portions. He claims that if each of the nine squares were coloured the same shade of blue and placed so close together that we couldn't see the edges between them, we would still see nine squares. This is like arguing that we see the screen when we watch a movie. If pressed, we would probably agree that the screen is visible while we watch Star Wars or Gone With The Wind etc. But what we would be very unlikely to concede, and what follows from Dretske's thesis, is that we see each pixel on the screen and thus gain millions of distinct pieces of knowledge—one for each pixel. It should also be noted that the light from each pixel is itself divisible into individual photons. Do we have distinct pieces of knowledge of photons too?

At some point we have to acknowledge that all sensory systems are limited in various ways and it therefore follows that these limitations make it impossible to discriminate between things (that may in fact be quite different) in certain circumstances and in certain respects. It is by virtue of differences that all creatures discriminate between things. Without sensory discrimination there would be no life. Nonetheless, sensory discrimination is not free of limitations. Without these limitations, there would be no question of our mistaking any one thing for another different thing, and there would also be no question of our accepting a flat thing as a viable stand-in for a three-dimensional thing. The fact that things of one sort (images say) can be mistaken for things of another sort (three-dimensional objects for example) makes images supremely apt as representational tools; as things that can stand in for the objects they represent.
I suggest that any explanation of perception that fails to account for the role of discrimination failure within our practices of nonverbal representation is probably doomed.



Tuesday, 9 June 2015

Animal Minds?


The American essayist and poet Ralph Waldo Emerson once wrote "The ancestor of every action is a thought." Superficially this idea seems plausible enough, but on closer examination it turns out to conceal a vicious regress. If every purposeful action necessitated a prior thought, then every act of thinking – every thought – would itself have to be initiated by a further motivating thought. This inextinguishable spiral of antecedent thoughts is sometimes known as a “Rylean regress,” named after the British philosopher Gilbert Ryle, who took the view that “intelligent practice is not a step-child of theory.”

At the core of Ryle’s philosophy was the conviction that intelligent behaviours are not the result of practical knowledge but are instead instances of practical knowledge. Confusion arises because we tend to conceive of knowledge as an independent "thing" that leads to, results in or produces actions. Ryle exposed the “category mistake” implicit in this reified conception. Knowledge for Ryle is neither an entity nor a neurological region that we can point to on an fMRI scan – it is a repertoire of aptitudes, skills and dispositions. For Ryle, skillful action is a form of thinking. And if thought and action are indivisible in this way, then there need be no prior “thought processes” driving intelligent behaviour. Actions are already integrated processes of intelligent engagement with the world.

So, do animals think before they act? A Rylean analysis would suggest not. But this is not to support the view that only language users are capable of contemplating the future or of making plans. Many plans are diagrammatic objects after all. Nonetheless, what it does strongly suggest is that the skills involved in planning and other sophisticated forms of future-directed activity rely upon techniques that must be learned and practised through trial and error. In the human case this is achieved through publicly negotiated forms of communication, both verbal and nonverbal. Without such public exchanges it is extremely doubtful that any creature could ever develop the capacity to ponder with any degree of complexity or proficiency. Skills are demonstrated and tested in the unforgiving crucible of actuality, not in the cosseted ether of thought or imagination.

Saturday, 28 March 2015

The Mismeasure of Minds

“In order to tell whether something is fixed, one needs something else that is known to be fixed and can serve as a criterion of judgement. But how can one find that first fixed point? We would like some nails in the wall to hang things from, but there is actually no wall there yet. We would like to lay the foundations of the building, but there is no firm ground to put it in.” —Hasok Chang (2009)
When we perform a skilful action like catching a ball, is it necessarily the case that the brain also performs a series of skilful measurements and calculations? Are our skills of perception and intelligent action evaluative? Many theorists confidently assert that they are. I aim to explain why I think such an assumption is both explanatorily extravagant and a hindrance to enquiry.

The word “calculate” derives from the Latin word "calculus", which once referred to the stones used as counters in abacuses. In standard usage, to calculate something is to determine a value by the use of various mathematical operations applied to quantities represented by symbols. It is possible to perform many calculations without the use of symbols, but the demands of doing so (i.e. the quantities involved) make non-symbolic calculation extremely unwieldy. Certainly we have no evidence of non-symbolic neural calculation even in the brains of simple creatures. So if it is the case that brains perform calculations, then they must be using a symbolic system to encode information. Although there is an enormous quantity of literature on the subject of neural encoding, as yet no code has been identified or unravelled. This alone should make us wary of neurocalculation.

Symbols are highly sophisticated entities. The Roman symbol “IX” bears no resemblance to the quantity it represents. Nor does the equivalent linguistic symbol “nine”. Many people in the world who recognise "9" and "IX" do not recognise “nine”. Turn "9" and "IX" upside down and their corresponding quantities change. And if you say “nine” in Germany the meaning is not a number at all. It should be obvious even from these trivial examples that symbolic representation is far from straightforward. Neurosymbolic communication thus carries an extremely heavy explanatory burden. Why, for instance, would brains need symbols if they were already so advanced that they could develop their own symbolic system? As brains evolved, how did the parts that didn't know the code understand the parts that did? Why do we have no evidence of basic symbols amongst simple brains? And why are we denied access to the computational power of our own brains to such an extent that we have to sit through hours of instruction and practice to learn just a tiny fraction of what our brains are allegedly capable of? None of these issues refutes neurocalculation but they do help to suggest that we have little reason to be confident about its credibility.

And what of neuromeasurement? 

In 2009 Hasok Chang published a book entitled “Inventing Temperature: Measurement and Scientific Progress”. In it he provides a detailed history of the many complexities involved in arriving at a system for the measurement of temperature. It seems obvious to us now that water freezes at 0 degrees Celsius and boils at 100. But in fact the variables involved, like chemical impurities and atmospheric pressure, as well as the fact that thermometers themselves had no standard measure, made the whole process an extremely challenging one, requiring numerous iterative improvements, insights and innovations.

Chang raises a significant obstacle for advocates of neuromeasurement. In order to measure an unknown distance for example, a standard would first be needed. But in order to establish a standard, a unit of measure would also be required. According to Chang: "This circularity is probably the most crippling form of the theory-ladenness of observation" and is the very problem that has caused well documented difficulties in every region of the science of magnitude.

One thing is certain: brains did not evolve their own inner form of science and technology. But if it took science and technology to enable the invention and refinement of our skills of measurement, then it seems extremely unlikely that brains could evolve similar techniques independently.

Could there be such a thing as a set of evolved biological standards equivalent to those developed by culture? Are such standards actually necessary, or is there a simpler and more plausible explanation for the many sophisticated skills we possess?

Imagine a single-celled organism that consistently moves towards one sort of thing rather than another similar sort of thing. In that case we would say that the organism is capable of discriminating between these two similar things. Such differential responsiveness is commonplace amongst organisms and makes up by far the greater proportion of all organismic behaviour. When the iris of the eye contracts in response to bright light as opposed to dim light, or when the liver produces bile in response to fatty food as opposed to carbohydrate, we do not suppose that any calculation is going on, nor any measurement. The organism is simply behaving in response to the presence of one sort of thing rather than another. No choice is being exercised, no calculation, no judgement, no theorisation, no measurement, merely evolved responsiveness.
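The contrast can be made concrete with a toy model. The sketch below is a hypothetical construction of my own (the nutrient field, the sensor gap and the step sizes are all invented for illustration, not anything proposed here as biology): an "organism" that reliably climbs a nutrient gradient using nothing but differential responsiveness. It responds one way to stronger stimulation on one side, another way to stronger stimulation on the other, and not at all when the two sides are indiscriminable. At no point is any quantity represented, measured or calculated.

```python
def gradient(x):
    """Hypothetical nutrient field: concentration falls off with
    distance from a source at position 10."""
    return 1.0 / (1.0 + abs(x - 10.0))

def step(x, sensor_gap=0.5):
    """One act of differential responsiveness. The organism responds
    to the presence of more nutrient on one side rather than the
    other; no quantity is represented or computed over."""
    left = gradient(x - sensor_gap)
    right = gradient(x + sensor_gap)
    if right > left:
        return x + 0.2   # move towards the stronger stimulation
    if left > right:
        return x - 0.2
    return x             # the two sides are indiscriminable: no response

def run(x=0.0, steps=100):
    """Repeated differential response reliably climbs the gradient."""
    for _ in range(steps):
        x = step(x)
    return x
```

Nothing in the model "knows" where the source is or how far away it lies; reliable goal-directed behaviour falls out of bare discrimination repeated over time.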

Certainly the processes involved in more sophisticated forms of responsiveness are of a higher order of complexity. But the fact of differential responsiveness gives us good reason to first explore the potential of this more fundamental form of sensory discrimination in the actions of sophisticated creatures long before we go attributing skills of neurocalculation and neuromeasurement to brains.

When a child discovers by chance that a stick can be balanced across their hand, is sensory discrimination sufficient to explain their capacity to balance the stick? I don't see why not. To be disposed to move the stick in one direction as opposed to the other in order to maintain its horizontal position would seem to be a far more straightforward answer than neurocalculation and neuromeasurement.

Skills like balancing sticks are comparative. They trade both on sensory discrimination and — crucially — its lack. To be disposed to respond to two things (or two parts of one thing, in this case) in the same way because we fail to discriminate between them in one or more respects has significant efficacy. To know how to make a stick balance across your hand is to know how to make one side of the stick behave in the same way as the other side. In essence the skill is based upon an ability to make one side of the stick match the other in respect of the forces acting upon it: to make both sides of the stick indiscriminable from one another in respect of their tendency to fall. From the point of view of artificial intelligence this is an extremely complex skill, but from a haptic point of view it is child's play.
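Read this way, the balancing skill is a simple closed loop over a ternary discrimination: "falling left", "falling right", or "no felt difference". A minimal sketch follows; the drift and nudge magnitudes are invented for illustration, and the one-dimensional "stick" is a crude stand-in for real dynamics:

```python
import random

def discriminate(tilt, threshold=0.05):
    """Ternary sensory discrimination: which way does the stick feel
    like it is falling? Below the threshold the two sides are
    indiscriminable and nothing is felt at all."""
    if tilt > threshold:
        return "right"
    if tilt < -threshold:
        return "left"
    return None

def balance(steps=1000, seed=1):
    """Keep a (crudely modelled) stick level using only the ternary
    discrimination above; no angle is measured or calculated."""
    rng = random.Random(seed)
    tilt = 0.0
    for _ in range(steps):
        tilt += rng.uniform(-0.02, 0.02)   # unpredictable drift
        felt = discriminate(tilt)
        if felt == "right":
            tilt -= 0.04                   # nudge the hand to compensate
        elif felt == "left":
            tilt += 0.04
        # when nothing is felt, both sides already "match": do nothing
    return tilt
```

The loop keeps the tilt bounded indefinitely even though its only resource is the felt difference between "falling this way" and "falling that way", and its success condition is precisely the failure to discriminate.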

Thanks to Jason Streitfeld for the FB discussion that led to this post.

Thursday, 19 March 2015

Acquisitions of Brain and Body


Like most adults I have a fair quantity of scars on my body, mostly on my hands. The majority of these marks and disfigurements are minor; nonetheless, it often strikes me as surprising that so many of us manage to retain all of our fingers through life. Considering the many dangerous things we do with our hands, it is a testament to our skills and foresight (and no small amount of first aid and general hygiene) that our hands are often in such good shape.

We commonly speak of scars as acquisitions, as things we obtain through mishap and misadventure. Scars are additions, and sometimes, in the more extreme cases, evidence of subtractions from the body – from what we would otherwise have.

It is common also to speak of skills as acquisitions – as abilities we gain through practice and experience. Knowledge also, is a capacity we tend to think of as an acquisition.

To acquire something is to gain, or to form, a certain kind of possession, typically of an object or else a demeanour, attitude, disposition or tendency. In these latter cases the term “acquisition” is used in a technical sense that could just as easily be replaced with adopting an attitude, forming a disposition or developing a tendency.

In ordinary usage, acquisition most commonly pertains to objects or other forms of material wealth. To acquire a trophy is usually – trophy-scars notwithstanding – both to acquire an object and to acquire the admiration, acclaim or appreciation for which it stands. But to gain recognition is not actually to acquire anything so much as it is an increased likelihood of being treated preferentially by people in the know. It is very common for such social achievements to be recognised through the use of material tokens: trophies, epaulettes, titles etc.

So, like our less extreme scars, acquisitions are most commonly additions to what we already possess and whilst such possessions may take up little space, they do nonetheless need to be accommodated. Even digital information needs to be stored.

It is no surprise therefore that we tend to describe skills and knowledge in terms of content, as things we bundle away in our heads ready for later use. It’s as if our brains were vast repositories of information which we routinely access in the same way we retrieve books from a library or artefacts from a museum. If we acquire stuff, then it follows that we need somewhere to store it. And what better place than the brain? But what seems obviously the case is not necessarily the case.

Consider our hands again. When we learn to play the piano we do not speak of acquiring new additions to our hands. Our hands are not repositories. They do not store their capacities, even though they clearly have capacities, or at least they participate to a very large degree in the capacities of the person as a whole. When we learn a new technique requiring dexterity, we may develop the musculature of our digits etc. but there are no new hand acquisitions as such – not, of course, unless we inherit another scar or two. And if we are unfortunate enough to develop carpal tunnel syndrome, arthritis or ganglion cysts, these are not strictly speaking acquired – they develop or arise. The capacity was already there.

Many critics of the notion of mental content continue to speak of “skill acquisition” or “knowledge acquisition” as if there were no ground to be lost as a consequence. I’m not so sure the term helps us. In fact I think it may be a hindrance. Perhaps it would make more sense to speak of knowledge and skills in the same way that we regard the changes that occur in our hands when we learn a new technique. Perhaps we should make a point of regarding skills and knowledge as developmental changes rather than as acquisitions. You cannot acquire a development but you can develop a skill and you can develop your knowledge. Organisms and organs are things that develop. There is no room in a brain for any acquisitions. All the space is already taken. Knowledge and skills are develop-mental.


Thursday, 29 August 2013

Imagining Itself (Part XV: Capability and Knowledge)



Can we become capable of doing things that we are currently unable to do, simply by thinking of them, by imagining ourselves doing them? Is imagination an enabler of action? Could I become capable of making a violin simply by carefully imagining the whole intricate process? The answer to these questions should be obvious but the underlying reasoning will require a certain amount of patient chiseling and shaping to be carried out beforehand.

Ten years ago I decided to make a kitchen table. I knew I had all of the basic skills necessary to start the job and I also knew that with a little research and care I could acquire the further skills necessary to overcome any of the foreseeable obstacles I might encounter. In short, I was certain that I could make a simple table and this gave me the confidence to be a little daring and attempt to learn on the job. The process was slow and I made many mistakes (fortunately none that I couldn’t fix or replace) but eventually through many unexpected twists and turns I completed what I still consider to be a handsome piece of oak furniture that is in daily use. Buoyed up by this success I decided to embark on the more ambitious venture of making a double bed from cherry wood. Once again I made several silly but salvageable mistakes along the way but eventually ended up with a simple but elegant piece of furniture held together by 52 hand cut mortise and tenon joints. It’s a thing of pride and an object I am obviously intimately acquainted with.

Neither of these ‘projects’ would have been possible if I hadn’t already acquired the skills (and tools) to at least commence them. However, even without those skills I could easily have imagined what it would be like to make these objects. But then, imagining is not knowing - and knowing, as we will soon find out, is not necessarily the capability many people are inclined to think it is.

Around the same time as I was chiselling mortise and tenon joints, Derek Melser, a furniture-maker-turned-philosopher living in New Zealand, published a book entitled “The Act of Thinking”. The underlying thesis is very similar to that outlined in Part XII of this series of blog posts, and to that extent I think he has got it absolutely right: thinking is a species of action. Nonetheless, there are several respects in which Melser’s theory doesn't adequately explain the less physical of our actions – most especially he is curiously vague on the subject of how imagined perceptions might constitute actions. For example, what might an ‘actional’ visualisation consist of? Melser writes: “To ‘visualise’ thing T is to covertly token a certain visual perceptual behaviour.” To “token”, for Melser, is to enact only a fragment of an action and in doing so the token becomes a referent to the thing tokened.

The argument that some inhibited tracking movements of the eyes, or some tokened verbal descriptions of the things or experiences imagined are sufficient to explain visualisation is unconvincing. So far as gesturing and speaking are concerned I think Melser may be largely, if not wholly, right (I’ll return to this presently). But as to his account of imagined perceptions, I think a more expansive explanation is due.

To be fair to Melser, he stages the different performances that comprise his overall thesis with genuine skill and he directs the various actors expertly, making them speak to each other and to us with close attention to nuanced argument and overall coherence (which is to care for the audience with clarity – a commendable thing in any philosopher). Where his workbench is a little shaky though is in its incorporation and understanding of representation and representational strategies.

Melser takes the view that perception is achieved “when and only when, […] an appropriate verbal act is performed” such as the infant’s exclamation of “mummy!” at the appearance of her mother. Melser is partly correct, I think, but he makes the mistake of overemphasising verbal representation (or “concerting” as he calls it) at the expense of other equally valid forms of representation.

My son, who will be 3 very soon, is still learning to name colours. Does this mean that he doesn’t perceive them? On Melser’s account we have no option but to conclude that he doesn’t yet perceive colours, but I can prove this is incorrect with a simple experiment that I tried more than 4 months ago. I set up 5 different coloured objects and asked him to find others of the same colour. He got it right every time. We have already encountered an answer to what is happening here, provided by Donald Brook’s theory of representation. Children are able to select Matching and Simulating representations long before they are able to speak their names.
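The logical point of the experiment can be put in a trivial sketch. The RGB triples and the tolerance below are invented stand-ins for whatever the child's visual system actually discriminates; the point is only that the task is solvable by pairwise discrimination alone, with no colour name anywhere in the loop:

```python
def indiscriminable(sample, candidate, tolerance=30):
    """Two colours count as 'the same' when they cannot be told
    apart, not because either has been named. The triples and the
    tolerance are hypothetical stand-ins for sensory discrimination."""
    return all(abs(s - c) <= tolerance for s, c in zip(sample, candidate))

def find_same_colour(sample, objects):
    """Select the objects of the same colour as the sample: the task
    calls only for pairwise discrimination, never for a colour name."""
    return [obj for obj in objects if indiscriminable(sample, obj)]
```

Handed a reddish sample and a mixed collection, the selection succeeds without any vocabulary: `find_same_colour((220, 30, 30), [(230, 40, 25), (30, 30, 220), (225, 35, 35)])` picks out the two reddish objects and leaves the blue one behind.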

I agree with Melser that the ability to represent something is a precondition for perception. But the skill of representation is by no means first acquired through our entry into language. If a forthcoming paper by Donald Brook is anything to go by, the capacity to represent, in rudimentary form at least, is also shared by many animals and possibly some insects (bees, for instance), which suggests that there must be a genetically inherited component at work. Furthermore, if thinking is a form of covert action, then who knows how many animals might be capable of rudimentary forms of representation? This is a question only science can answer.

Melser writes: “One of the main features of imagining is that you can do it where real X-ing [seeing a ghost for example] is impossible.” This seems perfectly right doesn’t it? I can imagine jumping to the moon but I can’t do it. But if imagining is a species of action then what imaginary action could we possibly ‘do’ to visualise a ghost? The problem is one that Melser’s theory simply cannot solve. However, if we expand the conception of action to include representational action then suddenly the whole difficulty evaporates. To imagine a ghost is to imagine what a ghost would look like i.e. how we would represent a ghost to others, for example by cutting holes in a sheet or doodling a white image on black paper or by wafting steam about etc.

So, to return to the question posed at the beginning of these bloggy thoughts: Can a capability emerge as a consequence of imagining?

If imagining something is a process of representationally oriented action, then to be capable of representing an ability is no guarantee of the capability of doing it. The capabilities of Matching representation that involve bodily motion and control (i.e. gestures, postures, facial expressions etc.) on the other hand are genuine proof of ability. If I can mimic your dance steps, footfall for footfall, then there is no question that I know how to perform your dance. But if I can describe your dance, footfall for footfall, no matter the intricacy of the detail, there is no guarantee whatsoever that I can follow my description. Different forms of representational ability presuppose capacities, but most commonly these are capacities of representational action, not of performance.

Imagination is a form of what Ryle would call “knowing how”. Too often people confuse the knowing how to represent with the more practical capabilities of knowing how to do.
“A child who had never manifested in words, gestures, or play the working out of simple problems could not be said to work them out ‘in his mind’, any more than he could be said to know ‘in his mind’ the names of colours, if he was unable to say their names, or to point or to fetch the right colours when their names were called out. Thinking in one's mind (silent thinking, pausing to think) is not the most fundamental form of thinking, but instead presupposes thinking in play, work, or words.” —Norman Malcolm