Wednesday, 28 May 2014

Recognition



Language has to be learned, but what of images?

In recent decades, research using functional brain imaging has shown conclusively what many people already suspected: when we re-encounter something, our brain activity corresponds closely to that of earlier encounters of the same kind. There really couldn't be a better word for this than "recognition". We also know from related studies that representations trigger the recognition networks corresponding to the things they represent, and that even linguistic representations, nouns for instance, which have nothing in common with the things they name, activate many of the brain regions associated with the things named, as well as others involved in language processing and conceptual understanding. Image recognition, on the other hand, although it may trigger linguistic associations, is not reliant upon language. Pictures, photorealistic pictures especially, are recognised because in certain respects and in certain circumstances they share much in common with the things they depict.

Last week, in a BBC programme devoted to the work of the artist Michael Craig-Martin, the artist's final words were:
If you go back to the books that children are taught words in, you will have a picture of a ball and then there's the word "ball". And the reason for the book is to teach the child to know the word ball. But it's based on the idea that they already know what the picture is, because there isn't a ball there. There's a picture of a ball. So the child already has learned how to read pictures of things as though they were the things. Now, we do that so early, we are probably doing that at about two or three months. That's the foundation of language, not the words - it's the pictures of things that are the basis of our understanding.
We could quibble over the finer logic of Craig-Martin's thinking here, but the general point is correct: we recognise representations as representations long before we learn to speak or read. What is less clear though, and what research has yet to confirm, is the developmental stage at which infants and animals become capable of responding differentially to representations as opposed to the things they represent. In other words, we don't yet know at what point infants learn that representations are representations, i.e. when their brain activity switches from the simple recognition of things in depictions to a recognition of depictions as depictions: as tools. It seems likely that this emergence is gradual, as gradual as the emergence of consciousness itself. In fact, I would go as far as to say that the ability to discriminate between a representation and whatever is represented lies at the very heart of consciousness.



Wednesday, 21 May 2014

Black from White



When I was around the age of seven, my parents had close friends who lived a few doors away from Camberwell School of Art, where I think they had studied some years earlier. They had a daughter, Esmie, who was three years my junior, and I remember she once said something that struck me as obviously wrong: she claimed that white was not a colour. I tried to disabuse her of this misconception, only to be gently corrected by our dads. I don't recall their explanation, or whether they even gave one, but ever since I have harboured a distrust of white.

Strictly speaking, Esmie was right that white is not a colour, at least not a chromatic colour. Look in the Chambers English Dictionary, though, and the first entry you will find states: "White /wīt or hwīt/ adj of the colour of snow, the colour that reflects the maximum and absorbs the minimum of light rays." As we will see, getting to grips with white turns out to be a little more difficult than might at first seem the case. White, as it turns out, is a bit of a grey area.

When the dads stepped into my dispute with young Esmie over the subtleties of colour definitions, perhaps they responded with something like: "White is actually a combination of all the colours." That, and the fact that I was obviously in the minority, might have shut me up for a while, but a quick visit to a set of poster paints would soon have demonstrated the inadequacy of this explanation. No amount of deft colour mixing, at least of the kind I was familiar with, would ever have generated white. Why otherwise was there a block of white in every set of paints I had ever owned, if not because it is impossible to concoct such a non-colour from a combination of all the colours? Were my parents trying to taunt me with my obvious inability to blend colours in precisely the right formulation? Was the art of colour combination simply too sophisticated for a mere child equipped with a set of cheap paints? When in later childhood I discovered that watercolour sets include no white as standard, my worst suspicions seemed confirmed. Quite how the four-year-old Esmie could have understood such complexities escapes me. Perhaps her collection of felt-tip pens revealed a deeper secret.

Of course the answer to this conundrum lies in the distinction between subtractive and additive techniques of colour-mixing. Dyes and pigments are the components of the subtractive system because, when mixed, they subtract from white light, or, strictly speaking, they absorb ever more of its wavelengths. The additive system, on the other hand, mixes coloured light: equal quantities of red, green and blue (yes, green) combine to form white (for human vision, at least).
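
For the programmer-minded, the difference is easy to demonstrate. Below is a minimal sketch in Python (my own illustration, nothing to do with the poster paints of my childhood): additive mixing sums coloured light channel by channel, while subtractive mixing multiplies reflectances, each pigment absorbing part of what the others let through. The same three primaries that add up to white light multiply down to black paint.

```python
# A toy illustration of additive versus subtractive colour mixing,
# using normalised (red, green, blue) triples in the range 0..1.

def mix_additive(*lights):
    """Coloured lights add: sum each channel, clipping at full intensity."""
    return tuple(min(1.0, sum(light[i] for light in lights)) for i in range(3))

def mix_subtractive(*pigments):
    """Pigments subtract: reflectances multiply, starting from a white surface."""
    result = (1.0, 1.0, 1.0)
    for pigment in pigments:
        result = tuple(result[i] * pigment[i] for i in range(3))
    return result

red, green, blue = (1, 0, 0), (0, 1, 0), (0, 0, 1)

print(mix_additive(red, green, blue))     # (1.0, 1.0, 1.0) -> white light
print(mix_subtractive(red, green, blue))  # (0.0, 0.0, 0.0) -> black paint
```

Real pigments are impure reflectors, of course, which is why mixing every colour in a cheap paint set yields a muddy brown rather than this idealised black, and never anything approaching white.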

So far so good: whiteness consists of the combination of equal quantities of light from across the visible spectrum. But the problems really only start here, because although we have established that white is not a chromatic colour, we haven't yet established much else besides. Indeed, if it weren't for the fact that we can easily point to instances of white in the world, the challenge of exemplifying the concept would be almost impossible. One possible strategy might be to say that white is the opposite of black. This is helpful, but we're not out of the woods yet. Far from it: we're heading straight into a thicket, and by the time we're through to the other side I hope to have convinced you that, in certain circumstances and without any change to its chemical composition or intensity, white can become a perfectly serviceable simulation of black.

All we need for this thought experiment is a dimmish room, a white screen and a projector. So long as the environment isn't too bright, we should be able to see any image cast onto the screen by the projector. First imagine that the projector is off. Every normally sighted human capable of expressing a rational opinion about what they see would describe the screen in this state as white. For the sake of accuracy, let's also measure the intensity of light reflected from the screen and determine that it is currently reflecting light at an exposure value (EV) of 5, which is perfectly good for reading and the like but isn't too bright for our purposes.

Now let's turn the projector on and cast a blank rectangle of light onto the screen. The physical properties of the screen haven't changed, so it seems likely that everyone would still agree that the screen remains white, though the illuminated parts are now obviously brighter than the unlit parts. If we take another light-measurement of the unlit portions of the screen, we will find that they remain at an EV of 5 (i.e. what everyone agreed was white).

Now, let's project a simple greyscale image onto the screen, an image of a room with black and white floor tiles, and ask the audience whether the tiles look black and white. Once again everyone will agree: the tiles in the image look black and white. We can also take another light-measurement of the black tiles, and once again we will find that they remain at EV 5, or possibly even slightly brighter.
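
For anyone who wants the arithmetic, here is a small Python sketch of the experiment (the luminance figure and the judgement thresholds are my own illustrative assumptions, not measurements). Exposure value follows the standard reflected-light formula EV = log2(L × S / K), and a crude "anchoring" rule, judging shade relative to the brightest thing currently in view, is enough to flip the very same reading from white to black:

```python
import math

def ev(luminance, iso=100, k=12.5):
    """Reflected-light exposure value: EV = log2(L * S / K), where K = 12.5
    is a common light-meter calibration constant."""
    return math.log2(luminance * iso / k)

def judged_shade(luminance, scene_max):
    """A crude anchoring rule: shade is judged relative to the brightest
    luminance currently in view, not on an absolute scale."""
    ratio = luminance / scene_max
    if ratio > 0.7:
        return "white"
    if ratio < 0.05:
        return "black"
    return "grey"

screen = 4.0  # cd/m2 reflected by the unlit screen (an assumed figure)

print(round(ev(screen)))                             # 5, as stipulated above

# Projector off: the screen is the brightest surface in the dim room.
print(judged_shade(screen, scene_max=screen))        # white

# Image projected: the lit "white" tiles are ~100 times brighter.
print(judged_shade(screen, scene_max=screen * 100))  # black
```

Nothing about the screen changes between the last two lines; only the brightest luminance in view does.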

So, what's going on? How can an objectively white thing seem so uncontentiously black? Philosophers and psychologists, like well-meaning dads, are tempted to jump in at this point to inform us that the phenomenon is the result of an illusion in which we misjudge, misread or miscalculate the white as black. Essentially they take the view that illusion is dependent upon a failure of cognition of some kind. Failure does indeed play an instrumental role in the explanation of what is going on, but the failure is definitely not one of reason (judgement), literacy (reading) or computation (calculation), nor in fact is it a failure of perception: it is a failure of sensory discrimination.

So, let's set the question of illusion aside here and start instead with a simple explanation of sensory discrimination. All organisms engage with the world by means of their senses. If senses are triggered, organisms respond. Brains, then, are part of sophisticated systems of responsiveness in which organisms possess both genetically acquired and learned responses to the things they encounter. Some stimuli will inevitably be more important than others, and it will therefore be in each organism's interests to respond advantageously to relevant stimuli and to remain unresponsive to irrelevant ones. Sensory discrimination, then, is a vital capacity that enables organisms to respond in different ways to different stimuli.

Equipped with this understanding of sensory discrimination, we can return to the projected image scenario and discover that we need not assume any failures of reasoning at all. All we need to assume is that the depicted black tiles trigger many of the same sensory responses as would actual black tiles seen from the depicted viewpoint. On a perceptual level we would have no difficulty in differentiating between the simulated black and an actual instance of black. And if we were practised in skills of rational judgement, there is no reason why these could not enable us to recognise the differences between simulation and reality. Nonetheless, it is important to repeat that nothing hangs on these skills. Nor would any improvement in our skills of judgement make the slightest difference to our susceptibility to discrimination failure, to the seeing, in this case, of white as black.
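
A toy encoder makes the point vividly (this is my own sketch, with invented luminances, and the simplifying assumption that early vision transmits local contrast rather than absolute brightness; it is not a model of actual retinal coding). Two physically different patches that stand in the same contrast to their surroundings yield the same sensory code, and at that level there is simply nothing left to discriminate:

```python
def sensory_code(centre, surround):
    """A toy early-vision encoder: what gets transmitted is the local
    contrast of a patch with its surround, not its absolute luminance."""
    return round((centre - surround) / (centre + surround), 3)

# A real black tile (2 cd/m2) beside a real white tile (80 cd/m2)...
real_black = sensory_code(centre=2, surround=80)

# ...and a projected "black" tile: the objectively white screen reflecting
# 4 cd/m2 beside a projected "white" tile at 160 cd/m2.
projected_black = sensory_code(centre=4, surround=160)

print(real_black, projected_black)    # -0.951 -0.951
print(real_black == projected_black)  # True: the codes are indistinguishable
```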

Thursday, 15 May 2014

The Discovery of Blue



Around a year ago I spoke with several painters about my hunch that it would never have occurred to ancient peoples to say that distant mountains "look blue". They were unconvinced. "Everyone sees distant mountains just like we do," they told me, "so there is no reason why distant mountains wouldn't have been described as looking blue." I argued that, whilst our sensory capacities have indeed remained the same, we can only say that distant mountains look blue because we are acquainted with pictorial techniques (and their associated terminology) that were completely unknown to our distant ancestors, and even to recent cultures unfamiliar with these innovations. I had nothing to back up my claims except further argument and the near-certain knowledge that there is no evidence anywhere in the world of ancient depictions of hazy blue mountains.

Last week, with the help of another friend, I came across some very persuasive evidence in support of my claims.

In 2010 the linguist Guy Deutscher published a book entitled "Through the Language Glass", which discusses the research of W. E. Gladstone who, in 1858, a decade before taking office as British Prime Minister, published a three-volume study of the work of Homer. Gladstone noted that Homer's entire oeuvre contains not a single mention of the colour blue. Hundreds of references are made to other colours, but blue is entirely absent. Soon after the publication of Gladstone's work, a German philologist by the name of Lazarus Geiger discovered that this absence of blue is characteristic of ancient texts the world over, and is a notable instance of many similar cultural variations in the description of colours. Following Geiger's discoveries, the most obvious explanation, that the cause was attributable to colour-blindness, was ruled out. Gladstone himself had initially concluded as much, but to his credit he also offered another, more insightful explanation:
"The art of painting was wholly, and that dying was almost, unknown...The artificial colours with which the human eye was conversant, were chiefly the ill-defined, and anything but full-bodied, tints of metals. The materials, therefore, for a system of colour did not offer themselves to Homer's vision as they do to ours. Particular colours were indeed exhibited in rare beauty, as the blue of the sea and of the sky. Yet these colours were, so to speak, isolated fragments; and, not entering into a general scheme, they were apparently not conceived with the precision necessary to master them. It seems easy to comprehend that the eye may require familiarity with an ordered system of colours, as the condition of its being able closely to appreciate any one among them."
Deutscher describes several 20th-century anthropological studies in which many indigenous cultures have been found to have similarly limited vocabularies of colour. He discusses the commonplace theoretical explanations for these findings and how these mistakenly assumed various forms of physiological cause. Only very recently has it been confirmed that Gladstone's alternative explanation was correct: differences in the vocabulary of colour across cultures are the result of cultural developments (of acquired skills, tools and materials), not biological changes.

In order to elucidate the issues Deutscher invents a brilliant fantasy that deserves to be quoted at length:
"Imagine we are sometime in the distant future when every home is equipped with a machine that looks a bit like a microwave but in fact does far more than merely warm food up. It creates food out of thin air-or rather out of frozen stock cubes it teleports directly from the supermarket. Put a cube of fruit stock in the machine, for example, and at the touch of a few buttons you can conjure up any imaginable fruit: one button gives you a perfectly ripe avocado, another button a juicy grapefruit.
      But this is an entirely inadequate way to describe what this wonderful machine can do, because it is by no means limited to the few “legacy fruits” that were available in the early twenty-first century. The machine can create thousands of different fruits by manipulating the taste and the consistency on many different axes, such as firmness, juiciness, creaminess, airiness, sliminess, sweetness, tanginess, and many others that we don’t have precise words to describe. Press a button, and you’ll get a fruit that’s a bit like an avocado in its oily consistency, but with a taste halfway between a carrot and a mango. Twiddle a knob, and you’ll get a slimy lychee-like fruit with a taste somewhere between peach and watermelon.
      In fact, even coarse approximations like “a bit like X” or “halfway between Y and Z” do not do justice to the wealth of different flavors that will be available. Instead, our successors will have developed a rich and refined vocabulary to cover the whole space of possible tastes and consistencies. They will have specific names for hundreds of distinct areas in this space and will not be bound by the few particular tastes of the fruit we happen to be familiar with today.
      Now imagine that an anthropologist specializing in primitive cultures beams herself down to the natives in Silicon Valley, whose way of life has not advanced a kilobyte beyond the Google age and whose tools have remained just as primitive as they were in the twenty-first century. She brings along with her a tray of taste samples called the Munsell Taste System. On it are representative samples of the whole taste space, 1,024 little fruit cubes that automatically reconstitute themselves on the tray the moment one picks them up. She asks the natives to try each of these and tell her the name of the taste in their language, and she is astonished at the abject poverty of their fructiferous vocabulary. She cannot comprehend why they are struggling to describe the taste samples, why their only abstract taste concepts are limited to the crudest oppositions such as “sweet” and “sour,” and why the only other descriptions they manage to come up with are “it’s a bit like an X,” where X is the name of a certain legacy fruit. She begins to suspect that their taste buds have not yet fully evolved. But when she tests the natives, she establishes that they are fully capable of telling the difference between any two cubes in her sample. There is obviously nothing wrong with their tongue, but why then is their langue so defective?
      Let’s try to help her. Suppose you are one of those natives and she has just given you a cube that tastes like nothing you’ve ever tried before. Still, it vaguely reminds you of something. For a while you struggle to remember, then it dawns on you that this taste is slightly similar to those wild strawberries you had in a Parisian restaurant once, only this taste seems ten times more pronounced and is blended with a few other things that you can’t identify. So finally you say, very hesitantly, that “it’s a bit like wild strawberries.” Since you look like a particularly intelligent and articulate native, the anthropologist cannot resist posing a meta-question: doesn’t it feel odd and limiting, she asks, not to have precise vocabulary to describe tastes in the region of wild strawberries? You tell her that the only things “in the region of wild strawberry” that you’ve ever tasted before were wild strawberries, and that it has never crossed your mind that the taste of wild strawberries should need any more general or abstract description than “the taste of wild strawberries.” She smiles with baffled incomprehension.
      If all this sounds absurd, then just replace “taste” with “color” and you’ll see that the parallel is quite close. We do not have the occasion to manipulate the taste and consistency of fruit, and we are not exposed to a systematic array of highly “saturated” (that is, pure) tastes, only to a few random tastes that occur in the fruit we happen to know. So we have not developed a refined vocabulary to describe different ranges of fruity flavor in abstraction from a particular fruit. Likewise, people in primitive cultures - as Gladstone had observed at the very beginning of the color debate - have no occasion to manipulate colors artificially and are not exposed to a systematic array of highly saturated colors, only to the haphazard and often unsaturated colors presented by nature. So they have not developed a refined vocabulary to describe fine shades of hue. We don’t see the need to talk about the taste of a peach in abstraction from the particular object, namely a peach. They don’t see the need to talk about the color of a particular fish or bird or leaf in abstraction from the particular fish or bird or leaf. When we do talk about taste in abstraction from a particular fruit, we rely on the vaguest of opposites, such as “sweet” and “sour.” When they talk about color in abstraction from an object, they rely on the vague opposites “white/light” and “black/dark.” We find nothing strange in using “sweet” for a wide range of different tastes, and we are happy to say “sweet a bit like a mango,” or “sweet like a banana,” or “sweet like a watermelon.” They find nothing strange in using “black” for a wide range of colors and are happy to say “black like a leaf” or “black like the sea beyond the reef area.”
      In short, we have a refined vocabulary of color but a vague vocabulary of taste. We find the refinement of the former and vagueness of the latter equally natural, but this is only because of the cultural conventions we happen to have been born into. One day, others, who have been reared in different circumstances, may judge our vocabulary of taste to be just as unnatural and just as perplexingly deficient as the color system of Homer seems to us."
Cyanometer, 1789, an instrument for measuring the colour of the sky, by Swiss physicist Horace-Bénédict de Saussure and German naturalist Alexander von Humboldt


Wednesday, 7 May 2014

Actions Louder Than Words



In a recent article for the New York Times, Adam Grant investigates the psychology of moral education and discusses research suggesting that children are more responsive to praise aimed at the self than to praise aimed at the act. He cites a study in which researchers praised children's generosity in one of two ways: either by targeting the act, saying “It was good that you gave some of your marbles to those poor children. Yes, that was a nice and helpful thing to do,” or by targeting the self: “I guess you’re the kind of person who likes to help others whenever you can. Yes, you are a very nice and helpful person.”

Two weeks later, when the researchers gave these same children more opportunities to give and to share, they found that the second group were much more generous than the children whose actions alone had been praised. When only their actions were praised, children tended to make no association between their actions and their own moral character, whereas when they were praised for being generous they were encouraged to think of themselves as generous people.

Although the article makes no mention of the closely related research on motivation, it should be noted that praise without content (i.e. praise with no information about what the praise is actually for) has been found to be of limited long-term value no matter how it is targeted, and it has also been shown in some cases to do long-term harm to intrinsic motivation (I have written of this previously here, here and here).

Grant also mentions the flipside of praise and the research of June Price Tangney, which distinguishes between shame on the one hand (the feeling of being a bad person) and guilt on the other (the feeling of having done a bad thing). Grant writes: “Shame is a negative judgment about the core self. […] In contrast, guilt is a negative judgment about an action, which can be repaired by good behaviour.”

Appropriately targeted, informative, positive judgments reinforce the development of a child's moral character, whilst disappointment – as opposed to punishing disapproval – is regarded as the best response to poor behaviour (once again protecting the child’s self-image). This seems to be the message, but Grant also points out that there appears to be an optimum period – between the ages of 5 and 10 – in which this strategy has greatest influence; outwith this five-year window the impact is negligible.

Grant also mentions a classic study by J. Philippe Rushton in which 140 children were given tokens for winning a game. They then watched a "teacher" play the game either selfishly or generously, followed by a pronouncement from the same teacher on the value of taking, giving or neither. They then had the option of donating some of their tokens to a child with none. The results were startling. In every case the children were significantly influenced by the actions of the teacher but not by the teacher's words. Even when the teacher preached selfishness but gave generously, 49% more children than the norm also gave generously. Two months later, Rushton returned to see if any residue of these effects remained. To his surprise, the most generous children were those who had previously watched the teacher give generously whilst saying nothing.

Should we be surprised that these children were more likely to match the behaviour of the teacher when it was not accompanied by verbal exemplification of the underlying moral principles? Although we commonly say that "actions speak louder than words", it seems reasonable to suppose that actions plus speech would be even more effective. Certainly in the praise case, mentioned earlier, there wasn't anything for the researcher to actually model in behavioural terms, and it therefore seems fair to conclude that the information accompanying the praise provided the means for each praised child to generate a self-conception: to be able to describe themselves as "nice and helpful".

The two examples discussed present us with two quite different forms of learning, and it might be worthwhile to distinguish between these in a little more detail. Psychology provides two terms, two types of knowledge in fact, that might be useful here: "declarative knowledge" and "procedural knowledge", or what the British philosopher Gilbert Ryle called "knowing that" and "knowing how".

When children in the first study were praised for their generosity, their actions were already demonstrative of generosity: of knowledge of the procedure of generosity. Targeted praise reinforced this knowledge with additional declarative information that seems to have disposed these children towards further acts of generosity. On the other hand, the children who learnt generosity by example were not provided with any declarative understanding of how to represent themselves, yet they behaved more generously than their verbally instructed peers. How should we interpret this?

We should be careful not to make the mistake of supposing that, by emulating generous actions, children simultaneously acquire the declarative knowledge necessary to describe themselves as generous. If that were so, then the self-defining information contained in the praise case would be of no consequence. It seems far more likely that the token-sharing children simply copied the teacher because they wished to fit in, or because they assumed that giving tokens away was the done thing. Perhaps they hadn’t yet formed any "self-image" in this regard but were simply behaving in a mimetic mode that might later be galvanised, i.e. explicitly represented (with the help of a guide or teacher) as a character trait as opposed to a mere unreflective form of mimicry.

This is not to say that unreflective actions are mindless reflexes, but what it does strongly imply, if not confirm, is that such actions are not driven by any form of conceptual knowledge. In other words, they are procedural capacities, whereas morals are abstract concepts that we attribute to our own and other people's actions, and are therefore neither an intrinsic nor a necessary part of prosocial behaviour. Nonlinguistic humans and animals are not immoral, they are amoral: they simply do not have the conceptual tools necessary for moral reasoning (or reasoning of any kind). But even a piranha doesn't eat its own kin.

One outstanding question remains: why are children more generous when generosity is exemplified in the absence of verbal reinforcement? The most obvious answer is probably the right one: they take their cues wherever they can find them, but when cues of different kinds are available they will emulate procedural forms of exemplification over declarative ones. The simpler the representation, the better. In fact, verbal representation (declarative conceptualisation) would appear to distract significantly from procedural exemplification by introducing competing behavioural options.

The point is this: moral action is enabled by conceptual capacities that we acquire through language. Prosocial behaviour, on the other hand, is conceivable in the absence of language by virtue of nonverbal procedures of exemplification that must have preceded language - including during human infancy.