In a paper entitled "What Makes Perceptual Content Non-conceptual?"
(1998), Sean Dorrance Kelly examines Gareth Evans' (1982) arguments for a
non-conceptual understanding of perception. Kelly lists these as follows:
"a) that perceptual content is the same for
humans and animals,
b) that perceptual content is
belief-independent,
c) that perceptual content is, sometimes
at least, irreducibly articulated in terms of dispositions by the perceiver to
act upon the object being perceived, and
d) that perceptual content is more finely
grained than the concepts in terms of which we classify our thoughts."
Before I discuss each of the above (in this and following posts), I'd first like to say
a few words about what Kelly, Evans and others term "perceptual
content." In my view, to speak of content in this way is to beg the
question: content of what form? If the answer is "representational
content", as Kelly and Evans make plain, then we already have an unexamined
assumption on the table before we have even begun any serious analysis. It
is my contention that a thorough analysis of social practices of representation lies at the heart of any principled explanation of perception,
and any unfounded assumption about what representations are, and more importantly how they function, is both premature and potentially obstructive to insight.
For this reason, when discussing perception I intend to speak only
of perception and to reserve the term "perceptual content"
specifically for the discussion of theories that make explicit use of the
term.
In (a) Evans takes the view that humans and animals share the
same "informational states" and thus the same perceptual
content. He writes: "The informational states which a subject acquires
through perception are non-conceptual, or nonconceptualised. Judgements based
upon such states necessarily involve conceptualisation." Although I agree
that perception is non-conceptual and that judgements necessarily involve
conceptualisation, I find Evans' claim that perceptual content is the same for
humans and animals to be lacking in argumentative force. All he provides is a rather flimsy assertion that
perception is non-conceptual. Furthermore, as John McDowell (1994) points
out: "the word 'content' plays just the role in Evans' account that is
played in that position by the fraudulent word 'conceptual.'" Unlike Evans, McDowell
(who edited and published Evans' book after his death in 1980) conceives of
perception as fundamentally conceptual. Nonetheless, as Daniel Hutto
(1998) points out, McDowell's commitment to conceptual content, like that of
other intellectualists, creates a "heavy burden when it comes to
explaining the origin and development of concepts."
The problem is readily located in the entailments of the content view. If we conceive of brains as containers, then it will be necessary to provide both a
theory and evidence of what these containers actually contain and how this
content is generated, distributed, stored, retrieved and interpreted. The
alternative is to conceive of brains as embodied organs that have evolved in
response to challenging environmental influences and which have therefore
developed highly sophisticated dispositions to respond to a variety of causal
encounters. On this view, "content" is simply a convenient but misleading
characterisation of these complex dispositions to respond. When someone asks
your name, you are disposed to say your name — no inner representation is
necessary. Nonetheless, despite the explanatory power of this view, the
mainstream position throughout much of the cognitive sciences and philosophy
holds that brains are containers and that their content is fundamentally
representational. A variety of competing representational theories are proposed
but in all cases the basic idea is that brains obviously cannot construct their
own 1:1 copy of the world with which to do their work. Instead, it is assumed that brains generate representational states which correspond in informationally
significant ways with the encountered world. A minimal version of this approach
construes content as a form of "covariation" between internal (brain)
and external (world) events — the idea being that as a perceived event
occurs, corresponding changes occur in the brain and as the event varies, so
too do the corresponding brain states. These states are thus regarded as
representational states or, as Mark Cain (2013) puts it: "according to the
causal covariation theory, the LOT [Language Of Thought] symbol
HORSE means horse because tokens of that symbol are caused by, and only
by, horses." Hutto and Myin (2013) are unconvinced:
"If information is nothing but covariance
then it is not any kind of content—at least, it is not content defined, even in
part, in terms of truth-bearing properties. The number of a tree's rings can
vary with the age of the tree; however, this doesn't entail that the first
state of affairs says or conveys anything true about the second, or vice versa.
The same goes for states that happen to be inside agents and which reliably
correspond with external states of affairs—these too, in and of themselves,
don't 'say' or 'mean' anything just in virtue of instantiating covariance
relations."
The idea that mental content "means" or is "about" the things
it is directed towards is a central article of faith amongst many, if not all,
representationalists, but as Kathleen Akins (1996) points out:
"In an important sense we do not really
know what "aboutness" is. Certainly, at the outset, a vague realism
about the directedness of mental/neural events is adopted: representations are
"tied" to objects and properties and hence (there being no good
reason to suppose otherwise) bear some kind of relation to them. But if we do
not know exactly what it means to regard a particular as a particular, to see
this thing as being of a certain type, this place as the same place, and so
on—hence what kinds of capacities or abilities are involved in having
representations that are about those things—then we do not know, in any
substantive sense, in what that relationship consists. We only trust that it is."
Theorists are right to enquire into the relationships between neural events and the perceived world, but the claim that those neural events are "about" the world is simply too uncertain to merit the credibility it currently commands, especially when that credibility is simply taken on trust.
On the issue of whether we know what aboutness is: perhaps we don't in the context of alleged inner representations, but in the publicly perceptible world aboutness is very well understood. Aboutness is the
meaning we attribute to symbolic representations of numerous kinds and forms,
all of which rely on rules of use for their efficacy. For something to be about something else (for it to signal, refer, indicate, signify, token, connote, or denote) is for it to be generated and interpreted within an
already intelligent system. Without such a system
or rule to give it meaning, a wink is nothing more than a contraction of the
eyelids (Ryle 1968). Without a system or rule, a symbol or signal cannot reliably stand in for (i.e. represent) anything (Bickhard and Terveen 1995). It
merely is what it is.
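Ryle's wink has a close analogue in computing. In the sketch below (illustrative only), one and the same physical token yields quite different "meanings" under different rules of use, and no meaning at all without one:

```python
# One physical token: two bytes, nothing more.
token = bytes([0x41, 0x42])

# Under different rules of use, the same token stands in for
# entirely different things:
print(token.decode("ascii"))         # "AB" under the ASCII convention
print(int.from_bytes(token, "big"))  # 16706 read as a big-endian integer
print(token.hex())                   # "4142" under a hex-display rule

# Strip the conventions away and the token merely is what it is:
# two voltage patterns, two ink marks, a contraction of the eyelids.
```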
Moreover, acts of symbolic communication are intentional, and herein lies the greatest challenge in proposing a coherent account of mental content. For if an inner representation is needed to drive an intentional action, then a further intentional representation is needed to initiate this representation, and this regress of representations is without end. An alternative naturalistic theory is therefore needed that avoids this regress, not to mention the equally illogical assumption that some form of symbolic language underlies language itself (Fodor 1975, Fodor and Pylyshyn 1988). As Gilbert Ryle put it to Daniel Dennett in a letter in 1976: "Fodor beats Locke in the intricacy of his 'wires-and-pulleys', when what was chiefly wrong with Locke was the (intermittent) intricacy of his 'wires-and-pulleys'!"
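The shape of this regress can be made vivid with a toy sketch (hypothetical function names, a caricature rather than a model of any actual theory): if every intentional act must be initiated by a prior inner representation, and forming that representation is itself an intentional act, the chain never bottoms out:

```python
# Toy sketch of the representational regress (illustrative only).

def act_intentionally(action):
    # On the content view, an intentional action must be driven by a
    # prior inner representation...
    representation = form_representation(action)
    return representation  # never reached: there is no base case

def form_representation(action):
    # ...but forming that representation is itself an intentional act,
    # so it demands a representation of its own, and so on without end.
    return act_intentionally(("represent", action))

try:
    act_intentionally("say your name")
except RecursionError:
    print("The regress never grounds out in a first representation.")
```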
In the next post I will explore how the challenge of intentional action can be met
without recourse to inner representations. Such an account will also prove
helpful in understanding how we might reasonably attribute intentional action
to animals. In so doing it may also be possible to give some substance to Evans' intuition about the common foundations of animal and human perception.