1. Introduction: The Battle of the Bulge
In trying to develop a theory of how consciousness arises in the brain, it is important to begin with an account of which kinds of brain events can be conscious. Once we know which brain events are conscious, we can investigate what distinguishes them from those that are not. Pretty much everyone agrees that some activities in the brain are never conscious. For example, it would be hard to find researchers who think there can be conscious events in the cerebellum. Most researchers nowadays also deny that there can be conscious events in subcortical structures, though there is an occasional plea for the thalamus (Baars, 1988) or the reticular formation (Damasio, 1999). But what about the neocortex? Is any activity in that folded carapace a candidate for conscious experience? Does each cortical neuron vie for the conscious spotlight, like the contestants on a televised talent show? (Recall Dennett on “cerebral celebrity.”)
To make progress on this question, we can ascend from brain to mind, and ask which of our psychological states can be conscious. Answers to this question range from bony to bulgy. At one extreme, there are those who say consciousness is limited to sensations; in the case of vision, that would mean we consciously experience sensory features such as shapes, colors, and motion, but nothing else. This is called conservatism (Bayne), exclusivism (Siewert), or restrictivism (Prinz). At the other extreme, there are those who say that cognitive states, such as concepts and thoughts, can be consciously experienced, and that such experiences cannot be reduced to associated sensory qualities; there is “cognitive phenomenology.” This is called liberalism, inclusivism, or expansionism. If defenders of these bulgy theories are right, we might expect to find neural correlates of consciousness in the most advanced parts of our brain.
In this discussion, I will battle the bulge. I will sketch a restrictive theory of consciousness and then consider arguments for and against cognitive phenomenology.
2. The Locus of Consciousness
Not only do I think consciousness is restricted to the senses; I think it arises at a relatively early level of sensory processing. Consider vision. According to mainstream models in neuroscience, vision is hierarchically organized. Let’s consider where in that hierarchy consciousness arises.
Low-level vision, associated with processes in primary visual cortex (V1), registers very local features, such as small edges and bits of color. Intermediate-level vision is distributed across a range of brain areas with names like V2 through V7. Neural activations in these areas integrate local features into coherent wholes but, at the same time, preserve the componential structure of the stimulus. The intermediate level represents shapes as presented from particular vantage points, separated from a background, and located at some position in the visual field. High-level vision abstracts away from these features and generates representations that are invariant across a range of different viewing positions. Such invariant representations facilitate object recognition; a rose seen from different angles in different light causes the same high-level response, allowing us to recognize it as the same. Following Ray Jackendoff (1987), I think consciousness arises at the intermediate level. We experience the world as a collection of bounded objects seen from a particular point of view, not as disconnected edges or viewpoint-invariant abstractions.
To drive home this point, consider some facts about low- and high-level vision. The contents of low-level vision don’t seem to match the contents of experience. For example, when two distinct colors are rapidly flickered, we experience one fused color, but low-level vision treats the flickered colors as distinct; fusion occurs at the intermediate level (Jiang et al., 2007; see also Gur and Snodderly, 1997). Likewise, the low level seems oblivious to readily perceived contours, which are registered in intermediate areas (Schira et al., 2004). There is also evidence that the suppression of blink responses doesn’t arise until intermediate-level area V3; V1 activity cannot explain why blinks go unperceived (Bristow et al., 2005). High-level vision does better than low-level vision on these features, but it suffers from other problems when considered as a candidate for consciousness. Many high-level neurons are largely indifferent to size, position, and orientation. High-level neurons can also be indifferent to handedness, meaning they fire the same way whether an object faces right or left. In addition, high-level neurons often represent complex features, so activity in a single neuron might correspond to a face, even though faces are highly structured objects with clearly visible parts. It would seem that the neural correlates of visual consciousness cannot be so sparsely coded: just as we can focus on different parts of a face, we should be able to selectively enhance activity in neurons corresponding to the parts of a face, rather than having a single neuron correspond to the whole.
Such considerations lead me to think that, in vision, the only cells that correspond to what we experience are those at the intermediate level. I think this is true in the other senses as well. For example, when we listen to a sentence, the words and phrases bind together as coherent wholes (unlike in low-level hearing), and we retain specific information such as accent, pitch, gender, and volume (unlike in high-level hearing). Across the senses, the intermediate level is the only level at which perception is conscious.
I now want to argue that all conscious experience can be fully explained by appeal to perceptual states at the intermediate level, including conscious states that seem to be cognitive in nature. That makes me a restrictivist.
3. Arguments for Cognitive Qualia
Expansionists say we can be conscious of concepts and thoughts, and that such experiences outstrip anything going on at the intermediate level of perception. Here I will review three of my favorite expansionist arguments. I cannot do justice to these arguments here, but they will serve to illustrate some of the response strategies available to restrictivists.
Tim Bayne (2009) defends expansionism by appeal to a neurological disorder called associative visual agnosia. Such agnosics cannot recognize objects, but they seem to see them. When presented with an object, they can accurately describe or even draw its shape, but they can’t say what it is. Bayne thinks their experiences are incomplete. He thinks knowing the identity of an object changes our experience of it. This is intuitively plausible. Consider the picture below. You may not be able to make it out at first, but when you do (hints below), it will change phenomenologically.
Does the phenomenological change here require cognitive phenomenology? I think not. Instead, we can suppose that our top-down knowledge of the meaning changes how we parse the image. When we decipher the picture, we imaginatively impose a new orientation (we rotate 90 degrees to the left); we segment figure and ground (the lower black bar becomes background); and we generate emotions (social delight) and verbal labels (the name “Tim”), which we experience consciously along with the image; these are just further sensory states—bodily feelings in the case of emotions, and auditory images in the case of words. I think features of this kind can also explain what is missing in agnosia. Without meaning, images can be hard to parse, and associated images and behaviors do not come to mind.
Another argument comes from Charles Siewert (1998; forthcoming). He focuses on our experience of language. Sometimes, when reading or hearing sentences, we undergo a change in phenomenology, and that change occurs as a result of a change in our cognitive interpretation of the meanings of the words. For example, if you read an obscure sentence without following it and then re-read it with full comprehension, your experience seems to change. Likewise if you change your interpretation of an ambiguous word. Phenomenology also changes when we repeat a word until it becomes meaningless, or when we learn the meaning of a word in a foreign language. In all these cases, we encounter the same words across two different conditions, but our experience shifts, suggesting that the assignment of meaning adds something above and beyond the sound of the words.
The intuitions here are compelling. Clearly the very same sentence can give rise to different experiences depending on how it is understood. But there are many sensory changes that take place as a result of sentence comprehension. First, we form sensory imagery. If I tell you that ng’ombe means cow in Swahili, you will undoubtedly form a visual image of how cows appear. Repeating “cow” over and over will cause the cow image to fade and attention to focus on the sound of that peculiar word. Second, comprehension affects parsing. When we come to comprehend a sentence, intonation and phrase bindings may alter in our verbal imagery. If I tell you that habari gani is a question, you will repeat it with a rising pitch on the second word. Third, comprehension entails knowing how to go on in a conversation; if I ask, “Habari gani?” you might be left in a silent stupor, but, if I say “How are things going?” your mind will fill with appropriate verbal replies. Fourth, meaning affects emotions. Incomprehension can engender feelings of confusion, frustration, or curiosity. Comprehension can evoke emotions that depend on meaning. To take an example from Siewert, if I say, “I’m very hot,” you might feel annoyed at my vanity or concerned about my thermal state. Different actions may follow and produce bodily sensations: an eye-roll of exasperation or a hospitable dash to the air conditioner.
The third argument I will consider comes from David Pitt (2004). He begins with the observation that we often know what we are thinking, and we can distinguish one thought from another. This knowledge seems to be immediate, not inferential, which suggests we know what we are thinking by directly experiencing the cognitive phenomenology of our thoughts.
The most obvious reply is that knowledge of what we are thinking is based on verbal imagery. I know that I am thinking, “Expansionism lacks decisive support,” by experiencing those very words in my mind’s ear. Pitt counters that knowledge of what we are thinking cannot consist in verbal imagery, because comprehension is immediate, and verbal images are only indirectly related to thoughts. Just as we can hear a sentence without knowing its meaning, we could hear words in the mind’s ear without knowing what they mean. But, when we become aware of our thoughts, we are immediately aware of their meaning.
I think this is a kind of illusion. We erroneously believe that we are directly aware of the contents of our thoughts when we hear sentences in the mind’s ear. This belief stems from two things. First, we often use verbal imagery as a vehicle for thinking; for example, we might work out technical philosophical problems using language. Second, when contemplating a word that we understand, we can effortlessly call up related words or imagery, which gives us the impression that we have a direct apprehension of the meaning of that word. Our fluency makes us mistake awareness of a word for awareness of what it represents. This illusion goes away when we hear words in a foreign tongue or sentences we cannot parse. The claim that we grasp the content of our words, and not just the words themselves, cannot be maintained without an account of what that content consists in. Many philosophers believe that content is not in the head but in the world (the meaning of a thought is the fact to which it refers), in which case content could not be directly grasped. Pitt might suggest that we grasp words in a language of thought, which are intrinsically meaningful, but the claim that we have a language of thought, beyond the languages we speak, is hugely controversial, and even its primary defenders (e.g., Jerry Fodor) have never supposed that the language of thought is conscious—otherwise they could convince people that it exists by introspection.
Putting these points together, I think restrictivists should admit that thinking has an impact on phenomenology, but that impact can be captured by appeal to sensory imagery, including images of words, emotions, and visual images of what our thoughts represent. Expansionists must find a case where cognition has an impact on experience without causing a concomitant change in our sensory states. That’s a tall order.
4. Arguments Against Cognitive Qualia
At this point the dispute between restrictivists and expansionists often collapses into a clash of introspective intuitions. Restrictivists give their strategies for explaining alleged cases of cognitive phenomenology, and expansionists object that these strategies do not do justice to what their experiences are like. By way of conclusion, I will try to break this stalemate by sketching five reasons for thinking restrictivism is preferable even if introspection does not settle the debate.
The first argument was already intimated above. To make a convincing case for cognitive phenomenology, expansionists should find a case where the only difference between two phenomenologically distinct states is a cognitive difference. But so far, no clear, uncontroversial case has been identified. This is a striking failure given the ease with which we can find clear cases of sensory qualities. If every phenomenal difference has a plausible candidate for an attendant sensory difference, the simplest theory would say that sensory qualities, which everyone agrees exist, exhaust phenomenology.
The second argument points to the fact that alleged cognitive qualities differ profoundly from sensory qualities in that the latter can be isolated in imagination. If I see an ultramarine sky, I can focus on the nature of that hue and imagine it filling my entire visual field with no other phenomenology. In contrast, it seems impossible to isolate our thoughts. For example, it doesn’t seem possible to think about the fact that the economy is in decline without any words or other sensory imagery. If other qualia can be isolated, why not cognitive qualia?
Third, it is nearly axiomatic in psychology that we have poor access to cognitive processes. For example, when comparing three identical objects, people prefer the one on the right, but they don’t realize it (Wilson and Nisbett, 1978). All we experience consciously is the outcome, a hedonic preference for the rightmost object, together perhaps with some confabulated justifications of the choice, which may be experienced through inner speech. Given that cognitive processes are characteristically inaccessible, the claim that we can be conscious of our thoughts is perplexing. After all, we don’t have access to all the biases that influence how we think. The only processes we ever seem to experience consciously are those that we have translated, with great distortion, into verbal narratives.
A fourth argument follows on this one. The incessant use of inner speech is puzzling if we have conscious access to our thoughts. Why bother putting all this into words when thinking to ourselves, with no plans for communication? I suspect that language is a major boon precisely because it can compensate for the fact that thinking is unconscious. It converts cognition into sensory imagery and allows us to take the reins.
Finally, expansionism seems to dash hopes for a unified theory of consciousness. If only one kind of mental state can be conscious—intermediate-level perceptions—then there is hope of finding a single mechanism of consciousness. Indeed, I think that mechanism has been identified; there is evidence for the conclusion that perception is conscious when and only when we attend (Mack and Rock, 1998; Prinz, forthcoming). But there is little reason to think a single mechanism could explain how both perception and thought can be conscious, if cognitive phenomenology is not reducible to perception. This is especially clear if the mechanism is attention. There is no empirical evidence for the view that we can attend to our thoughts. There are no clear cognitive analogues of pop-out, cuing, resolution enhancement, fading, multi-object monitoring, or inhibition of return. Thoughts can direct attention, but we can’t attend to them. Or rather, thoughts become objects of attention only when they are converted into images, words, and emotions. Expansionists might say that thoughts and sensations attain consciousness in different ways, but, if so, why think that the term “consciousness” has the same meaning when applied to thoughts, if it does not refer to the same mechanism? Disunity threatens to make expansionism look like an unwitting pun.
For some people it’s introspectively obvious that we have cognitive phenomenology; for me, I am somewhat embarrassed to confess, it’s obvious that we don’t. Perhaps introspection cannot resolve this debate (cf. Schwitzgebel, forthcoming). Even so, I hope these concluding arguments might collectively tip the balance towards restrictivism.
References
Baars, B. J. (1988). A cognitive theory of consciousness. Cambridge: Cambridge University Press.
Bayne, T. (2009). Perception and the reach of phenomenal consciousness. Philosophical Quarterly, 59, 385-404.
Bristow, D., Haynes, J.-D., Sylvester, R., Frith, C. D., & Rees, G. (2005). Blinking suppresses the neural response to unchanging retinal stimulation. Current Biology, 15, R554-R556.
Damasio, A. R. (1999). The feeling of what happens: Body and emotion in the making of consciousness. New York, NY: Harcourt Brace & Company.
Gur, M. & Snodderly, D.M. (1997). A dissociation between brain activity and perception: Chromatically opponent cortical neurons signal chromatic flicker that is not perceived. Vision Research, 37, 377–382.
Jackendoff, R. (1987). Consciousness and the computational mind. Cambridge, MA: MIT Press.
Jiang, Y., Zhou, K., & He, S. (2007). Human visual cortex responds to invisible chromatic flicker. Nature Neuroscience, 10, 657-662.
Mack, A., & Rock, I. (1998). Inattentional blindness. Cambridge, MA: MIT Press.
Pitt, D. (2004). The phenomenology of cognition, or, what is it like to think that P? Philosophy and Phenomenological Research, 69, 1-36.
Prinz, J. J. (forthcoming). The conscious brain. New York: Oxford University Press.
Schira, M., Fahle, M., Donner, T., Kraft, A., & Brandt, S. (2004). Differential contribution of early visual areas to the perceptual process of contour processing. Journal of Neurophysiology, 91, 1716–1721.
Schwitzgebel, E. (forthcoming). Perplexities of consciousness. Cambridge, MA: MIT Press.
Siewert, C. P. (1998). The significance of consciousness. Princeton, NJ: Princeton University Press.
Siewert, C. P. (forthcoming). Phenomenal thought. In T. Bayne and M. Montague, eds., Cognitive Phenomenology. Oxford: Oxford University Press.