Canada Research Chair
Institut des sciences cognitives (ISC)
Université du Québec à Montréal
Montréal, Québec, Canada H3C 3P8
School of Electronics and Computer Science
University of Southampton
Southampton SO17 1BJ, United Kingdom
ABSTRACT: It is “easy” to explain doing, “hard” to explain feeling. Turing has set the agenda for the easy explanation (though it will be a long time coming). I will try to explain why and how explaining feeling will not only be hard, but impossible. Explaining meaning will prove almost as hard because meaning is a hybrid of know-how and what it feels like to know how.
We can reduce just about everything that cognitive science needs to explain to three pertinent Anglo-Saxon gerunds — doing, feeling, and meaning — plus a fourth, which is explaining itself.
There’s doing, which covers everything that people and animals are able to do (all of their “know-how”). That’s not just moving around; it includes recognizing and manipulating objects in the world and exchanging strings of words (talking, language) about them (Harnad 2010).
Then there’s feeling: it feels like something to do most of the things people and animals are able to do (while they’re awake). It feels like something to see, recognize and manipulate an object.
Then there’s meaning: The strings of words that people (not animals) exchange mean something.
And, last, there is the problem of explaining all of the above: explaining how people can do what they can do; explaining how people can feel; and explaining how words can mean.
It has become fashionable lately to call the problem of explaining doing the “easy problem” — compared to explaining feeling (i.e., consciousness), which is the “hard problem.”
The easy problem is easy in the sense that although cognitive science cannot yet explain how people and animals are able to do what they can do, there’s no reason to expect that eventually it will not succeed. Alan Turing set out the method in 1950: Design a robot that can do anything and everything that a normal human being can do, and whatever turns out to be the successful causal mechanism inside that robot — the one that gives it the capacity to do whatever we can do — will be an explanation of how and why we ourselves are able to do what we can do (Harnad 2008).
People usually reply: But I’m not a robot, and I’m not interested in how robots can do things. I want an explanation of how I do things. (That usually means: I want an explanation of how my brain does things.) Fair enough. For those who are unsatisfied with a causal mechanism that can “merely” do anything and everything a normal human being can do — even a “Turing robot” that can pass the Turing Test, passing as one of us, indistinguishable from a real human being for an entire lifetime based only on what it does and says — the causal mechanism can be further elaborated so it also “does” everything the brain does, internally as well as externally (Harnad 2011). Externally, of course, the brain does what our bodies do. But internally there are neurons and connections and patterns of activation, even chemical “doings.” A “Turing biorobot” must do both.
Now once we have a Turing biorobot — its doings, external and internal, indistinguishable from our own — the “easy problem” of explaining doing is solved. But what about feeling? Would the Turing robot or the Turing biorobot feel? Unfortunately, unlike doing, feeling is not something that can be observed by anyone other than the feeler. And the Turing robot, if it is indeed indistinguishable from us, would of course behave exactly as if it feels. And if asked, it would reply exactly as any of us would reply: “Of course I feel! What a question!” This is called the “other minds problem”: The only one I can be sure feels is myself. With others, I have to infer it from what they do and say (including how they move and how they look and sound: facial expressions, tone of voice).
The other-minds problem, however, is not the “hard problem,” although it is related to it. Even though we can’t know for sure whether other people feel, it’s just about as probable that they do feel as the fact that apples always fall down, not up (we can’t be sure about that either, but it’s close enough): People are pretty good at “mind-reading.” So that’s why we don’t worry about whether other people really feel, or just act and talk as if they do.
Now the Turing robot is indistinguishable from the rest of us on any of these external cues; the Turing biorobot is not even distinguishable on internal cues (although we don’t normally invade people’s brains to determine whether they feel!). But we do know that Turing robots are synthetically made rather than natural, so the uncertainty about whether or not they feel is greater than it is with our fellow human beings (greater even than with our fellow animals, except perhaps the ones that are the most unlike us) (Harnad 2001).
But the “hard problem” is not that greater uncertainty about whether a Turing robot or biorobot feels. Let’s suppose it does feel. That doesn’t help, because that still doesn’t explain how and why it is able to feel, if it does feel. In the case of explaining its know-how — the causal mechanism that successfully generates its ability to do what it can do, indistinguishably from the rest of us — the explanation is complete, and accounts for anything and everything the robot can do, whether or not it can feel. If it does feel, that’s nice; but, unlike the doing, the feeling is not explained by the causal mechanism of the doing.
We might pause and consider just how hard a problem this is. We feel. For example, we feel pain when our tissues are injured. The temptation is to say that we need to feel the pain otherwise we would not notice the tissue injury and we would not do what needs to be done about it. But doing is doing. If something needs to be done, why is it not enough to have a mechanism that, when it detects tissue injury, sees to it that what needs to be done is done — withdraw the hand from the fire, avoid fire in future, remember, learn, compute and even say whatever needs to be done — without bothering to feel anything at all? What’s the bonus from feeling something? What causal role does feeling fulfill, that doing alone does not? Our Turing robot can do everything that needs to be done, whether it feels or not. If it does feel, it remains to explain what causal role the feeling itself is playing. And that’s the hard problem (Harnad 1995).
There would be an easy solution if there were psychokinetic (mind-over-matter) forces in the world, alongside ordinary electromagnetism, gravitation and atomic forces. Then feelings could be an independent further force, and their causal role would be measurable and explainable. But there aren’t any psychokinetic forces (despite generations of parapsychology experimentation seeking to detect them). And even aside from the fact that there’s no evidence for psychokinesis, not only are the known physical forces already enough to get anything that needs doing done as well as explained (whether the doings are those of a galaxy, an atom, a steam engine, an organ or an organism), but there isn’t even any causal room left for any forces beyond the known ones.
So whereas explaining doing is easy, explaining feeling is hard (perhaps even impossible). Now let’s move on to meaning: Let’s consider written words (though we could just as well have considered spoken words, or words in a gestural language): Written words have a graphical “shape”: They look like something. And, in addition, they also mean something. Now a Turing robot, like us, could detect the shape of a word: could read it, speak it, point to its referent (if it has a concrete referent, like an apple) or describe its referent (if it has a more abstract referent, such as “truth”) (Harnad 1990).
But all of that is just doing. Is meaning, too, just doing — as in the ability to point to what a word refers to (an apple) as well as to describe the word’s sense (a round, red fruit)? I suggest that it is no more true that meaning is just doing than that seeing an apple just amounts to the ability to identify an apple. It feels like something to see an apple when you are looking at one. And it also feels like something to mean an apple when you are talking (or thinking) about one.
The fact that differences in meaning are also felt differences is even clearer with words that have double meanings: the word “pound” can be used in the sense of a unit of weight or in the (British) sense of a unit of currency. The words sound the same, but they have different meanings. Context presumably sorts out for the hearer what the speaker means when he says he’s “lost a pound.”
But what about when I’m the one saying it, and all I say is “I’ve lost a pound”? Not only does it feel different (to me) to say and mean “I’ve lost a pound” in the sense of losing money, compared to what it feels like to say it in the sense of losing weight, but such differences in feeling — subtle though they are, and hard to describe — are not just differences in what I do, or can do, or am disposed to do: They are also differences in what I feel: It feels like something to mean something. It also feels like something to think, believe, understand or doubt something.
Most philosophers reject this conclusion. They suggest that meaning, thinking, believing, understanding or doubting something is not the same sort of thing as seeing, hearing, touching or tasting something, nor even like feeling a migraine or a mood. So don’t ask a philosopher; instead, ask a psychophysicist. Psychophysicists are the ones who specialize in measuring sensations and the detection of differences: Does this look (sound, smell) brighter (louder, stronger) than this? They have even narrowed it down to the “just-noticeable difference,” or JND, which is the smallest difference between two inputs that a person can tell apart.
Psychophysicists don’t philosophize; they just measure what differences people can and cannot tell apart — in other words, what people can do. But if you ask psychophysicists how people make same/different judgments, they will of course tell you that it’s based on whether things feel (look, sound, taste, smell) the same or different. You can’t do much psychophysics on someone who is fast asleep, not feeling a thing.
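As a toy illustration (mine, not the article’s — the JND value and stimulus intensities are hypothetical numbers), the psychophysicist’s behavioural same/different criterion can be sketched as a one-line decision rule:

```python
# Minimal sketch of the psychophysical same/different judgment:
# two stimuli are reported "different" only when the gap between
# them reaches the just-noticeable difference (JND).

def same_or_different(intensity_a: float, intensity_b: float, jnd: float) -> str:
    """Report 'different' only if the intensity gap is at least one JND."""
    return "different" if abs(intensity_a - intensity_b) >= jnd else "same"

# A gap smaller than the JND is judged the same...
print(same_or_different(100.0, 100.5, jnd=1.0))  # same
# ...while a gap at or above the JND is judged different.
print(same_or_different(100.0, 101.5, jnd=1.0))  # different
```

The point of the sketch is that the measurement itself records only the doing (the judgment emitted); the article’s claim is that the judgment is nonetheless made on the basis of how the stimuli feel.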
Well, in psychophysical terms, we are making same/different judgments on the basis of differences in meaning (not sound!) constantly, in our discourse. As far as measurement is concerned, JNDs in semantic space would look pretty much the way JNDs do in sensory space. It would be surprising, then, if sensory differences were felt, whereas semantic differences were unfelt. That would make talking and thinking much more like “blind-sight” — in which patients with certain kinds of brain damage report that they can no longer see at all, yet they are somehow still able to distinguish things presented to their (intact) eyes (Overgaard 2011). The current consensus is that blind-sight patients are still feeling something — even if it is only the movement of their eyes, which is controlled by an eye-movement control system that is also still intact in their brains — and they are using that feeling of involuntary movement (or the felt urge to move) as the basis for telling (some) things apart.
But “blind-semantics” would be more like “speaking in tongues,” with words coming into our ears and going out of our mouths as if they were a foreign language that we did not understand (rather the way Searle 1980 describes it, in the Chinese Room). Yet we know that we can sense what we are meaning as surely as we can sense what we are seeing. And “sensing” is just a synonym of “feeling,” here.
Why do (most) philosophers think that meaning — unlike sensing and emoting — is unfelt? It has to do with the “hard” problem, again: It’s so hard (perhaps impossible) to explain how and why we feel that it seems like a good idea to try to wrest as much as possible of cognition from the clutches of feeling, to get it over onto the “easy” side of the ledger — doing — the side that we have some hope of explaining. And, as we’ve already noted, talking to one another is certainly something we do. But distinguishing and manipulating things through viewing and touching is also something we do. And we all know that different things look different (up to a JND). Surely the differences among the things we say and mean are not just differences in the shapes of the words (or what they sound like)? Rather, differences in meaning are felt differences too (O’Callaghan 2011, Strawson 2011).
So the “hard” problem of explaining how and why we feel is even more pervasive than sensory and emotional experience. Semantic sense is afflicted with it too.
I would like to close by suggesting that the hard problem is not a metaphysical one, at least not for cognitive science. It’s surely true that the brain causes both doing and feeling (and hence also meaning), somehow. The problem is explaining how — and, even more problematic, why? With doing, it’s easy to explain how and why we can do what we can do. With feeling it’s hard, if for no other reason than that doing alone already does the job: it’s enough to explain what kind of Darwinian survival engines organisms are, i.e., by what causal process we evolve or learn the ability to do what needs to be done for our survival, reproduction and lifetime success (such as it is). The Turing robot (or biorobot) will be fully explained by the explanation of the causal mechanism that generates its ability to do what it can do. Even if the robot does feel, that causal explanation will not explain how it can feel, let alone why. The causal mechanism will be equally compatible with the presence or absence of feeling.
The challenge to my commentators, then, is to use whatever actual facts you know about our know-how and the internal mechanisms that generate it — or whatever speculative evidence and mechanisms you think could turn up in the future — in such a way as to sketch how it could ever be explained how and why we feel rather than just do. If (as you should) you include our language capacity and use in our know-how (Harnad 2007), then please also address the problem of meaning: Why does it feel like something to understand someone’s words, rather than simply causing unfelt internal processes that in turn cause further words and other doings on our part?
“Causing” is evidently our fifth pertinent gerund, and feeling seems to be causal explanation’s nemesis.
Harnad, Stevan (1990) The Symbol Grounding Problem. Physica D 42: 335-346.
________ (1995) Why and How We Are Not Zombies. Journal of Consciousness Studies 1:164-167.
________ (2001) Spielberg’s AI: Another Cuddly No-Brainer.
________ (2007) From Knowing How To Knowing That: Acquiring Categories By Word of Mouth. Presented at Kazimierz Naturalized Epistemology Workshop (KNEW), Kazimierz, Poland, 2 September 2007.
________ (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer.
________ (2010) From Sensorimotor Categories and Pantomime to Grounded Symbols and Propositions. In: Handbook of Language Evolution, Oxford University Press.
________ (2011) Minds, Brains and Turing. Consciousness Online.
O’Callaghan, C. (2011) Against Hearing Meanings. The Philosophical Quarterly. DOI: 10.1111/j.1467-9213.2011.704.x
Overgaard, Morten (2011) Visual experience and blindsight: a methodological review. Experimental Brain Research 209(4): 473-479.
Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3(3): 417-457.
Strawson, Galen (2011) Cognitive phenomenology: real life. In T. Bayne & M. Montague (eds.): Cognitive Phenomenology. Oxford University Press
Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59: 433-460.