The Star Wars character C3PO is so convincingly depicted that we may have to remind ourselves that there was no real robot behind the elegant mannequin. The passage of time has not remedied this deficiency; nor, alas, have I a blueprint to offer. I do believe, however, that it will repay us to identify some attributes a robot would need in order to count as humanoid. By clearly distinguishing among such features, and considering what our attitudes toward such a device might be, we can enrich our understanding of what it is to be human.
Perhaps the most striking feature of C3PO is that it’s so smart. (It? Yes; but don’t think I wasn’t tempted to write “he”. I’ll return to this temptation.) Why do we think of it as smart? We see it confronted with complex situations that no one could have anticipated, and we observe it doing things and saying things that are appropriate to those situations. “Appropriateness” here implicitly refers to its goals — in this case, helping the rebel forces, protecting Luke and, where compatible with those goals, protecting itself.
So, if we want to build a real C3PO, we’ll have to make something that can speak and act appropriately to its goals over a wide range of unexpected circumstances. That, in a nutshell, is what the key idea behind Alan Turing’s famous test for computer intelligence becomes when reformulated for robots.
We will also want to insist that whatever enables our robot to act appropriately in unexpected circumstances be entirely “on board” — that is, encased within its titanium skin. Otherwise it won’t be humanoid. Our intelligence is something we get from what goes on within us. If we were to get detailed instructions about what to say or do via, say, a cell phone conversation, our actions would show the intelligence of the person we were talking to, not our own. Our real C3PO could look up information, just as we do, but the processes that connect that information to its actions must be internal to it.
Robots, unlike computers, have effectors that enable them to move around and to act upon people and things they encounter. They must also have detectors that carry information about their location and the nature of the things in their vicinity. Our real C3PO will need to have little microphones, cameras, chemical analyzers, and parts of its skin that can bend enough to detect differences in pressure.
Robots’ detectors are often called “sensors”, but here we must be careful. When your house cools on an October evening, your thermostat detects the fall in temperature and turns on your furnace. But no one supposes that your thermostat suffers by feeling cold. A metal coil simply contracts and causes a circuit to close. Merely detecting a change does not imply having sensations.
That does not mean that we could never build a robot that has sensations. It shows only that the project of building a robot with sensations is different from the project of building a robot with detectors.
If we did want to build a robot with real sensations, how should we proceed? When I ask my students this question, they often respond with “Why would anyone want to do that?” That’s a good question that reflects an understanding that robots, as we usually think of them, don’t feel anything, and so can’t suffer. That’s why we think they’re ideal for jobs that would be dangerous to people, like fixing damaged nuclear facilities.
But suppose we are perverse philosophers and want to make a robot that has real sensations just to show we could do it. What should we do? The usual method of producing something is to find out what causes it, and then to bring it about by producing its cause. The causes of our own sensations are not known in detail, but there is wide convergence in science on the view that sensations depend on activities in our neurons. So, our best shot at producing real sensations in a robot would be to reproduce, in the robot’s electronic parts, the same patterns of activity that take place in our neurons when we have sensations.
Such a project would undoubtedly be very expensive, so let us now suppose that in a First Generation of humanoid robots, we forgo it and settle for a robot that has no sensations. It does, however, have excellent detectors, and the electronic connections between its detectors, its inner processors, and its motors enable it to do what C3PO does in Star Wars.
A question that is now likely to arise is this: When it speaks, does it understand what it’s saying?
In some cases the answer seems to be “Obviously not” — namely, cases where it uses sensation words. So, for example, if C3PO says “Take the morphine to the sick bay. Luke is in terrible pain”, we may doubt that it understands “pain”. If it can’t feel, it has never felt a pain, and so doesn’t fully understand what that word means. At best, it knows that pain, whatever that is, is something people go to great lengths to avoid.
But what about “Luke” and “sick bay”? With these words, there seems less of an obstacle to C3PO’s understanding. Of course, if C3PO called everyone “Luke”, or said “Here’s the sick bay” randomly, whether it had entered the sick bay or not, we would not take its words to be meaningful. We’d say “It utters words, but they don’t mean anything”, or even “It utters noises that sound just like words, but when it produces them, they’re just noises”.
But the C3PO of Star Wars is not like that. It utters its words appropriately. Its reports reliably correspond to what it detects, and its sentences about what it is going to do correspond to what it actually goes on to do (except, of course, when it is cleverly deceiving members of the Empire’s forces). If we had a real robot of which such things were true, we would take what it said seriously — which means that we would act upon what it said just as we do in response to what our fellow humans say. Disregarding C3PO’s words — treating them as mere noises — would cost us dearly, and would not be an attitude that came naturally to us.
Some philosophers will hesitate to agree that a robot without sensations could mean what it said, on the ground that it could never feel to such a robot that it meant what it said (since nothing ever feels any way at all to a sensationless robot).
Now, one is free to decide to use “means what it says” in a way that requires having feelings. It is not a matter for verbal decision, however, that what gives words the meanings they have is the way they fit into a highly complex network. This network relates words to the things that affect detectors, to verbal reports of what is detected, to arguments that involve use of the words, to statements of intention to do actions of various kinds, and to the actions themselves. The project of building a robot that uses words that fit correctly into the same network that we use makes sense so long as the robot has detectors, speakers, and effector motors. It does not depend on whether we have also attempted the further project of endowing it with sensations.
An analogy may help us understand how a network is related to meaning. Consider, for example, a rook in chess. Rooks are usually made to look like a castle tower, but any shape would do, so long as it was easily distinguishable from the shapes of the other pieces. What makes the rook a rook is that it can be moved only in certain ways. What makes it a piece in a chess game is that it is part of a network of pieces that get moved only in accordance with the rules of chess.
Of course, the rules of language are far more complex than the rules of chess, and they include rules that relate words to things and actions. For example, you are to say “I am in the sick bay” only when you are in it, and “I am going to the sick bay” only when, barring unexpected obstacles, you then proceed to the sick bay. The analogy holds, however, for the richer set of rules: noises are meaningful words because of their place in a network of rules, just as pieces of wood are rooks because of their place in their own network of rules.
C3PO’s noises are likewise meaningful words to the extent that they play the same roles as your words. If you say something to it, and it acts just as other people would if you’d said the same thing to them, it has understood what you’ve said.
We can, then, imagine a robot that uses its words meaningfully, even if we are imagining a robot that does not have sensations. But whether we think we have endowed a robot with sensations does make a difference to how we are likely to think we should treat it. If we think it cannot feel anything, we won’t have a certain kind of qualm about exposing it to danger. Our decision will be entirely a matter of balancing benefits against repair and replacement costs.
But now let us imagine a Second Generation robot endowed with the causal mechanisms required to produce genuine sensations. Now we must consider the possibility of pain. Of course, it may be that it — or, perhaps, we should now say “he” — will have to suffer anyway. After all, we do sometimes call on people to endure high risks. But the possibility of pain forces us to consider a factor that’s additional to repair and replacement costs.
Let us now imagine that a robot does something it shouldn’t do. If we think we have a First Generation, sensationless robot, it’s obvious what to do. We should send it back to the factory, just as we would a vacuum cleaner that occasionally deposited some of its sweepings on the rug. In the robot’s case, it might be difficult to figure out how to fix it, but there is no alternative to making some rearrangement of its parts.
But if we have a Second Generation robot that has sensations, another possibility is available: We may threaten to do something that will cause it to have pain. If our threat is sufficiently credible, that may be enough to stop it from misbehaving again. And threatening may be better than sending it back to the factory, because we may not know how to rearrange its parts so that it stops misbehaving while retaining all the abilities that made it useful to begin with.
Let us now stipulate that Second Generation robots have pains only from physical abuse. The only sorts of threats that would make sense would be such things as beating them, applying blowtorches, and, perhaps, leaving them in a painful low-battery state for a prolonged time.
But if we could endow a robot with the causes of pains, perhaps we could also build one that had the causes of other feelings, such as remorse, discouragement, or wounded pride. We can thus imagine a Third Generation robot in which causes of these feelings are activated only in circumstances that would activate such causes in us. If we had such a robot, we might then influence its behavior by regularly causing it to have these unpleasant feelings when it misbehaved.
Now, imagine that you have lived in a world with many Third Generation robots, and have become quite accustomed to these more subtle kinds of interactions with them. Since these robots are smart, they’ll anticipate their owners’ reactions. You and your fellow humans won’t have to cause robots’ pains, or even threaten to do so, very often. Most of the time you’ll just react to good and bad behavior with smiles and frowns, and most of the time, things will go reasonably well.
If you can imagine such a world, you can imagine a world in which people treat the Third Generation robots just as they treat human beings. If robots seriously misbehaved, you’d get angry at them. Why do I think so? Who has not kicked or swatted a car or vacuum cleaner that was not working right? We chide ourselves, of course: the kick can’t be felt, and if it changes how the machine works, it will likely not be for the better. But if we kicked a Third Generation robot (avoiding its head), we might not be behaving irrationally in either of these ways.
The world I’ve just asked you to imagine is more than a world in which we treat Third Generation robots in a certain way: it is a world in which it would seem quite natural and proper to do so. The consequence I am willing to draw is that Third Generation robots have everything that’s needed to properly ground the attitudes toward them that we usually have toward our fellow human beings.
If that is right, then there is a further consequence. If we stopped to think about our robots, it would be evident to us that they did not make themselves. They were made in a factory, and the changes that have occurred inside them since their manufacture have depended on how they were constructed to begin with, and what has affected their detectors subsequently. It would be evident to us that it would make no sense to blame them for what they are, or for the condition that they are in at any given moment. But, even after thinking about it, it would still make sense to treat them in the way that would come naturally to us — namely, the way we treat other people today, including blaming them for what they do, and attempting to make them feel bad about improper behavior.
It is consistent for us to have the same pair of attitudes toward our fellow humans. We can recognize that, although they were made in a womb and not in a factory, they did not make themselves, and the state they are in now depends on what they were when they were born and what has happened to them since then. It does not make sense to blame them for who they are, or for the condition they are in at any given moment. It’s appropriate to blame them for what they do, if they behave badly, but never for who they are.
Fully accepting this view of ourselves requires something that is difficult for us to do — namely, to keep two attitudes in full play. These are: calm recognition of blamelessness for being in the condition that we are in when we act; and anger and blame for actions when they are immoral. But it is only by embracing both attitudes and keeping them both robustly in mind that we can properly recognize both our humanity and our position in the natural world.
[The view expressed at the end of this essay is further developed and supported in my recent Your Brain and You: What Neuroscience Means for Us, available on Amazon. A convenient link, and references to my work on other issues discussed here, can be found on my website, yourbrainandyou.com. My sources are too numerous to mention, but I should note that the chess analogy is drawn from Wilfrid Sellars. Thanks to my first reader, Maureen Ogle.]