Friday, September 7, 2007

Language of Machines

Daniel Dennett argues that we can use language, through the “intentional stance,” to describe the beliefs of people, animals, and even objects such as a thermostat, a podium, or a tree (Brainchildren 327). It is easy to construct sentences describing the beliefs of these objects (“The thermostat believes it is 70 degrees in this room”). If the thermostat is working properly and conditions are more or less normal, we should be able to predict the room’s temperature from the thermostat’s actions, or to predict the thermostat’s actions from the room’s temperature. We recognize the possibility of error, however. Because the thermostat may be broken, we are likely to hedge: “According to the thermostat, . . .” If the room does not feel warmer or cooler than the thermostat indicates, we assume all is well. But if we want to know the true nature of belief, being able to describe the beliefs of a thermostat is outrageously unsatisfying. Unless the thermostat can describe its own beliefs in language, we are loath even to suggest that it has beliefs.
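
Dennett’s strategy can be made concrete with a toy model. The sketch below is not Dennett’s (he offers a predictive strategy, not code); the class and attribute names are invented here to show how ascribing a “belief” to the device licenses predictions about its behavior, and how that belief can be false when the sensor is broken.

```python
# A toy model of the intentional stance applied to a thermostat.
# Class and attribute names are invented for this illustration;
# Dennett offers a predictive strategy, not code.

class Thermostat:
    """A thermostat that acts on what it 'believes' the room temperature is."""

    def __init__(self, setpoint_f: float):
        self.setpoint_f = setpoint_f   # what the device "wants": the target temperature
        self.reading_f = setpoint_f    # what the device "believes" the room temperature is

    def sense(self, sensor_temp_f: float) -> None:
        """Update the device's 'belief' from its sensor, which may be broken."""
        self.reading_f = sensor_temp_f

    def heater_on(self) -> bool:
        """The device acts on its belief, not on the actual room temperature."""
        return self.reading_f < self.setpoint_f


# Taking the intentional stance: infer the belief from the action.
t = Thermostat(setpoint_f=70.0)
t.sense(65.0)
if t.heater_on():
    # "The thermostat believes it is colder than 70 degrees in this room."
    # If the room does not feel cold, we suspect the sensor instead, and
    # retreat to "according to the thermostat . . ."
    print("Prediction: reading is below the setpoint, so the heater runs.")
```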


But given the capacity for human language, a machine might appear to have beliefs and desires much like our own. Indeed, if a machine could use human language in a manner indistinguishable from human use, it is difficult to see how its consciousness could be denied with any certainty. Of course, the claim that such a machine is impossible goes back at least to Descartes, who wrote, “It is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do” (CSM I 140). Surely Descartes did not imagine 21st-century computer programs when he posed this early version of the Turing Test (in which a computer is credited with thought if its conversation cannot be distinguished from a human’s), but so far his challenge has not been met.
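
For readers who have not seen the Turing Test laid out mechanically, here is a minimal sketch of its structure. Both respondents are canned stubs invented for illustration; nothing here could pass the test, and the questions and replies are placeholders, not anyone’s published protocol.

```python
import random

# A skeleton of the Turing Test as described above: a judge converses
# blindly with two hidden respondents, one human and one machine, and
# must say which is which. The sketch shows the structure of the test,
# not a program that could pass it.

def human_reply(question: str) -> str:
    canned = {"What is it like to be cold?": "Miserable. My fingers ache for hours."}
    return canned.get(question, "Hmm, let me think about that.")

def machine_reply(question: str) -> str:
    return "That is an interesting question."  # Descartes's "dullest of men" does better

def imitation_game(questions: list[str]) -> None:
    contestants = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(contestants)  # hide which respondent is which
    for _hidden_label, reply in contestants:
        for q in questions:
            print(f"Q: {q}\nA: {reply(q)}")
        # The judge records a guess here; the machine passes only if the
        # judge's guesses are no better than chance over many rounds.

imitation_game(["What is it like to be cold?"])
```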


John Searle’s Chinese room argument challenges us to accept that even a computer that passed the Turing Test would not thereby be proved conscious: a person locked in a room, following rules for manipulating Chinese symbols, could return fluent Chinese answers without understanding a word of Chinese. Although Searle does not deny that machines could someday be conscious, he holds that a language program would not be proof of it (Searle 753-64). Our best reason for believing the machine is not conscious is that it is not similar enough to a human being to be considered conscious by analogy. Yet even if we cannot deny beliefs and desires to a machine with certainty, we are equally ill-equipped to ascribe beliefs and desires accurately to machines, or trees, or stones.
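
Searle’s point can be dramatized in a few lines. This toy “room,” with an invented two-entry rulebook, returns appropriate Chinese replies by pure symbol matching; nothing in it understands Chinese, which is exactly the gap Searle presses.

```python
# A toy Chinese room: the two-entry "rulebook" is invented for this
# illustration. Replies are produced by pure symbol matching; nothing
# in the system understands Chinese.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",       # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我没有名字。",   # "What is your name?" -> "I have no name."
}

def chinese_room(symbols_in: str) -> str:
    """Match the incoming symbols against the rulebook and hand back the
    prescribed reply. No step in this lookup involves meaning."""
    return RULEBOOK.get(symbols_in, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # fluent output, zero comprehension
```

Scaling the rulebook up to full conversational competence changes the size of the table, not the absence of understanding; that, on Searle’s view, is why passing the Turing Test proves nothing about consciousness.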
