Monday, September 17, 2007

I don't know what I was thinking

If nothing else is certain, we must know the contents of our own minds. Descartes was unable to doubt the existence of his mind, and it seems quite impossible for me to doubt the thoughts I am thinking right now. As I produce thoughts, I am aware of them, and it is impossible for me to escape them. My thoughts, formed by language, express the contents of my beliefs and desires precisely, because that is how I have intended to express them to myself. I can’t imagine I am deceiving myself or that I am an automaton. I am a thinking being immersed in my conscious life. If the language I use in thinking expresses my beliefs accurately and rationally, then this is what enables me to develop moral principles and behave in a morally responsible manner.

But what of our “unconscious” thoughts? Hume argued that our belief in cause and effect exists in a precognitive state. We don’t use language and reason to develop a belief in cause and effect; in at least some cases, language merely expresses what is built into us. Our moral reasoning, though, seems to be based on careful consideration and painstakingly crafted arguments. Surely our language is not merely expressing a precognitive instinct or intuition. In Kinds of Minds, Dennett quotes Elizabeth Marshall Thomas saying, “For reasons known to dogs but not to us, many dog mothers won’t mate with their sons” (10). Dennett rightly questions why we should assume that dogs understand this behavior any better than humans understand it. It may just be an instinct produced by evolution. If the dog had language, it might come up with an eloquent argument for why incest is wrong, but the argument would be superfluous; simply following the instinct works well enough.

By the same token, human moral arguments may do nothing more than express, or at best buttress, deeply held moral convictions instilled by evolution or experience. In a Discover magazine article titled “Whose Life Would You Save?” Carl Zimmer describes the work of Princeton postdoctoral researcher Joshua Greene. Greene uses MRI brain scans to study which parts of the brain are active when people ponder moral dilemmas. He poses various dilemmas familiar to undergraduate students of utilitarianism, the categorical imperative, and other popular moral theories.

He found that different dilemmas trigger different types of brain activity. He presented people with a number of dilemmas, but two of them illustrate his findings well enough. He used a thought experiment developed by Judith Jarvis Thomson and Philippa Foot. Test subjects were asked to imagine themselves at the wheel of a trolley that will kill five people if left on course; if it is switched to another track, it will kill one person. Most people respond that they would switch tracks, saving a net of four lives and apparently invoking utilitarian principles. In the next scenario, subjects are asked to imagine that they can save five people only by pushing one person onto the tracks to certain death. Far fewer people are willing to say they would push anyone onto the tracks, apparently invoking a categorical rule against killing innocent people. From a purely logical standpoint, the two questions should receive consistent answers: in each case, one life is traded for five.

Greene found that some dilemmas seem to evoke snap judgments, which may be the product of our evolutionary history. He notes that in experiments by Sarah Brosnan and Frans de Waal, capuchin monkeys that were given a cucumber as a treat while other monkeys were given grapes would refuse to take the cucumbers and sometimes would throw them at the researchers. Brosnan and de Waal concluded that the monkeys had a sense of fairness and the ability to make moral decisions without human reasoning. Humans may also make moral decisions without the benefit of reasoning. It appears evolution has created in us (at least in those who are morally developed) a strong aversion to deliberately killing innocent people. Evolution has not prepared us for other dilemmas, such as whether to switch trolley tracks to reduce the total number of people killed in an accident; these call for logical analysis and problem solving. Zimmer writes, “Impersonal moral decisions . . . triggered many of the same parts of the brain as nonmoral questions do (such as whether you should take the train or the bus to work)” (63). Moral dilemmas that require one to consider actions such as killing a baby trigger the parts of the brain that Greene believes may produce the emotional instincts behind our moral judgments. This would explain why most people appear to have inconsistent moral beliefs, behaving as utilitarians in one instance and as Kantians the next.

It may turn out that Hume was correct when he claimed, “Morality is determined by sentiment. It defines virtue to be whatever mental action or quality gives to a spectator the pleasing sentiment of approbation” (Rachels 63). His claim is that we evaluate actions based on how they make us feel, and then we construct a theory to explain our choices. If the theory does not match our sentiment, however, we modify the theory—our emotional response seems to be part of our overall architecture. The work of philosophers, then, has been to construct moral theories consistent with our emotions rather than to provide guidance for our actions.

Language gives us access to our conscious thought. Language permits us to be aware of our own existence and to feel relatively assured that other minds exist as well. It is through language that we make sense of ourselves and the world. We may be deceived, though, into thinking that thought is equivalent to conscious thought. Much of what goes on in our mind is unconscious. Without our awareness, our mind attends to dangers, weighs risks, compensates for expected events, and even makes moral judgments. Evolution has provided us with a body that works largely on an unconscious level. However, humans, and perhaps some nonhuman animals, have become aware of their own thoughts, and this awareness has led to an assumption of moral responsibility. This awareness should not be taken to prove that we are aware of the biological facts that guide our moral decisions.

Stephen Stich explores the development of moral theory in his 1993 paper titled “Moral Philosophy and Mental Representation.” In the essay, Stich claims that while most moral theories are based on establishing necessary and sufficient conditions for right and wrong actions, humans do not mentally represent concepts in terms of necessary and sufficient conditions. He says, “For if the mental representation of moral concepts is similar to the mental representation of other concepts that have been studied, then the tacitly known necessary and sufficient conditions that moral philosophers are seeking do not exist” (Moral 8). As an alternative, he suggests that moral philosophers should focus on developing theories that account for how moral principles are mentally represented. He writes:
These principles, along with our beliefs about the circumstances of specific cases, should entail the intuitive judgments we would be inclined to make about the cases, at least in those instances where our judgments are clear, and there are no extraneous factors likely to be influencing them. There is, of course, no reason to suppose that the principles guiding our moral judgments are fully (or even partially) available to conscious introspection. To uncover them we must collect a wide range of intuitions about specific cases (real or hypothetical) and attempt to construct a system of principles that will entail them. (8)
On this view, moral theories represent beliefs that are not only unconscious but unavailable to the conscious mind. To determine the content of our own moral beliefs, then, we must examine our own moral decisions and infer our beliefs from them. On this approach, humans decipher their own beliefs in much the same manner that Brosnan and de Waal determined the moral beliefs of capuchin monkeys. Not only does language fail to give a full accounting of our belief states, but our conscious thoughts may be an impediment to determining our actual beliefs, so that we must consider prelinguistic or nonlinguistic cues to discover what we actually believe.

1 comment:

Anonymous said...

I was a student in your Summer II philosophy course this year, and I have a fatal form of kidney disease. I have made out my living will as to how I wish to be handled when that time comes. We too often think too much of ourselves in times of crisis and illness like that. It’s only human, I think, to want to hang on to those you love regardless of the futility of life. Our wishes tend to be self-motivated. Most people I know would never want to be kept alive in ways such as you discussed, but too many do not think about living wills or advance medical directives because we all want to believe we have plenty of time in life to get it done. On the other hand, doctors have been known to be wrong about coma patients whose care they believed should not be extended, so it’s a catch-22 in some cases.

No one wants to lose someone they love, nor face having the courts force them to take action. I agree that life is precious, but I do not believe religion should dictate physical care any more than the courts should. People need to be more educated on the importance of living wills, advance medical directives, and DNR orders so they can make these choices themselves while they are able. In the case of individuals who are unable, it has to depend on quality of life and not on emotional people who do not wish to lose a loved one. In that context, I believe the hospitals, within reason, should be able to make the decisions about life-sustaining treatments. Our courts are too overtaxed as it is with frivolous lawsuits, and religion has come to dictate way too much to people. It’s time people stop letting another person’s interpretations of right and wrong choose for them and learn to start thinking for themselves.