Moral Psychology as Moral Education
I believe that philosophy is a tool that not only helps us better understand the human condition, but that can also help us improve ourselves and the surrounding world. To know is not enough.
I am an experimental philosopher, using the methods of the cognitive and psychological sciences to address philosophical problems. My research focuses on the philosophy of psychology. These interests deeply inform the classes that I teach, and although they are not commonly thought of as disciplines central to applied ethics, I believe they allow me to teach genuinely engaging applied ethics courses.
The traditional goal of both normative and applied ethics courses is, as I see it, to help students cultivate the skill of moral reasoning. The idea, then, behind discussing cases of current controversy, like abortion in bioethics or net neutrality in technology ethics, is not just to help students come to decisions about those cases, but to help them develop the intellectual tools they’ll need to confront new dilemmas in their personal and professional lives. Understood in this way, applied ethics is an essential part of a liberal education because it aims to cultivate an outlook of open, self-critical moral reflection.
The Problem of Rationalization
There is, however, an increasing body of work in moral psychology that shows how difficult this is to achieve. Discussion isn’t enough. The evidence strongly suggests that most moral judgments are the result of fast, automatic intuitions, not rational deliberation. Moreover, it suggests that these intuitions are fixed early through cultural processes. Contrary to our expectations, conscious deliberation rarely determines our judgments, and often serves instead as a way of rationalizing the intuitions we already had. The upshot is a great risk that traditional methods of teaching applied ethics only ever succeed in teaching students more sophisticated ways to rationalize their initial moral intuitions, without ever forcing them to reevaluate those intuitions or genuinely consider alternative points of view.
To combat this, I’ve been developing a technique of teaching recent work in moral psychology alongside more traditional texts in applied ethics. I’ve found that as students come to understand the psychology of moral deliberation, they’re able to adopt a more critical stance toward their own initial moral intuitions. I pair standard normative discussions of cases with empirical studies of the kinds of moral deliberation relevant to the case. For instance, when discussing whether international aid is a duty, I pair Peter Singer’s “Famine, Affluence, and Morality” with empirical work on the intuitions behind group loyalty and intercultural perceptions. When discussing part of Norman Daniels’s “Just Health Care,” I pair it with empirical work on group differences in intuitions about justice and fairness.
In my courses we don’t merely read the empirical papers. When possible, I reproduce the experiments so students can participate in a study before reading about its results. Once students see how often their moral commitments are intuitions rather than judgments, and how easily those intuitions are swayed by small or irrelevant factors, I find that they become much more willing to detach themselves from a commitment to their initial position. Having done that, they’re much better able to consider actual moral dilemmas from an objective standpoint. The goal isn’t to show my students that their intuitions are wrong; it is rather to show them how often our everyday moral evaluations are fast, unconscious, and automatic reactions. Seeing this, they’re much more willing to ask: am I having the right reaction to this case?
I’m under no illusion that the method is foolproof. In fact, I’m well aware that there is also a large literature showing that merely making individuals aware of their biases is ineffective as a means of overcoming them. But I have found that since I introduced this method, and as I’ve begun to refine it, the discussions in my classes have become much more of a dialogue than a debate. I see students asking each other questions and reaching out for answers together, even when they initially disagreed with each other. I take this as a sign that they are openly seeking a reasonable position, rather than just trying to offer the best justification of the one they already have. In this respect, my vision of the classroom is of a laboratory focused on finding and exploring new and better means of achieving productive and honest dialogue.
I have been developing this approach to applied ethics in a number of courses, including bioethics, technology ethics, and professional ethics, and I’m eager to both expand and refine it further. One course I’ve begun to design is the cognitive science of political philosophy. Just as there are stark differences over particular moral dilemmas, there are also broad and seemingly deep political differences in our society, and people appear to change their opinions about political and social policy only rarely. At the same time, there is a growing literature on the psychology of political orientation. I’d like to bring these two bodies of literature together, to see whether the same technique can be used to develop more productive dialogues about questions in the political sphere.
Beyond incorporating this method into my classes, I’d like, as an experimental philosopher, to study whether the method is actually effective at fostering more open-minded thinking about moral concerns, and whether it results in real changes of standpoint.