Imagine that you are building a person. First you put in all of the capacities this person will need to make ordinary, non-moral judgments. So you put in capacities for understanding causation, for thinking about other people's mental states, for making sense of physics, biology, sociology. Now suppose that once you've already finished all that, you want to add whatever would be necessary to allow your creation also to think about moral questions. How much more would you have to add?
One obvious answer would be: quite a bit. After all, experimental studies have revealed remarkably complex patterns in people's moral judgments, and it might seem that generating patterns like these would require some fairly sophisticated mechanisms specific to morality.
In a recent paper that I hope will one day be regarded as a classic in the field, Fiery Cushman and Liane Young put forward a very different proposal. They suggest that it would be possible to generate some of the complex patterns we find in people's moral judgments without adding in very much at all that was specific to morality. Much of the complexity found in people's moral judgments, they suggest, could just be a by-product of their complex way of thinking about non-moral issues.
Cushman and Young illustrate this approach by looking at people's intuitions about acts and omissions. It is a well-known fact that people often see acts as more morally blameworthy than the corresponding omissions. But why? One possible explanation would be that people have a dedicated, quite complicated capacity for thinking about morality, one that builds in the act/omission distinction directly. But there is another possibility. It could be that people's way of thinking about morality relies only on the simple principle that an agent is more blameworthy if she causes harm. Then, since acts are seen as more causal than omissions, the agent who performs the act ends up being seen as more blameworthy.
In my view, this approach has the potential to explain an enormous variety of puzzling aspects of human morality. To take another example, suppose that you are offered a job selling heroin to children. Should you take it? What if you learned that there is someone right outside the door who will take the job if you don't? Even then, most people have the intuition that you should not accept the job offer. But why not? The children would be offered the heroin whether you took the job or not. Here again, people's causal judgments provide an elegant answer. It might be true that the children would get the heroin regardless of whether you yourself sold it to them, but if you sold it, you would actually be causing them to get it. (This is the much-studied phenomenon of causal preemption: your act preempts the other seller's, so the causal responsibility falls on you.) So if we assume that people's moral judgments are based on causal judgments, we get a simple explanation of this effect.
And I bet there's a lot more where that came from. If we can just think further about the intricacies of people's judgments of causation, intention, etc., we might be able to find all those same intricacies in people's judgments about moral questions.