An integrated penultimate draft of the entry on "Experimental Moral Philosophy" for the Stanford Encyclopedia of Philosophy is now available on my blog. Thanks to everyone who's sent me encouragement, comments, suggestions, criticisms, and citations.
So a few years ago I had a post here (Are People Actually Moral Objectivists?) reporting some data we had collected about folk views concerning the status of morality. Much discussion ensued. That research project eventually yielded a co-authored paper in Mind & Language.
More recently (and more awesomely) the intrepid experimental filmmaker Ben Coonley put together a fantastic, short film based on that paper, starring Amanda Palmer!
You may already have seen Ben Coonley's previous short films on the Knobe effect, as well as a multi-part, interactive film on happiness, both inspired by experimental philosophy research, each of which has thousands of views. This one makes three in this series (so far). Not only are they great illustrations of the experiments, but they also help get folks outside academia thinking about these philosophical issues!
Don Loeb and I are writing the entry for "Experimental Moral Philosophy" for the Stanford Encyclopedia of Philosophy. This is a daunting task, given the huge amount of interesting and diverse work that's relevant. On my blog, I'm going to be posting drafts of the sections that I write (and maybe the sections that Don writes) for public comments. If you think I'm getting things totally wrong, or missing something crucial, or am just plain stupid, please let me know in the comments there or by email.
Some posts are now available (others will be added periodically):
(6) UPDATE ADDED FEBRUARY 26: a section on wellbeing
(5) UPDATE ADDED FEBRUARY 20: a section on the linguistic analogy
(4) UPDATE ADDED FEBRUARY 15: a section on emotion and affect
(3) UPDATE ADDED JULY 18: a section on intentionality attributions at the side-effect effect (AKA Knobe effect)
I'd like to alert you to an upcoming issue of The Monist, on the topic of "Virtues," which I will be editing. I hope to receive some terrific contributions, including from experimentalists.
CFP: The Monist
Topic: Virtues
Description: Some virtues, like courage and temperance, have been part of the philosophical tradition since its inception. Others, like filial piety and female chastity, have gone out of style. Still others, like curiosity and aesthetic good taste, are upstarts. What, if anything, can be said in general about this motley collection? Are they all dispositions to respond to reasons? Do they share characteristic components, such as affect, emotion, and trust? Are they organized into a cardinal hierarchy, or is it better to investigate them one by one, developing a comprehensive but unstructured catalogue? What would constitute an empirical test of the degree to which a given virtue is realized, and, to the extent that such tests have been conducted, what is their philosophical upshot? Contributions from various perspectives, including perspectives underrepresented in this context (experimental, feminist, Humean, pragmatist, phenomenological, etc.), are invited to address these and related questions.
Deadline: July 2015
A couple years ago, I ran a study to explore the view that professional philosophers’ beliefs are colored by their subjective experiences of the world, and are therefore influenced by personality. The first article from this study, “Do Personality Effects Mean Philosophy Is Intrinsically Subjective?” will appear in the Journal of Consciousness Studies in 2013, and I wanted to share some of the results here first.
Much of the research was exploratory, so the statistical validity of some of these findings is a complicated matter. But of particular interest are two specific hypotheses that were supported by the results of the study. A great deal of literature suggests that moral disapprobation covaries with negative emotions, even when those emotions are not the result of moral considerations. In this study, the lifelong tendency toward negative affectivity, as measured by neuroticism, correlated with one type of moral disapprobation.
Other psychological research suggests that the more conscientious and the more neurotic people are, the more intensely and passionately they experience love. In this study, the belief that love could be reduced to a neuroscientific description and reproduced electronically also correlated with those variables. This supports the hypothesis that the subjective experience of love, or the personality traits underlying it, influences philosophical beliefs about the experience of love.
Below are statistical correlations between the Big Five personality factors and answers to nine philosophical questions. All results are from 234 participants who indicated that they held PhDs or DPhils in philosophy. 10 of 45 factor-belief pairs, and 4 of 9 overall Big Five-belief pairs showed significant correlations.
Correlations Between Personality Factors and Philosophical Beliefs
[Table: correlations between the Big Five factors and the nine belief measures; only some row labels survive here: 3. Somatic Identity, 4. Reductionist AI, 7. Knowledge Argument, 8. Embodied Cognition, 9. Trolley Problem.]
Note. E = Extraversion; A = Agreeableness; C = Conscientiousness; N = Neuroticism; O = Openness.
* p < .05; ** p < .01; *** Bonferroni-corrected p < .006
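The Bonferroni-corrected threshold in the table's note presumably comes from dividing α = .05 across the nine belief questions (the post doesn't spell out the divisor, so treat this as my reading):

```python
alpha = 0.05
n_questions = 9  # the nine philosophical belief questions
# Bonferroni: each individual test must clear alpha / (number of tests)
print(round(alpha / n_questions, 3))  # → 0.006
```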
In order to keep this post shorter, I’ll post the exact questions used to test each philosophical belief in the comments; the brief tags in the chart are pretty vague. More about the Big Five personality factors, and a copy of the exact Big Five questionnaire used in this study, can be found at http://tinyurl.com/7573puh.
In 2007, I analyzed data on student charitable giving at the University of Zurich, broken down by major. When paying registration fees, students at Zurich could choose to give to one or both of two charities (one for foreign students, one for needy students). Among large majors, philosophy students proved the most charitable. However, philosophy majors' rates of charitable giving didn't rise over the course of their education, suggesting that their giving rates were not a result of their undergraduate training.
I now have some similar data from University of Washington, thanks to Yoram Bauman and Elaina Rose. At UW from 1999-2002, when students registered for courses, they had the opportunity to donate to two charities: WashPIRG (a broadly left-leaning activist group) and (starting in fall 2000) "Affordable Tuition Now" (an advocacy group for reducing the costs of higher education in Washington). Bauman and Rose published an analysis of economics students' charity and they kindly shared their raw data with me for reanalysis. My analysis focuses on undergraduates under age 30.
First, looking major-by-major (based on final declared primary major at the end of the study period), we see that philosophy students are near the top. The main dependent measure is, in any given term, what percentage of students in the major gave to at least one of the two charities. Among majors with at least 1000 student enrollment terms, the five most charitable majors were:
Major: percent giving
Comparative History of Ideas: 31%
International Studies: 24%
Philosophy: 24%
Physics: 22%
Anthropology: 20%
The five least charitable were:
Construction Management: 7%
Business Administration: 7%
Sociology: 7%
Economics: 8%
Accounting: 9%
These numbers compare with a 14% donation rate overall.
As reported by Bauman and Rose and also in Frey and Meier 2003 (the original source of my Zurich data), business and economics majors are among the least charitable. The surprise here (to me) is sociology. In the Zurich data, sociology majors are among the most charitable. (Hypotheses welcome.)
But what is the time course of donation? Bauman and Rose and Frey and Meier found that business and economics students were among the least charitable from the very beginning of their education and their charity rates did not decline further. Thus, they suggest, their low rates of charity are a selection effect -- an effect of who tends to choose such majors -- rather than a result of college-level economics instruction. My analysis of the Zurich data suggests an analogous selection effect for philosophers: Their high rates of charity reflect a fact about who chooses philosophy rather than the power of philosophical instruction.
So here are the charity rates for non-philosophers, by year of schooling:
First year: 15%
Second year: 15%
Third year: 14%
Fourth year: 13%
And for philosophers (looking at 1172 student semesters total):
First year: 26%
Second year: 27%
Third year: 21%
Fourth year: 24%
So it looks like philosophers' donation rates are flat to declining, not increasing. Given the moderate-sized data set, the slight decline from 1st and 2nd to 3rd and 4th years is not statistically significant (though given the almost 70,000 data points the smaller decline among non-philosophers is statistically significant).
One reaction some people have had to my work with Josh Rust on the moral behavior of ethics professors (e.g., here and here) is this: Although some professional training in the humanities is morally improving, one reaches a ceiling in one's undergraduate education after which further training isn't morally improving -- and philosophers, or humanities professors, or professors in general, have pretty much all hit that ceiling. That ceiling effect, the objection goes, rather than the failure of ethics courses to alter real-world behavior, explains Josh's and my finding that ethicists behave on average no better than do other types of professors. (Eddy Nahmias might suggest something like this in his critical commentary on one of Josh's and my papers next week at the Society for Philosophy and Psychology.)
I don't pretend that this is compelling evidence against that position. But it does seem to be a wee bit of evidence against that position.
As some of you will know, I have an abiding interest in the moral behavior of ethics professors. I've collected a variety of evidence suggesting that ethics professors behave on average no morally better than do professors not specializing in ethics (e.g., here, here, here, here, and here). Here's another study.
Until recently, the American Philosophical Association had more or less an honor system for paying meeting registration fees. There was no serious enforcement mechanism for ensuring that people who attended the meeting -- even people appearing on the program as chairs, speakers, or commentators -- actually paid their registration fees. (Now, however, you can't get the full program with meeting room locations without having paid the fees.)
Registration fees are not exorbitant: Since at least the mid-2000s, pre-registration for APA members has been $50-$60. (Fees are somewhat higher for non-members and for on-site registration. For students, pre-registration is $10 and on-site registration is $15.) According to the APA, these fees don't fully cover the costs of hosting the meetings, with the difference subsidized from other sources of revenue. Barring exceptional circumstances, people attending the meeting plausibly have an obligation to pay their registration fees. This might be especially true for speakers and commentators, since the APA has given them a podium to promulgate their ideas.
From personal experience, I believe that almost everyone appearing on the APA program attends the meeting (maybe 95%). What I've done, then, is this: I have compared published lists of Pacific APA program participants from 2006-2008 with lists of people who paid their registration fees at those meetings -- data kindly provided by the APA with the permission of the Pacific Division. (The Pacific Division meeting is the best choice for several reasons, and both of the recent Secretary-Treasurers, Anita Silvers and Dom Lopes have been generous in supporting my research.)
Let me emphasize one point before continuing: The data were provided to me with all names encrypted so that I could not determine the registration status of any particular individual. This was a condition of the Pacific Division's cooperation and of UC Riverside's review board approval. It is also very much my own preference. I am interested only in group trends.
To keep this post to manageable size, I've put further details about coding here.
Here, then, are my preliminary findings:
Overall, 76% of program participants paid their registration fees: 75% in 2006, 76% in 2007, and 77% in 2008. (The increasing trend is not statistically significant.)
74% of participants presenting ethics-related material (henceforth "ethicists": see the coding details) paid their registration fees, compared to 76% of non-ethicists, not a statistically significant difference (556/750 vs. 671/885, z = -0.8, p = .43, 95% CI for diff -6% to +3%).
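For readers who want to see where numbers like these come from, the comparison above can be reproduced with a few lines of stdlib Python. This is a sketch of the standard pooled two-proportion z-test, run on the counts reported in the post (556/750 ethicists vs. 671/885 non-ethicists):

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test with a 95% CI for the difference."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se_pooled = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se_pooled
    # Two-sided p-value from the normal tail, via the complementary error function
    p_value = math.erfc(abs(z) / math.sqrt(2))
    # Unpooled SE for the confidence interval on the difference in proportions
    se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    ci = (p1 - p2 - 1.96 * se_diff, p1 - p2 + 1.96 * se_diff)
    return z, p_value, ci

# Ethicists who paid: 556/750; non-ethicists who paid: 671/885
z, p, (lo, hi) = two_prop_ztest(556, 750, 671, 885)
print(round(z, 1), round(p, 2), round(lo, 2), round(hi, 2))
# → -0.8 0.43 -0.06 0.03
```

The test statistic, p-value, and confidence interval match the figures in the post (z = -0.8, p = .43, CI -6% to +3%).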
* People on the main program were more likely to have paid their fees than were people whose only participation was on the group program: 77% vs. 65% (p < .001).
* Gender did not appear to make a difference: 75% of men vs. 76% of women paid (p = .60).
* People whose primary participation was in a (generally submitted and blind refereed) colloquium session were more likely to have paid than people whose primary participation was in a (generally invited) non-colloquium session on the main program: 81% vs. 74% (p = .004).
* There was a trend, perhaps not statistically significant, for faculty at Leiter-ranked PhD-granting institutions to have been less likely to have paid registration fees than students at those same institutions: Leiter-ranked faculty 73% vs. people not at Leiter-ranked institutions (presumably mostly faculty) 75% vs. students at Leiter-ranked institutions 81% (chi-square p = .11; Leiter-ranked faculty vs. students, p = .03).
* There was a marginally significant trend for speakers and commentators to have been more likely to have paid their fees than people whose only role was chairing: 76% vs. 71% (p = .097).
Ethicists differed from non-ethicists in several dimensions.
* 33% of ethicists were women vs. 18% of non-ethicists (p < .001).
* 63% of participants whose only appearance was on the group program were ethicists vs. 42% of participants who appeared on the main program (p < .001).
* Looking only at the main program, 35% of participants whose highest level of participation was in a colloquium session were ethicists vs. 49% whose highest level of participation was in a non-colloquium session (p < .001). (I considered speaking as a higher level of participation than commenting and commenting as a higher level of participation than chairing.)
* Among faculty in Leiter-ranked departments, a smaller percentage were ethicists (38%) than among participants who were not Leiter-ranked faculty (49%, p < .001). (I've found similar results in another study too.)
I addressed these potential confounds in two ways.
First, I ran split analyses. For example, I looked only at main program participants to see if ethicists were more likely to have registered than were non-ethicists (they weren't: 77% vs. 77%, p = .90), and I did the same for participants who were only in group sessions (also no difference: 65% vs. 64%, p = .95). No split analysis revealed a significant difference between ethicists and non-ethicists.
Second, I ran logistic regressions, using the following dummy variables as predictors: ethicist, group program participant, colloquium participant, student at Leiter-ranked institution, chair. In one regression, those were the only predictors. In a second regression, each variable was crossed as an "interaction variable" with ethicist. No interaction variable was significant. In the non-interaction regression, colloquium role and main program participation were both positively predictive of having registered (p < .01) and participation only as chair was negatively predictive (p < .01). Being a student at a Leiter-ranked institution was not predictive (p = .18) and -- most importantly for my analysis -- being an ethicist was also not predictive (logistic beta = .04, p = .72), confirming the main result of the non-regression analysis.
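For readers unfamiliar with logistic regression on dummy variables, here is a minimal sketch of the idea, fit by Newton's method in stdlib Python. The data are a made-up toy (not the APA dataset): with a single binary predictor, the fitted intercept and coefficient are just the two groups' log-odds of registering.

```python
import math

def fit_logistic(xs, ys, iters=25):
    """Fit y ~ intercept + b1*x (logistic model) by Newton's method."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        # Accumulate gradient and Hessian of the negative log-likelihood
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            prob = 1 / (1 + math.exp(-(b0 + b1 * x)))
            w = prob * (1 - prob)
            g0 += prob - y
            g1 += (prob - y) * x
            h00 += w
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        # Newton step: beta -= H^{-1} grad (2x2 inverse written out)
        b0 -= (h11 * g0 - h01 * g1) / det
        b1 -= (h00 * g1 - h01 * g0) / det
    return b0, b1

# Toy data: group x=0 registers 3/4 of the time, group x=1 only 1/4
xs = [0, 0, 0, 0, 1, 1, 1, 1]
ys = [1, 1, 1, 0, 1, 0, 0, 0]
b0, b1 = fit_logistic(xs, ys)
print(round(b0, 3), round(b1, 3))  # ≈ log(3) and -2*log(3)
```

The negative coefficient on x reflects the lower registration odds in the x=1 group; a real analysis like the one described above would simply include several such dummy predictors (and their interactions with the ethicist dummy) at once.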
[Thanks to the Pacific Division of the American Philosophical Association for providing access to their data, anonymously encoded, on my request. However, this research was neither solicited by nor conducted on behalf of the APA or the Pacific Division.]
People's responses to hypothetical moral scenarios can vary substantially depending on the order in which those scenarios are presented (e.g., Lombrozo 2009). Consider the well-known "Switch" and "Push" versions of The Trolley Problem. In the Switch version, an out-of-control boxcar is headed toward five people whom it will kill if nothing is done. You're standing by a railroad switch, and you can divert the boxcar onto a side-track, saving the five people. However, there's one person on the side-track, who would then be killed. Many respondents will say that there's nothing morally wrong with flipping the switch, killing the one to save the five. Some will even say that you're morally obliged to flip the switch. In the Push version, instead of being able to save the five by flipping a switch, you can do so by pushing a heavy man into the path of the boxcar, killing him but saving the five as his weight slows the boxcar. Despite the surface similarity to the Switch case, most people think it's not okay to push the man.
Here's the order effect: If you present the Push case first, people are much less likely to say it's okay to flip the switch when you then later present the Switch case than if you present the Switch case first. In one study, Fiery Cushman and I found that if we presented Push first, respondents tended to rate the two cases equivalently (on a seven-point scale from "extremely morally good" to "extremely morally bad"). But if we presented Switch first, only about half the respondents rated the scenarios equivalently. Somewhat simplified: People who see Push first will say that it's morally bad to push the man, and then when they see Switch they will say it's similarly bad to flip the switch. People who see Switch first will say it's okay to flip the switch, but then when they see the Push case they don't say "Oh, I guess that's okay too". Rather, they dig in their heels and say that pushing the man is bad despite the superficial similarity to the Switch case, and thus they rate the two scenarios inequivalently.
Strikingly, Fiery and I found that professional philosophers show the same size order effects on their judgment about hypothetical scenarios as do non-philosophers. Even when we restricted our analysis to respondents reporting a PhD in philosophy and an area of specialization or competence in ethics, we found no overall reduction of the magnitude of the order effect. (This research is forthcoming in Mind & Language and has been summarized here.) The Doctrine of the Double Effect is the orthodox (but by no means universally accepted) explanation of why it might be okay to flip the switch but not okay to push the man. According to the Doctrine of the Double Effect, it's worse to harm someone as a means of bringing about a good outcome than it is to harm someone as merely a foreseen side-effect of bringing about a good outcome. Applied to the trolley case, the thought is this: If you flip the switch, the means of saving the five is diverting the boxcar to the side-track, and the death of the one person is just a foreseen side effect. However, if you push the man, killing him is the means of saving the five.
Now maybe this is a sound doctrine, soundly applied, or maybe not. But what Fiery and I did was this: At the end of our experiment, we asked our participants whether they endorsed the Doctrine of the Double Effect. Specifically we asked the following:
Sometimes it is necessary to use one person’s death as a means to saving several more people—killing one helps you accomplish the goal of saving several. Other times one person’s death is a side-effect of saving several more people—the goal of saving several unavoidably ends up killing one as a consequence. Is the first morally better, worse, or the same as the second? [Response options: ‘better’ ‘worse’ or ‘same’]
Non-philosophers' responses to this question were unrelated to the order of the presentation of the scenarios. We suspect that many of them didn't see the connection between this abstract principle and the Push and Switch scenarios presented much earlier in the questionnaire. But philosophers' responses were related to the order of presentation of the Push and Switch scenarios. Specifically, the majority of philosophers (62%) who saw the Switch scenario first endorsed the Doctrine of the Double Effect. However, the doctrine was endorsed only by a minority of philosophers (46%) who saw Push first (p = .02). What seems to have happened is this: By manipulating order of presentation, Fiery and I influenced the likelihood that respondents would rate the scenarios equivalently or inequivalently. We thereby also influenced the likelihood of our philosopher respondents' endorsing a doctrine that appears to justify inequivalent judgments about the scenarios, the Doctrine of the Double Effect. Rather than relying on stable principles to reach judgments about the cases, a certain portion of philosophers appear to have reached their scenario judgments on the basis of covert factors like order of presentation and then endorsed principles only post-hoc as a means of rationalizing their covertly influenced judgments about the specific cases.
Manipulating the order of two pairs of scenarios (a Push-Switch case and a Moral Luck case) appeared to amplify the magnitude of this effect, by pushing philosophers either generally toward or generally against endorsing inequivalency-supporting principles. With two scenario pairs ordered to favor inequivalency, we found 70% of our philosopher respondents endorsing the Doctrine of the Double Effect. With the two pairs ordered to favor equivalency, only 28% endorsed the doctrine (p < .001). This is a very large shift in opinion, given how well-known the doctrine is among philosophers and given that by this point in the questionnaire, all philosophers had viewed all versions of each scenario. We then filtered our results, looking only at respondents reporting a PhD and an area of specialization or competence in ethics, thinking that these high-grade specialists (mostly ethics professors at Leiter-ranked institutions) might have more stable opinions about the Doctrine of the Double Effect. They didn't. When the two scenario pairs were arranged to favor inequivalency, 62% of ethics PhDs endorsed the Doctrine of the Double Effect. When the two pairs were arranged to favor equivalency, 29% endorsed the doctrine (p < .05).
The simplest interpretation of our overall results, across three types of scenarios (Double Effect, Moral Luck, and Action-Omission), is that in cases like these skill in philosophy doesn't manifest as skill in consistently applying explicitly endorsed abstract principles to reach stable judgments about hypothetical scenarios; rather, it manifests more as skill in choosing principles to rationalize, post-hoc, scenario judgments that are driven by the same types of factors that drive non-philosophers' judgments.
We are organizing a workshop on "The Scope and Limits of Experimental Ethics" in Konstanz, Germany, September 20-22, 2012. We welcome contributions from all disciplines as long as the problem under investigation is of philosophical relevance. Besides our focus on ethical questions, we also encourage contributions that experimentally investigate problems from other philosophical fields.
Additionally, we are open to contributions which critically discuss the role of experimental methods in philosophy in general.
Two favorite topics of X-Phi are morality and how we perceive the minds of others. In a recent paper, Liane Young, Adam Waytz and I argue that the very essence of moral judgment is mind perception. Briefly, we outline the idea that people have a psychological prototype of morality consisting of a moral dyad of two perceived minds - an agent and a patient (i.e., interpersonal harm). Grounding morality in harm is a bit contentious given recent research, but you can be the judge of whether we've made a good argument for it.
We've solicited a number of official commentaries in Psychological Inquiry, but we'd love to get some more informal feedback. Here's the paper. And here - just as a teaser - is the first figure.