Some of you may be interested that Dan Jones, author of the recent piece on free will in the New Scientist, has a blog here. Check it out.
I was catching up on this blog just now, and noticed that the old question about the identity of X-Phi came up in several posts and comments. Perhaps this is enough of an excuse for me to make a modest proposal. There is, of course, plenty written about the relationship between X-Phi and more traditional philosophy. I think the simplest way to approach it is to ask whether it is psychology, and if not, why not? (After all, the disciplinary identity of psychology is a lot clearer than that of philosophy.) To lay my cards on the table, I think it is: both the methods it uses and the questions it addresses are those of empirical psychology. This is not, of course, to say that X-Phi studies are necessarily philosophically uninteresting – I agree with Tamler (sort of) that such issues of relevance must be settled in piecemeal fashion. But since the two disciplines have distinct questions and methods, it is important to avoid the sort of confusion that easily arises (and has indeed arisen) from using the same or similar terms for different types of inquiry. So I will advocate that people begin to describe themselves as doing psychology of philosophy or of intuitive judgments when that’s what they do. The label ‘experimental philosophy’ and talk of ‘philosophical experiments’ have generated a lot of heat and no light. The admirable modesty of many smart X-Phiers is incompatible with the alleged continuity with the philosophical tradition.
In 1977 Nigel Morley-Bunker, an undergraduate psychology major at Pendle College (Univ. of Lancaster), attempted to find confirmation or disconfirmation for various philosophical claims about the analytic/synthetic distinction by investigating speakers' intuitions about various sentences. He trained participants on a set of ostensibly clear cases of analytic and synthetic sentences and then tested them on a new set of sentences. He thought that he found confirmation for Quinean suspicions about the distinction. (Download Speakers_Intuitions_of_Analyticity)
Although it is far from clear how (if at all) one should distinguish between experimental philosophy and philosophically interesting cognitive science that is nevertheless not experimental philosophy, Bunker's work seems to be an early instance of the former. Mr. Bunker went on to a successful career as a psychologist in the UK's National Health Service and has said that he hasn't given the topic any more thought. The work has been cited at least twice in other publications.
There are several problems with the experimental methodology used in the article. Most prominent among them: during the phase in which participants were "trained" on the analytic/synthetic distinction, no controls ensured that it was indeed the (non)analyticity of the target sentences, rather than their (non)apriority or (non)triviality, that was responsible for participants' judgments.
In any case, I would be interested to hear readers' thoughts on whether the work should be counted as an early work in XPhi and whether they know of other research that investigates the analytic/synthetic distinction.
(Thanks to Andrew Jorgensen (UCD) for bringing this work to my attention.)
Research in moral psychology has often been done either primarily from the armchair (consider Nagel, Smith, Korsgaard, etc.) or primarily in reference to empirical work (consider Kohlberg, Batson, Damasio, etc.). But a hope I share with many others is to continue the trend of blurring these into a thoroughly interdisciplinary approach.
Partly toward that goal, I've completed a draft of a relatively short entry for the Routledge Encyclopedia of Philosophy entitled "Empirical Work in Moral Psychology" (PDF). It covers both x-phi and empirical work not done by philosophers, and it ties them to traditional debates in philosophical moral psychology. The condensed range of topics includes: free will and moral responsibility, egoism and altruism, moral judgment and motivation, weakness and strength of will, and moral judgment and intuitions. The article is meant as a short introduction to this burgeoning field while hewing closely to the topics that appear in Michael Slote's REP entry "Moral Psychology."
Any feedback would be great, either in the comments thread or via email (joshdmay at gmail). Of course, if I've made a shameful error, feel free to let me know privately! I throw this initial draft out there only because I hope input will make it the best resource it can be. There are many constraints on this particular piece, so I certainly cannot accommodate all feedback. But I'll do what I can.
We're trying to determine potential interest in a lab manager (or pre- or post-doc) position in the Behavioral Philosophy Lab at Schreiner University. We don’t have an official position to announce yet, but potential funding for this position is already in place via a recent grant. The responsibilities would be to help set up the lab and to manage and analyze large amounts of data (so stats experience, or a willingness to learn stats, is required). Responsibilities would also include helping manage the transition of the Behavioral Philosophy Lab as we move to new facilities. The new laboratory suite includes a small reception area, group testing and conference room, faculty/staff office space, individual testing areas/cubicles, and basic kitchen/lavatory facilities (old lab page is here... new one coming soon, maybe with your help). The position would also afford the chance to develop independent research and to collaborate with me and with Edward Cokely (link here), as well as mentoring undergraduates and potentially teaching some classes (but only if desired & qualified). The lab is located in the small resort town of Kerrville, Texas, where the cost of living is very affordable and the weather is beautiful year round (if a bit hot in the summer). Ideally, the starting date would be this fall (2011), and the position would run 9-12 months. Please let us know as soon as possible if you might be interested in this possible position (email me at firstname.lastname@example.org with questions or CVs).
Eddy Nahmias and Dylan Murray have conducted some very intriguing new studies in the experimental philosophy of free will. I think they are on to something important, but it's hard to know quite how to explain their results.
In any case, I offer a brief summary here (along with some quick follow-up studies I ran to extend their findings).
The preliminary program of the Workshop "Formal Epistemology Meets Experimental Philosophy" is here. This promises to be an exciting event!
Stephan Hartmann, Chiara Liscandra, and I think that there is much value in bringing together formal models (e.g., of explanatoriness, of coherence, etc.) and experimental philosophy, and we hope that this will inspire some experimental philosophers!
In this blog (and also at Certain Doubts) there have been several discussions on whether, or to what extent, folk attributions of knowledge are sensitive to practical interests. One issue raised there is whether the folk are tracking knowledge facts predicted by a new theory or cluster of theories in epistemology known as “Pragmatic Encroachment” (McGrath, Fantl, Stanley, Hawthorne, Weatherson, among others). According to this new approach, knowledge is not purely an intellectual notion. It is infused with pragmatic considerations. Recently, Shawn Simpson and I have been running some experiments to investigate this further. In particular, we are interested in a couple of hypotheses which we think support pragmatic encroachment. (These results further support the work I have done before on the topic, which can be seen here: “Knowledge, Experiments and Practical Interests”, but they may (perhaps) conflict with some of the interesting work and discussion carried out by May, Sinnott-Armstrong, Hull, and Zimmerman; Buckwalter; Feltz and Zarpentine; and Schaffer and Knobe. For a critical review of some of this literature, see DeRose or me.)
Hypothesis 1. Imagine there are two agents in very similar situations where P is true and who (a) both have the same amount of evidence for P (or stand in the same "intellectual" relation to P), (b) both believe or accept P, (c) both have the same mistaken opinion concerning what is at stake for them concerning P, but the situations differ in what is actually at stake for the agents (one is in a high stakes situation and the other in a low stakes situation). Hypothesis H1 says there are situations that fit the constraints of the pair just described where people are more likely to attribute knowledge that P to the agent in the low stakes situation than to the agent in the high stakes situation. We gathered some evidence which suggests this hypothesis is true. It is worth noting that previous studies that probed for stakes, including my own, did not quite ensure all of (a-c) above are in place. Shawn and I think that (a-c) are important.
Hypothesis 2. The second hypothesis concerns the connection between knowledge and action. Many defenders of Pragmatic Encroachment have argued for principles such as the following: [ACTION] (for P at play) If X knows that P, then it is proper for X to act on P (Fantl and McGrath). H2 is the hypothesis that the folk attribute knowledge and appraise behavior in accordance with ACTION. Our results also support H2.
To test (H1) and (H2), we assigned one of the following two vignettes to workers on Amazon Mechanical Turk living in the United States:
LOW STAKES: Peter is a college student who has entered a contest sponsored by a local bank. His task is to count the coins in a jar. The jar contains 134 coins. Peter mistakenly thinks the contest prize is one hundred dollars. In fact, the prize is just a pair of movie passes for this weekend. Peter wouldn’t want them, however, since he is leaving town this weekend. So nothing bad would happen if Peter doesn’t win the contest. After counting the coins just once, Peter concludes there are 134 coins in the jar. His friend, who also thinks the prize is one hundred dollars, says to Peter: “you only counted once; even if there are in fact 134 coins in the jar, you don’t know there are 134 coins in the jar. You should count them again”.
HIGH STAKES: Peter is a college student who has entered a contest sponsored by a local bank. His task is to count the coins in a jar. The jar contains 134 coins. Peter mistakenly thinks the contest prize is one hundred dollars. In fact, the prize is $10,000, which Peter really needs. He would use the money to help pay for a life-saving operation for his mother, who is sick and cannot afford healthcare! So the stakes are high for Peter: if he doesn’t win the contest, his mother could die. After counting the coins just once, Peter concludes there are 134 coins in the jar. His friend, who also thinks the prize is one hundred dollars, says to Peter: “you only counted once; even if there are in fact 134 coins in the jar, you don’t know there are 134 coins in the jar. You should count them again”.
Subjects read the following prompt:
Besides giving Peter advice about what he should do, Peter’s friend also said that Peter doesn’t know something. He said that since Peter only counted the coins once, Peter doesn’t know that there are 134 coins in the jar (even if it turns out there are 134 coins in the jar). We are interested in your opinion about this. To what extent do you agree with the following statement: “PETER KNOWS THERE ARE 134 COINS IN THE JAR”
Subjects were asked to mark their answers on a 7-point Likert scale, with 0 = ‘strongly disagree’, 3 = ‘neutral’, and 6 = ‘strongly agree’. We found a statistically significant difference, t(163) = 2.23, p = .027, between responses to the High Stakes (M = 3.058, SD = 1.76) and Low Stakes (M = 3.68, SD = 1.76) scenarios, d = .35. This supports our first hypothesis (H1).
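For readers who want to see how the reported effect size relates to the summary statistics, here is a minimal sketch of the standard pooled-variance t and Cohen's d computed from group means and SDs. Note the 82/83 split between conditions is an assumption on our part (only the total df = 163 is reported), so the computed t comes out slightly different from the reported 2.23; d matches the reported .35.

```python
import math

def pooled_t_and_d(m1, s1, n1, m2, s2, n2):
    """Independent-samples t (pooled variance) and Cohen's d from summary statistics."""
    # Pooled variance: weighted average of the two group variances
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    sp = math.sqrt(sp2)
    t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
    d = (m1 - m2) / sp  # Cohen's d uses the pooled SD
    return t, d, n1 + n2 - 2

# Reported summary stats; the 82/83 group split is an assumption (only df = 163 is given)
t, d, df = pooled_t_and_d(3.68, 1.76, 82, 3.058, 1.76, 83)
print(round(t, 2), round(d, 2), df)  # prints: 2.27 0.35 163
```

Since the two SDs are equal here, the pooled SD is just 1.76, and d reduces to the raw mean difference (0.62) divided by 1.76.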
To test the second hypothesis (H2), we asked our subjects from the previous probe whether they also thought that Peter should count the coins again. Here, they only had three options: NO, NEUTRAL, and YES. Coding NO and NEUTRAL as one category and YES as a second category, we can compare the answers to the knowledge prompt above across these two groups. If people tend to act in accordance with the ACTION principle, we should see that subjects in the YES category are less likely (compared to the other group) to agree with the knowledge statement from the prompt above. In fact, this is what we found: YES SHOULD (M = 3.1, SD = 1.69), NO/NEUTRAL (M = 3.7, SD = 1.9), t(163) = 1.91, p = .029 (one-tailed), d = .3.
These are two simple examples of the sorts of experiments Shawn and I have been running over the last couple of months. The other experiments tend to give us results in this direction as well, as did the experiments I report in the paper linked above. We can ask several questions about these modest results. (a) Do these results really support (H1) and/or (H2)? (b) Do they support the idea that folk ascriptions of knowledge pattern in the direction that pragmatic encroachment theories predict for knowledge? And (c) do they support the epistemic claim that knowledge is sensitive to stakes in the sense of Pragmatic Encroachment? What do you guys think? Shawn and I would very much appreciate any feedback on any of these issues.
Over the past several years, experimental philosophers have presented several studies indicating that some philosophically relevant intuitions are subject to a host of undesirable biases (order effects, framing effects, and actor-observer differences, among others). For example, our research suggests that heritable personality traits predict bias in some fundamental philosophically relevant intuitions (Feltz & Cokely 2008, 2009; Cokely & Feltz, 2009; Feltz, Perez, & Harris, in press; Feltz, Harris, & Perez, 2010). In response to these findings, “philosophical expertise” has been used to shield some parts of standard philosophical practice from the worries presented by experimental philosophers (e.g., Ludwig, 2007; Kauppinen 2007; Horvath, 2010; Sosa, 2010; Williamson, 2007, 2011). One important part of the “Expertise Defense” is that philosophers are assumed to be relevantly different from the folk (e.g., as a result of their years of training), and consequently philosophers' intuitions shouldn’t display the same (or similar) biases.
But more recently, serious concerns have been raised by experimental philosophers about the Expertise Defense. Some have used indirect strategies, suggesting that philosophical expertise is unlike expertise in areas known to produce the relevant differences (e.g., in chess) (Weinberg, Gonnerman, Buckner, & Alexander, 2010; see related discussion here). Others have opted for direct strategies, showing that for many important everyday behaviors (e.g., voting, returning library books, showing common courtesy) philosophers often display the same (or similar) biases as the folk (Schwitzgebel 2009; Schwitzgebel & Rust, 2009, 2010; Schwitzgebel & Cushman, in press). In a new paper (Schulz, Cokely, & Feltz, in press), we also adopt the direct strategy and present the first evidence that personality predicts persistent bias in verifiable expert intuitions about free will and moral responsibility. These results suggest that, in at least some important fundamental philosophical debates, the Expertise Defense fails.