The paper argues against the consensus view that experiments have shown a normative dimension to the folk concept of intentional action (or at least to its application).
Our preferred hypothesis is that the outcome of an action is classified as intentional only if the agent is taken to have assigned it some relative importance when deciding what to do.
We argue that the existing experiments support this hypothesis and we present data from two experiments of our own that we claim can only be explained by our hypothesis.
Our hypothesis is broader than Machery’s cost–benefit analysis account, since cost in relation to benefit is only one among various ways in which relative importance can be assigned to a side-effect. We argue that our hypothesis explains the data Mallon presents against Machery as well as Machery’s data (and Knobe’s data, Nadelhoffer’s data, our data, and more).
Over the course of the paper, we develop a methodological critique of the preceding literature. We are particularly concerned with the effect that changing one word in a vignette (e.g. changing ‘harm’ to ‘help’ in Knobe’s original experiment) has on the way the reader takes other sentences in the vignette (e.g. ‘I don’t care at all about the environment!’).
We also introduce qualitative analysis of the comments provided by respondents and reflect on the relation between these and the respondents’ actual practice in applying the concept ‘intentional’. From this, we sketch a picture of the relation between experimental philosophy and more traditional conceptual analysis.
There is an on-line discussion over at On the Human about Patricia Churchland and Christopher Suhler's recent work on nonconscious control and agency. Several experimental philosophers have already joined the conversation, so hopefully some of you will follow suit! Here is the general introduction and here is the discussion thread.
Jonathan Phillips is presenting his experimental studies of people's intuitions about freedom over at the Political Philosophy Podcast Symposium.
In an interesting twist, the symposium includes a video that lets you listen to him explain the studies while following along with a PowerPoint presentation (kind of like Justin Sytsma's amazing presentation on intuitions about consciousness). Readers then have a chance to write in with questions, objections, etc.
Definitely something worth checking out.
Consider the following case:
Tanya lives in a small, newly created country in Eastern Europe. Perhaps the most important issue in the region is the treatment of a disenfranchised minority that lives throughout the country. Tanya truly dislikes the minority and wants to further damage them if she can. While public opinion concerning the minority varies greatly, the government has taken the side of the minority. Consequently, a ban has been placed on any action or public speech that is intended to hurt the disenfranchised minority. In other words, the government has made laws against hurting the minority, but Tanya wishes she could hurt them.
Now ask yourself: 'To what extent do these laws diminish Tanya's freedom?'
Once you have decided on the answer to this question, consider a very similar case with one important difference: Tanya wants to help the disenfranchised minority.
Tanya lives in a small, newly created country in Eastern Europe. Perhaps the most important issue in the region is the treatment of a disenfranchised minority that lives throughout the country. Tanya truly cares about the minority and really wants to help them if she can. While public opinion concerning the minority varies greatly, the government has sided against the minority. Consequently, a ban has been placed on any action or public speech that is intended to help the disenfranchised minority. In other words, the government has made laws against helping the minority, but Tanya wishes she could help them.
Now ask yourself the same question again: 'To what extent do these laws diminish Tanya's freedom?'
In an experiment I conducted in which participants were presented with these two cases, I found a very interesting result: participants thought that Tanya's freedom was much more diminished in the second case than in the first. In other words, they judged that people's freedom is diminished far more when they are prevented from doing something morally good than when they are prevented from doing something morally bad. After noticing this result, I conducted two further studies, which confirmed the effect found in the first survey.
I would like to know how other people interested in experimental philosophy might explain this effect. Specifically, what is it about the folk concept of freedom that it elicits this result? Any and all suggestions are welcome.
I propose one possible explanation and survey all the experiments in the full paper, here: Freedom: Morality and Folk Intuitions
First, let me congratulate Eddy, Thomas, and the other organizers of the Workshop on Experimental Philosophy that was held at the outset of this year's meeting of the SPP. It was a terrific event, and I was delighted to have the opportunity to participate in it.
My talk at the workshop focused on "Intuitions of Negligence"--those intuitive judgments of reasonableness that jurors are asked to render every day in courtrooms across the country about cases involving unintentional harm. As I remarked, these judgments are quite interesting from a cognitive science/experimental philosophy perspective. First, jurors are not given much guidance on how to decide whether given conduct is negligent. Rather, they are simply told to consult their own sense of how a reasonably prudent person would have acted under the circumstances. New York's pattern jury instructions are typical in this regard:
"Negligence is lack of ordinary care. It is a failure to use that degree of care that a reasonably prudent person would have used under the same circumstances. Negligence may arise from doing an act that a reasonably prudent person would not have done under the circumstances, or, on the other hand, from failing to do an act that a reasonably prudent person would have done under the same circumstances."
There are minor variations across jurisdictions: some states use the phrase "reasonably careful person" instead of "reasonably prudent person," for example. But the fact remains that, in virtually all American jurisdictions, juries are not given much guidance on the negligence issue, beyond being asked to consider what a reasonably prudent (or careful) person would have done in the same situation.
Second, American juries are generally not required to explain or justify their intuitions of negligence. Instead, they are simply asked to render a yes-or-no verdict, without accompanying reasons. (The trial judge or a reviewing court may override this determination, of course, but the dominant trend in American law is to permit juries wide latitude in making this decision.) While this practice has been criticized, it arguably reflects a sound appreciation of certain inherent limitations of human psychology. In many cognitive domains, intuitive judgments are guided by unconscious principles, but these principles are difficult or impossible to recover after the fact through ordinary processes of reflection or introspection. As a result, an individual's post hoc explanations of her judgments are often misleading, unreliable, or altogether inaccurate. Yet the judgments themselves often appear on reflection to be sound, and the computations supporting them are often surprisingly complex and sophisticated. The locus of certitude, therefore, is correctly located in the intuitive judgments themselves, rather than their accompanying justifications. The ability of ordinary language users to judge whether a novel expression in their language is acceptable or unacceptable is one obvious illustration of this phenomenon, but there are many other familiar examples throughout the cognitive sciences. Brian Scholl offered some nice examples at the workshop, drawn from the study of visual perception.
Much recent work in moral psychology, to which many readers of this blog have contributed, suggests that ordinary moral cognition may fall into the same general pattern. The unconscious computational character of intuitive moral judgment, however, must be shown and not merely asserted. This is what I have sought to do in a new paper on moral grammar and intuitive jurisprudence, which is forthcoming in a volume on moral judgment and decision making in the Psychology of Learning and Motivation series.
The paper, which is a slightly revised and reformatted draft of the version that was recently posted on Legal Theory Blog, incorporates my remarks at the workshop on how a five-variable "moral calculus of risk" can be used to predict and explain moral intuitions in a wide variety of cases involving unintentional harm. Spurred on by some interesting methodological discussions that appeared recently on this blog, Leiter Reports (see, e.g., Jason Stanley's posts here and here), and Savage Minds (see here), the paper also attempts to integrate and serve as a bridge of sorts between experimental philosophy and more traditional philosophy (Descartes, Hume, Kant, Mill, Brentano, etc.), along with relevant work in linguistics and cognitive science (Chomsky, Fodor, Rey, Spelke, etc.), jurisprudence (Bentham, Terry, Salmond, Cardozo, etc.), anthropology (Durkheim, Gluckman, Geertz, etc.), and other disciplines. The paper is therefore somewhat ambitious, and I would welcome comments, criticisms, or suggestions from interested readers. In addition, if any graduate students, post-docs, or others would be interested in collaborating to gather data on the 14 new trolley problems in Table 7, you should feel free to contact me at firstname.lastname@example.org.
Esfand Nafisi recently wrote to me with the following message:
Hi. My name is Esfand Nafisi. I am a 2nd year law student at Northwestern University with a B.S. in psychology. Since watching Professor Stich's series on moral theory and cognition, I have developed a great interest in experimental philosophy, especially as it relates to law.
I plan on spending the bulk of the next year and a half focusing on the role experimental philosophy might play in resolving difficult legal questions concerning things like culpability, intent, etc. Before I begin in earnest, I thought it might be useful to see if there are any experimental philosophers out there who have any thoughts on the topic they'd like to share. I'm looking for ideas, collaborators, instruction, whatever.
I think this project has enormous potential, and I would encourage experimental philosophers either to put up suggestions in the comments section here or to write to Esfand directly at esfand.nafisi at gmail.com.
In a series of well-written and well-researched articles, Gregory Mitchell--the Sheila McDevitt Professor of Law at Florida State University--has launched an assault on the behavioral law and economics movement (or, as he calls it, "legal decision theory"). Legal decision theorists typically rely on empirical research from social psychology--especially the research on heuristics and biases--to undermine some of the foundational assumptions of traditional law and economics (especially the rational-actor models of human psychology and decision-making so prevalent among economists).

According to Mitchell, legal decision theorists all too often make sweeping claims about human rationality (or the lack thereof) that go well beyond the data that have been collected. On his view, much more caution is in order. While conceding that many recent developments in social psychology give us reason to be suspicious of many of the main tenets of law and economics, Mitchell nevertheless thinks that legal decision theorists overstate their case--a trend that he believes may unfortunately threaten to undermine the long-term credibility of empirical research among legal theorists.

Mitchell points out a number of problems with the research relied on by legal decision theorists, many of which are relevant to the work being done in experimental philosophy: small sample sizes, the near-exclusive reliance on between-subject studies, using the rational or right choice as the null hypothesis, overstating the significance of "statistical significance," the dearth of meta-analyses, making inferences about individual differences from group differences, etc. Nearly all of the worries that Mitchell expresses about legal decision theory apply equally to the kinds of studies that we experimental philosophers have relied on so far. As such, I think we would all do well to pay attention to Mitchell's important work in this area.
In a number of areas of philosophy one might be tempted to put forward what I am going to call an "as if" theory in an effort to respond to skeptical arguments. An “as if” theory has the following form:
Even if we have good evidence and/or arguments to the effect that humans lack some property or capacity x, it is nevertheless in our interest to continue believing and/or acting as if we have x.
Take, for example, the suggestion that even if humans happen not to be "metaphysically" free, we may be better off living under the general illusion that we are. Both David Velleman's "epistemic freedom" (2001) and Saul Smilansky's "illusionism" (2000) come to mind. It is easy enough to imagine similar stories being told in other areas as well. In the wake of John Doris' attack on robust character traits via what he calls situationism (2002), for instance, it would be easy enough for a virtue theorist with consequentialist tendencies to argue that we should continue acting as if our character traits were more robust than the empirical data suggest they actually are. Consider another possible “as if” theory: even if it turns out that harsh penalties do not deter violent crime (indeed, even if it turns out that harsher penalties make matters worse!), we are nevertheless better off as a society pretending that harsher penalties do in fact reduce the amount of violent crime.
“As if” theorists have an easy way of shielding themselves from the impact of skeptical arguments. Indeed, they can essentially grant the skeptical premises while arguing that we can avoid the potentially negative social implications of accepting those premises simply by pretending that they are false. Hence, even if humans are descriptively unfree, or even if events are entirely determined (or entirely random, for that matter), or even if many (if not most) of the springs of action lie beyond (or below, or above) the folds of consciousness, or even if our belief in moral objectivity is false, or even if there is no God (or gods), it is still to our advantage to maintain certain illusions to the contrary. In this respect, “as if” theories allow us to respond to the “real” threat of skeptical concerns along roughly Humean lines--i.e., we accept the premises and conclusions of skeptical arguments at face value while in our studies, but eventually find ourselves once again playing backgammon with our friends and engaging in other “mundane” affairs--living as if all of those skeptical arguments were a distant bad dream. On this view, we may naturally have a preference for certain socially adaptive fictions and fantasies. Hence, another benefit of “as if” theories is that they can be coupled with evolutionary explanations for why humans prefer the illusions that we do. They also receive some empirical support from research on the positive societal upshots of self-aggrandizement and other cognitive biases: it turns out that people are generally better off--socially speaking--if they are somewhat out of touch with the truth about their own physical and mental limitations.
If so, this gives us all the more reason to consider the possibility that even if we lack some property or capacity x, perhaps we really would be better off pretending that we nevertheless have x after all.
The problems with self-deception writ large notwithstanding, does anyone think that the “as if” argumentative strategy is an effective one? I haven't really thought it through myself--I am really just curious to see what others think--either about some of the examples I have discussed or others that I have overlooked.
I realize this post is not really something directly relevant to experimental philosophy--but given the recent Supreme Court ruling concerning the execution of juveniles, I figure it is topical enough to merit attention. Plus, I am presently too busy with dissertation writing to post anything more philosophically substantive in nature!
Studies show that a majority of Americans believe that harsh legal sanctions--e.g., the death penalty, mandatory-minimum sentences, "three strikes and you're out" laws, etc.--deter crime. Studies also show that these beliefs may very well be false. Assuming, for the sake of argument, that harsh penalties do not reduce crime, why should we be beholden to the intuitions of average Americans? Of course, even if we agree that we should not be beholden, what other choice do we have? Legislators are supposed to cater to our interests--which are in turn determined to a large extent by our beliefs. As a result, no legislator who wants to be elected or reelected would suggest that we should be "softer on crime"--even though it may turn out that "being softer on crime" would ultimately reduce crime. For instance, if we spent more money on preventative measures such as improved primary and secondary education and better-funded community outreach programs, as well as on rehabilitative programs such as drug treatment, vocational training, and anger management, we might see a reduction in violent crime. Yet these kinds of programs are typically unpopular with the "average Joe"--who thinks they are just liberal attempts to coddle criminals. Hence, there is virtually no chance that the only kinds of programs that might actually reduce the number of violent crimes in this country will be adopted--while obscene amounts of money continue to be spent on constructing prisons (indeed, some states spend nearly as much on their "criminal justice systems" as on their educational systems!). How is this pernicious cycle to be broken? More specifically, isn't this an area where philosophers--along with criminologists, sociologists, and psychologists--ought to be doing more by way of educating the public?
Peter Singer once famously suggested that philosophers were finally "back on the job"--i.e., that philosophers were finally starting to take a more active role in public policy issues. Should this be part of our job qua philosophers? If so, what is the best way of living up to our civic duties and obligations? If not, whose job is it? More importantly, to the extent that we do not join the public fray concerning issues that we examine in the comfort of our studies, why should the "average Joe" care much about what we have to say?
I suppose this post is really about my struggle to figure out how to make philosophy relevant to more "pedestrian" concerns--something many (if not most) philosophers frequently fail to do. In some areas of philosophy--e.g., contemporary analytic metaphysics, epistemology, or the philosophy of language--the reasons for this are quite clear. But in other areas--e.g., social and political theory, legal theory, and ethics--it seems less excusable. Indeed, it is telling that one area of philosophy that is often treated with derision among contemporary analytic philosophers is "applied ethics"--an area that is purportedly less rigorous or scholarly.