Philosophers of mind typically distinguish experiential states or subjective experiences (like seeing red, feeling pain, or feeling guilt) from intentional states (like believing or wanting) on the basis of their purportedly obvious phenomenal character. But if subjective experiences really are distinctive because they possess an obvious and unmistakable phenomenal character—in other words, because there is “something it is like to occupy them”—then presumably philosophers and non-philosophers will categorize the same mental states as subjective experiences. Specifically, if philosophers have identified a certain type of mental state as a paradigmatic subjective experience, then we should expect ordinary people to identify the same type of mental state as a subjective experience as well. But do they?
One method experimental philosophers and cognitive scientists have been using to get at this question of how people categorize mental states draws on philosophical thought experiments, which have long asked us to consider what mental states we would be willing to attribute to other entities. The method assumes that if ordinary people categorize a mental state as an experience insofar as it possesses an unmistakable phenomenal character, then we should expect this to be reflected in their attributions of phenomenal states to other entities. As Sytsma and Machery (2010) write, “we should expect the folk to deny that an entity, be it a simple organism, a simple robot, or a zombie, that lacks phenomenal consciousness can see red just as readily as they deny that it can be in pain” (302).
In a new paper, Sytsma & Machery (S&M) take just this approach, examining how philosophers and the folk attribute different experiential states to a non-human entity: a simple robot named Jimmy (the “S&M robot”). In each of their studies, S&M present experimental participants with vignettes in which the S&M robot differentiates boxes on the basis of visual or olfactory cues, receives a high-voltage shock, or encounters a violent competitor bot. For each scenario, they ask participants whether the robot (e.g.) saw red, smelled bananas, felt pain, or felt angry. What S&M discover is that participants are sometimes willing to ascribe subjective experiences to a robot, but that these ascriptions vary both within and across modalities (participants are willing to attribute seeing red and smelling isoamyl acetate to the robot, but not feeling pain or smelling vomit). In the paper, as Justin Sytsma has recently and succinctly put it, S&M claim (i) that non-philosophers do not distinguish subjective experiences by their common possession of a manifest phenomenal character, and (ii) that the folk perhaps recognize an orthogonal distinction, between mental states that are valenced (such as feeling pain, smelling vomit, or experiencing anxiety) and states that are not (such as seeing red or having a belief).
We greatly admire the work of S&M. And we think that S&M have indeed found strong evidence that philosophers and non-philosophers may have different concepts of subjective experience. However, we question their hypothesis that the ordinary concept can be explained in terms of valence. Recently, Mark Phelan and I have been conducting experiments suggesting that people are perfectly willing to attribute such paradigmatically affective states as smelling vomit and feeling guilt to a robot, so long as these states are appropriately related to the function the robot was designed to perform.
What’s more, our experiments demonstrate that this effect is not explained by people assuming that a robot with a distinctive function is more complex: we independently varied complexity and function in our studies, and found the same asymmetry for function in a relatively simple as well as a relatively complex robot. Nor were we able to detect the predicted relationship between state attribution and people’s judgments about whether a state is positively or negatively valenced. On the basis of our results, we go on to argue that S&M’s original findings arise because their test vignettes lead participants to draw specific conclusions about the function of the robots discussed in their studies, and not because the folk recognize a discontinuity between valenced and non-valenced states.
Read more about these findings in a preliminary write-up of our results here. Comments very welcome!