How The Good Place illustrates an unorthodox theory in moral psychology

On The Good Place, people are judged on the morality of virtually everything they do. This might seem unfair, but what really separates the moral from the non-moral? Evidence from psychology suggests that there may be no clear distinction between moral and non-moral cases.

If you’re like most people, you probably don’t consider buying a tomato to be a moral decision. Which is why it was so startling, near the end of Season 3 of the TV series The Good Place, when the main characters discovered that everyone on Earth was being judged for their tomato purchases, among countless other things, in order to determine whether they were good enough to be admitted into the Good Place, the show’s version of Heaven. As Michael explains in the Season 3 episode “Chidi Sees the Time Knife”:

“These days just buying a tomato at a grocery store means that you are unwittingly supporting toxic pesticides, exploiting labor, contributing to global warming.”

In other words, the main characters learned that the modern world is so ethically fraught that even seemingly mundane actions are full of moral consequences. But the scene also astutely illustrates an unorthodox view in cognitive science: that there is no real difference between what is “moral” and what isn’t[1].

To understand this view, compare the tomato-buying decision to the decision to lie to a close friend. Before you started reading this article, you probably thought that buying a tomato wasn’t a moral decision, but that lying to a close friend was. Consider another pair of examples. Most people would consider the rule “You should not send emails threatening people’s lives” to be a moral norm: It’s wrong to threaten people with harm. On the other hand, most people would consider the rule “You should not send emails in all caps” to be just a convention: It’s a bad idea or maybe impolite to break this rule, but it isn’t wrong in a moral sense.

The intuition behind both of these pairs of examples is that there is a moral domain and a non-moral domain. That is, there is a class of norms and decisions that applies to what is right and wrong, another class that applies only to conventions and preferences, and a clear separation between the two. This intuition is shared by many psychologists who study moral judgment, and it is implicitly baked into their theories of moral psychology.

But as Michael explained on The Good Place, this simple dichotomy between the moral and non-moral domains isn’t really so simple. Even if you personally don’t consider buying a tomato to be a moral decision, there surely is someone else who does. One person’s mindless decision is another person’s agonizing moral dilemma.

People disagree on what counts as moral

Psychologists who study how people understand concepts like “game”, “cup”, and “love” have known for a long time that people don’t represent these concepts as definitions with rigid boundaries. Instead, people think of concepts in terms of family resemblance, in which individual members of a category have features in common with other members but have no set of unifying properties[2] [3]. Additionally, people disagree (sometimes with themselves) about whether some instances are members of a category like “cup” depending on factors like context[4] [5].
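
To make this concrete, here is a toy sketch of family-resemblance categorization (my own illustration, not a model from the cited papers). Membership in a category is a graded similarity score computed from feature overlap with known members, rather than a yes/no check against a definition; the features and exemplars are invented for the example.

```python
# Toy illustration of family-resemblance categorization: membership is a
# graded similarity score (feature overlap with known exemplars of "cup"),
# not a boolean check against a definition. All features are invented.

def membership(item_features, exemplars):
    """Average Jaccard similarity between an item and a category's exemplars."""
    scores = [len(item_features & ex) / len(item_features | ex) for ex in exemplars]
    return sum(scores) / len(scores)

cup_exemplars = [
    {"has_handle", "holds_liquid", "ceramic", "small"},      # coffee cup
    {"holds_liquid", "paper", "disposable", "small"},        # paper cup
    {"has_handle", "metal", "decorative", "large"},          # trophy cup
]

mug = {"has_handle", "holds_liquid", "ceramic", "large"}
bowl = {"holds_liquid", "ceramic", "wide", "large"}

print(f"mug:  {membership(mug, cup_exemplars):.2f}")   # higher score: a good example of "cup"
print(f"bowl: {membership(bowl, cup_exemplars):.2f}")  # lower but nonzero: a fringe case
```

Notice that no single feature is shared by all three exemplars; the category hangs together through overlapping similarities, and membership comes in degrees.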

Evidence suggests that moral concepts may be no different. Just as people with different backgrounds can disagree about whether stock car racing is a sport, people with different backgrounds can disagree about what qualifies as a moral violation. This is exactly what multiple studies have found. For example, in one influential study, Jonathan Haidt, Silvia Koller, and Maria Dias[6] collected moral judgments from 180 adult subjects spanning multiple cities and socioeconomic classes. Specifically, they collected data from 30 subjects each from a wealthier and a poorer neighborhood in each of three cities: Philadelphia in the United States, and Porto Alegre and Recife in Brazil.

They presented each subject with a set of stories, each describing a situation designed to be offensive but victimless. For example, one of the stories read:

“A family's dog was killed by a car in front of their house. They had heard that dog meat was delicious, so they cut up the dog's body and cooked it and ate it for dinner.”

Subjects answered several questions about each story, including how wrong they thought the action in the story was. The researchers found what you might expect to find if moral concepts are like other concepts: different groups of people disagreed. Specifically, they found that people from lower socioeconomic classes were more likely to judge these victimless actions as wrong. For example, for the dog story above, 80% of the subjects from the poorer neighborhood in Philadelphia thought the family should be stopped or punished for what they did, compared to just 10% of the subjects in the wealthier neighborhood. The differences were smaller for other stories and in other cities, but overall, there was a consistent and statistically significant effect of socioeconomic class.
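
To get a feel for how stark that Philadelphia contrast is, here is a quick back-of-the-envelope check (my own, not an analysis from the paper). It converts the reported percentages into counts, assuming 80% and 10% of the 30 subjects per neighborhood, and runs Fisher’s exact test using SciPy:

```python
# Back-of-the-envelope check of the Philadelphia contrast reported above.
# Counts are inferred from the reported percentages (80% and 10% of 30
# subjects per neighborhood); an illustration, not the paper's analysis.
from scipy.stats import fisher_exact

poorer    = [24, 6]   # 80% of 30 said "stop or punish", 20% did not
wealthier = [3, 27]   # 10% of 30 said "stop or punish", 90% did not

odds_ratio, p_value = fisher_exact([poorer, wealthier])
print(f"odds ratio: {odds_ratio:.0f}, p = {p_value:.2g}")
# A p-value this small means a gap this wide would be wildly
# unlikely if neighborhood made no difference.
```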

It's important to note that the disagreement wasn’t that some subjects thought that eating the dog was wrong while others thought it was right. Instead, the subjects who didn’t think it was wrong simply didn’t see it as a moral question at all. The disagreement was about whether to classify this case as moral or non-moral.

Results like these don’t necessarily mean that morality is subjective. But as cognitive scientist Stephen Stich[1] has pointed out, these aren’t the sorts of results we would expect to find if there were a well-defined moral domain that clearly distinguishes moral cases from non-moral cases. If there’s this much disagreement, then maybe there is no sharp dividing line between the moral and non-moral domains.

Our moral reasoning toolkit

If there were a fundamental difference between moral and non-moral cases, we might expect people to rely on a qualitatively different sort of reasoning for moral questions than for non-moral ones. And while it’s true that people tend to be swayed by emotion when making moral judgments[7], studies have shown that people’s judgments on apparently moral and non-moral questions are influenced by similar factors.

For example, in one experiment, psychologists Fiery Cushman and Liane Young[8] presented 20 subjects with a set of moral scenarios in which a person is harmed. In one of these scenarios, someone is driving down the road with a cord hanging out of their car, and the cord knocks someone off a cliff, killing them. They also presented 20 different subjects with a set of analogous non-moral scenarios, in which someone takes the same action but merely affects an object. In the non-moral analogue of the example above, the cord knocks a rock off a cliff.

Both scenarios were compared with an alternative version in which the person failed to act. For example, in the alternative version of the driver scenario, the driver could have slowed down and used the cord to pull the person up from the cliff (or to stop the rock from falling off the cliff) but did not. In all cases, subjects made judgments about how causally responsible the person in the scenario was for the action, how intentional their action was, and how wrong their action was.

The results for the moral scenarios revealed a well-known action/omission distinction. Specifically, subjects judged that the driver who knocked someone off the cliff committed a worse moral violation than the driver who failed to act, even though, in both cases, someone died. Additionally, the subjects rated the driver who knocked someone off as more causally responsible for the person’s death than the driver who failed to act (a difference of about two points on a seven-point scale). This second result is notable because the subjects who read the non-moral scenarios about rocks exhibited the same disparity: they judged the driver who knocked the rock off the cliff to be more causally responsible for the rock falling than the driver who failed to prevent it from falling.

The researchers found a similar result when examining the role of means and side effects. Subjects judged that harming someone was more morally wrong when the harm was the means of achieving a goal than when it was a side effect (imagine pushing a bystander in front of a reckless motorcyclist to prevent the motorcycle from running over a group of children, versus throwing a backpack at the motorcycle, causing it to careen off course and hit a bystander). They also found that subjects judged harms used as means to be more intentional than harms caused as side effects. Subjects showed the same pattern of intentionality judgments for the non-moral scenarios.

What these results may suggest is that people don’t cognitively distinguish between moral and non-moral judgments. Instead, they make judgments about causality and intentionality using a single reasoning toolkit that feeds into both their moral and non-moral judgments. Perhaps, once people decide that a case is a moral one, they engage other faculties like emotion, but initially the mind does not draw a distinction.

Degrees of moral category membership?

When the characters on The Good Place learn that people on Earth are being judged for every mundane decision, they set out to reform the process for evaluating people. Their primary focus is fairness: It doesn’t seem fair that buying a tomato sprayed with pesticides should condemn you to an eternity of torture.

But does that mean that buying a tomato isn’t a moral decision? If you believe that harming people or the environment is a moral matter, and that pesticides, exploitative labor, and global warming harm people or the environment, then perhaps buying a tomato, as Michael points out, is a moral decision. The only question left, then, is to what extent buying a tomato contributes to those outcomes.

Perhaps it is best to think of the moral domain like other concepts, with fuzzy boundaries and graded membership: some instances are better examples than others. Just as you might use a mug as a bowl for cereal in a pinch (and, in that situation, consider it a bowl), a question might count as moral in some contexts but not others. For example, whether or not to hit someone unprovoked is definitely a moral question; whether or not to buy a tomato is a fringe case; and whether or not to go to bed early is a bit like using a mug as a bowl: it’s not normally a moral question, but it might count as one depending on the context.
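
For readers who like to see ideas in code, here is a purely illustrative sketch of that graded, context-sensitive picture; the scores and the context adjustment are made up to mirror the examples above.

```python
# Purely illustrative: "is this a moral question?" as a graded,
# context-sensitive score rather than a True/False classification.
# All numbers below are invented to mirror the examples in the text.

def moral_relevance(base_score, context_shift=0.0):
    """Graded membership in the moral domain, clamped to [0, 1]."""
    return max(0.0, min(1.0, base_score + context_shift))

base_scores = {
    "hit someone unprovoked": 0.95,  # a central member of the moral domain
    "buy a tomato":           0.40,  # a fringe case
    "go to bed early":        0.05,  # normally non-moral...
}

# ...but context can shift membership, the way a mug can count as a bowl:
# say, staying up late when you have to drive your kids to school at dawn.
print(moral_relevance(base_scores["go to bed early"]))                     # 0.05
print(moral_relevance(base_scores["go to bed early"], context_shift=0.5))  # 0.55
```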

I mentioned earlier that the view that there is no difference between the moral and non-moral domains is unorthodox and not shared by all cognitive scientists. But as I’ve explained, it does clarify some puzzling aspects of how people make moral judgments. If it is correct, let’s hope it doesn’t mean that every decision we make is further condemning us to an eternity of torture.

[1] Stich, S. (2018). The moral domain. In K. Gray & J. Graham (Eds.), Atlas of moral psychology (pp. 547-555). New York: The Guilford Press.

[2] Goldstone, R. L., Kersten, A., & Carvalho, P. F. (2012). Concepts and categorization. In Handbook of Psychology, Second Edition.

[3] Rosch, E., & Mervis, C. B. (1975). Family resemblances: Studies in the internal structure of categories. Cognitive Psychology, 7(4), 573-605.

[4] Labov, W. (1973). The boundaries of words and their meanings. New ways of analyzing variation in English.

[5] McCloskey, M. E., & Glucksberg, S. (1978). Natural categories: Well defined or fuzzy sets? Memory & Cognition, 6(4), 462-472.

[6] Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65(4), 613-628.

[7] Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814-834.

[8] Cushman, F., & Young, L. (2011). Patterns of moral judgment derive from nonmoral psychological representations. Cognitive Science, 35(6), 1052-1075.
