White lies and black lies: What they have in common and how they differ

Black lies, or telling a lie to gain a personal benefit, are universally condemned. In contrast, white lies, or telling a lie to please another person, are seen as an innocent part of everyday interactions. Does that mean that white lies have no negative consequences? We discuss the origins and consequences of black lies and white lies, and point out the potentially ugly side of white lies.     

Image from www.pexels.com under CC0 licence

In everyday life, people sometimes tell “black lies”, and sometimes “white lies”. For both types of lies (or deception, see Note i), the deceiver communicates misleading information to another person or group, namely the deceived [1]. However, there is a large difference between black lies and white lies: With black lies, the deceiver tries to gain something at the cost of the deceived. In other words, the deceiver exploits the deceived out of self-interest. A classic example is the notorious used car dealer who lies to customers about the state of the cars for sale. With white lies, the picture looks different: The deceiver lies to please the deceived. For example, most of us have told a friend that their new haircut looks great, while secretly disliking it, in order to please the friend rather than irritate them. Such deception out of affiliative motives means lying in order to deepen a relationship, or to please the deceived by saying what they presumably would like to hear.

Obviously, the deception in the two examples above stems from very different motives, and is therefore usually met with condemnation in the case of black lies, versus affiliation in the case of white lies. But does that make white lies desirable and harmless? In this article, we highlight that white lies can cause harm precisely because people use them to foster relationships and affiliation. More specifically, when people want to affiliate with others, they tend to agree with others’ questions and statements. Affiliation thereby biases response behavior, even on neutral questions and even when nothing can be gained from the response. Such a response bias can distort responses to health surveys, public policy questionnaires, or eyewitness interrogations; in other words, white lies can cause harm by undermining the effectiveness of public policy or by incriminating innocent others. To support this argument, we first review the underlying motives of black lies versus white lies, and then illustrate how research on the prevention of black lies might also be used to prevent the negative consequences of white lies.

Black lies

Deception is so common that not only humans but even animals engage in it. For instance, while apes often simply take food from weaker counterparts, they have also been shown to employ deception. When they can steal food by reaching through either opaque or see-through tunnels, they often reach through the opaque tunnels so that their competitors cannot detect their actions [2, 3]. These cases of deception are exploitative, as the deceiving apes strategically mislead their counterparts for personal benefits (e.g., tasty food). But what determines whether or not people (and apes) engage in such black lies?

An obvious factor that influences whether people deceive is whether they think they will get caught [4]. For example, low chances of being detected increase deception among taxpayers [5, 6]. Such behavior is rational, as being detected reduces the gain one can expect. Imagine the used car dealer who considers lying about a car’s history of accidents in order to charge a higher price for it (i.e., the gain). If the lie is easily detected (e.g., if the car has bumps and scratches), the likelihood of selling the car decreases. Consequently, the dealer will be honest.

Image from www.pexels.com under CC0 licence

A common assumption is that, rationally, deceiving for higher rather than lower gains is more beneficial. Surprisingly, however, this is not what psychological research finds. People seem to cheat about equally often whether high or low gains are at stake [7, 8, 9]. Research suggests that cheating does not depend only on material gains, but also on the psychological costs that deception inflicts on the deceiver. On the one hand, deceiving for larger gains is more attractive than deceiving for smaller gains. On the other hand, deceiving for larger gains carries larger psychological costs [10]. Psychological costs – the internal discomfort that people experience when doing something against their beliefs or values – depend on the magnitude of a lie [11]. In one experiment, participants were paid according to the outcome of a die roll they made in private. When asked about their outcome, they were more likely to commit ‘smaller’ deceptions (e.g., reporting a 5 instead of a 4) than ‘bigger’ ones (reporting a 6 instead of a 1). Thus, it seems that the psychological costs of telling a lie increase with the magnitude of the lie [11].
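
To make this trade-off concrete, here is a minimal simulation sketch of a hypothetical die-roll task: each simulated participant picks the report that maximizes their payoff minus a psychological cost that grows with the size of the lie. The payoff per pip, the quadratic cost function, and the range of cost sensitivities are illustrative assumptions, not the models used in the cited studies.

```python
import random
from collections import Counter

def report_outcome(actual, payoff_per_pip=1.0, lie_cost=0.4):
    """Pick the report (1-6) that maximizes monetary gain minus a
    psychological cost that grows with the size of the lie.
    The quadratic cost term is an illustrative assumption."""
    def utility(report):
        gain = payoff_per_pip * report
        cost = lie_cost * (report - actual) ** 2  # bigger lies feel disproportionately worse
        return gain - cost
    return max(range(1, 7), key=utility)

random.seed(1)
rolls = [random.randint(1, 6) for _ in range(10_000)]
# give each simulated participant their own sensitivity to lying
reports = [report_outcome(r, lie_cost=random.uniform(0.3, 1.5)) for r in rolls]

lie_sizes = Counter(rep - act for act, rep in zip(rolls, reports))
print("Lie size -> frequency:", dict(sorted(lie_sizes.items())))
# Typical output: mostly 0 (honest reports) and +1 ('almost true' lies),
# hardly any large over-reports - echoing the pattern described above [11].
```

Under these assumptions, participants with a high cost sensitivity report truthfully, while the rest over-report by only a point or two rather than jumping straight to a 6.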

The psychological costs of a lie are closely linked to what one thinks about oneself. In general, people want to think of themselves as honest. Telling big lies and deceiving others is incompatible with this image [12]. Telling somewhat smaller lies that are ‘almost true’ is easier to reconcile with a positive self-image [11, 13]. Because generating a plausible justification for one’s lie (e.g., “I almost rolled a 6 with my die”) is often a crucial part of deception, limiting people’s ability to come up with such explanations increases honesty [14]. Furthermore, measures that remind people that they want to be good persons increase subsequent honesty. For example, signing at the top of a self-report form (e.g., a tax return) draws attention to the moral self. Consequently, people cheat less when completing the form [15]. In other words, the human desire to view oneself as a moral person can be used to deter deception.

Taken together, with black lies people try to exploit others. Whether people tell black lies depends on whether something can be gained through the deception, whether they are likely to get caught, and whether psychological costs arise. Yet there is another important factor: the relationship with the deceived. Research shows that cheating socially distant others is seen as more acceptable [16]. However, people deceive close others more frequently [17]. A possible explanation is that there is often more to gain from deceiving close others, and more to lose from revealing unpleasant truths. However, we suggest that this is not the only reason. Deception is not only driven by exploitative motives but can also result from affiliative motives. Therefore, we next discuss cases of deception that result from the motivation to forge a positive relationship with the deceived, or simply to please them.

White lies

When motivated to affiliate with or to please the deceived, the deceiver tries to infer the intentions and attitudes of the deceived and communicates information accordingly [18]. In some cases, this is very easy. When telling a friend that their haircut looks great, it is obvious that this answer would please the friend, and the truth would not. People feel compelled to tell white lies in other situations as well. Those who respond to market research or health behavior questionnaires often try to give answers that please the person asking the question [19, 20]. Just as in the case of the friend’s haircut, it is easy to anticipate that a market researcher would like to hear that you like their product, and that a health researcher would like to hear that you eat vegetables.

Yet telling white lies is not always that simple. More generally, how do respondents in market research and health surveys infer which answers the interviewer expects or desires? Research shows that deceivers rely on the rules and principles of communication to make such judgments. According to these principles, agreement (rather than disagreement) is expected in most everyday communication [21]. We all understand this intuitively. For example, imagine you want to invite a friend to your home for a spaghetti dinner. To be sure that your friend likes spaghetti, you ask, “Do you like spaghetti?”, expecting a “Yes!”. Note that if you assume that your friend doesn’t like broccoli but want to double-check, you would phrase the question differently: “You don’t like broccoli, do you?”, expecting a “No!”. When we expect “Yes!” answers, we typically ask more common, positive questions (“Do you…?”), whereas when we expect “No!”, we ask less common, negative questions (“You don’t…?”).

Image from www.pexels.com under CC0 licence

Respondents who seek to give expected answers follow the same logic. Positive questions [21] signal that “Yes” is the expected or appropriate answer. Thus, people who want to please the person asking the question typically resort to general agreement, for example by giving more “yes” responses [22, 23]. A focus on the assumed intention of the person asking the question thereby results in an affirmative response bias [24]. Importantly, such an affirmative response bias can unintentionally result in deception from purely affiliative motives. In fact, recent research highlights the affiliative motive behind such deceptive communication [25]. In these studies, participants agreed more with survey questions when affiliation was primed, for example by imagining that a friend, a likely target of affiliation, would read their answers. When primed with affiliation, participants were more likely to agree with different kinds of questions, such as “If my brother or sister fails, I feel responsible” or “I value being in good health above everything” [26]. Interestingly, participants showed the same tendency to agree when the questions were about an average person (e.g., “The average person values being in good health above everything”). In these cases, agreement conveys no desirable or positive information about the deceiver, and there is no benefit to be gained from such responding. Nevertheless, the deceiver responds in a way that matches the expected answer of the deceived, in order to please the deceived. Remarkably, deception in these instances occurs without any expectation of receiving a favor in return. People usually tell white lies because they expect something from their counterpart (e.g., a positive relationship), but when primed with affiliation, people deceive even when nothing can be expected from the counterpart (i.e., the researcher).

The consequences of black lies and white lies

Now, we discuss how black lies and white lies affect the deceiver, the deceived, and third parties. Unsurprisingly, the literature shows that exploitative deception for one’s own benefit harms or terminates the relationship between the deceiver and the deceived [27]. A customer who discovers the used car dealer’s lies would probably leave immediately. In contrast, affiliative deception often entails positive interpersonal outcomes [28]. In the case of pro-social lies that solely benefit others, observers of the deception like and trust the deceiver more than they like and trust an honest person [29]. An example of such a pro-social lie would be to tell one’s boss that a colleague did great work, even if the work was only mediocre. In such cases, telling the truth for the sake of being truthful is seen as selfish and less moral [29].

Are white lies, then, without any harm? Even when the motives behind deception are purely affiliative, the deceived as well as third parties might face negative consequences. In the case of survey responding, the deceived researcher may obtain false results, which could negatively affect policies for third parties such as the general public [25]. Ultimately, false research results waste taxpayer money when, for example, public health programs to combat obesity or smoking build on them. Even more concerning might be cases of affiliative deception in eyewitness interviews. In such situations, witnesses might want to help and please the interrogator. Assuming that affirmation is the desired answer [23], witnesses in doubt might respond with “yes” to questions because they assume that this is what the interrogator expects. In such cases, innocent third parties might be convicted of crimes, even though the deceiver had only affiliative intentions [30].

Given their negative consequences, much research has been dedicated to the detection of black lies (see [31] for a comprehensive approach). Concerning the detection of white lies, research has focused on detecting socially desirable responding [20, 24]. However, white lies also occur on neutral items that convey no desirable information [25]. More research is needed on how such deception can be detected. One way to address the problem of white lies in surveys could be to use implicit measures that do not rely on explicit verbal questions, or to phrase questions in neutral ways so that people cannot infer which answer the person asking the question might desire. Nevertheless, if you want to know whether your friend truly likes your new haircut, your friend will probably still infer, even from a neutral question like “What do you think of my haircut?”, that you want them to say something positive.
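
As an aside, the survey-methods literature behind the acquiescence findings cited above [22, 23, 24] often diagnoses yes-saying by pairing positively and negatively keyed versions of the same statement: a respondent who agrees with both is agreeing regardless of content. Below is a minimal sketch of that idea; the item wordings, scale coding, and cut-off are invented illustrations, not an established instrument.

```python
from statistics import mean

# Each pair states the same content in opposite directions
# (1 = strongly disagree ... 5 = strongly agree); wordings are invented examples.
item_pairs = [
    ("I eat vegetables every day.", "I rarely eat vegetables."),
    ("If my sibling fails, I feel responsible.", "My sibling's failures are not my responsibility."),
]

def acquiescence_index(responses):
    """Mean agreement across both keyings of each pair: values near 3 suggest
    content-driven answers, values well above 3 suggest yes-saying."""
    return mean((responses[pos] + responses[neg]) / 2 for pos, neg in item_pairs)

# A respondent who agrees with both directions of each statement:
respondent = {
    "I eat vegetables every day.": 5,
    "I rarely eat vegetables.": 4,
    "If my sibling fails, I feel responsible.": 5,
    "My sibling's failures are not my responsibility.": 5,
}

index = acquiescence_index(respondent)
print(f"Acquiescence index: {index:.2f}")
if index > 3.5:  # illustrative cut-off, not an established standard
    print("Pattern looks like agreement regardless of content.")
```

Such balanced keying flags agreement-driven responding, but it does not by itself reveal the affiliative motive behind it, which is why the detection question raised above remains open.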

Regarding the personal consequences for the deceiver, we have discussed how telling black lies can be psychologically costly [11]. Research has not yet investigated whether telling white lies also carries psychological costs for the deceiver. For example, does giving a certain response in a survey to please a researcher also threaten the deceiver’s honest self-image? Unlike in cases of deception driven by exploitative motives, the perceived benefits of affiliative deception may outweigh its costs: the positive feeling of having pleased the researcher might be stronger than the psychological cost of having lied. Otherwise, it would be difficult to understand why people engage in affiliative deception at all.

Whereas black lies are in many instances prevented through deterrence and punishment (e.g., termination of a relationship), white lies are assumed to be a necessary lubricant of daily interactions that does not need to be prevented. After all, why prevent small flattering lies about a friend’s haircut? However, as pointed out above, some white lies can indeed have negative consequences, for example by distorting the results of research in the public interest. One way to prevent such white lies, and thereby support public policy, is to draw on the above-mentioned psychological costs of deception. Deceivers might think that their affirmative answers please market or health researchers without inflicting any costs on the deceived. These researchers might therefore, at the outset of their studies, make explicit that their research is hurt even by well-meant deceptive answers. Deceivers might thereby become aware of the costs of their deception, refrain from affiliative deception, and give researchers more accurate answers.

Another way to prevent exploitative deception is to highlight people’s moral image of themselves (e.g., by having them sign before providing information) [15]. The question arises whether this measure could also reduce affiliative deception. Emphasizing the moral self when responding to questionnaires could lead to more accurate answers, because people would focus on their own ethicality rather than on what kind of answer would please the researcher. Doing so could thus also increase accuracy in market research or public health surveys.

Deception is part of everyday human interaction. Although the used car dealer who lies to customers might at first sight not have much in common with the survey participant who mostly responds with “yes”, we argue that both are examples of deception. Importantly, whereas we all immediately understand the harm in telling black lies, telling white lies can ultimately be harmful as well.

Notes

i We treat lying as one special case of deception, in which an untruth is stated, whereas deception can also take other forms, such as physically hiding something. However, there is an ongoing debate about their relation [32].

References

1 Gneezy, U. (2005). Deception: The role of consequences. The American Economic Review, 95, 384-394.

2 Hare, B., Call, J., & Tomasello, M. (2006). Chimpanzees deceive a human competitor by hiding. Cognition, 101, 495-514.

3 Melis, A. P., Call, J., & Tomasello, M. (2006). Chimpanzees (Pan troglodytes) conceal visual and auditory information from others. Journal of Comparative Psychology, 120, 154–162.

4 Nagin, D. S., & Pogarsky, G. (2001). Integrating celerity, impulsivity, and extralegal sanction threats into a model of general deterrence: Theory and evidence. Criminology, 39, 865-892.

5 Polinsky, A.M., & Shavell, S. (2000). The economic theory of public enforcement of law. Journal of Economic Literature, 38, 45–76.

6 Fischer, C. M., Wartick, M., & Mark, M. M. (1992). Detection probability and taxpayer compliance: A review of the literature. Journal of Accounting Literature, 11, 1-46.

7 Conrads, J., Irlenbusch, B., Rilke, R. M., Schielke, A., & Walkowitz, G. (2014). Honesty in tournaments. Economics Letters, 123, 90–93.

8 Fischbacher, U., & Föllmi-Heusi, F. (2013). Lies in disguise: An experimental study on cheating. Journal of the European Economic Association, 11, 525–547.

9 Kriss, P. H., Nagel, R., & Weber, R. A. (2013). Implicit vs. explicit deception in ultimatum games with incomplete information. Journal of Economic Behavior & Organization, 93, 337-346.

10 Charness, G., & Dufwenberg, M. (2006). Promises and partnership. Econometrica, 74, 1579–1601.

11 Hilbig, B. E., & Hessler, C. M. (2013). What lies beneath: How the distance between truth and lie drives dishonesty. Journal of Experimental Social Psychology, 49, 263-266.

12 Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45, 633–644.

13 Shalvi, S., Dana, J., Handgraaf, M. J. J., & De Dreu, C. K. W. (2011). Justified ethicality: Observing desired counterfactuals modifies ethical perceptions and behavior. Organizational Behavior and Human Decision Processes, 115, 181–190.

14 Shalvi, S., Eldar, O. & Bereby-Meyer, Y. (2012). Honesty requires time (and lack of justifications). Psychological Science, 23, 1264-1270.

15 Shu, L. L., Mazar, N., Gino, F., Ariely, D., & Bazerman, M. H. (2012). Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end. Proceedings of the National Academy of Sciences, 109, 15197-15200.

16 Mealy, M., Stephan, W., & Urrutia, I. C. (2007). The acceptability of lies: A comparison of Ecuadorians and Euro-Americans. International Journal of Intercultural Relations, 31, 689-702.

17 Whitty, M. T., Buchanan, T., Joinson, A. N., & Meredith, A. (2012). Not all lies are spontaneous: An examination of deception across different modes of communication. Journal of the American Society for Information Science and Technology, 63, 208-216.

18 Echterhoff, G. (2010). Shared reality: Antecedents, processes, and consequences. Social Cognition, 28, 273–276.

19 Page, M. M. (1981). Demand compliance in laboratory experiments. In J. T. Tedeschi (Ed.), Impression management theory and social psychological research (pp. 57-82). New York: Academic Press.

20 Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93-105.

21 Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics: Vol. 3. Speech acts (pp. 41-58). New York: Academic Press.

22 Smith, P. B. (2004). Acquiescent response bias as an aspect of cultural communication style. Journal of Cross-Cultural Psychology, 35, 50-61.

23 Van Herk, H., Poortinga, Y. H., & Verhallen, T. M. (2004). Response styles in rating scales: Evidence of method bias in data from six EU countries. Journal of Cross-Cultural Psychology, 35, 346-360.

24 Ones, D. S., Viswesvaran, C., & Reiss, A. D. (1996). Role of social desirability in personality testing for personnel selection: The red herring. Journal of Applied Psychology, 81, 660-679.

25 Steinmetz, J., & Posten, A.-C. (2017). Physical temperature affects response behavior. Journal of Experimental Social Psychology, 70, 294-300.

26 Singelis, T. M. (1994). The measurement of independent and interdependent self-construals. Personality and Social Psychology Bulletin, 20, 580-591.

27 McCornack, S. A., & Levine, T. R. (1990). When lies are uncovered: Emotional and relational outcomes of discovered deception. Communication Monographs, 57, 119-138.

28 Levine, E. E., & Schweitzer, M. E. (2014). Are liars ethical? On the tension between benevolence and honesty. Journal of Experimental Social Psychology, 53, 107-117.

29 Levine, E. E., & Schweitzer, M. E. (2015). Prosocial lies: When deception breeds trust. Organizational Behavior and Human Decision Processes, 126, 88-106.

30 Posten, A.-C., & Steinmetz, J. (2018). Temperatur und Zustimmung – ein bisher unbekannter Zusammenhang: Einblicke für Praxis und Forschung [Temperature and agreement, a previously unknown relationship: Insights for practice and research]. Praxis der Rechtspsychologie.

31 Abouelenien, M., Pérez-Rosas, V., Mihalcea, R., & Burzo, M. (2014). Deception detection using a multimodal approach. In Proceedings of the 16th International Conference on Multimodal Interaction (pp. 58-65).

32 Stokke, A. (2013). Lying, deceiving, and misleading. Philosophy Compass, 8, 348-359.