Why do we so often ignore the influence of situations on behavior?
Other possible explanations for the error have also emerged. For one, if we cannot see the situational forces at work, we may simply default to assuming that a guy who jumps ahead of us at the pharmacy is not a nice person. After all, we can't know that he is racing home to give life-saving medicine to his sick daughter; all we can see is his upending of the social order. On this view, it is not that we necessarily prefer to view his behavior as imbued with his intentions, just that we are blind to how the situation has constrained it.

Another candidate explanation for the FAE is the just-world hypothesis. Lerner (1977) suggested that people are motivated to view the world as one that rewards people for good behavior and punishes them for bad behavior, so that people get what they deserve in life. It is relatively common, for example, to hear people blamed for being overweight, when physiological circumstances outside their control may contribute more than any apparent laziness (Gaesser, 2002). Here we commit the FAE because we assume they have gotten what they deserve, and that the situation (in this case a combination of environmental and non-conscious physiological factors) played no role in causing their obesity.

A further possibility, originally raised by Gilbert, Pelham, and Krull (1988), is that the error is committed automatically, and only if we are willing and able to expend effort to consider the influence of the situation do we then correct our initial assumption. This possibility is quite compatible with the evidence from neuroscience. None of these ideas, however, quite captures the original research on the FAE, which revealed that even when subjects are told directly that the situation caused a behavior, they often still explain the behavior in terms of dispositions.
In fact, the first work to demonstrate the FAE was designed to show that people can be swayed easily by situations (Jones and Harris, 1967). In Jones and Harris's seminal study, people read essays, purportedly written by American students, that were either for or against the then Cuban leader, Fidel Castro. Being pro-Castro in the States was deeply taboo at the height of the Cold War, and Jones and Harris felt sure that their readers would see through this ruse and never believe that any right-minded student would willingly support Fidel Castro. In one condition, readers were told that the essay writers had been free to choose their pro- or anti-Castro positions. They read the essays and then judged whether the writers themselves were in fact pro- or anti-Castro. Jones and Harris rightly surmised that their subjects would infer that pro-Castro essay writers were privately pro-Castro, and that anti-Castro essay writers were not so in favor of Cuban policy. In the experiment's second condition, however, different subjects were told that the writers' positions had been assigned by a coin flip. That is, the situation the (fictitious) writers found themselves in led directly to their support for naked Communism. When Jones and Harris's intrepid subjects were asked about the writers' true feelings about Castro, the scientists expected them to reason that the writers were simply following orders, and that their essays extolling the virtues of forced rationing read as they did only because the writers had been given no choice. The experiment's big surprise was that the results were just the same as in the first, free-choice condition: pro-Castro writers were judged just as pro-Castro as those who had been free to choose their own position. Participants failed to make use of the information about the strong situational forces on behavior, and erroneously inferred that the stance of the essay fully reflected the stance of the writer.
Something unexpected was going on, and Jones and Harris had stumbled upon what became a bedrock finding in social psychology. Writing ten years later, Lee Ross (1977) coined the phrase "fundamental attribution error" to describe both this result and the many replications that had upheld the original finding. Ned Jones, co-author of the original study, found the phrase misleading but was also miffed that he had not had the forethought to coin it himself (Gilbert, 1998b).