The perverse incentives that stand as a roadblock to scientific reform

Of course, it is easy to rationalize the value of small-N studies and conceptual replications.  Small-N studies are quick and easy to run.  They incur little cost, and therefore carry small consequences if the findings don’t work out.  Given our tolerance for tossing null effects in the wastebasket, there is really no incentive to run larger studies.  And, given that the modal research one finds in journals like the Journal of Personality and Social Psychology is the multi-study package of five or more small-N studies, conceptual replications have become the gold standard.  Unfortunately, it is a package of irreproducibility.

Counter-Intuitive Findings

The third ideal in psychological science is to be the creative destroyer of widely held assumptions. One of the primary routes to success in psychology, for example, is to be surprising.  The best way to be surprising is to be the counter-intuitive innovator: identifying ways in which human behavior is irrational, unpredictable, or downright startling (Ross, Lepper, & Ward, 2010).  Now that media mentions and alternative metrics, such as the number of Twitter followers, are being used to evaluate scholars, the incentive to publish click-bait-worthy research is only increasing.

In one respect it is hard to argue with this motive.  We hold the scientists who bring unique discoveries to their field in the highest esteem.  And every once in a while, someone actually does do something truly innovative. In the meantime, we get caught up in the pursuit of cutesy counter-intuitiveness, all in the hope that our pet finding will become the next big innovation.  To be clear, it is genuinely exciting when researchers identify something counter-intuitive about human behavior. But the singular pursuit of such findings often leads us to ignore the enduring questions of the human condition.

The Tyranny of the Clean Narrative

The last piece of the incentive structure is quite possibly the most insidious, because everyone pushes it: authors, reviewers, and editors alike.  To be successful, your research must provide a clean narrative.  The research story must have a consistent beginning, middle, and end.  This means the introduction must correspond perfectly to the method section, which must correspond perfectly to the findings, all of which must be statistically significant and confirm the hypotheses stated in the introduction.  The powerful incentive for a clean narrative promotes many of the questionable research practices we use.  We HARK (Hypothesizing After the Results are Known) so as to produce a clean narrative.  We throw out null findings, or are told to throw them out by reviewers and editors, in order to achieve a clean narrative.  We avoid admitting to failures to replicate, again because doing so would undermine a clean narrative.

The bias toward a clean narrative is especially prominent at our most prestigious journals.  Our top journals envision themselves as repositories for impressive new discoveries, and new discoveries cannot possess blemishes.  Prioritizing a clean narrative leads reviewers and editors, acting as gatekeepers, to mistakenly recommend against publishing studies that contain null effects.  For that matter, when we as researchers fail to put together a consistent package of studies, we usually self-select the paper into a lower-tier journal because we know it won’t be received well at the top outlets. That means our most honest science most likely resides in our “worst” journals, because they tend to be more forgiving of messy narratives.

In sum, these four pillars of perverse incentives stand strong against efforts to make our science more transparent and reproducible.  Arguments against these changes, conservative by nature in that they defend the status quo, only help to perpetuate a system that has rewarded individuals and individual careers while undermining the integrity and reliability of our science.  Reporting only statistically significant findings produces a literature that does not represent the truth. Pushing small-N studies and conceptual replications aids and abets the hiding of inelegant findings that do not conform perfectly to the theories we test.  Overvaluing counter-intuitive findings undermines the development of cumulative knowledge that might be relied on for social policy.  Policing studies so that they report only “clean findings,” and thus a clean narrative, further promotes a depiction of science that is too good to be true.

Reasons for Pessimism

For some, the open science movement and the efforts of specific journals to change the parameters of the publication process give rise to the hope that our scientific products will become more reliable (Vazire, 2014).  I am pessimistic about our ability to change the existing system for one overarching reason: the proposed changes of the open science and reproducibility movement are largely perceived as punitive.  They do not provide an alternative, compelling reward structure; instead, they impose a largely corrective check on existing practices.