When science selects for fraud


Natural selection is the process by which populations change as individual organisms succeed or fail to survive and reproduce in their environments. One of the more famous examples of natural selection is the peppered moth of England. Before the Industrial Revolution, most of these moths were lightly colored, allowing them to blend in with the light gray bark of the average tree. During the Industrial Revolution, extreme pollution painted the trees of England black with soot. The moths did not change their wings to adapt; rather, the rare dark-winged variants, suddenly better camouflaged than their light-winged peers, survived predation more often and came to dominate the population (Hagen, 1993).

Natural selection is also an apt metaphor for how human cultures form and thrive. The scientific community, broadly speaking, selects for a number of personality traits, making those traits more common among scientists than the general population. Often, we select for positive traits, which help the community produce good research. Yet our incentive structure, the constant competition for publications, grants and tenure, also selects for negative traits. What happens when individuals feel they have to resort to questionable research practices or even fraud in order to survive in science?

First, let us list those good traits. The scientific community seems to select for curiosity. Not every scientist is driven by a deep desire to understand the natural world, but a great many are. How boring would endless conferences, lab meetings, and lectures be if one didn't delight in asking questions and figuring out answers? The community also promotes a certain kind of analytical thinking. Those who can spot a hidden influence on their data or design a carefully controlled experiment are more likely to succeed. And it rewards perseverance. Just ask the researchers who work late into the night running gels, observing mice, or analyzing data.

Yet our scientific environment may also encourage negative traits. Of particular relevance right now is the way the scientific community selects for fraud. If asked, most scientists would say that the ideal scientist is honest, open-minded, and able to accept being wrong. But we do not directly reward these attributes. Instead, success (publication of papers, grant funding, academic positions and tenure, the approbation of our peers) is too often based on a specific kind of result: a positive result. Publication bias, which is especially severe in the social (Peplow, 2014) and biomedical sciences (Hopewell et al., 2009), does not simply hide important null results; it also distorts the incentives for researchers.

This reward system does give an advantage to those with good scientific intuitions, which is a reasonable thing to select for. It also gives an advantage to risk-takers, those willing to stake their careers on being right. The risk-averse? They have two options: drop out of scientific research, or commit fraud in order to ensure positive results. This was the choice made by Diederik Stapel when he made up data for his psychology experiments and by Marc Hauser when he fabricated and manipulated results. It is the choice Jens Förster is alleged to have made, and it is the path secretly taken by many other scientists, according to survey research by Daniele Fanelli (2009).

In a meta-analysis of 21 surveys that asked scientists about their research practices, Fanelli found that an average of 1.97% of scientists admitted to fabricating or modifying data or results at least once in their careers, while 14.12% said they had observed colleagues doing the same. And when asked about questionable research practices, a term covering behaviors such as "dropping data points based on a gut feeling" and "changing the design, methodology or results of a study in response to pressures from a funding source," up to a third of scientists admitted to these behaviors, and as many as 72% reported that their colleagues engaged in them.

These numbers may not be any worse than in the business world (EY, 2013), but in a community whose fundamental purpose is the collection of accurate and testable knowledge, they are a problem, and a problem that is bigger than Stapel or Hauser. They result from an environment where ethical researchers struggle to find support while unethical ones rise to the tops of their fields.

But this environment, like all ecosystems, can change. What if, instead of rewarding results, we rewarded the methods that produce them? What if papers were accepted based on the strength of their design, and not the novelty or size of their results? What if we handed out grants, awards, and faculty positions to those who conduct their research rigorously and openly, who meticulously detail their process and make an effort to help colleagues, regulators, and members of the public understand and critique their results? What kind of scientists, and what kind of science, would we be nurturing?

We can censure the individuals who commit fraud, but this is like punishing the peppered moth for its dirty wings. As long as success in the scientific community is measured by results and not process, we will continue to select for those willing to violate process in order to ensure results. We scientists need to change our environment if we want to evolve past fraud.

References:

EY. (2013). 12th global fraud survey. Retrieved from the Web, October 2014, http://www.ey.com/GL/en/Services/Assurance/Fraud-Investigation---Dispute-Services/Global-Fraud-Survey---a-place-for-integrity

Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLOS ONE. Retrieved from the Web, October 2014, http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0005738

Hagen, J.B. (1993). Kettlewell and the peppered moths reconsidered. Retrieved from the Web, October 2014, http://www1.umn.edu/ships/db/kettlewell.pdf

Hopewell, S., Loudon, K., Clarke, M.J., Oxman, A.D., & Dickersin, K. (2009). Publication bias in clinical trials due to statistical significance or direction of trial results. The Cochrane Library, 1, 1-26.

Peplow, M. (2014). Social sciences suffer from severe publication bias. Nature. Retrieved from the Web, October 2014, http://www.nature.com/news/social-sciences-suffer-from-severe-publication-bias-1.15787