Pre-registration watch part 1: Detecting deception
In this blog post, I report on the experiences of social psychologists, myself included, with committing to detailed descriptions of the hypotheses and other specifics of an experiment before running it. In part 1, I briefly cover some basics of this so-called pre-registration, and then discuss my own experience with a registered report in the online journal Frontiers about unconscious deception detection.
The idea of pre-registering a study is simple: authors who decide to run a study state their hypotheses, method, and analysis plan before they run it. This way of doing research has been argued for by several authors in several journals (e.g., Brandt et al., 2014; Cumming, 2014). Some might object that this process requires authors to do more work. In my experience in social psychology, however, it does not – for my pre-registration, I did exactly what I would have done otherwise, simply in a different order!
Not just another blog about solid science.
Pre-specifying important aspects of an experiment prior to data collection is one very promising way to limit the mistakes researchers sometimes make (knowingly or unknowingly), and the practice could help make the research we do more solid. Because an increasing number of people are advocating for pre-registration, and more and more are pre-registering their studies, I will discuss experiences with this method here. Rather than reiterating the pros and cons of pre-registration, my aim is to tell true stories about researchers and the studies they pre-registered (for an example of an online discussion, see here; for a testament to the overwhelming support for pre-registration, see here; and for a counterargument, see here).
What does one have to do to pre-register?
In the simplest case, a scientist can create a project page on, for instance, the Open Science Framework (see: https://osf.io/getting-started/) and upload predictions, an analysis plan, materials, and so on. Thinking through all these aspects in detail beforehand can greatly enhance the value of the research. The online project can also be shared with others, such as co-authors, who can then contribute to the project as well.
When ready to run the experiment, the current version of the project can be frozen with literally two clicks of the mouse. This creates a ‘read-only’ copy of the study that you can always refer back to. Working in the online project and adding files remains possible after the project is frozen – the collected data, for instance, can be added later. The predictions stated in the frozen project, however, cannot be altered. The good thing about this is that the researcher has now protected herself from the temptation to change her predictions after seeing the data (“I predicted this all along, really!”). Note that none of the steps above requires the project to be publicly visible, although that option is available too.
An IPA project: Accepted for publication no matter the outcome.
In the case of the registered report I submitted to Frontiers, pre-registering happened in a slightly different manner: via an appropriately termed ‘IPA,’ or In Principle Acceptance. Besides sharing an acronym with a hoppy pale ale, this means that once acceptance is granted, the registered study will be published as long as it is run and analyzed as specified in the plan. You can find much more information on the guidelines for an IPA and how they came about here. Before acceptance is granted, the registered report is reviewed by experts in the field, just like any other paper.
In my experience, this review stage is where the magic happened. All the reviewers (in our case, there were four) became part of a collaborative effort to enhance the proposed study. As their comments and suggestions came in, the experiment evolved into its current form, improving significantly with every suggestion. Note that if the experiment had been run as we originally conceived it, without pre-registering, it would never have become what it is now. My reviewers had many comments and suggestions, for instance to add an extra measure. Having their expert opinion and support enhanced my confidence that we would be running a valuable study. I am grateful for the contributions of the reviewers, and for the openness of Frontiers: reviewers’ names are disclosed on accepted papers if they allow it, so they can also receive due credit (you will find the names of three of my reviewers below the abstract here).
Unconscious deception detection?!
The main question of the above-mentioned registered report is both straightforward and bold: Do people intuitively know that a liar is not to be trusted (even though they don’t know that they are being lied to)? And can we, as researchers, measure this intuition by hooking our participants up to an apparatus that measures physiological responses while they watch a liar?
Now, you might think there is something to it because, for instance, you recall a time when someone was talking to you and your gut feeling told you something was not quite right. Or you might think this is a little out there, which is also fine. Nevertheless, as described in the introduction of the registered report, we believe there are theoretical grounds to suspect that there might be something to it.
Even though researchers might disagree with each other about some of the predictions that are made, this is exactly where pre-registering an experiment adds to building our scientific knowledge base. The authors of a registered report are bound to perform and analyze the experiment as specified, and time will tell what happens. In my eyes, there are two possible outcomes: 1) daring to put our hypotheses out there will be acknowledged because the data suggest there is something to them, or 2) daring to put our hypotheses out there will be acknowledged because the data suggest there is nothing to them. In both cases, we have learned something. The idea put forward in our registered report might spark new ideas in others – whereas in a different universe it would have ended up in a dusty file drawer.
Inevitably, this post ends with a cliffhanger: Will the pre-registered hypothesis be confirmed? Fortunately, the beauty of pre-specifying an experiment is that at the stage of pre-registering you can find out whether others think the idea is worth investigating as well. In my next post I will discuss another pre-registered experiment in more detail, and my aim is to aid the researchers by spreading the word about their courage to put their hypotheses out there.
Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., … van ’t Veer, A. E. (2014). The Replication Recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217–224. doi:10.1016/j.jesp.2013.10.005
Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25, 7–29. doi:10.1177/0956797613504966