Reform from the Bottom Up

In recent months, social psychologists have paid increasing attention to the soundness of their scientific methods. Although the problems we face are troubling, I believe this renewed scrutiny is a very positive trend, because a self-critical approach is essential to the continuing health of the discipline. If, as a scientific community, we were to ignore problems as they became apparent, our entire endeavor would be undermined. The question, then, is not whether we need to improve the state of our science, but how we can do so most effectively.

One set of reforms at the forefront of recent debate takes aim at the problem of False-Positive Psychology (FPP), which I’ve written about previously here. In a paper published just last year in Psychological Science, the three authors (Joe Simmons, Leif Nelson, and Uri Simonsohn) make the case that researchers often make methodological choices that, although they may seem benign, can dramatically inflate the chance that they will generate a false-positive result. In other words, they are p-hacking: engaging in various methodological maneuvers that ensure the target significance level of p < .05 is reached. (Read the paper here; for more discussion see 1, 2, 3, 4.)
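To make the inflation concrete, here is a minimal sketch of one such maneuver: measuring several dependent variables and reporting whichever one happens to cross p < .05. This is my own toy simulation, not code from the paper, and the particular choices (three independent noise measures, twenty subjects per group) are arbitrary. Because there is no true effect anywhere in the data, every "significant" result is a false positive, and the nominal 5% error rate climbs to roughly 14%.

```python
# Toy simulation of one p-hacking maneuver: analyze several dependent
# variables and report whichever gives p < .05. Both groups are drawn
# from the same distribution, so every rejection is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group, n_dvs = 10_000, 20, 3

false_positives = 0
for _ in range(n_sims):
    group_a = rng.normal(size=(n_per_group, n_dvs))  # no true effect
    group_b = rng.normal(size=(n_per_group, n_dvs))
    pvals = [stats.ttest_ind(group_a[:, j], group_b[:, j]).pvalue
             for j in range(n_dvs)]
    if min(pvals) < .05:  # "report the DV that worked"
        false_positives += 1

print(f"False-positive rate: {false_positives / n_sims:.3f}")
# prints roughly 0.14 -- nearly triple the nominal .05
```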

Despite the attention they have received, the FPP recommendations (or any similar ones) have not been widely adopted. Although, anecdotally, they have inspired some change at the individual level (one example), not much has happened at the institutional level. Journals do not yet require the transparency that would help fix the p-hacking problem. Why not? At a basic level, change is hard. Beyond simple inertia, there are obvious benefits to the researcher who engages in p-hacking: more significant results, less data to collect (and lower associated costs), and more publications. The cost, however, is that those publications are much more likely to be filled with false-positive results. So while the researcher may benefit on some level in the short run, in the long run the science suffers. And, ultimately, I strongly believe that, personal motivations notwithstanding, the vast majority of researchers are deeply committed to the pursuit of truth.

On the surface, the easiest solution would be for journals to adopt a policy requiring researchers submitting articles to report their methods transparently. For example, researchers should be encouraged to determine their sample sizes in advance; but if they peek at the data and then add subjects, they should be required to report it.
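To see why that kind of peeking matters, here is another minimal sketch. Again, this is my own illustration with arbitrary parameters (start with 20 subjects per group, add batches of 10 up to a cap of 50), not the FPP authors' code. The null hypothesis is true throughout, yet testing after every batch and stopping as soon as p < .05 pushes the false-positive rate well above the nominal 5%.

```python
# Toy simulation of optional stopping: test after an initial batch of
# subjects; if p >= .05, add more subjects and test again, stopping as
# soon as the result is "significant" or the sample cap is reached.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, start_n, batch_n, max_n = 10_000, 20, 10, 50

false_positives = 0
for _ in range(n_sims):
    a = rng.normal(size=start_n)  # both groups: pure noise, no effect
    b = rng.normal(size=start_n)
    while True:
        if stats.ttest_ind(a, b).pvalue < .05:
            false_positives += 1  # a false positive: the null is true
            break
        if len(a) >= max_n:       # cap reached; correctly retain the null
            break
        a = np.concatenate([a, rng.normal(size=batch_n)])  # peek, then add
        b = np.concatenate([b, rng.normal(size=batch_n)])

print(f"False-positive rate with peeking: {false_positives / n_sims:.3f}")
# prints noticeably more than .05 (around .08 with these settings)
```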

The trouble is that it’s not necessarily that easy. For one thing, a great deal of research is currently ongoing. So even if a researcher were to adopt the FPP recommendations today, they would still be sitting on a pile of unpublished research that does not necessarily adhere to the new standard. They would either have to discard that research or submit it knowing that their voluntary transparency would likely preclude publication under current standards, which value clean, consistent results. Moreover, there is no clear consensus in the field as to which policies should be adopted. Some practices are unquestionably problematic, but others are more controversial. While I find the FPP recommendations reasonable, not everyone agrees. One could, for instance, require transparency to the extent that hypotheses and data-analysis strategies would have to be formally registered in advance. If different journals were to institute different policies, things could get very confusing very quickly, so the journals are (perhaps understandably) reluctant to lead the reforms. (Note: Eric Eich, the editor-in-chief of Psychological Science, recently put out what seems to be a very encouraging proposal for important new initiatives that the journal will hopefully adopt in some form. More on that next week, but you can check it out here.)

If the changes are not going to come from the top down, then they will have to rise from the bottom up. Already, as a result of the renewed attention paid to these issues, the discussion of best practices and necessary reforms is underway. Professional societies such as the Society for Personality and Social Psychology and the Association for Psychological Science are hosting events on these topics at their annual conferences. The discussions are taking place among colleagues, in departmental meetings and colloquia, and over the internet through email, blogs, and listservs.

The authors of the False-Positive Psychology paper themselves have just written a short piece for the SPSP newsletter in which they make a case for transparency, addressing many of the objections to their proposals. It’s short, funny, and compelling; you should read it if you haven’t already. But more importantly, they encourage researchers to take a step beyond simply discussing the problem.

To those researchers who are on board with their recommendations, they say:

There is no need to wait for everyone to catch up with your desire for a more transparent science. If you did not p-hack a finding, say it, and your results will be evaluated with the greater confidence they deserve.

If you determined sample size in advance, say it.

If you did not drop any variables, say it.

If you did not drop any conditions, say it.

They even offer a template for doing so, using just 21 words: “We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study.” That’s pretty concise. Since they posted the piece, people have suggested improvements to make the statement clearer and even shorter (see here).

The goal is to change the field's norms from the bottom up. Although not everyone may agree right now, "a small but energetic choir" may bring the rest along. Will it work? It's certainly possible that it will be ignored, and that too few people will clearly signal the transparency of their research methods for the change to take hold broadly. But it can't hurt to try. Reading papers that carry the "21 words" may send an important message to those who are wary of reform. For researchers, it will make clear that these changes are already underway, which may prompt them to get on board in order to improve the science they are doing, or at least to make sure that they are not left behind. And for reviewers, seeing the "21 words" allows them not only to evaluate those submissions with greater confidence, but also to ask for greater transparency from submissions in which the words are absent. With a little time, norms may change. We are a field built on conventions and social norms (there's nothing inherently meaningful about a significance threshold of p = .05), so there's no reason not to add transparency to the list of things we expect of responsible scientists. Hopefully, it will eventually be taken for granted.