The Torture of Straw Men: A Critical Impression of Devezer et al., “The Case for Formal Methodology in Scientific Reform”

NB. This is a revised version of an earlier blog post that contained hyperbole, an unfortunate phrase involving family members, and reference to sensitive political opinions. I am grateful to everyone who suggested improvements, which I have incorporated to the best of my ability. In addition, I have made a series of more substantial changes, because I could see how the overall tone was needlessly confrontational. Indeed, parts of my earlier post were interpreted as a personal attack on Devezer et al., and although I have of course denied this, it is entirely possible that my snarky sentences were motivated by a desire to “retaliate” for what I believed was an unjust characterization of my own position and that of a movement with which I identify. I hope the present version is more mature and balanced.  

TL;DR: In a withering critique of the methodological reform movement, Devezer and colleagues attack and demolish several extreme claims. However, I believe these claims are straw men, and it seems to me that the Devezer et al. paper leaves unaddressed the real claims of the reform movement (e.g., “be transparent”; “the claim that a finding generalizes to other contexts is undercut when that finding turns out not to replicate in these other contexts”). Contrary to Devezer et al., I will argue that it is possible to provide a statistical underpinning for the idea that data-dredging differs from foresight, both in the frequentist and in the Bayesian paradigm. I do, however, acknowledge that it is unpleasant to see one’s worldview challenged, and the Devezer et al. paper certainly challenges mine. Readers are invited to make up their own minds.

Prelude

As psychology is slowly following in the footsteps of medicine and making preregistration of empirical studies the norm, some of my friends remain decidedly unimpressed. It is with considerable interest that I read their latest salvo, “The case for formal methodology in scientific reform”. Here is the abstract:

“Current attempts at methodological reform in sciences come in response to an overall lack of rigor in methodological and scientific practices in experimental sciences. However, most methodological reform attempts suffer from similar mistakes and over-generalizations to the ones they aim to address. We argue that this can be attributed in part to lack of formalism and first principles. Considering the costs of allowing false claims to become canonized, we argue for formal statistical rigor and scientific nuance in methodological reform. To attain this rigor and nuance, we propose a five-step formal approach for solving methodological problems. To illustrate the use and benefits of such formalism, we present a formal statistical analysis of three popular claims in the metascientific literature: (a) that reproducibility is the cornerstone of science; (b) that data must not be used twice in any analysis; and (c) that exploratory projects imply poor statistical practice. We show how our formal approach can inform and shape debates about such methodological claims.”

Ouch. It is hard not to take this personally. Over the years, I have advocated claims that are similar to the ones that find themselves on the Devezer et al. chopping board. Similar — but not the same. In fact, I am not sure anybody advocates the claims as stated, and I am therefore inclined to believe that all three claims may be straw men. A less confrontational way of saying this is that I fully agree with the main claims from Devezer et al. as stated in the abstract. As I will outline below, the real claims from the reform movement are almost tautological statements about how we can collect empirical evidence for scientific hypotheses.

Now before proceeding I should emphasize that I am probably the least objective person to discuss this work. Nobody enjoys seeing their academic contributions called into question, and nobody likes to reexamine opinions that form the core of one’s scientific outlook. However, I believe my response may nonetheless be of interest. 

It does appear that the specific topic is increasingly difficult to debate, as both parties appear relatively certain that they are correct and the other is simply mistaken. The situation reminds me of “the dress”: some people see it as white and gold, others as black and blue. A discussion between the white-and-gold camp and the black-and-blue camp is unlikely to result in anything other than confusion and frustration. That said, I do believe that in this particular case consensus is possible — I know that in specific experimental/modeling scenarios, the Devezer et al. authors and I would probably agree on almost everything.

Before we get going, a remark about tone. Devezer et al. use robust and direct language to express their dislike of the methodological reform movement and of my own work specifically — at least this was my initial take-away, but I may be wrong. The tone of my reply will be in the same robust spirit (although much less robust than in the initial version of this post). In the interest of brevity, I have only commented on the issues that I perceive to be most important.

The Need for a Formal Approach

In the first two pages, Devezer et al. bemoan the lack of formal rigor in the reform movement. They suggest that policies are recommended willy-nilly and lack a proper mathematical framework grounded in probability theory. There is a good reason, however, for this lack of rigor: the key tenets are so simple that they are almost tautological, and embedding them in mathematical formalism may easily come across as ostentatious. Examples of such tenets are “do not cheat”, “be transparent”, and “the claim that a finding generalizes to other contexts is undercut when that finding turns out not to replicate in these other contexts”.

If Devezer et al. have managed to create a mathematical formalism that goes against these fundamental norms, then it is not the norms that are called into question, but rather the assumptions that underpin their formalism.

Assessing Claim 1: “Reproducibility is the Cornerstone of, or a Demarcation Criterion for, Science”

The authors write:

“A common assertion in the methodological reform literature is that reproducibility is a core scientific virtue and should be used as a standard to evaluate the value of research findings (Begley and Ioannidis, 2015; Braude, 2002; McNutt, 2014; Open Science Collaboration, 2012, 2015; Simons, 2014; Srivastava, 2018; Zwaan et al., 2018). This assertion is typically presented without explicit justification, but implicitly relies on two assumptions: first, that science aims to discover regularities about nature and, second, that reproducible empirical findings are indicators of true regularities. This view implies that if we cannot reproduce findings, we are failing to discover these regularities and hence, we are not practicing science.”

I believe the authors slip up in the final sentence: “This view implies that if we cannot reproduce findings, we are failing to discover these regularities and hence, we are not practicing science.” There is no such implication. Reproducibility is A cornerstone of science. It is not THE cornerstone; after all, many phenomena in evolution, geophysics, and astrophysics do not lend themselves to reproducibility (the authors give other examples, but the point is the same). In the words of geophysicist Sir Harold Jeffreys:

“Repetition of the whole of the experiment, if undertaken at all, would usually be done either to test a suggested improvement in technique, to test the competence of the experimenter, or to increase accuracy by accumulation of data. On the whole, then, it does not seem that repetition, or the possibility of it, is of primary importance. If we admitted that it was, astronomy would no longer be regarded as a science, since the planets have never even approximately repeated their positions since astronomy began.” (Jeffreys, 1973, p. 204).

But suppose a psychologist wishes to make the claim that their results from the sample generalize to the population or to other contexts; this claim is effectively a claim that the result will replicate. If this prediction about replication success turns out to be false, this undercuts the initial claim. 

Thus, the real claim of the reform movement would be: “Claims about generalizability are undercut when the finding turns out not to generalize.” There does not seem to be a pressing need to formalize this statement in a mathematical system. However, I do accept that the reform movement may have paid little heed to the subtleties that the authors identify.

Assessing the Interim Claim “True Results are Not Necessarily Reproducible”

The authors state:

“Much of the reform literature claims non-reproducible results are necessarily false. For example, Wagenmakers et al. (2012, p.633) assert that “Research findings that do not replicate are worse than fairy tales; with fairy tales the reader is at least aware that the work is fictional.” It is implied that true results must necessarily be reproducible, and therefore non-reproducible results must be “fictional.””

I don’t believe this was what I was implying. As noted above, the entire notion of a replication is alien to fields such as evolution, geophysics, and astrophysics. What I wanted to point out is that when empirical claims are published concerning the general nature of a finding (and in psychology we invariably make such claims), and these claims are false, this is harmful to the field. This statement seems unobjectionable to me, but I can understand that the authors do not consider it nuanced: it clearly isn’t nuanced, but it was made in a specific context, that is, the context in which the literature is polluted and much effort is wasted in a hopeless attempt to build on phantom findings. I know PhD students who were unable to “replicate” phantom phenomena and had to leave academia because they were deemed “poor experimenters”. This is a deep injustice that the authors seem to neglect — how can we prevent such personal tragedies in the future? I have often felt that methodologists and mathematical psychologists are in a relatively luxurious position because they are somewhat detached from what happens in the trenches of data collection. Such a luxurious position may make one insensitive to the academic excesses that gave birth to the reform movement in the first place.

The point that the authors proceed to make is that there are many ways to mess up a replication study. Models may be misspecified, sample sizes may be too low — these are indeed perfectly valid reasons why a true claim may fail to replicate, and it is definitely important to be aware of them. However, I do not believe that this issue is underappreciated by the reform movement. When a replication study is designed, it is widely recognized that, ideally, a lot of effort goes into study design, manipulation checks, incorporating feedback from the original researchers, and so on. In my experience (admittedly limited where replication studies are concerned), the replication experiment undergoes much more scrutiny than the original experiment. I have been involved with one Registered Replication Report (Wagenmakers et al., 2016) and have personally experienced the intense effort that is required to get the replication off the ground.

The authors conclude:

“It would be beneficial for reform narratives to steer clear of overly generalized sloganeering regarding reproducibility as a proxy for truth (e.g., reproducibility is a demarcation criterion or non-reproducible results are fairy tales).”

Given the specific context of a failing science, I am willing to stand by my original slogan. In general, the reform movement has probably been less nuanced than it could be; on the other hand, I believe there was a sense of urgency and a legitimate fear that the field would shrug and go back to business as usual.

Assessing the Interim Claim “False Results Might be Reproducible”

In this section the authors show how a false result can reproduce — basically, this happens when everybody is messing up their experiments in the same way. I agree this is a real concern. The authors mention that “the inadvertent introduction of an experimental confound or an error in a statistical computation have the potential to create and reinforce perfectly reproducible phantom effects.” This is true, and it is important to be mindful of it, but it also seems trivial to me. In fact, the presence of phantom effects is what energized the methodological reform movement in the first place. The usual example is ESP. Meta-analyses usually show compelling evidence for all sorts of ESP phenomena, but this carries little weight. The general point that meta-analyses may simply reveal a common bias is well known; it is made, for instance, in van Elk et al. (2015).
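As a toy illustration of how a shared mistake can make a false result perfectly reproducible, consider the sketch below (my own hypothetical example; the size of the shared confound and the sample size are arbitrary choices, not values taken from Devezer et al.). Every lab studies a true effect of exactly zero, but every lab also uses the same flawed procedure that shifts all measurements by a constant amount; the phantom effect then “replicates” essentially every time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def run_lab(n=100, true_effect=0.0, shared_confound=0.4):
    """One lab's study: the true effect is zero, but the lab's (flawed) procedure
    adds the same constant shift to every measurement."""
    observed = rng.normal(loc=true_effect + shared_confound, scale=1.0, size=n)
    return stats.ttest_1samp(observed, popmean=0.0).pvalue

# Twenty "independent" replications that all share the same confound
p_values = [run_lab() for _ in range(20)]
print(f"significant results: {sum(p < 0.05 for p in p_values)} out of 20")

# With a shared confound of 0.4 SD and n = 100, virtually every lab "replicates"
# the phantom effect, and a meta-analysis would only make the bias look stronger.
```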

Assessing Claim 2: “Using Data More Than Once Invalidates Statistical Inference”

The authors start:

“A well-known claim in the methodological reform literature regards the (in)validity of using data more than once, which is sometimes colloquially referred to as double-dipping or data peeking. For instance, Wagenmakers et al. (2012, p.633) decry this practice with the following rationale: “Whenever a researcher uses double-dipping strategies, Type I error rates will be inflated and p values can no longer be trusted.” The authors further argue that “At the heart of the problem lies the statistical law that, for the purpose of hypothesis testing, the data may be used only once.””

They take issue with this claim and go on to state:

“The phrases double-dipping, data peeking, and using data more than once do not have formal definitions and thus they cannot be the basis of any statistical law. These verbally stated terms are ambiguous and create a confusion that is non-existent in statistical theory.”

I disagree with several claims here. First, the terms “double-dipping”, “data peeking”, and “using data more than once” do not originate from the methodological reform movement; they are much older terms that come from statistics. Second, a more adequate description of the offending practice is “you should not test a hypothesis using the same data that inspired it”. This is the very specific way in which double-dipping is problematic, and I do not believe the authors address it. Third, it is possible to formalize the process and show statistically why it is problematic. For instance, Bayes’ rule tells us that the posterior model probability is proportional to the prior model probability (unaffected by the data) times the marginal likelihood (i.e., predictive performance for the observed data). When data are used twice in the Bayesian framework, there is a double update: first an informal update that increases the prior model probability so that the hypothesis becomes a candidate for testing, and then a formal update based on the marginal likelihood. This general problem has been pointed out by several discussants of Aitkin’s (1991) article on posterior Bayes factors.
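To make the double update concrete, here is the relevant identity in symbols (my own sketch, not a formal treatment taken from either paper). Bayes’ rule for a candidate model reads

\[
p(\mathcal{M}_1 \mid y) \;\propto\; \underbrace{p(\mathcal{M}_1)}_{\substack{\text{prior model probability,}\\ \text{not a function of } y}} \times \underbrace{p(y \mid \mathcal{M}_1)}_{\substack{\text{marginal likelihood:}\\ \text{predictive performance for } y}} .
\]

If the analyst first inspects $y$ and, on that basis, promotes $\mathcal{M}_1$ to candidate status (an informal update that moves $p(\mathcal{M}_1)$ toward $p(\mathcal{M}_1 \mid y)$), and then also multiplies by $p(y \mid \mathcal{M}_1)$ as if no such update had taken place, the same data enter the calculation twice, and the resulting number is no longer the posterior model probability that Bayes’ rule prescribes.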

Thus, the core idea is that when data have suggested a hypothesis, those data no longer constitute a fair test of that hypothesis (e.g., De Groot, 1956/2014). Upon close inspection, the data will always spawn some hypothesis, and that data-inspired hypothesis will always come out looking pretty good. Similar points have been made by C.S. Peirce, Borel, Neyman, Feynman, Jeffreys and many others. When left unchecked, a drug company may feel tempted to comb through a list of outcome measures and analyze the result that looks most compelling. This kind of cherry-picking is bad science, and the reason why the field of medicine nowadays insists on preregistered protocols.

The authors mention that the detrimental effects of post-hoc selection and cherry-picking can be counteracted by conditioning to obtain the correct probability distribution. This is similar to correcting for multiple comparisons. However, as explained by De Groot (1956/2014) with a concrete example, the nature of the exploratory process is that the researcher approaches the data with the attitude “let us see what we can find”. This attitude brings about a multiple comparisons problem with the number of comparisons unknown. In other words, in an exploratory setting it is not clear what exactly one ought to condition on. How many hypotheses have been implicitly tested “by eye” when going over the data?  
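To illustrate why the unknown number of comparisons matters, here is a toy simulation (my own hypothetical example; the number of outcome measures, the sample size, and the significance level are arbitrary choices). All k outcome measures are pure noise, yet an analyst who reports only the best-looking result rejects the null hypothesis far more often than the nominal 5%, and the inflation grows with k.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2020)

def cherry_picked_error_rate(k, n=50, alpha=0.05, n_sim=20_000):
    """Probability of reporting at least one 'significant' result when only the
    smallest of k null p-values is taken seriously."""
    false_positives = 0
    for _ in range(n_sim):
        # k outcome measures, each with a true effect of exactly zero
        data = rng.normal(loc=0.0, scale=1.0, size=(k, n))
        # one-sample t test of each outcome measure against zero
        p_values = stats.ttest_1samp(data, popmean=0.0, axis=1).pvalue
        # the exploratory analyst "sees what can be found" and keeps the best result
        if p_values.min() < alpha:
            false_positives += 1
    return false_positives / n_sim

for k in (1, 5, 10, 20):
    print(f"k = {k:2d} outcomes: error rate ≈ {cherry_picked_error_rate(k):.3f}")

# Expected rates are roughly 0.05, 0.23, 0.40, and 0.64. A Bonferroni-style
# correction could repair this, but only if k is known -- which is exactly what
# an open-ended "let us see what we can find" analysis does not tell us.
```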

I remain convinced that “testing” a hypothesis given to you by the data is incorrect statistical practice, both from the Bayesian angle (it is incorrect to enter the likelihood twice) and from the frequentist angle (multiple comparisons require a correction, which is undefined in exploratory work). The fact that other forms of double dipping may sometimes be allowed, when corrected for appropriately, is true but does not go to the core of the problem.

Assessing the Interim Claim “Preregistration is Not Necessary for Valid Statistical Inference”

In this section the authors mention that researchers may preregister a poor analysis plan. I agree this is possible. Unfortunately preregistration is not a magic potion that transforms scientific frogs into princes. The authors also state that preregistering assumption checks violates the preregistration protocol. I am not sure how that would be the case.

The authors continue and state:

“Nosek et al. (2018, p. 2602) suggest that compared to a researcher who did not preregister their hypotheses or analyses, “preregistration with reported deviations provides substantially greater confidence in the resulting statistical inferences.” This statement has no support from statistical theory.”

I believe it does. At a minimum, the preregistration contains information that is useful to assess the prior plausibility of the hypothesis, the prior distributions that were deemed appropriate, as well as the form of the likelihood. It may be argued that these terms could all be assessed after the fact, but this requires a robot-like ability to recall past information and an angel-like ability to resist the forces of hindsight bias and confirmation bias. 

In sum, I believe it is poor practice to pretend (implicitly or explicitly) that a result was obtained by foresight when in reality it was obtained by data-dredging. This intuition may be supported with mathematical formalism, but little formalism is needed to achieve this (although the differences between the frequentist and the Bayesian formalisms are interesting and subject to ongoing debate, especially in philosophy).

My point of view has always been that science should be transparent and honest. Unfortunately, humans (and that includes researchers) are not impervious to hindsight bias and motivated reasoning, and this is why preregistration (or a method such as blinded analyses, e.g., Dutilh et al., in press) can help. I remain surprised that this modest claim can be taken as controversial. When it comes to statistics in exploratory projects, I am all in favor, as long as the exploratory nature of the endeavour is clearly acknowledged.

Postscriptum: A Pet Peeve

In Box 1, the authors bring up the issue of the “true model” and state: “In model selection, selecting the true model depends on having an M-closed model space, which means the true model must be in the candidate set”.

This statement is a tautology, but its implication is that inference ought to proceed differently according to whether we find ourselves in the M-closed scenario (with the true model in the set) or in the M-open scenario (with the true model not in the set). However, this distinction has no mathematical basis that I am able to discern. Bayes’ rule is not grounded in an assumption of realism. Both Bruno de Finetti and Harold Jeffreys explicitly denied the idea that models could ever be exactly true. The fact that there is no need for a true-model assumption is also evident from the prequential principle advocated by Phil Dawid: all that matters for model selection is predictive adequacy. Basically, the statistical models may be seen as rival forecasting systems confronted with a stream of incoming data. Evaluating the relative performance of such forecasting systems does not require that any of the forecasters be identical to Nature.
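For concreteness, the identity I have in mind is the standard prequential decomposition of the marginal likelihood (a textbook result, written here in my own notation):

\[
p(y_1, \ldots, y_n \mid \mathcal{M}) \;=\; \prod_{i=1}^{n} p(y_i \mid y_1, \ldots, y_{i-1}, \mathcal{M}),
\]

so that the Bayes factor,

\[
\mathrm{BF}_{12} \;=\; \frac{p(y_1, \ldots, y_n \mid \mathcal{M}_1)}{p(y_1, \ldots, y_n \mid \mathcal{M}_2)},
\]

simply compares the accumulated one-step-ahead predictive performance of two forecasting systems. Nothing in this comparison requires that either $\mathcal{M}_1$ or $\mathcal{M}_2$ coincides with the process that generated the data.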

Not everybody agrees, of course. There is a long list of reputable Bayesian statisticians who believe the M-open/M-closed distinction is critical. In statistics proper, many cases of “the dress” exist as well. My current go-to argument is that the Bayes factor can be viewed as a specific form of cross-validation (e.g., Gneiting & Raftery, 2007). If cross-validation does not depend on the true-model assumption, then neither does the Bayes factor, at least not in the sense of quantifying relative predictive success.

 

References

Aitkin, M. (1991). Posterior Bayes factors. Journal of the Royal Statistical Society Series B (Methodological), 53, 111-142.

De Groot, A. D. (1956/2014). The meaning of “significance” for different types of research. Translated and annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han L. J. van der Maas. Acta Psychologica, 148, 188-194.

Devezer, B., Navarro, D. J., Vandekerckhove, J., & Buzbas, E. O. (2020). The case for formal methodology in scientific reform. 

Dutilh, G., Sarafoglou, A., & Wagenmakers, E.-J. (in press). Flexible yet fair: Blinding analyses in experimental psychology. Synthese.

Gneiting, T., & Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102, 359-378.

Jeffreys, H. (1973). Scientific inference (3rd ed.). Cambridge: Cambridge University Press. 

van Elk, M., Matzke, D., Gronau, Q. F., Guan, M., Vandekerckhove, J., & Wagenmakers, E.-J. (2015). Meta-analyses are no substitute for registered replications: A skeptical perspective on religious priming. Frontiers in Psychology, 6:1365.

Wagenmakers, E.-J., Beek, T., Dijkhoff, L., Gronau, Q. F., Acosta, A., Adams, R. B., Jr., Albohn, D. N., Allard, E. S., Benning, S. D., Blouin-Hudon, E.-M., Bulnes, L. C., Caldwell, T. L., Calin-Jageman, R. J., Capaldi, C. A., Carfagno, N. S., Chasten, K. T., Cleeremans, A., Connell, L., DeCicco, J. M., Dijkstra, K., Fischer, A. H., Foroni, F., Hess, U., Holmes, K. J., Jones, J. L. H., Klein, O., Koch, C., Korb, S., Lewinski, P., Liao, J. D., Lund, S., Lupiáñez, J., Lynott, D., Nance, C. N., Oosterwijk, S., Özdoǧru, A. A., Pacheco-Unguetti, A. P., Pearson, B., Powis, C., Riding, S., Roberts, T.-A., Rumiati, R. I., Senden, M., Shea-Shumsky, N. B., Sobocko, K., Soto, J. A., Steiner, T. G., Talarico, J. M., van Allen, Z. M., Vandekerckhove, M., Wainwright, B., Wayand, J. F., Zeelenberg, R., Zetzer, E. E., Zwaan, R. A. (2016). Registered Replication Report: Strack, Martin, & Stepper (1988). Perspectives on Psychological Science, 11, 917-928.

About The Authors

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is a professor at the Psychological Methods Group of the University of Amsterdam.