
The Torture of Straw Men: A Critical Impression of Devezer et al., “The Case for Formal Methodology in Scientific Reform”

NB. This is a revised version of an earlier blog post that contained hyperbole, an unfortunate phrase involving family members, and reference to sensitive political opinions. I am grateful to everyone who suggested improvements, which I have incorporated to the best of my ability. In addition, I have made a series of more substantial changes, because I could see how the overall tone was needlessly confrontational. Indeed, parts of my earlier post were interpreted as a personal attack on Devezer et al., and although I have of course denied this, it is entirely possible that my snarky sentences were motivated by a desire to “retaliate” for what I believed was an unjust characterization of my own position and that of a movement with which I identify. I hope the present version is more mature and balanced.  

TL;DR: In a withering critique of the methodological reform movement, Devezer and colleagues attack and demolish several extreme claims. However, I believe these claims are straw men, and it seems to me that the Devezer et al. paper leaves unaddressed the real claims of the reform movement (e.g., “be transparent”; “the claim that a finding generalizes to other contexts is undercut when that finding turns out not to replicate in these other contexts”). Contrary to Devezer et al., I will argue that it is possible to provide a statistical underpinning for the idea that data-dredging differs from foresight, both in the frequentist and in the Bayesian paradigm. I do, however, acknowledge that it is unpleasant to see one’s worldview challenged, and the Devezer et al. paper certainly challenges mine. Readers are invited to make up their own mind.

Prelude

As psychology is slowly following in the footsteps of medicine and making preregistration of empirical studies the norm, some of my friends remain decidedly unimpressed. It is with considerable interest that I read their latest paper, “The case for formal methodology in scientific reform”. Here is the abstract:

“Current attempts at methodological reform in sciences come in response to an overall lack of rigor in methodological and scientific practices in experimental sciences. However, most methodological reform attempts suffer from similar mistakes and over-generalizations to the ones they aim to address. We argue that this can be attributed in part to lack of formalism and first principles. Considering the costs of allowing false claims to become canonized, we argue for formal statistical rigor and scientific nuance in methodological reform. To attain this rigor and nuance, we propose a five-step formal approach for solving methodological problems. To illustrate the use and benefits of such formalism, we present a formal statistical analysis of three popular claims in the metascientific literature: (a) that reproducibility is the cornerstone of science; (b) that data must not be used twice in any analysis; and (c) that exploratory projects imply poor statistical practice. We show how our formal approach can inform and shape debates about such methodological claims.”

Ouch. It is hard not to take this personally. Over the years, I have advocated claims that are similar to the ones that find themselves on the Devezer et al. chopping board. Similar — but not the same. In fact, I am not sure anybody advocates the claims as stated, and I am therefore inclined to believe that all three claims may be straw men. A less confrontational way of saying this is that I fully agree with the main claims from Devezer et al. as stated in the abstract. As I will outline below, the real claims from the reform movement are almost tautological statements about how we can collect empirical evidence for scientific hypotheses.

Now before proceeding I should emphasize that I am probably the least objective person to discuss this work. Nobody enjoys seeing their academic contributions called into question, and nobody likes to reexamine opinions that form the core of one’s scientific outlook. However, I believe my response may nonetheless be of interest. 

It does appear that the specific topic is increasingly difficult to debate, as both parties appear relatively certain that they are correct and the other is simply mistaken. The situation reminds me of “the dress”: some people see it as white and gold, others as black and blue. A discussion between the white-and-gold camp and the black-and-blue camp is unlikely to result in anything other than confusion and frustration. That said, I do believe that in this particular case consensus is possible — I know that in specific experimental/modeling scenarios, the Devezer et al. authors and I would probably agree on almost everything.

Before we get going, a remark about tone. The Devezer et al. paper uses robust and direct language to express their dislike of the methodological reform movement and my own work specifically — at least this was my initial take-away, but I may be wrong. The tone of my reply will be in the same robust spirit (although much less robust than in the initial version of this post). In the interest of brevity, I have only commented on the issues that I perceive to be most important.

The Need for a Formal Approach

In the first two pages, Devezer et al. bemoan the lack of formal rigor in the reform movement. They suggest that policies are recommended willy-nilly, and lack a proper mathematical framework grounded in probability theory. There is a good reason, however, for this lack of rigor: the key tenets are so simple that they are almost tautological. Embedding them in mathematical formalism may easily give the impression of being ostentatious. For example, key tenets are “do not cheat”, “be transparent”, and “the claim that a finding generalizes to other contexts is undercut when that finding turns out not to replicate in these other contexts”.  

If Devezer et al. have managed to create a mathematical formalism that goes against these fundamental norms, then it is not the norms that are called into question, but rather the assumptions that underpin their formalism.

Assessing Claim 1: “Reproducibility is the Cornerstone of, or a Demarcation Criterion for, Science”

The authors write:

“A common assertion in the methodological reform literature is that reproducibility is a core scientific virtue and should be used as a standard to evaluate the value of research findings (Begley and Ioannidis, 2015; Braude, 2002; McNutt, 2014; Open Science Collaboration, 2012, 2015; Simons, 2014; Srivastava, 2018; Zwaan et al., 2018). This assertion is typically presented without explicit justification, but implicitly relies on two assumptions: first, that science aims to discover regularities about nature and, second, that reproducible empirical findings are indicators of true regularities. This view implies that if we cannot reproduce findings, we are failing to discover these regularities and hence, we are not practicing science.”

I believe the authors slip up in the final sentence: “This view implies that if we cannot reproduce findings, we are failing to discover these regularities and hence, we are not practicing science.” There is no such implication. Reproducibility is A cornerstone of science. It is not THE cornerstone; after all, many phenomena in evolution, geophysics, and astrophysics do not lend themselves to reproducibility (the authors give other examples, but the point is the same). In the words of geophysicist Sir Harold Jeffreys:

“Repetition of the whole of the experiment, if undertaken at all, would usually be done either to test a suggested improvement in technique, to test the competence of the experimenter, or to increase accuracy by accumulation of data. On the whole, then, it does not seem that repetition, or the possibility of it, is of primary importance. If we admitted that it was, astronomy would no longer be regarded as a science, since the planets have never even approximately repeated their positions since astronomy began.” (Jeffreys, 1973, p. 204).

But suppose a psychologist wishes to make the claim that their results from the sample generalize to the population or to other contexts; this claim is effectively a claim that the result will replicate. If this prediction about replication success turns out to be false, this undercuts the initial claim. 

Thus, the real claim of the reform movement would be: “Claims about generalizability are undercut when the finding turns out not to generalize.” There does not seem to be a pressing need to formalize this statement in a mathematical system. However, I do accept that the reform movement may have paid little heed to the subtleties that the authors identify.

Assessing the Interim Claim “True Results are Not Necessarily Reproducible”

The authors state:

“Much of the reform literature claims non-reproducible results are necessarily false. For example, Wagenmakers et al. (2012, p. 633) assert that ‘Research findings that do not replicate are worse than fairy tales; with fairy tales the reader is at least aware that the work is fictional.’ It is implied that true results must necessarily be reproducible, and therefore non-reproducible results must be ‘fictional.’”

I don’t believe this was what I was implying. As noted above, the entire notion of a replication is alien to fields such as evolution, geophysics, and astrophysics. What I wanted to point out is that when empirical claims are published concerning the general nature of a finding (and in psychology we invariably make such claims), and these claims are false, this is harmful to the field. This statement seems unobjectionable to me, but I can understand that the authors do not consider it nuanced: it clearly isn’t nuanced, but it was made in a specific context, that is, the context in which the literature is polluted and much effort is wasted in a hopeless attempt to build on phantom findings. I know PhD students who were unable to “replicate” phantom phenomena and had to leave academia because they were deemed “poor experimenters”. This is a deep injustice that the authors seem to neglect — how can we prevent such personal tragedies in the future? I have often felt that methodologists and mathematical psychologists are in a relatively luxurious position because they are detached somewhat from what happens in the trenches of data collection. Such a luxurious position may make one insensitive to the academic excesses that gave birth to the reform movement in the first place.

The point that the authors proceed to make is that there are many ways to mess up a replication study. Models may be misspecified, sample sizes may be too low — these are indeed perfectly valid reasons why a true claim may fail to replicate, and it is definitely important to be aware of these. However, I do not believe that this issue is underappreciated by the reform movement. When a replication study is designed, it is widely recognized that, ideally, a lot of effort goes into study design, manipulation checks, incorporating feedback from the original researcher, etc. In my admittedly limited experience with replication studies, the replication experiment undergoes much more scrutiny than the original experiment. I have been involved with one Registered Replication Report (Wagenmakers et al., 2016) and have personally experienced the intense effort that is required to get the replication off the ground.

The authors conclude:

“It would be beneficial for reform narratives to steer clear of overly generalized sloganeering regarding reproducibility as a proxy for truth (e.g., reproducibility is a demarcation criterion or non-reproducible results are fairy tales).”

Given the specific context of a failing science, I am willing to stand by my original slogan statement. In general, the reform movement has probably been less nuanced than it could be; on the other hand, I believe there was a sense of urgency and a legitimate fear that the field would shrug and go back to business as usual.

Assessing the Interim Claim “False Results Might be Reproducible”

In this section the authors show how a false result can reproduce — basically, this happens when everybody is messing up their experiments in the same way. I agree this is a real concern. The authors mention that “the inadvertent introduction of an experimental confound or an error in a statistical computation have the potential to create and reinforce perfectly reproducible phantom effects.” This is true, and it is important to be mindful of it, but it also seems trivial to me. In fact, the presence of phantom effects is what energized the methodological reform movement in the first place. The usual example is ESP. Meta-analyses usually show compelling evidence for all sorts of ESP phenomena, but this carries little weight. The general point that meta-analyses may merely reveal a common bias is well known and is discussed, for instance, in van Elk et al. (2015).

Assessing Claim 2: “Using Data More Than Once Invalidates Statistical Inference”

The authors start:

“A well-known claim in the methodological reform literature regards the (in)validity of using data more than once, which is sometimes colloquially referred to as double-dipping or data peeking. For instance, Wagenmakers et al. (2012, p. 633) decry this practice with the following rationale: ‘Whenever a researcher uses double-dipping strategies, Type I error rates will be inflated and p values can no longer be trusted.’ The authors further argue that ‘At the heart of the problem lies the statistical law that, for the purpose of hypothesis testing, the data may be used only once.’”

They take issue with this claim and go on to state:

“The phrases double-dipping, data peeking, and using data more than once do not have formal definitions and thus they cannot be the basis of any statistical law. These verbally stated terms are ambiguous and create a confusion that is non-existent in statistical theory.”

I disagree with several claims here. First, the terms “double-dipping”, “data peeking”, and “using data more than once” do not originate from the methodological reform movement. They are much older terms that come from statistics. Second, the more adequate description of the offending process is “You should not test a hypothesis using the same data that inspired it”. This is the very specific way in which double-dipping is problematic, and I do not believe the authors address it. Third, it is possible to formalize the process and show statistically why it is problematic. For instance, Bayes’ rule tells us that the posterior model probability is proportional to the prior model probability (unaffected by the data) times the marginal likelihood (i.e., predictive performance for the observed data). When data are used twice in the Bayesian framework, this means that there is a double update: first an informal update that increases the prior model probability such that the hypothesis becomes a candidate for testing, and then a formal update based on the marginal likelihood. The general problem has been pointed out by several discussants of Aitkin’s (1991) article on posterior Bayes factors.
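As a minimal sketch of this double update (my own notation, not taken from Devezer et al. or from the Aitkin discussion), write Bayes’ rule for a model M and data y. If an informal look at y has already upgraded the prior probability of M, and the same y is then used for the formal test, the marginal likelihood enters twice:

```latex
\[
  \underbrace{p(M \mid y) \;\propto\; p(M)\, p(y \mid M)}_{\text{informal update: the data suggest } M}
  \qquad\Longrightarrow\qquad
  p^{*}(M \mid y) \;\propto\; p(M \mid y)\, p(y \mid M)
  \;\propto\; p(M)\, p(y \mid M)^{2}.
\]
```

The predictive performance for the observed data is counted twice, which is in essence the objection that the discussants raised against the posterior Bayes factor.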

Thus, the core idea is that when data have suggested a hypothesis, those data no longer constitute a fair test of that hypothesis (e.g., De Groot, 1956/2014). Upon close inspection, the data will always spawn some hypothesis, and that data-inspired hypothesis will always come out looking pretty good. Similar points have been made by C.S. Peirce, Borel, Neyman, Feynman, Jeffreys and many others. When left unchecked, a drug company may feel tempted to comb through a list of outcome measures and analyze the result that looks most compelling. This kind of cherry-picking is bad science, and the reason why the field of medicine nowadays insists on preregistered protocols.

The authors mention that the detrimental effects of post-hoc selection and cherry-picking can be counteracted by conditioning to obtain the correct probability distribution. This is similar to correcting for multiple comparisons. However, as explained by De Groot (1956/2014) with a concrete example, the nature of the exploratory process is that the researcher approaches the data with the attitude “let us see what we can find”. This attitude brings about a multiple comparisons problem with the number of comparisons unknown. In other words, in an exploratory setting it is not clear what exactly one ought to condition on. How many hypotheses have been implicitly tested “by eye” when going over the data?  
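To make the “let us see what we can find” problem concrete, here is a minimal simulation sketch (my own illustration, not taken from De Groot or from Devezer et al.): under a true null hypothesis, a researcher scans ten outcome measures and then “tests” whichever one looks most compelling. The nominal 5% Type I error rate roughly octuples.

```python
# Minimal sketch: cherry-picking the most compelling of 10 outcome measures
# under a true null, then "testing" it with the same data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_outcomes, n_studies = 30, 10, 5000
false_positives = 0
for _ in range(n_studies):
    data = rng.normal(0, 1, size=(n, n_outcomes))  # every true effect is zero
    t, p = stats.ttest_1samp(data, 0)              # one t-test per outcome measure
    if p.min() < 0.05:                             # report only the "best" result
        false_positives += 1
print(false_positives / n_studies)  # roughly 0.40, far above the nominal 0.05
```

And when the number of implicitly inspected outcomes is unknown, there is no well-defined correction to apply, which is De Groot’s point.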

I remain convinced that “testing” a hypothesis given to you by the data is incorrect statistical practice, both from the Bayesian angle (it is incorrect to enter the likelihood twice) and from the frequentist angle (multiple comparisons require a correction, which is undefined in exploratory work). The fact that other forms of double dipping may sometimes be allowed, when corrected for appropriately, is true but does not go to the core of the problem.

Assessing the Interim Claim “Preregistration is Not Necessary for Valid Statistical Inference”

In this section the authors mention that researchers may preregister a poor analysis plan. I agree this is possible. Unfortunately, preregistration is not a magic potion that transforms scientific frogs into princes. The authors also state that preregistering assumption checks violates the preregistration protocol. I am not sure how that would be the case.

The authors continue and state:

“Nosek et al. (2018, p. 2602) suggest that compared to a researcher who did not preregister their hypotheses or analyses, “preregistration with reported deviations provides substantially greater confidence in the resulting statistical inferences.” This statement has no support from statistical theory.”

I believe it does. At a minimum, the preregistration contains information that is useful to assess the prior plausibility of the hypothesis, the prior distributions that were deemed appropriate, as well as the form of the likelihood. It may be argued that these terms could all be assessed after the fact, but this requires a robot-like ability to recall past information and an angel-like ability to resist the forces of hindsight bias and confirmation bias. 

In sum, I believe it is poor practice to pretend (implicitly or explicitly) that a result was obtained by foresight when in reality it was obtained by data-dredging. This intuition may be supported with mathematical formalism, but little formalism is needed to achieve this (although the differences between the frequentist and the Bayesian formalisms are interesting and subject to ongoing debate, especially in philosophy).

My point of view has always been that science should be transparent and honest. Unfortunately, humans (and that includes researchers) are not impervious to hindsight bias and motivated reasoning, and this is why preregistration (or a method such as blinded analyses, e.g., Dutilh et al., in press) can help. I remain surprised that this modest claim can be taken as controversial. When it comes to statistics in exploratory projects, I am all in favor, as long as the exploratory nature of the endeavour is clearly acknowledged.

Postscriptum: A Pet Peeve

In Box 1, the authors bring up the issue of the “true model” and state: “In model selection, selecting the true model depends on having an M-closed model space, which means the true model must be in the candidate set”.

This statement is a tautology, but its implication is that inference ought to proceed differently according to whether we find ourselves in the M-closed scenario (with the true model in the set) or in the M-open scenario (with the true model not in the set). However, this distinction has no mathematical basis that I am able to discern. Bayes’ rule is not grounded on the assumption of realism. Both Bruno de Finetti and Harold Jeffreys explicitly denied the idea that models could ever be true exactly. The fact that there is no need for a true-model assumption is also evident from the prequential principle advocated by Phil Dawid: all that matters for model selection is predictive adequacy. Basically, the statistical models may be seen as rival forecasting systems confronted with a stream of incoming data. Evaluating the relative performance of such forecasting systems does not require that any forecaster is somehow identical to Nature.
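For concreteness, the predictive reading can be written out with two standard identities (nothing here is specific to Devezer et al. or to Dawid’s papers): the marginal likelihood of a model is its accumulated one-step-ahead predictive performance, and the Bayes factor compares two such forecasting track records. Neither expression requires that any model under consideration is true.

```latex
\[
  p(y_{1:n} \mid M) \;=\; \prod_{i=1}^{n} p\!\left(y_i \mid y_{1:i-1}, M\right),
  \qquad
  \mathrm{BF}_{12} \;=\; \frac{p(y_{1:n} \mid M_1)}{p(y_{1:n} \mid M_2)}.
\]
```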

Not everybody agrees of course. There is a long list of reputable Bayesian statisticians who believe the M-open/M-closed distinction is critical. In statistics proper, many cases of “the dress” exist as well. My current go-to argument is that the Bayes factor can be viewed as a specific form of cross-validation (e.g., Gneiting & Raftery, 2007). If cross-validation does not depend on the true-model assumption, then neither does the Bayes factor, at least not in the sense of quantifying relative predictive success.

 

References

Aitkin, M. (1991). Posterior Bayes factors. Journal of the Royal Statistical Society Series B (Methodological), 53, 111-142.

De Groot, A. D. (1956/2014). The meaning of “significance” for different types of research. Translated and annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han L. J. van der Maas. Acta Psychologica, 148, 188-194.

Devezer, B., Navarro, D. J., Vandekerckhove, J., & Buzbas, E. O. (2020). The case for formal methodology in scientific reform. 

Dutilh, G., Sarafoglou, A., & Wagenmakers, E.-J. (in press). Flexible yet fair:  Blinding analyses in experimental psychology. Synthese.

Gneiting, T., & Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102, 359-378.

Jeffreys, H. (1973). Scientific inference (3rd ed.). Cambridge: Cambridge University Press. 

van Elk, M., Matzke, D., Gronau, Q. F., Guan, M., Vandekerckhove, J., & Wagenmakers, E.-J. (2015). Meta-analyses are no substitute for registered replications: A skeptical perspective on religious priming. Frontiers in Psychology, 6:1365.

Wagenmakers, E.-J., Beek, T., Dijkhoff, L., Gronau, Q. F., Acosta, A., Adams, R. B., Jr., Albohn, D. N., Allard, E. S., Benning, S. D., Blouin-Hudon, E.-M., Bulnes, L. C., Caldwell, T. L., Calin-Jageman, R. J., Capaldi, C. A., Carfagno, N. S., Chasten, K. T., Cleeremans, A., Connell, L., DeCicco, J. M., Dijkstra, K., Fischer, A. H., Foroni, F., Hess, U., Holmes, K. J., Jones, J. L. H., Klein, O., Koch, C., Korb, S., Lewinski, P., Liao, J. D., Lund, S., Lupiáñez, J., Lynott, D., Nance, C. N., Oosterwijk, S., Özdoǧru, A. A., Pacheco-Unguetti, A. P., Pearson, B., Powis, C., Riding, S., Roberts, T.-A., Rumiati, R. I., Senden, M., Shea-Shumsky, N. B., Sobocko, K., Soto, J. A., Steiner, T. G., Talarico, J. M., van Allen, Z. M., Vandekerckhove, M., Wainwright, B., Wayand, J. F., Zeelenberg, R., Zetzer, E. E., Zwaan, R. A. (2016). Registered Replication Report: Strack, Martin, & Stepper (1988). Perspectives on Psychological Science, 11, 917-928.

About The Authors

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam.


Straw Men Revised

Last week’s post contained hyperbole, an unfortunate phrase involving family members, and reference to sensitive political opinions. I am grateful to everyone who suggested improvements, which I have incorporated to the best of my ability. In addition, I have made a series of more substantial changes to that blog post, because I could see how the overall tone was needlessly confrontational. Indeed, parts of my earlier post were interpreted as a personal attack on Devezer et al., and although I have of course denied this, it is entirely possible that some of my more snarky sentences were motivated by a desire to “retaliate” for what I believed was an unjust characterization of my position and that of a movement with which I identify. I hope the present version is more mature and balanced. You can find the revised blog post here.

About The Authors

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam.


SpaceX Starship SN10 Landing

In the past months, SpaceX has been quickly moving forward with the development of their interplanetary rocket, Starship. The last two prototypes even went ahead with a test flight to ~10 km. The test flight was, in both cases, highly successful, apart from ending in an RUD (rapid unscheduled disassembly) during the landing. That was not unexpected, since the previous prototypes had a low chance of a successful landing, according to Elon Musk. Nevertheless, many people (and we) are wondering whether the next prototype (SN10), scheduled to attempt the test flight and landing procedure in the upcoming weeks, will finally stick the landing.

A recent Twitter poll of almost 40,000 people estimated the probability of SN10 successfully landing at 77.5% (after removing people who abstained from voting).

That is a much higher chance than Elon’s own estimate of ~60%, which is comparable to the Metaculus prediction market, where 295 predictions converged to a median probability of a successful landing of 56%.

Here, we also try to predict whether the next Starship, SN10, will successfully land. Like all statisticians, we start by replacing a difficult problem with a simpler one — instead of landing, we will predict whether the SN10 will successfully fire at least two of its engines as it approaches landing. Since a rocket engine can either fire up or malfunction, we approximate each engine firing up as a binomial event with probability θ. Starship prototypes have 3 rocket engines, out of which 2 are needed for a successful landing. However, in previous landing attempts, SpaceX tried lighting up only 2 engines — both of which are required to fire up successfully. Now, in order to improve their landing chances, SpaceX decided to try lighting up all 3 engines and shutting down 1 of them if all fire successfully. We will therefore approximate the successful landing as observing at least 2 successful binomial events out of 3 trials.

To obtain the predictions, we will use Bayesian statistics and specify a prior distribution for the binomial probability parameter θ, an engine successfully firing up. Luckily, we can easily obtain the prior distribution from the two previous landing attempts:

  • The first Starship prototype attempting landing, SN8, managed to fire both engines, but crashed due to low oxygen pressure, which resulted in insufficient thrust and a far too fast approach to the landing site. Video [here].
  • The second Starship prototype attempting landing, SN9, did not manage to fire the second engine which, again, resulted in an RUD on approach. Video [here].

Adding the additional assumption that the events are independent, we can summarize the previous firing-up attempts with a Beta(4, 2) distribution — corresponding to observing 3 successful events and 1 unsuccessful event. In JASP, we can use the Learn Bayes module to plot our prior distribution for θ

and generate predictions for 3 future events. Since the prior distribution for θ is a beta distribution and we observe binomial events, the number of future successes out of 3 attempts follows a beta-binomial(3, 4, 2) distribution. We obtain a figure depicting the predicted number of successes from JASP and we further request the probability of observing at least two of them. Finally, we arrive at an optimistic prediction of a 71% chance of observing at least 2 of the engines fire up on the landing approach. Of course, we should treat our estimate as an upper bound on the actual probability of a successful landing. There are many other things that can go wrong (see SpaceX’s demonstration [here]) that we did not account for (in contrast to SpaceX, we are not trying to do rocket science here).
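For readers who prefer code to point and click, the same number can be reproduced outside of JASP. The snippet below is a minimal sketch that assumes scipy is available and uses its beta-binomial distribution directly.

```python
# Minimal sketch: posterior-predictive probability that at least 2 of 3 engines fire.
from scipy.stats import betabinom

a, b = 4, 2                      # Beta(4, 2) prior: 3 observed successes, 1 failure
pred = betabinom(3, a, b)        # predictive distribution for 3 future engine starts
p_land = pred.pmf(2) + pred.pmf(3)
print(round(p_land, 2))          # ~0.71
```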

We can also ask how much trying to fire up all 3 engines instead of 2 (as in the previous attempts) increases the chance of a successful landing. For that, we just need to obtain the probability of observing 2 successful events out of 2 observations, which is 48% (analogously, from the beta-binomial(2, 4, 2) distribution), and subtract it from the previous estimate of 71%. That is a 23 percentage point higher chance of landing when trying to use all 3 instead of only 2 engines.
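The comparison can be reproduced with the sketch below (again assuming scipy, and again only an approximation of the landing itself):

```python
# Minimal sketch: gain from attempting 3 engine starts instead of 2.
from scipy.stats import betabinom

a, b = 4, 2
p_two_of_two = betabinom(2, a, b).pmf(2)                                  # ~0.48
p_at_least_two_of_three = sum(betabinom(3, a, b).pmf(k) for k in (2, 3))  # ~0.71
print(round(p_at_least_two_of_three - p_two_of_two, 2))                   # ~0.23
```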

About The Authors

František Bartoš

František Bartoš is a Research Master student in psychology at the University of Amsterdam.

 


Strong Public Claims May Not Reflect Researchers’ Private Convictions

This post is an extended synopsis of van Doorn, J., van den Bergh, D., Dablander, F., Derks, K., van Dongen, N.N.N., Evans, N. J., Gronau, Q. F., Haaf, J.M., Kunisato, Y., Ly, A., Marsman, M., Sarafoglou, A., Stefan, A., & Wagenmakers, E.‐J. (2021), Strong public claims may not reflect researchers’ private convictions. Significance, 18, 44-45. https://doi.org/10.1111/1740-9713.01493. Preprint available on PsyArXiv: https://psyarxiv.com/pc4ad

Abstract

How confident are researchers in their own claims? Augustus De Morgan (1847/2003) suggested that researchers may initially present their conclusions modestly, but afterwards use them as if they were a “moral certainty”. To prevent this from happening, De Morgan proposed that whenever researchers make a claim, they accompany it with a number that reflects their degree of confidence (Goodman, 2018). Current reporting procedures in academia, however, usually present claims without the authors’ assessment of confidence.

Questionnaire

Here we report the partial results from an anonymous questionnaire on the concept of evidence that we sent to 162 corresponding authors of research articles and letters published in Nature Human Behaviour (NHB). We received 31 complete responses (response rate: 19%). A complete overview of the questionnaire can be found in online Appendices B, C, and D. As part of the questionnaire, we asked respondents two questions about the claim in the title of their NHB article: “In your opinion, how plausible was the claim before you saw the data?” and “In your opinion, how plausible was the claim after you saw the data?”. Respondents answered by manipulating a sliding bar that ranged from 0 (i.e., “you know the claim is false”) to 100 (i.e., “you know the claim is true”), with an initial value of 50 (i.e., “you believe the claim is equally likely to be true or false”).

Results

Figure 1 shows the responses to both questions. The blue dots quantify the assessment of prior plausibility. The highest prior plausibility is 75, and the lowest is 20, indicating that (albeit with the benefit of hindsight) the respondents did not set out to study claims that they believed to be either outlandish or trivial. Compared to the heterogeneity in the topics covered, this range of prior plausibility is relatively narrow. 

From the ratio of posterior odds to prior odds we can derive the Bayes factor, that is, the extent to which the data changed researchers’ conviction. The median of this informal Bayes factor is 3, corresponding to the interpretation that the data are 3 times more likely to have occurred under the hypothesis that the claim is true than under the hypothesis that the claim is false.
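As a toy illustration of this informal Bayes factor (my own sketch; for simplicity it uses the median ratings reported in the figure caption below, whereas the paper computes a Bayes factor per respondent and then takes the median):

```python
# Toy illustration: informal Bayes factor from plausibility ratings on a 0-100 scale.
def informal_bf(prior_plausibility, posterior_plausibility):
    prior_odds = prior_plausibility / (1 - prior_plausibility)
    posterior_odds = posterior_plausibility / (1 - posterior_plausibility)
    return posterior_odds / prior_odds

print(round(informal_bf(0.56, 0.80), 1))  # ~3.1 for the median ratings of 56 and 80
```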

Figure 1. All 31 respondents indicated that the data made the claim in the title of their NHB article more likely than it was before. However, the size of the increase is modest. Before seeing the data, the plausibility centers around 50 (median = 56); after seeing the data, the plausibility centers around 75 (median = 80). The gray lines connect the responses for each respondent.

Concluding Comments

The authors’ modesty appears excessive. It is not reflected in the declarative titles of their NHB articles, and it could not reasonably have been gleaned from the content of the articles themselves. Empirical disciplines do not ask authors to express their confidence in their claims, even though this could be relatively simple. For instance, journals could ask authors to estimate the prior/posterior plausibility, or the probability of a replication yielding a similar result (e.g., (non)significance at the same alpha level and sample size), for each claim or hypothesis under consideration, and present the results on the first page of the article. When an author publishes a strong claim in a top-tier journal such as NHB, one may expect this author to be relatively confident. While the current academic landscape does not allow authors to express their uncertainty publicly, our results suggest that they may well be aware of it. Encouraging authors to express this uncertainty openly may lead to more honest and nuanced scientific communication (Kousta, 2020).

References

De Morgan, A. (1847/2003). Formal Logic: The Calculus of Inference, Necessary and Probable. Honolulu: University Press of the Pacific.

Goodman, S. N. (2018). How sure are you of your result? Put a number on it. Nature, 564, 7.

Kousta, S. (Ed.). (2020). Editorial: Tell it like it is. Nature Human Behavior, 4, 1.

About The Authors

Johnny van Doorn

Johnny van Doorn is a PhD candidate at the Psychological Methods department of the University of Amsterdam.

 


Preprint: Bayesian Estimation of Single-Test Reliability Coefficients

This post is a synopsis of  Pfadt, J. M., van den Bergh, D., Sijtsma, K., Moshagen, M., & Wagenmakers, E.-J. (in press). Bayesian estimation of single-test reliability coefficients. Multivariate Behavioral Research. Preprint available at https://psyarxiv.com/exg2y

 

Abstract

Popular measures of reliability for a single-test administration include coefficient α, coefficient λ2, the greatest lower bound (glb), and coefficient ω. First, we show how these measures can be easily estimated within a Bayesian framework. Specifically, the posterior distribution for these measures can be obtained through Gibbs sampling – for coefficients α, λ2, and the glb one can sample the covariance matrix from an inverse Wishart distribution; for coefficient ω one samples the conditional posterior distributions from a single-factor CFA-model. Simulations show that – under relatively uninformative priors – the 95% Bayesian credible intervals are highly similar to the 95% frequentist bootstrap confidence intervals. In addition, the posterior distribution can be used to address practically relevant questions, such as “what is the probability that the reliability of this test is between .70 and .90?”, or, “how likely is it that the reliability of this test is higher than .80?”. In general, the use of a posterior distribution highlights the inherent uncertainty with respect to the estimation of reliability measures.

Overview

Reliability analysis aims to disentangle the amount of variance of a test score that is due to systematic influences (i.e., true-score variance) from the variance that is due to random influences (i.e., error-score variance; Lord & Novick, 1968).
When one estimates a parameter such as a reliability coefficient, the point estimate can be accompanied by an uncertainty interval. In the context of reliability analysis, substantive researchers almost always ignore uncertainty intervals and present only point estimates. This common practice disregards sampling error and the associated estimation uncertainty and should be seen as highly problematic. In this preprint, we show how the Bayesian credible interval can provide researchers with a flexible and straightforward method to quantify the uncertainty of point estimates in a reliability analysis.

Reliability Coefficients

Coefficient α, coefficient λ2, and the glb are based on classical test theory (CTT) and are lower bounds to reliability. To determine the error-score variance of a test, the coefficients estimate an upper bound for the error variances of the items. The estimators differ in the way they estimate this upper bound. The basis for the estimation is the covariance matrix Σ of multivariate observations. The CTT-coefficients estimate error-score variance from the variances of the items and true-score variance from the covariances of the items.
Coefficient ω is based on the single-factor model. Specifically, the single-factor model assumes that a common factor explains the covariances between the items (Spearman, 1904). Following CTT, the common factor variance replaces the true-score variance and the residual variances replace the error-score variance.

A straightforward way to obtain a posterior distribution of a CTT-coefficient is to estimate the posterior distribution of the covariance matrix and use it to calculate the estimate. Thus, we sample the posterior covariance matrices from an inverse Wishart distribution (Murphy, 2007; Padilla & Zhang, 2011).
For coefficient ω we sample from the conditional posterior distributions of the parameters in the single-factor model by means of a Gibbs sampling algorithm (Lee, 2007).
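As a rough, self-contained sketch of the covariance-matrix route for one of the CTT coefficients (my own toy code under an assumed vague prior, not the authors’ Bayesrel implementation): sample posterior covariance matrices from an inverse Wishart distribution and compute coefficient α for each draw.

```python
# Toy sketch: posterior draws of coefficient alpha via an inverse-Wishart posterior
# for the covariance matrix (vague prior assumed for illustration).
import numpy as np
from scipy.stats import invwishart

def coefficient_alpha(cov):
    k = cov.shape[0]
    return k / (k - 1) * (1 - np.trace(cov) / cov.sum())

rng = np.random.default_rng(1)
k, n = 5, 200
x = rng.multivariate_normal(np.zeros(k), np.eye(k) + 0.3, size=n)  # toy item scores
scatter = (n - 1) * np.cov(x, rowvar=False)

# Conjugate update of an inverse-Wishart(k + 2, I) prior with the observed scatter matrix.
posterior = invwishart(df=k + 2 + n, scale=np.eye(k) + scatter)
alpha_draws = np.array([coefficient_alpha(posterior.rvs(random_state=rng)) for _ in range(2000)])
print(np.percentile(alpha_draws, [2.5, 50, 97.5]))  # posterior median and 95% credible interval
```

The same draws answer tail-probability questions such as those in the abstract, e.g., np.mean(alpha_draws > 0.80) estimates p(α > .80 | data).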

Simulation Results

The results suggest that the Bayesian reliability coefficients perform as well as the frequentist ones. The figure below depicts the simulation results for the condition with medium correlations among items. The endpoints of the bars are the average 95% uncertainty interval limits. The 25%- and 75%-quartiles are indicated with vertical line segments.

Example Data Set

The figures below show the reliability results for an empirical data set from Cavalini (1992) with eight items, analyzed once with the full sample of n = 828 and once with n = 100 randomly chosen observations. Depicted are the posterior distributions of the estimators with dotted prior densities and 95% credible interval bars. One can clearly see how the uncertainty about the reliability values decreases as the sample size increases.

For example, from the posterior distribution of λ2 we can conclude that the specific credible interval contains 95% of the posterior mass. Since λ2 = .784, 95% HDI [.761, .806], we are 95% certain that λ2 lies between .761 and .806. Yet, how certain are we that the reliability is larger than .80? Using the posterior distribution of coefficient λ2, we can calculate the probability that it exceeds the cutoff of .80: p(λ2 > .80 | data) = .075.

Conclusion

Bayesian reliability estimation adds an essential measure of uncertainty to simple point estimates of reliability coefficients. Adequate credible intervals for single-test reliability estimates can easily be obtained by applying the procedures described in the preprint, as implemented in the R package Bayesrel. Whereas the R package addresses substantive researchers who have some experience in programming, we admit that it will probably not reach scientists whose software experience is limited to graphical user interface programs such as SPSS. For this reason we have implemented the Bayesian reliability coefficients in the open-source statistical software JASP (JASP Team, 2020). Whereas we cannot stress the importance of reporting uncertainty enough, the question of the appropriateness of particular reliability measures cannot be answered by the Bayesian approach. No single reliability estimate can be generally recommended over all others. Nonetheless, practitioners are faced with the decision of which reliability estimates to compute and report. Based on a single test administration, the procedure should involve an assessment of dimensionality. Ideally, practitioners report multiple reliability coefficients with an accompanying measure of uncertainty that is based on the posterior distribution.

References

Pfadt, J. M., van den Bergh, D., Sijtsma, K., Moshagen, M., & Wagenmakers, E.-J. (in press). Bayesian estimation of single-test reliability coefficients. Multivariate Behavioral Research. Preprint available at https://psyarxiv.com/exg2y

 

About The Authors

Julius M. Pfadt

Julius M. Pfadt is PhD student at the Research Methods group at Ulm University

Preprint: Expert Agreement in Prior Elicitation and its Effects on Bayesian Inference

This post is an extended synopsis of Stefan, A. M., Katsimpokis, D., Gronau, Q. F. & Wagenmakers, E.-J. (2021). Expert agreement in prior elicitation and its effects on Bayesian inference. Preprint available on PsyArXiv: https://psyarxiv.com/8xkqd/

Abstract

Bayesian inference requires the specification of prior distributions that quantify the pre-data uncertainty about parameter values. One way to specify prior distributions is through prior elicitation, an interview method guiding field experts through the process of expressing their knowledge in the form of a probability distribution. However, prior distributions elicited from experts can be subject to idiosyncrasies of experts and elicitation procedures, raising the spectre of subjectivity and prejudice. In a new pre-print, we investigate the effect of interpersonal variation in elicited prior distributions on the Bayes factor hypothesis test. We elicited prior distributions from six academic experts with a background in different fields of psychology and applied the elicited prior distributions as well as commonly used default priors in a re-analysis of 1710 studies in psychology. The degree to which the Bayes factors vary as a function of the different prior distributions is quantified by three measures of concordance of evidence: We assess whether the prior distributions change the Bayes factor direction, whether they cause a switch in the category of evidence strength, and how much influence they have on the value of the Bayes factor. Our results show that although the Bayes factor is sensitive to changes in the prior distribution, these changes rarely affect the qualitative conclusions of a hypothesis test. We hope that these results help researchers gauge the influence of interpersonal variation in elicited prior distributions in future psychological studies. Additionally, our sensitivity analyses can be used as a template for Bayesian robustness analyses that involves prior elicitation from multiple experts.

Different experts – different priors?

The goal of a prior elicitation effort is to formulate a probability distribution that represents the subjective knowledge of an expert. This probability distribution can then be used as a prior distribution on parameters in a Bayesian model. Parameter values the expert deems plausible receive a higher probability density, parameter values the expert deems implausible receive a lower probability density. Of course, most of us know from personal experience that experts can differ in their opinions. But to what extent will these differences influence elicited prior distributions? Here, we asked six experts from different fields in psychology about plausible values for small-to-medium effect sizes in their field. Below, you can see the elicited prior distribution for Cohen’s d for each expert, alongside their respective fields of research.

As can be expected, no two elicited distributions are exactly alike. However, the prior distributions, especially those of Experts 2-5, are remarkably similar. Expert 1 deviated from the other experts in that they expected substantially lower effect sizes. Expert 6 displayed less uncertainty than the other experts.

Different priors – different hypothesis testing results?

After eliciting prior distributions from experts, the next question we ask is: To what extent do differences in priors influence the results of Bayesian hypothesis testing? In other words, how sensitive is the Bayes factor to interpersonal variation in the prior? This question addresses a frequently voiced concern about Bayesian methods: Results of Bayesian analyses could be influenced by arbitrary features of the prior distribution.

To investigate the sensitivity of the Bayes factor to the interpersonal variation in elicited priors, we applied the elicited prior distributions to a large number of re-analyses of studies in psychology. Specifically, for elicited priors on Cohen’s d, we re-analyzed t-tests from a database assembled by Wetzels et al. (2011) that contains 855 t-tests from the journals Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition. In each test, we used the elicited priors as prior distribution on Cohen’s d in the alternative model.
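To give a concrete sense of how an elicited prior enters such a re-analysis, here is a minimal sketch for a one-sample t-test (my own illustration; the normal prior and its mean and standard deviation below are made-up stand-ins, not any expert’s elicited values): the Bayes factor is the noncentral-t likelihood of the observed t statistic averaged over the prior on Cohen’s d, divided by the central-t likelihood under the null hypothesis.

```python
# Minimal sketch: informed Bayes factor for a one-sample t-test with a normal prior on Cohen's d.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def bf10_informed(t, n, prior_mean, prior_sd):
    df, ncp_scale = n - 1, np.sqrt(n)
    def integrand(delta):
        # likelihood of the observed t under effect size delta, weighted by the prior
        return stats.nct.pdf(t, df, delta * ncp_scale) * stats.norm.pdf(delta, prior_mean, prior_sd)
    marginal_h1, _ = quad(integrand, prior_mean - 8 * prior_sd, prior_mean + 8 * prior_sd)
    marginal_h0 = stats.t.pdf(t, df)
    return marginal_h1 / marginal_h0

# Hypothetical elicited prior centered on a small-to-medium effect:
print(bf10_informed(t=2.5, n=40, prior_mean=0.35, prior_sd=0.15))
```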

What does it mean if a Bayes factor is sensitive to the prior? Here, we used three criteria: First, we checked for all combinations of prior distributions how often a change in priors led to a change in the direction of the Bayes factor. We recorded a change in direction if the Bayes factor showed evidence for the null model (i.e., BF10 < 1) for one prior and evidence for the alternative model (i.e., BF10 > 1) for a different prior. Agreement was conversely defined as both Bayes factors being larger or smaller than one. As can be seen below, agreement rates were generally high for all combinations of prior distributions.

As a second sensitivity criterion, we recorded changes in the evidence category of the Bayes factor. Often, researchers are interested in whether a hypothesis test provides strong evidence in favor of the alternative hypothesis (e.g., BF10 > 10), strong evidence in favor of the null hypothesis (e.g., BF10 < 1/10), or inconclusive evidence (e.g., 1/10 < BF10 < 10). Thus, they classify the Bayes factor as belonging to one of three evidence categories. We recorded whether different priors led to a change in these evidence categories, that is, whether one Bayes factor would be classified as strong evidence, while a Bayes factor using a different prior would be classified as inconclusive evidence or strong evidence in favor of the other hypothesis. From the figure below, we can see that overall the agreement of Bayes factors with regard to evidence category is slightly lower than the agreement with regard to direction. However, this can be expected since evaluating agreement across two cut-points will generally result in lower agreement than evaluating agreement across a single cut-point.

As a third aspect of Bayes factor sensitivity we investigated changes in the exact Bayes factor value. The figure below shows the correspondence of log Bayes factors for all experts and all tests in the Wetzels et al. (2011) database. What becomes clear is that Bayes factors are not always larger or smaller for one prior distribution compared to another, but that the relation differs per study. In fact, the effect size in the sample determines which prior distribution yields the highest Bayes factor in a study. Sample size has an additional effect, with larger sample sizes leading to more pronounced differences between Bayes factors for different prior distributions.

Conclusions

The sensitivity of the Bayes factor has often been a subject of discussion in previous research. Our results show that the Bayes factor is sensitive to the interpersonal variability between elicited prior distributions. Even for moderate sample sizes, differences between Bayes factors with different prior distributions can easily range in the thousands. However, our results also indicate that the use of different elicited prior distributions rarely changes the direction of the Bayes factor or the category of evidence strength. Thus, the qualitative conclusions of hypothesis tests in psychology rarely change based on the prior distribution. This insight may increase the support for informed Bayesian inference among researchers who were sceptical that the subjectivity of prior distributions might determine the qualitative outcomes of their Bayesian hypothesis tests.

References:

Wetzels, R., Matzke, D., Lee, M. D., Rouder, J. N., Iverson, G. J., & Wagenmakers, E. –J. (2011). Statistical evidence in experimental psychology: An empirical comparison using 855 t tests. Perspectives on Psychological Science, 6(3), 291–298. https://doi.org/10.1177/1745691611406923
Stefan, A., Katsimpokis, D., Gronau, Q. F., & Wagenmakers, E.-J. (2021). Expert agreement in prior elicitation and its effects on Bayesian inference. PsyArXiv Preprint. https://doi.org/10.31234/osf.io/8xkqd

Icons made by Freepik from www.flaticon.com

About The Authors

Angelika Stefan

Angelika is a PhD candidate at the Psychological Methods Group of the University of Amsterdam.

Dimitris Katsimpokis

Dimitris Katsimpokis is a PhD student at the University of Basel.

Quentin F. Gronau

Quentin is a PhD candidate at the Psychological Methods Group of the University of Amsterdam & postdoctoral fellow working on stop-signal models for how we cancel and modify movements and on cognitive models for improving the diagnosticity of eyewitness memory choices.

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam.

 


