Compensatory Control and Religious Beliefs: A Registered Replication Report Across Two Countries

This post is an extended synopsis of Hoogeveen, S., Wagenmakers, E.-J., Kay, A. C., & van Elk, M. (in press). Compensatory Control and Religious Beliefs: A Registered Replication Report Across Two Countries. Comprehensive Results in Social Psychology. https://doi.org/10.1080/



Compensatory Control Theory (CCT) suggests that religious belief systems provide an external source of control that can substitute for a perceived lack of personal control. In a seminal paper, it was experimentally demonstrated that a threat to personal control increases endorsement of the existence of a controlling God. In the current registered report, we conducted a high-powered (N = 829) direct replication of this effect, using samples from the Netherlands and the United States (US). Our results show moderate to strong evidence for the absence of an experimental effect across both countries: belief in a controlling God did not increase after a threat compared to an affirmation of personal control. In a complementary preregistered analysis, an inverse relation between general feelings of personal control and belief in a controlling God was found in the US, but not in the Netherlands. We discuss potential reasons for the replication failure of the experimental effect and cultural mechanisms explaining the cross-country difference in the correlational effect. Together, our findings suggest that experimental manipulations of control may be ineffective in shifting belief in God, but that individual differences in the experience of control may be related to religious beliefs in a way that is consistent with CCT.

What Makes Science Transparent? A Consensus-Based Checklist

This post is a synopsis of Aczel et al. (2019). A consensus-based transparency checklist. Nature Human Behaviour. Open Access: https://www.nature.com/articles/s41562-019-0772-6.
The associated Shiny app is at http://www.shinyapps.org/apps/

How can social scientists make their work more transparent? Sixty-three editors and open science advocates reached consensus on this topic and created a checklist to help authors document various transparency-related aspects of their work.

Preprint: BFpack — Flexible Bayes Factor Testing of Scientific Theories in R

This post is a synopsis of Mulder, J., Gu, X., Olsson-Collentine, A., Tomarken, A., Böing-Messing, F., Hoijtink, H., Meijerink, M., Williams, D. R., Menke, J., Fox, J.-P., Rosseel, Y., Wagenmakers, E.-J., & van Lissa, C. (2019). BFpack: Flexible Bayes factor testing of scientific theories in R. Preprint available at https://arxiv.org/pdf/1911.07728.pdf


“There has been a tremendous methodological development of Bayes factors for hypothesis testing in the social and behavioral sciences, and related fields. This development is due to the flexibility of the Bayes factor for testing multiple hypotheses simultaneously, the ability to test complex hypotheses involving equality as well as order constraints on the parameters of interest, and the interpretability of the outcome as the weight of evidence provided by the data in support of competing scientific theories. The available software tools for Bayesian hypothesis testing are still limited however. In this paper we present a new R-package called BFpack that contains functions for Bayes factor hypothesis testing for the many common testing problems. The software includes novel tools (i) for Bayesian exploratory testing (null vs positive vs negative effects), (ii) for Bayesian confirmatory testing (competing hypotheses with equality and/or order constraints), (iii) for common statistical analyses, such as linear regression, generalized linear models, (multivariate) analysis of (co)variance, correlation analysis, and random intercept models, (iv) using default priors, and (v) while allowing data to contain missing observations that are missing at random.”

Overview of BFpack Functionality

A Variety of BFpack Test Questions

Example Applications

The preprint discusses seven application examples and illustrates each with R code. The examples concern (1) the t-test; (2) a 2-way ANOVA; (3) a test of equality of variances; (4) linear regression (with missing data) in fMRI research; (5) logistic regression in forensic psychology; (6) measures of association in neuropsychology; and (7) intraclass correlation.
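To give a flavor of the workflow these examples share, here is a minimal sketch of a BFpack analysis for the first example, the t-test. The data set name and the exact value of the test point are assumptions for illustration; the preprint contains the authoritative code for all seven applications.

```r
# Sketch of a BFpack t-test analysis (data set and test value are
# illustrative assumptions; see the preprint for the exact code).
library(BFpack)

# Fit a one-sample t-test of whether the population mean equals 5
fit <- t_test(therapeutic, mu = 5)

# (i) Exploratory testing: null vs negative vs positive effect
BF(fit)

# (ii) Confirmatory testing with an order constraint:
#      the hypothesis mu > 5 against its complement
BF(fit, hypothesis = "mu > 5")
```

The same pattern — fit a standard model object, then pass it to `BF()` with an optional `hypothesis` string — carries over to the ANOVA, regression, and correlation examples.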

About The Author

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is a professor in the Psychological Methods Group at the University of Amsterdam.

Crowdsourcing Hypothesis Tests: The Bayesian Perspective

This post is a synopsis of the Bayesian work featured in Landy et al. (in press). Crowdsourcing hypothesis tests: Making transparent how design choices shape research results. Psychological Bulletin. Preprint available at https://osf.io/fgepx/; the 325-page supplement is available at https://osf.io/jm9zh/; the Bayesian analyses can be found on pp. 238-295.


“To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses, and a lack of support for three hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.”

Preprint: Practical Challenges and Methodological Flexibility in Prior Elicitation

This post is an extended synopsis of Stefan, A. M., Evans, N. J., & Wagenmakers, E.-J. (2019). Practical challenges and methodological flexibility in prior elicitation. Manuscript submitted for publication. Preprint available on PsyArXiv: https://psyarxiv.com/d42xb/



It is a well-known fact that Bayesian analyses require the specification of a prior distribution, and that different priors can lead to different quantitative, or even qualitative, conclusions. Because the prior distribution can be so influential, one of the most frequently asked questions about the Bayesian statistical framework is: How should I specify the prior distributions? Here, we take a closer look at prior elicitation — a subjective Bayesian method for specifying (informed) prior distributions based on expert knowledge — and examine the practical challenges researchers may face when implementing this approach for specifying their prior distributions. Specifically, our review of the literature suggests that there is a high degree of methodological flexibility within current prior elicitation techniques. This means that the results of a prior elicitation effort are not solely determined by the expert’s knowledge, but also heavily depend on the methodological decisions a researcher makes in the prior elicitation process. Thus, it appears that prior elicitation does not completely solve the issue of prior specification, but instead shifts influential decisions to a different level. We demonstrate the potential variability resulting from different methodological choices within the prior elicitation process in several examples, and make recommendations for how the variability in prior elicitation can be managed in future prior elicitation efforts.
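The methodological flexibility described above can be illustrated with a toy sketch (all numbers hypothetical, not taken from the paper): suppose an expert reports three quartiles for an effect size. Those same elicited summaries are compatible with different distributional families, which agree near the center but imply very different tail behavior — so the analyst's choice of family, not just the expert's knowledge, shapes the resulting prior.

```r
# Hypothetical elicited quartiles for an effect size (illustration only)
q <- c(0.25, 0.50, 0.75)   # quantile probabilities
x <- c(0.10, 0.30, 0.55)   # expert's elicited quartile values

# Option 1: fit a normal prior by least squares on the quantiles
loss_norm <- function(par) sum((qnorm(q, par[1], exp(par[2])) - x)^2)
fit_norm  <- optim(c(0.3, log(0.3)), loss_norm)

# Option 2: fit a shifted-and-scaled t prior (df = 3) the same way
loss_t <- function(par) sum((par[1] + exp(par[2]) * qt(q, df = 3) - x)^2)
fit_t  <- optim(c(0.3, log(0.3)), loss_t)

# Both fits reproduce the elicited quartiles closely, yet imply
# different tails: compare the 99th percentiles of the two priors
p99_norm <- qnorm(0.99, fit_norm$par[1], exp(fit_norm$par[2]))
p99_t    <- fit_t$par[1] + exp(fit_t$par[2]) * qt(0.99, df = 3)
```

The heavier-tailed t prior places substantially more mass on large effects than the normal prior, even though both were "elicited" from identical expert input — a concrete instance of the flexibility the paper documents.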

A Breakdown of “Preregistration is Redundant, at Best”

In this sentence-by-sentence breakdown of the paper “Preregistration is Redundant, at Best”, I argue that preregistration is a pragmatic tool to combat biases that invalidate statistical inference. In a perfect world, strong theory sufficiently constrains the analysis process, and/or Bayesian robots can update beliefs based on fully reported data. In the real world, however, even astrophysicists require a firewall between the analyst and the data. Nevertheless, preregistration should not be glorified. Although I disagree with the title of the paper, I found myself agreeing with almost all of the authors’ main arguments.
