The Principle of Predictive Irrelevance, or Why Intervals Should Not be Used for Model Comparison Featuring a Point Null Hypothesis

This post summarizes Wagenmakers, E.-J., Lee, M. D., Rouder, J. N., & Morey, R. D. (2019). The principle of predictive irrelevance, or why intervals should not be used for model comparison featuring a point null hypothesis. Manuscript submitted for publication. Preprint available on PsyArXiv: https://psyarxiv.com/rqnu5


The principle of predictive irrelevance states that when two competing models predict a data set equally well, that data set cannot be used to discriminate the models and, for that specific purpose, the data set is evidentially irrelevant. To highlight the ramifications of the principle, we first show how a single binomial observation can be irrelevant in the sense that it carries no evidential value for discriminating the null hypothesis θ = 1/2 from a broad class of alternative hypotheses that allow θ to range between 0 and 1. In contrast, the Bayesian credible interval suggests that a single binomial observation does provide some evidence against the null hypothesis. We then generalize this paradoxical result to infinitely long data sequences that are predictively irrelevant throughout. Examples feature a test of a binomial rate and a test of a normal mean. These maximally uninformative data (MUD) sequences yield credible intervals and confidence intervals that are certain to exclude the point under test as the sequence lengthens. The resolution of this paradox requires the insight that interval estimation methods, and consequently p values, may not be used for model comparison involving a point null hypothesis.
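To see the single-observation case numerically, here is a minimal sketch, assuming a uniform Beta(1, 1) prior on θ under the alternative (one member of the broad class of alternatives mentioned above; the variable names are ours, not the paper's):

```python
from scipy import stats
from scipy.integrate import quad

# A single binomial observation: one success out of one trial.
y, n = 1, 1

# Marginal likelihood under H0: theta = 1/2 exactly.
m0 = 0.5 ** n  # = 0.5

# Marginal likelihood under H1: theta ~ Beta(1, 1), i.e. uniform on [0, 1].
m1, _ = quad(lambda t: stats.binom.pmf(y, n, t), 0, 1)  # integral of t dt = 0.5

# Both models predicted the datum equally well: Bayes factor = 1,
# so the observation is predictively irrelevant.
bf01 = m0 / m1

# Yet the posterior under H1 is Beta(2, 1), whose mass tilts away from 1/2:
posterior = stats.beta(1 + y, 1 + n - y)
ci = posterior.ppf([0.025, 0.975])      # central 95% credible interval
mass_below_half = posterior.cdf(0.5)    # 0.25, down from the prior's 0.50
```

The Bayes factor of exactly 1 shows the datum cannot discriminate the models, even though the posterior (and the interval computed from it) has shifted relative to the prior, which is what invites the mistaken impression of evidence against the null.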

Preprint: Teaching Good Research Practices: Protocol of a Research Master Course

This post is an extended synopsis of Sarafoglou A., Hoogeveen S., Matzke D., & Wagenmakers, E.-J. (in press). Teaching Good Research Practices: Protocol of a Research Master Course. Preprint available on PsyArXiv: https://psyarxiv.com/gvesh/


The current crisis of confidence in psychological science has spurred field-wide reforms to enhance transparency, reproducibility, and replicability. To solidify these reforms within the scientific community, student courses on open science practices are essential. Here we describe the content of our Research Master course “Good Research Practices”, which we designed and taught at the University of Amsterdam. Supported by Chambers’ recent book The 7 Deadly Sins of Psychology, the course covered topics such as questionable research practices (QRPs), the importance of direct and conceptual replication studies, preregistration, and the public sharing of data, code, and analysis plans. We adopted a pedagogical approach that (1) reduced teacher-centered lectures to a minimum; (2) emphasized practical training in open science practices; and (3) encouraged students to engage in the ongoing discussions in the open science community on social media platforms. In this course, we alternated regular classes with classes organized by students. An example of each is given below. In addition, Table 1 displays a selection of further topics discussed in the course.