

In the previous post I discussed the famous de Finetti (1974) preface, containing the iconic statement “PROBABILITY DOES NOT EXIST”. As mentioned in that post, many statisticians and philosophers of science believe that, together with Frank Ramsey, de Finetti was the first real subjectivist. Fellow subjectivist Dennis Lindley, for instance, always expressed a fawning admiration for de Finetti, calling him “the great genius of probability” (Lindley, 2000, p. 336).

But was de Finetti really the first subjectivist? I am not sure, especially after reading An Essay on Probabilities and on Their Application to Life Contingencies and Insurance Offices, published by Augustus de Morgan in 1838 (!). Here is the cover of the book:


Together with Frank Ramsey, the Italian “radical probabilist” Bruno de Finetti is widely considered to be the main progenitor and promoter of the idea that probability is inherently subjective. According to this view, all we can do is specify our prior beliefs and then ensure that they remain coherent, that is, free from internal inconsistencies. And the only way to ensure such coherence is to update those beliefs in light of new data through the use of Bayes’ rule.
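The updating step that keeps beliefs coherent can be made concrete with a conjugate toy example (all numbers below are hypothetical and chosen purely for illustration): a Beta prior on a binomial success rate, revised by Bayes’ rule after observing new data.

```r
# Minimal sketch of coherent belief updating via Bayes' rule:
# a Beta prior on a binomial rate, updated by conjugacy (hypothetical numbers).
a <- 1; b <- 1            # prior beliefs: Beta(1, 1), i.e., uniform
s <- 7; f <- 3            # new data: 7 successes, 3 failures
a.post <- a + s           # posterior: Beta(a + s, b + f) = Beta(8, 4)
b.post <- b + f
c(prior.mean = a / (a + b),                  # 0.5
  posterior.mean = a.post / (a.post + b.post))  # 8/12 = 0.667
```

Whatever the prior, the same rule applies, which is precisely what guarantees that the updated beliefs remain internally consistent.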

Dennis Lindley once stated that a decent study of de Finetti would take a statistician one or two years (but that it would be worth the investment). Recently I decided to bite the bullet and order the reprint of de Finetti’s standard work “Theory of Probability”. After browsing the book I must say that it looks much less daunting than I had anticipated; perhaps this is because I have already accepted the main Bayesian premise, or because I am used to reading work by Harold Jeffreys. At any rate, de Finetti’s writing is clear and lively, and I look forward to studying its contents in more detail.

Workshop “Design and Analysis of Replication Studies”, January 23-24

The Center for Reproducible Science (CRS) in Zurich opens the new year by organizing a workshop “Design and Analysis of Replication Studies”. The goal of this workshop is to have “a thorough methodological discussion regarding the design and the analysis of replication studies including specialists from different fields such as clinical research, psychology, economics and others.”

I quite look forward to attending this workshop. The speakers include a former PhD student (Don van Ravenzwaaij), current collaborators (some of whom I’ve never met in person), and a stray statistician who is intelligent, knowledgeable, and nonetheless explicitly un-Bayesian; in other words, a complete and utter enigma. The workshop also forced me to revisit the Bayesian perspective on quantifying replication success. Previously, in work with Josine Verhagen and Alexander Ly, we had promoted the “replication Bayes factor”, in which the posterior distribution from the original study is used as the prior distribution for testing the effect in the replication study. However, this setup can be generalized considerably, as indicated in my workshop abstract below:
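The core idea of the replication Bayes factor can be sketched in a toy binomial setting. Everything below is a hypothetical illustration, not the model from the workshop abstract: two made-up studies, a Beta(1, 1) prior in the original study, and a point null hypothesis of chance performance.

```r
# Toy sketch of the replication Bayes factor: the posterior from the
# original study serves as the prior for analyzing the replication.
s1 <- 60; n1 <- 100               # original study: successes, trials (made up)
s2 <- 55; n2 <- 100               # replication study (made up)
a <- 1 + s1; b <- 1 + (n1 - s1)   # posterior of the original = Beta(a, b)

# Marginal likelihood of the replication data under the Beta(a, b) prior
# (a beta-binomial), computed on the log scale for numerical stability:
logml.H1 <- lchoose(n2, s2) + lbeta(a + s2, b + n2 - s2) - lbeta(a, b)
# Marginal likelihood under the point null H0: theta = 0.5
logml.H0 <- lchoose(n2, s2) + n2 * log(0.5)

BF.rep <- exp(logml.H1 - logml.H0)  # replication Bayes factor for H1 over H0
BF.rep
```

Values of `BF.rep` greater than 1 indicate that the replication data are better predicted by the “proponent” who holds the original study’s posterior than by the skeptic’s null hypothesis.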

Poisson Regression in Labor Law

R code for the reported analyses is available at https://osf.io/sfam7/.

My wife Nataschja teaches labor law at Utrecht University. For one of her papers she needed to evaluate the claim that “over the past 35 years, the number of applications processed by the AAC (Advice and Arbitration Committee) has decreased”. After collecting the relevant data Nataschja asked me whether I could help her out with a statistical analysis. Before diving in, below are the raw data and the associated histogram:


# NB: “Gegevens” means “Data”, “Jaar” means “Year”, and “Aanvragen” means “Applications”
Gegevens <- data.frame(
  Jaar      = seq(from = 1985, to = 2019),
  Aanvragen = c(6, 3, 4, 3, 6, 3, 2, 4, 0, 2, 3, 1, 3, 3, 2, 7, 0, 1, 2, 4, 2, 1,
                # [snippet truncated here; the remaining counts are in the script at https://osf.io/sfam7/]


NB. “Aantal behandelde aanvragen” means “number of processed applications”.

Based on a visual inspection most people would probably conclude that there has indeed been a decrease in the number of processed applications over the years, although that decrease is due mainly to the relatively high numbers of processed applications in the first five years (more on this later).

Below I will describe the analyses that I conducted without the benefit of knowing a lot about the subject area. Indeed, I also didn’t know much about the analysis itself. In experimental psychology, the methodologist feeds on a steady diet: a t-test for breakfast, a correlation for lunch, and an ANOVA for dinner, interrupted by the occasional snack of a contingency table. After some thought, I felt that this data set cried out for Poisson regression — the dependent variable consists of counts, and “year” is the predictor of interest. By testing whether we need the predictor “year”, we can more or less answer Nataschja’s question directly. Poisson regression has not yet been added to JASP, and this is why I am presenting R code here (the complete code is at https://osf.io/sfam7/).
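In base R, this kind of test can be sketched with `glm` and a likelihood-ratio comparison. The sketch below uses only the 22 counts visible in the snippet above (for the years 1985–2006); the complete data set and the actual reported analysis are at https://osf.io/sfam7/.

```r
# Poisson regression: do the yearly counts trend with "Jaar" (year)?
# Counts are the first 22 values shown above (1985-2006);
# the full data set is at https://osf.io/sfam7/.
Gegevens <- data.frame(
  Jaar      = 1985:2006,
  Aanvragen = c(6, 3, 4, 3, 6, 3, 2, 4, 0, 2, 3, 1, 3, 3, 2, 7, 0, 1, 2, 4, 2, 1)
)
fit.null <- glm(Aanvragen ~ 1,    family = poisson, data = Gegevens)
fit.full <- glm(Aanvragen ~ Jaar, family = poisson, data = Gegevens)
# Likelihood-ratio test of whether the predictor "Jaar" is needed:
anova(fit.null, fit.full, test = "LRT")
```

The sign of the fitted coefficient, `coef(fit.full)["Jaar"]`, indicates the direction of the trend on the log-rate scale, and the likelihood-ratio test indicates whether including “year” improves the fit over an intercept-only model.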

Unpacking the Disagreement: Guest Post by Donkin and Szollosi

This post is a response to the previous post A Breakdown of “Preregistration is Redundant, at Best”.

We were delighted to see how interested people were in the short paper we wrote on preregistration with our co-authors (now published at Trends in Cognitive Sciences – the revised version of which has been uploaded). First, a note on the original title. As EJ correctly reconstructed in his review, we initially gave the provocative title “Preregistration is redundant, at best” in an effort to push back against the current idolizing attitude towards preregistration. What we meant by redundancy was simply that preregistration is not diagnostic of good science (we tried to bring out this point more clearly in the revision, now titled “Is preregistration worthwhile?”). Many correctly noted that this can be said of any one method of science. Our argument is that we should not promote and reward any one method, but rather good arguments and good theory (or, rather, acts that move us in the direction of good theory).

Based on EJ’s post, it seems that we agree in many ways with proponents of preregistration (e.g., that there’s room and need for improvement in the behavioral and social sciences). However, there remains much we disagree on. In the following we try to (start to) articulate some of the points of disagreement in order to identify why we, ultimately, reach such different conclusions.
