Rosenkrantz on Severity and the Problem of Old Evidence

In the previous post I discussed the problem of old evidence and wrote “It is highly likely that my argument is old, or even beside the point. I am not an expert on this particular problem.” Sure enough, Andrew Fowlie kindly alerted me to the following book chapter by Roger Rosenkrantz: Rosenkrantz, R. D. (1983). Why Glymour is a Bayesian. In…

read more

The Problem of Old Evidence

To my shame and regret, I only recently found the opportunity to read the book “Bayesian philosophy of science” (BPS) by Jan Sprenger and Stephan Hartmann. It turned out to be a wonderful book, in appearance and typesetting as well as in content. The book confirmed many of my prior beliefs ;-) but it also made me think about the…

read more

Bayesian Inference in Three Minutes

Recently I was asked to introduce Bayesian inference in three minutes flat. In 10 slides, available at https://osf.io/68y75/, I made the following points: Bayesian inference is “common sense expressed in numbers” (Laplace). We start with at least two rival accounts of the world, aka hypotheses. These hypotheses make predictions, the quality of which determines their change in plausibility: hypotheses that…
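
For the arithmetic behind these points, here is a minimal Python sketch (the function and the numbers are illustrative, not taken from the slides): plausibility is updated by multiplying the prior odds with the Bayes factor, the ratio of how well each hypothesis predicted the data.

```python
def update_odds(prior_odds: float, p_data_h1: float, p_data_h2: float) -> float:
    """Posterior odds of H1 over H2 after observing the data.

    p_data_h1 and p_data_h2 are the probabilities that the rival hypotheses
    assigned to the observed data (their predictive quality); the values used
    below are purely illustrative.
    """
    bayes_factor = p_data_h1 / p_data_h2
    return prior_odds * bayes_factor


# Start at even odds (1:1); if H1 predicted the data three times better than
# H2, the posterior odds become 3:1.
print(update_odds(prior_odds=1.0, p_data_h1=0.30, p_data_h2=0.10))  # 3.0
```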

read more

A Presentation on Transparency in Science and Statistics

This month I gave a 45-minute presentation “Transparency in Science and Statistics” for the Italian Reproducibility Network. This presentation reflects my recent thinking on the topic. Important themes include “how to use a Ulysses contract to avoid fooling yourself (and others)”, “how to reveal uncertainty that often remains hidden”, “what is model-myopia (and how to avoid it)”, and “can Mertonian…

read more

Redefine Statistical Significance XX: A Chat on P-values with ChatGPT

TLDR: ChatGPT rocks. It has been more than two years since the last post in the “Redefine Statistical Significance” series (https://www.bayesianspectacles.org/redefine-statistical-significance-xix-monkey-business/). In this series, Quentin Gronau and I demonstrated through countless examples that p-values just below .05 (“reject the null”!) should be interpreted with great caution, as such p-values provide, at best, only weak evidence against the null (see Benjamin…
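
One well-known way to make the “weak evidence” point concrete is an upper bound on the evidence a p-value can provide, such as the Vovk-Sellke bound; the sketch below is an illustration under that assumption, not necessarily the exact computation used in the series.

```python
from math import e, log

def vovk_sellke_mpr(p: float) -> float:
    """Upper bound on the Bayes factor against the null implied by a p-value,
    1 / (-e * p * ln p); the bound holds for 0 < p < 1/e."""
    if not 0.0 < p < 1.0 / e:
        raise ValueError("the bound applies only for 0 < p < 1/e")
    return 1.0 / (-e * p * log(p))

# A p-value just below .05 caps the evidence against the null at roughly 2.5,
# which is conventionally considered weak.
print(round(vovk_sellke_mpr(0.049), 2))  # about 2.49
```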

read more

Bayes Factors for Those who Hate Bayes Factors, Part III: The Coherence Plot

Coherence Revisited. The previous post gave a demonstration of Bayes factor coherence. Specifically, the post considered a test for a binomial parameter θ, pitting the null hypothesis H0: θ = 0.5 against the alternative hypothesis H1: θ ~ Beta(1, 1) (i.e., the uniform distribution from 0 to 1). For fictitious data composed of 5 successes and 5 failures, the Bayes factor equals about 2.71 in favor of H0. We…
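
The number is easy to reproduce. Below is a minimal Python sketch, assuming the binomial likelihood and the uniform prior described above (an illustration, not code from the post).

```python
from math import comb

def bf01_binomial(successes: int, failures: int, theta0: float = 0.5) -> float:
    """Bayes factor for H0: theta = theta0 versus H1: theta ~ Uniform(0, 1).
    Under the uniform prior, the marginal probability of k successes in
    n trials equals 1 / (n + 1), which gives a simple closed form."""
    n = successes + failures
    p_data_h0 = comb(n, successes) * theta0**successes * (1 - theta0)**failures
    p_data_h1 = 1.0 / (n + 1)
    return p_data_h0 / p_data_h1

# Five successes and five failures: about 2.71 in favor of the null.
print(round(bf01_binomial(5, 5), 2))  # 2.71
```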

read more

Bayes Factors for Those who Hate Bayes Factors, Part II: Lord Ludicrus, Vampire Count of Incoherence, Insists on a Dance

Image by Doremi/Shutterstock.com. TLDR: The Last Dance. This post demonstrates how Bayes factors are coherent in the sense that the same result obtains regardless of whether the data are analyzed all at once, in batches, or one at a time. The key point is that this coherence arises because Bayes factors are relatively sensitive to the prior distribution. Ironically, the…
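
The coherence property is easy to check numerically. Here is a minimal Python sketch, assuming the conjugate binomial setup (H0: θ = 0.5 versus H1: θ ~ Beta(a, b)) with made-up data; it is an illustration, not code from the post.

```python
from math import exp, lgamma, log

def log_beta(a: float, b: float) -> float:
    """Log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bf10(successes: int, failures: int, a: float = 1.0, b: float = 1.0,
         theta0: float = 0.5) -> float:
    """Bayes factor for H1: theta ~ Beta(a, b) over H0: theta = theta0,
    for an observed sequence with the given counts (order terms cancel)."""
    log_marginal_h1 = log_beta(a + successes, b + failures) - log_beta(a, b)
    log_likelihood_h0 = successes * log(theta0) + failures * log(1 - theta0)
    return exp(log_marginal_h1 - log_likelihood_h0)

# Analyze ten observations (6 successes, 4 failures) all at once ...
bf_all = bf10(6, 4)

# ... or in two batches of five, where the second batch uses the posterior
# from the first batch, Beta(1 + 3, 1 + 2), as its prior. The two batch Bayes
# factors multiply to the same overall value: that is the coherence property.
bf_batch1 = bf10(3, 2)
bf_batch2 = bf10(3, 2, a=1 + 3, b=1 + 2)
print(round(bf_all, 6), round(bf_batch1 * bf_batch2, 6))  # identical values
```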

read more

Rejoinder – No Evidence for Nudging After Adjusting for Publication Bias

The Datacolada post “Meaningless Means: The Average Effect of Nudging is d = 0.43” critiques the recent PNAS meta-analysis on nudging and our commentary “No Evidence for Nudging After Adjusting for Publication Bias” (Maier et al., 2022) for pooling studies that are very heterogeneous. The critique, in fact, echoes many Twitter comments which raised the heterogeneity question immediately after our…

read more