Powered by JASP

How to Evaluate a Subjective Prior Objectively

The Misconception

Gelman and Hennig (2017, p. 989) argue that subjective priors cannot be evaluated by means of the data:

“However, priors in the subjectivist Bayesian conception are not open to falsification (…), because by definition they must be fixed before observation. Adjusting the prior after having observed the data to be analysed violates coherence. The Bayesian system as derived from axioms such as coherence (…) is designed to cover all aspects of learning from data, including model selection and rejection, but this requires that all potential later decisions are already incorporated in the prior, which itself is not interpreted as a testable statement about yet unknown observations. In particular this means that, once a coherent subjectivist Bayesian has assessed a set-up as exchangeable a priori, he or she cannot drop this assumption later, whatever the data are (think of observing 20 0s, then 20 1s, and then 10 further 0s in a binary experiment)”

Similar claims have been made in the scholarly review paper by Consonni et al. (2018, p. 628): “The above view of “objectivity” presupposes that a model has a different theoretical status relative to the prior: it is the latter which encapsulates the subjective uncertainty of the researcher, while the model is less debatable, possibly because it can usually be tested through data.”

The Correction

Statistical models are a combination of likelihood and prior that together yield predictions for observed data (Box, 1980; Evans, 2015). The adequacy of these predictions can be rigorously assessed using Bayes factors (Wagenmakers, 2017; but see the blog post by Christian Robert, further discussed below). In order to evaluate the empirical success of a particular subjective prior distribution, we require multiple subjective Bayesians, or a single “schizophrenic” subjective Bayesian who is willing to entertain several different priors.
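To make this concrete, here is a minimal sketch (with made-up data; the priors Beta(2, 2) and Beta(1, 1) are illustrative choices, not taken from any of the papers above). For binomial data, the marginal likelihood under a Beta prior has a closed form, so the Bayes factor comparing two subjective priors reduces to a ratio of Beta functions:

```python
from math import lgamma, exp

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(k, n, a, b):
    # log marginal likelihood of k successes in n trials under a
    # Beta(a, b) prior; the binomial coefficient is omitted because
    # it cancels in the Bayes factor
    return log_beta(k + a, n - k + b) - log_beta(a, b)

# hypothetical data: 7 successes in 10 trials
k, n = 7, 10

# two subjective Bayesians: prior A is Beta(2, 2), prior B is Beta(1, 1)
bf_ab = exp(log_marginal(k, n, 2, 2) - log_marginal(k, n, 1, 1))
print(round(bf_ab, 2))  # ≈ 1.23: prior A predicted these data slightly better
```

Here prior A out-predicts prior B by a modest factor; with more data, the predictive performance of competing subjective priors can be discriminated more sharply, which is exactly the sense in which a subjective prior is open to empirical evaluation.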

Did Alan Turing Invent the Bayes factor?

The otherwise excellent article by Consonni et al. (2018), discussed last week, makes the following claim:

“…the initial use of the BF can be attributed both to Jeffreys and Turing who introduced it independently around the same time (Kass & Raftery, 1995)” (Consonni et al., 2018, p. 638)

This claim recently resurfaced on Twitter as well.

But is this really true?

“Prior Distributions for Objective Bayesian Analysis”

The purpose of this blog post is to call attention to the paper “Prior Distributions for Objective Bayesian Analysis”, authored by Guido Consonni, Dimitris Fouskakis, Brunero Liseo, and Ioannis Ntzoufras (NB: Ioannis is a member of the JASP advisory board!). The paper, published in the journal “Bayesian Analysis”, provides a comprehensive overview of objective Bayesian analysis, with an emphasis on model selection and linear regression.

Book Review of “Bayesian Probability for Babies”

“Bayesian Probability for Babies” is a book that explains Bayes’ rule through a simple story about cookies. I battle-tested the book on my two-year-old son Theo (admittedly no longer a baby), and he seemed somewhat intrigued by the idea of candy-covered cookies, although the more subtle points of the story must have eluded him. Theo gives the book three out of five stars: the cookies are a good idea, but the book has no dinosaurs.

Progesterone in Women with Bleeding in Early Pregnancy: Absence of Evidence, Not Evidence of Absence

Available at https://psyarxiv.com/etk7g/, this is a comment on a recent article in the New England Journal of Medicine (Coomarasamy et al., 2019). A response by the authors will follow at a later point.

A recent trial assessed the effectiveness of progesterone in preventing miscarriages. The live-birth rate was 74.7% (1513/2025) in the progesterone group and 72.5% (1459/2013) in the placebo group (p=.08). The authors concluded: “The incidence of adverse events did not differ significantly between the groups.”
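A nonsignificant p-value by itself is not evidence that the effect is absent; that has to be quantified. As a sketch only (it assumes uniform Beta(1, 1) priors on each rate for convenience, which is not necessarily the prior used in the published comment, and the resulting Bayes factor is sensitive to that choice), a two-proportion Bayes factor compares “no difference” against “some difference” in closed form:

```python
from math import lgamma, exp

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

# live births / total per arm (Coomarasamy et al., 2019)
s1, n1 = 1513, 2025  # progesterone
s2, n2 = 1459, 2013  # placebo
f1, f2 = n1 - s1, n2 - s2

# H0: one common rate theta ~ Beta(1, 1) for both arms
# H1: independent rates theta1, theta2, each ~ Beta(1, 1)
# binomial coefficients cancel in the ratio, so they are omitted
log_m0 = log_beta(s1 + s2 + 1, f1 + f2 + 1)
log_m1 = log_beta(s1 + 1, f1 + 1) + log_beta(s2 + 1, f2 + 1)

bf01 = exp(log_m0 - log_m1)  # values > 1 favor "no difference"
print(bf01)
```

Under these (wide, and therefore H0-friendly) priors the Bayes factor leans toward the null, but the strength of that evidence depends heavily on the prior for the difference, which is precisely why the distinction between “absence of evidence” and “evidence of absence” requires an explicit model comparison rather than a glance at the p-value.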
