Bayesian Inference in Three Minutes

Recently I was asked to introduce Bayesian inference in three minutes flat.

In 10 slides, available at https://osf.io/68y75/, I made the following points:

  1. Bayesian inference is “common sense expressed in numbers” (Laplace)
  2. We start with at least two rival accounts of the world, aka hypotheses.
  3. These hypotheses make predictions, and the quality of those predictions determines each hypothesis's change in plausibility: hypotheses that predicted the observed data relatively well receive a boost in credibility, whereas hypotheses that predicted the observed data relatively poorly suffer a decline.
  4. “Today’s posterior is tomorrow’s prior” (Lindley) – the cycle of knowledge updating and Bayesian learning never ends.
  5. When we learn, we (ought to) do so using Bayes’ rule: new knowledge equals old knowledge times a predictive updating factor. 
  6. We use Bayes’ rule in order to avoid internal inconsistencies (i.e., inference that is silly, farcical, or ridiculous – pick your favorite term). When there are no internal inconsistencies the system is called coherent.
  7. Be coherent! (Lindley, de Finetti, and –implicitly– all Bayesians)
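Points 2 through 5 can be sketched in a few lines of code. This is a minimal illustration with made-up numbers, not anything from the slides: two rival hypotheses start equally plausible, the data are predicted well by one and poorly by the other, and Bayes' rule turns those predictive qualities into updated plausibilities. Running the update again on the result illustrates "today's posterior is tomorrow's prior."

```python
# A sketch of points 2-5: two rival hypotheses, Bayes' rule, and the
# updating cycle. All numbers are hypothetical.

def bayes_update(priors, likelihoods):
    """Bayes' rule: posterior is proportional to prior times likelihood
    (the predictive updating factor), normalized to sum to one."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)  # marginal probability of the data
    return [u / total for u in unnormalized]

# Point 2: two rival accounts of the world, equally plausible a priori.
priors = [0.5, 0.5]

# Point 3: hypothetical predictive quality for the observed data.
# H1 predicted the data well; H2 predicted them poorly.
likelihoods = [0.8, 0.2]

posteriors = bayes_update(priors, likelihoods)
# H1 receives a boost in credibility; H2 suffers a decline.

# Point 4: today's posterior is tomorrow's prior -- the cycle never ends.
posteriors_after_replication = bayes_update(posteriors, likelihoods)
```

With these numbers the first update moves the plausibilities from 50/50 to 80/20, and a second data set of the same predictive quality pushes H1's plausibility higher still.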


About the author

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is a professor at the Psychological Methods Group at the University of Amsterdam.