# PROBABILITY DOES NOT EXIST (Part III): De Finetti’s 1974 Preface (Part I)

In an earlier blogpost I complained that the reprint of Bruno de Finetti’s masterpiece “Theory of Probability” reproduces the 1970 version, and that the famous preface to the 1974 edition is missing. This blogpost provides an annotated version of that preface (de Finetti, 1974, pp. x-xiv). As the preface spans about four pages, it will take several posts to cover it all. Throughout, the use of italics is as in the original text.

### De Finetti’s Preface [Annotated]

“Is it possible that in just a few lines I can achieve what I failed to achieve in my many books and articles? Surely not. Nevertheless, this preface affords me the opportunity, and I shall make the attempt. It may be that misunderstandings which persist in the face of refutations dispersed or scattered over some hundreds of pages can be resolved once and for all if all the arguments are pre-emptively piled up against them.”

# Book Review of “Bayesian Statistics the Fun Way”

The subtitle says it all: “Understanding statistics and probability with Star Wars, Lego, and rubber ducks”. And the author, Will Kurt, does not disappoint: the writing is no-nonsense, the content is understandable, the examples are engaging, and the Bayesian concepts are explained clearly. Here are some of the book’s features that I particularly enjoyed:

# Concerns About the Default Cauchy Are Often Exaggerated: A Demonstration with JASP 0.12

Contrary to the impression given by much of the published literature, the impact of the Cauchy prior width on the t-test Bayes factor is surprisingly modest. Removing the most extreme 50% of the prior mass can at best double the Bayes factor against the null hypothesis, the same impact as conducting a one-sided instead of a two-sided test. We demonstrate this with the help of the “Equivalence T-Test” module, which was added in JASP 0.12.

We recently revised a comment on a scholarly article by Jorge Tendeiro and Henk Kiers (henceforth TK). Before getting to the main topic of this post, here is the abstract:

Tendeiro and Kiers (2019) provide a detailed and scholarly critique of Null Hypothesis Bayesian Testing (NHBT) and its central component, the Bayes factor, which allows researchers to update knowledge and quantify statistical evidence. Tendeiro and Kiers conclude that NHBT constitutes an improvement over frequentist p-values, but primarily elaborate on a list of eleven ‘issues’ of NHBT. We believe that several issues identified by Tendeiro and Kiers are of central importance for elucidating the complementary roles of hypothesis testing versus parameter estimation and for appreciating the virtue of statistical thinking over conducting statistical rituals. But although we agree with many of their thoughtful recommendations, we believe that Tendeiro and Kiers are overly pessimistic, and that several of their ‘issues’ with NHBT may in fact be conceived of as pronounced advantages. We illustrate our arguments with simple, concrete examples and end with a critical discussion of one of the recommendations by Tendeiro and Kiers, namely that “estimation of the full posterior distribution offers a more complete picture” than a Bayes factor hypothesis test.

# A Primer on Bayesian Model-Averaged Meta-Analysis

This post is an extended synopsis of a preprint that is available on PsyArXiv: https://psyarxiv.com/97qup/

### Abstract

Meta-analysis is the predominant approach for quantitatively synthesizing a set of studies. If the studies themselves are of high quality, meta-analysis can provide valuable insights into the current scientific state of knowledge about a particular phenomenon. In psychological science, the most common approach is to conduct frequentist meta-analysis. In this primer, we discuss an alternative method, Bayesian model-averaged meta-analysis. This procedure combines the results of four Bayesian meta-analysis models: (1) fixed-effect null hypothesis, (2) fixed-effect alternative hypothesis, (3) random-effects null hypothesis, and (4) random-effects alternative hypothesis. These models are combined according to their plausibilities in light of the observed data to address the two key questions “Is the overall effect non-zero?” and “Is there between-study variability in effect size?”. Bayesian model-averaged meta-analysis therefore avoids the need to select either a fixed-effect or random-effects model and instead takes into account model uncertainty in a principled manner.
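The model-averaging arithmetic can be made concrete with a toy sketch (the marginal likelihoods below are assumed values for illustration, not results from the preprint): each model’s prior plausibility is reweighted by how well it predicted the data, and the two key questions are answered by summing posterior probabilities across the relevant models.

```python
# Toy sketch of Bayesian model averaging over the four meta-analytic models.
# The marginal likelihoods are hypothetical numbers chosen for illustration.
marg_lik = {
    "fixed_null":  0.010,   # fixed-effect, no effect
    "fixed_alt":   0.025,   # fixed-effect, effect present
    "random_null": 0.008,   # random-effects, no effect
    "random_alt":  0.040,   # random-effects, effect present
}
prior = {m: 0.25 for m in marg_lik}  # equal prior plausibility for each model

# posterior model probability: prior times marginal likelihood, normalized
unnorm = {m: prior[m] * marg_lik[m] for m in marg_lik}
total = sum(unnorm.values())
posterior = {m: w / total for m, w in unnorm.items()}

# Question 1: "Is the overall effect non-zero?"
p_effect = posterior["fixed_alt"] + posterior["random_alt"]
# Question 2: "Is there between-study variability in effect size?"
p_heterogeneity = posterior["random_null"] + posterior["random_alt"]

# inclusion Bayes factor for the effect: posterior odds / prior odds
bf_incl = (p_effect / (1 - p_effect)) / (0.5 / 0.5)
```

Because the answers average over all four models, no single fixed-effect or random-effects model has to be selected; each contributes in proportion to its posterior plausibility.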