
PROBABILITY DOES NOT EXIST (Part III): De Finetti’s 1974 Preface (Part I)

In an earlier blogpost I complained that the reprint of Bruno de Finetti’s masterpiece “Theory of Probability” is based on the 1970 version, and that the famous preface to the 1974 edition is missing. This blogpost provides an annotated version of that preface (de Finetti, 1974, pp. x-xiv). As the preface spans about four pages, it will take several posts to cover it all. Below, the use of italics is always as in the original text.

De Finetti’s Preface [Annotated]

“Is it possible that in just a few lines I can achieve what I failed to achieve in my many books and articles? Surely not. Nevertheless, this preface affords me the opportunity, and I shall make the attempt. It may be that misunderstandings which persist in the face of refutations dispersed or scattered over some hundreds of pages can be resolved once and for all if all the arguments are pre-emptively piled up against them.”


Book Review of “Bayesian Statistics the Fun Way”

The subtitle says it all: “Understanding statistics and probability with Star Wars, Lego, and rubber ducks”. And the author, Will Kurt, does not disappoint: the writing is no-nonsense, the content is understandable, the examples are engaging, and the Bayesian concepts are explained clearly. Here are some of the book’s features that I particularly enjoyed:

Concerns About the Default Cauchy Are Often Exaggerated: A Demonstration with JASP 0.12

Contrary to most of the published literature, the impact of the Cauchy prior width on the t-test Bayes factor is seen to be surprisingly modest. Removing the most extreme 50% of the prior mass can at best double the Bayes factor against the null hypothesis, the same impact as conducting a one-sided instead of a two-sided test. We demonstrate this with the help of the “Equivalence T-Test” module, which was added in JASP 0.12.

We recently revised a comment on a scholarly article by Jorge Tendeiro and Henk Kiers (henceforth TK). Before getting to the main topic of this post, here is the abstract:

Tendeiro and Kiers (2019) provide a detailed and scholarly critique of Null Hypothesis Bayesian Testing (NHBT) and its central component, the Bayes factor, which allows researchers to update knowledge and quantify statistical evidence. Tendeiro and Kiers conclude that NHBT constitutes an improvement over frequentist p-values, but primarily elaborate on a list of eleven ‘issues’ of NHBT. We believe that several issues identified by Tendeiro and Kiers are of central importance for elucidating the complementary roles of hypothesis testing versus parameter estimation and for appreciating the virtue of statistical thinking over conducting statistical rituals. But although we agree with many of their thoughtful recommendations, we believe that Tendeiro and Kiers are overly pessimistic, and that several of their ‘issues’ with NHBT may in fact be conceived as pronounced advantages. We illustrate our arguments with simple, concrete examples and end with a critical discussion of one of the recommendations by Tendeiro and Kiers, which is that “estimation of the full posterior distribution offers a more complete picture” than a Bayes factor hypothesis test.


A Primer on Bayesian Model-Averaged Meta-Analysis

This post is an extended synopsis of a preprint that is available on PsyArXiv: https://psyarxiv.com/97qup/


Meta-analysis is the predominant approach for quantitatively synthesizing a set of studies. If the studies themselves are of high quality, meta-analysis can provide valuable insights into the current state of scientific knowledge about a particular phenomenon. In psychological science, the most common approach is to conduct frequentist meta-analysis. In this primer, we discuss an alternative method, Bayesian model-averaged meta-analysis. This procedure combines the results of four Bayesian meta-analysis models: (1) fixed-effect null hypothesis, (2) fixed-effect alternative hypothesis, (3) random-effects null hypothesis, and (4) random-effects alternative hypothesis. These models are combined according to their plausibilities in light of the observed data to address the two key questions “Is the overall effect non-zero?” and “Is there between-study variability in effect size?”. Bayesian model-averaged meta-analysis therefore avoids the need to select either a fixed-effect or random-effects model and instead takes into account model uncertainty in a principled manner.
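The combination step can be sketched in a few lines. The marginal likelihood values below are made up for illustration; only the averaging mechanics, posterior model probabilities by Bayes’ rule and an inclusion Bayes factor for the effect, follow the description above.

```python
import numpy as np

# Hypothetical (made-up) log marginal likelihoods of the four models:
# fixed-effect null, fixed-effect alternative,
# random-effects null, random-effects alternative.
log_ml = np.array([-42.0, -39.5, -41.2, -38.9])
prior = np.array([0.25, 0.25, 0.25, 0.25])  # equal prior model probabilities

# Posterior model probabilities (Bayes' rule over models;
# subtract the max log marginal likelihood for numerical stability).
post = prior * np.exp(log_ml - log_ml.max())
post /= post.sum()

# "Is the overall effect non-zero?": models 2 and 4 include the effect.
effect = np.array([False, True, False, True])
post_odds = post[effect].sum() / post[~effect].sum()
prior_odds = prior[effect].sum() / prior[~effect].sum()
bf_inclusion = post_odds / prior_odds
print(f"P(effect | data) = {post[effect].sum():.3f}, inclusion BF = {bf_inclusion:.2f}")
```

The same posterior model probabilities also answer the second question (between-study variability) by summing over the two random-effects models instead.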

Omit Needless Words: An Unapproachable Example of Conciseness Related by the Traveling Chinese Story-teller Kai Lung

As mentioned in an earlier post, the epigraphs in Harold Jeffreys’s 1935 geophysics book “Earthquakes and mountains” prompted me to read “The Wallet of Kai Lung”, a collection of short stories by Ernest Bramah Smith (1868-1942). In one of the stories, “The confession of Kai Lung”, the traveling Chinese story-teller Kai Lung relates the following autobiographical tale, “an unapproachable example of conciseness”:

Preprint: A Tutorial on Bayesian Multi-Model Linear Regression with BAS and JASP

This post is a teaser for van den Bergh, D., Clyde, M. A., Raj, A., de Jong, T., Gronau, Q. F., Marsman, M., Ly, A., and Wagenmakers, E.-J. (2020). A Tutorial on Bayesian Multi-Model Linear Regression with BAS and JASP. Preprint available on PsyArXiv: https://psyarxiv.com/pqju6/


Linear regression analyses commonly involve two consecutive stages of statistical inquiry. In the first stage, a single ‘best’ model is defined by a specific selection of relevant predictors; in the second stage, the regression coefficients of the winning model are used for prediction and for inference concerning the importance of the predictors. However, such second-stage inference ignores the model uncertainty from the first stage, resulting in overconfident parameter estimates that generalize poorly. These drawbacks can be overcome by model averaging, a technique that retains all models for inference, weighting each model’s contribution by its posterior probability. Although conceptually straightforward, model averaging is rarely used in applied research, possibly due to the lack of easily accessible software. To bridge the gap between theory and practice, we provide a tutorial on linear regression using Bayesian model averaging in JASP, based on the BAS package in R. Firstly, we provide theoretical background on linear regression, Bayesian inference, and Bayesian model averaging. Secondly, we demonstrate the method on an example data set from the World Happiness Report. Lastly, we discuss limitations of model averaging and directions for dealing with violations of model assumptions.
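The core weighting idea is simple to illustrate. The numbers below are hypothetical, not BAS output: given posterior model probabilities and per-model coefficient estimates, the model-averaged estimate is the probability-weighted sum, and the inclusion probability of a predictor is the total mass on models that contain it.

```python
import numpy as np

# Hypothetical posterior model probabilities and per-model estimates
# of the coefficient for predictor x1 (0 when x1 is excluded).
models = {            # included predictors : (P(M | data), beta_x1 given M)
    "intercept only": (0.05, 0.00),
    "x1":             (0.40, 0.52),
    "x2":             (0.10, 0.00),
    "x1 + x2":        (0.45, 0.48),
}
probs = np.array([p for p, _ in models.values()])
betas = np.array([b for _, b in models.values()])

# Model-averaged estimate: each model contributes in proportion
# to its posterior probability, instead of picking one 'best' model.
beta_avg = np.sum(probs * betas)

# Posterior inclusion probability of x1: mass on models containing x1.
incl_x1 = probs[[1, 3]].sum()
print(f"model-averaged beta_x1 = {beta_avg:.3f}, P(x1 included) = {incl_x1:.2f}")
```

Because excluded-predictor models contribute a coefficient of zero, the averaged estimate is automatically shrunk toward zero when the data leave the inclusion of x1 uncertain.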

