Corona and the Statistics Wars

As the corona crisis engulfs the world, politicians left and right are accused of “politicizing” the pandemic. To follow suit, I will try to weaponize the pandemic to argue in favor of Bayesian inference over frequentist inference.

In recent months it has become clear that the corona pandemic is not just fought by doctors, nurses, and entire populations as they implement social distancing; it is also fought by statistical modelers, armed with data. As the disease spreads, it becomes crucial to study it statistically: how contagious it is, how it may respond to particular policy measures, how many people will get infected, and how many hospital beds will be needed. Fundamentally, one of the key goals is prediction. Good predictions come with a measure of uncertainty, or at least present different scenarios ranging from pessimistic to optimistic.

So how do statistical models for corona make their predictions? I am not an epidemiologist, but the current corona modeling effort is clearly a process that unfolds as more data become available. Good models will continually consume new data (i.e., new corona cases, information from other countries, covariates, etc.) in order to update their predictions. In other words, the models learn from incoming data in order to make increasingly accurate predictions about the future. This process of continual learning, without post-hoc and ad-hoc corrections for “data snooping”, is entirely natural — to the best of my knowledge, nobody has yet proposed that predictions be corrected for the fact that the models were estimated on a growing body of data.
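This style of continual learning is exactly what Bayesian updating delivers: today’s posterior is tomorrow’s prior, and no correction for repeated looks is needed, because processing the data in one batch or case by case yields the identical posterior. A minimal sketch with a conjugate beta-binomial model (the counts are invented for illustration, not corona data):

```python
def update(alpha, beta, successes, failures):
    """Conjugate beta-binomial update: Beta(a, b) -> Beta(a + s, b + f)."""
    return alpha + successes, beta + failures

# Start from a uniform Beta(1, 1) prior on some rate of interest.
prior = (1, 1)

# Batch update: all data at once (30 positives out of 100 tests).
batch = update(*prior, 30, 70)

# Sequential update: the same data arriving in two waves.
seq = update(*prior, 12, 28)   # first wave: 12/40 positive
seq = update(*seq, 18, 42)     # second wave: 18/60 positive

print(batch, seq)  # identical posteriors: (31, 71) (31, 71)
```

The order and grouping of the data are irrelevant to the final posterior, which is why no “data snooping” correction arises.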

Preprint: Default Bayes Factors for Testing the (In)equality of Several Population Variances

This post summarizes Dablander, F., van den Berg, D., Ly, A., & Wagenmakers, E.-J. (2020). Default Bayes Factors for Testing the (In)equality of Several Population Variances. Preprint available on arXiv: https://arxiv.org/abs/2003.06278.


“Testing the (in)equality of variances is an important problem in many statistical applications. We develop default Bayes factor tests to assess the (in)equality of two or more population variances, as well as a test for whether the population variance equals a specific value. The resulting test can be used to check assumptions for commonly used procedures such as the t-test or ANOVA, or test substantive hypotheses concerning variances directly. We further extend the Bayes factor to allow H0 to have a null-region. Researchers may have directed hypotheses such as \sigma^2_1 > \sigma^2_2, or want to combine hypotheses about equality with hypotheses about inequality, for example \sigma^2_1 = \sigma^2_2 > (\sigma^2_3, \sigma^2_4). We generalize our Bayes factor to accommodate such hypotheses for K > 2 groups. We show that our Bayes factor fulfills a number of desiderata, provide practical examples illustrating the method, and compare it to a recently proposed fractional Bayes factor procedure by Böing-Messing and Mulder (2018). Our procedure is implemented in the R package bfvartest.”

David Spiegelhalter’s Gullible Skeptic, and a Bayesian “Hard-Nosed Skeptic” Reanalysis of the ANDROMEDA-SHOCK Trial

In a recent blog post, Bayesian icon David Spiegelhalter proposes a new analysis of the results from the ANDROMEDA-SHOCK randomized clinical trial. This trial was published in JAMA under the informative title “Effect of a Resuscitation Strategy Targeting Peripheral Perfusion Status vs Serum Lactate Levels on 28-Day Mortality Among Patients With Septic Shock”.

In JAMA, the authors summarize their findings as follows: “In this randomized clinical trial of 424 patients with early septic shock, 28-day mortality was 34.9% [74/212 patients] in the peripheral perfusion–targeted resuscitation [henceforth PPTR] group compared with 43.4% [92/212] in the lactate level–targeted resuscitation group, a difference that did not reach statistical significance.” The authors conclude that “These findings do not support the use of a peripheral perfusion–targeted resuscitation strategy in patients with septic shock.”
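For orientation, the reported counts can be run through a quick two-sample proportion z-test. This is only a rough check on the quoted numbers, not a reproduction of the trial’s primary analysis (which was a time-to-event comparison):

```python
from math import sqrt, erfc

deaths = (74, 92)   # PPTR group, lactate group (reported in JAMA)
n = (212, 212)

p1, p2 = deaths[0] / n[0], deaths[1] / n[1]           # 34.9% vs 43.4%
pooled = sum(deaths) / sum(n)                          # pooled mortality rate
se = sqrt(pooled * (1 - pooled) * (1 / n[0] + 1 / n[1]))
z = (p2 - p1) / se
p_value = erfc(abs(z) / sqrt(2))                       # two-sided p-value

print(round(z, 2), round(p_value, 2))  # z just under 1.8, p just above 0.05
```

The p-value lands slightly above the conventional 0.05 threshold, which is what drives the “did not reach statistical significance” wording, and what makes the trial such fertile ground for a Bayesian reanalysis.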

Misconception: The Relative Belief Ratio Equals the Marginal Likelihood

The Misconception

The relative belief ratio (e.g., Evans 2015, Horwich 1982/2016) equals the marginal likelihood.

The Correction

The relative belief ratio is proportional to the marginal likelihood. Dividing two marginal likelihoods (i.e., computing a Bayes factor) cancels the constant of proportionality, such that the Bayes factor equals the ratio of two complementary relative belief ratios (Evans 2015, p.109, proposition 4.3.1).
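The cancellation is easy to verify numerically. For a hypothesis H, the relative belief ratio is P(H | y)/P(H) = p(y | H)/p(y), so every relative belief ratio carries the same constant 1/p(y), which drops out of the ratio. A small sketch with invented prior probabilities and marginal likelihoods (purely for arithmetic):

```python
# Two complementary hypotheses; all numbers are invented for illustration.
prior_h1, prior_h0 = 0.3, 0.7          # prior probabilities
m1, m0 = 0.20, 0.05                    # marginal likelihoods p(y | H)

p_y = prior_h1 * m1 + prior_h0 * m0    # p(y), law of total probability

# Relative belief ratios: posterior over prior, which equals p(y | H) / p(y).
rb1 = (prior_h1 * m1 / p_y) / prior_h1   # = m1 / p_y, NOT m1 itself
rb0 = (prior_h0 * m0 / p_y) / prior_h0   # = m0 / p_y

# The shared constant 1/p(y) cancels: the RB ratio equals the Bayes factor.
print(rb1 / rb0, m1 / m0)  # both equal 4.0
```

Each relative belief ratio is thus proportional to, but not equal to, its marginal likelihood; only their ratio recovers the Bayes factor.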

The Explanation

In the highly recommended book Measuring Statistical Evidence Using Relative Belief, Evans (2015) defines evidence as follows (see also Carnap 1950, pp. 326-333; Horwich 1982/2016, p. 48; Keynes 1921, p. 170):

\begin{equation}
    \text{Evidence for } \theta = \frac{p(\theta \mid \text{data})}{p(\theta)},
\end{equation}

where \theta represents a parameter (or, more generally, a model, a hypothesis, a claim, or a proposition). In other words, data provide evidence for a claim \theta to the extent that they make \theta more likely than it was before. This is a sensible axiom; who would be willing to argue that data provide evidence for a claim when they make that claim less plausible than it was before?
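A concrete (hypothetical) instance: put a uniform Beta(1, 1) prior on a coin’s bias θ and observe 7 heads in 10 flips. The relative belief ratio at a value θ₀ is the posterior density over the prior density, which works out to the likelihood p(y | θ₀) divided by the prior predictive p(y):

```python
from math import comb

heads, tails = 7, 3
n = heads + tails

def posterior_pdf(theta):
    """Beta(1 + 7, 1 + 3) density under a uniform Beta(1, 1) prior."""
    # Normalizing constant of Beta(8, 4): 1 / B(8, 4) = 11! / (7! * 3!) = 1320
    return 1320 * theta**heads * (1 - theta)**tails

def relative_belief(theta):
    # The uniform prior has density 1 everywhere, so RB is the posterior density.
    return posterior_pdf(theta) / 1.0

# Equivalent route: RB(theta) = p(y | theta) / p(y).
likelihood = lambda t: comb(n, heads) * t**heads * (1 - t)**tails
marginal = comb(n, heads) / 1320          # p(y) = C(10, 7) * B(8, 4)

print(relative_belief(0.5))   # about 1.29: the data mildly support theta = 0.5
print(relative_belief(0.1))   # about 0.0001: strong evidence against theta = 0.1
```

The data raise the plausibility of θ = 0.5 (RB > 1) and all but demolish θ = 0.1 (RB ≪ 1), exactly as the axiom above demands.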
