
The Jeffreys-Fisher Maxim and the Bristol Theme in Chess

WARNING: This post starts with two chess studies. They are both magnificent, but if you don’t play chess you might want to skip them. I thank Ulrike Fischer for creating the awesome LaTeX package “chessboard”. NB. The idea discussed here also occurs in Haaf et al. (2019), the topic of a previous post.

The Bristol Theme

The game of chess is at once an art, a science, and a sport. In practical over-the-board play, the element of art usually takes a backseat to more pragmatic concerns such as opening preparation and positional evaluation. In endgame study composition, on the other hand, the art aspect reigns supreme. One of my favorite themes in endgame study composition is the Bristol clearance. Here is the study from 1861 that gave the theme its name:



The Best Statistics Book of All Time, According to a Twitter Poll

Some time ago I ran a Twitter poll to determine what people believe is the best statistics book of all time. This is the result:

The first thing to note about this poll is that there are only 26 votes. My disappointment at this low number intensified after I ran a control poll, which received more than double the votes:




Curiouser and Curiouser: Down the Rabbit Hole with the One-Sided P-value


WARNING: This is a Bayesian perspective on a frequentist procedure. Consequently, hard-core frequentists may protest and argue that, for the goals that they pursue, everything makes perfect sense. Bayesians will remain befuddled. Also, I’d like to thank Richard Morey for insightful, critical, and constructive comments.

In an unlikely alliance, Deborah Mayo and Richard Morey (henceforth: M&M) recently produced an interesting and highly topical preprint, “A poor prognosis for the diagnostic screening critique of statistical tests”. While reading it, I stumbled upon the following remarkable statement of fact (see also Casella & Berger, 1987):

“Let our goal be to test the hypotheses:

H_0: \mu \leq 100 against H_1: \mu > 100

The test is the same if we’re testing H_0: \mu = 100 against H_1: \mu > 100.”

Wait, what? This equivalence may be defensible from a frequentist point of view (e.g., if you reject H_0: \mu = 100 in favor of H_1: \mu > 100, then you will also reject all values of \mu below 100), but it violates common sense: the hypotheses “\mu \leq 100” and “\mu = 100” are not the same; they make different predictions and therefore ought to receive different support from the data.

As a demonstration, below I will discuss three concrete data scenarios. To prevent confusion, the hypothesis “\mu > 100” is denoted by H_+, the point-null hypothesis “\mu = 100” by H_0, and the hypothesis “\mu \leq 100” by H_-.
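Before turning to those scenarios, here is a minimal sketch of the underlying point, assuming normal data with known standard deviation; the sample numbers and the truncated-normal prior under H_- are illustrative choices of mine, not part of the original argument. The one-sided p-value is computed at the boundary \mu = 100 and is therefore identical whether the null is H_0 or H_-, yet the two hypotheses assign different predictive densities to the very same data:

```python
import numpy as np
from scipy import stats, integrate

# Hypothetical data: sample mean just below 100, n = 25, known sigma = 10
xbar, n, sigma = 99.0, 25, 10.0
se = sigma / np.sqrt(n)

# Frequentist one-sided test: the p-value is evaluated at the boundary
# mu = 100, so it is the same whether the null is H_0 or H_-.
z = (xbar - 100) / se
p_one_sided = 1 - stats.norm.cdf(z)

# Predictive density of xbar under the point null H_0: mu = 100
m_0 = stats.norm.pdf(xbar, loc=100, scale=se)

# Predictive density under H_-: mu <= 100, using an illustrative
# normal(100, 10) prior truncated to (-inf, 100] (an assumption)
def integrand(mu):
    prior = 2 * stats.norm.pdf(mu, loc=100, scale=10.0)  # renormalized truncation
    return stats.norm.pdf(xbar, loc=mu, scale=se) * prior

m_minus, _ = integrate.quad(integrand, -np.inf, 100)

print(f"one-sided p = {p_one_sided:.3f}")   # identical under H_0 and H_-
print(f"p(data | H_0) = {m_0:.4f}")
print(f"p(data | H_-) = {m_minus:.4f}")     # differs from p(data | H_0)
```

Because p(data | H_0) and p(data | H_-) differ, the data support the two null hypotheses to different degrees, even though the one-sided p-value cannot register that difference.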


A Comprehensive Overview of Statistical Methods to Quantify Evidence in Favor of a Point Null Hypothesis: Alternatives to the Bayes Factor

An often-voiced concern about p-value null hypothesis testing is that p-values cannot be used to quantify evidence in favor of the point null hypothesis. This is particularly worrisome if you conduct a replication study, perform an assumption check, hope to show empirical support for a theory that posits an invariance, or wish to argue that the data show “evidence of absence” instead of “absence of evidence”.

Researchers interested in quantifying evidence in favor of the point null hypothesis can of course turn to the Bayes factor, which compares the predictive performance of any two rival models. Crucially, the null hypothesis does not receive special status — from the Bayes factor perspective, the null hypothesis is just another data-predicting device whose relative accuracy can be determined from the observed data. However, Bayes factors are not for everyone. Because Bayes factors assess predictive performance, they depend on the specification of prior distributions. Detractors argue that if these prior distributions are manifestly silly, or if one is unable to specify a model such that it makes even remotely plausible predictions, then the Bayes factor is a suboptimal tool. But what are the concrete alternatives to Bayes factors when it comes to quantifying evidence in favor of a point null hypothesis?

It is immediately clear that neither interval estimation methods, nor equivalence tests, nor the Bayesian “ROPE” can offer any solace, because these methods do not take the point null hypothesis seriously; their starting assumption is that the point null hypothesis is false. Even when the point null is replaced by Tukey’s “perinull”, these methods are generally poorly equipped to quantify evidence. To see this, imagine a binomial test against chance in which we observe 52 successes out of 100 attempts. Surely this is evidence in favor of the point null hypothesis. But how much exactly? Evidence is that which changes our opinion — how much does observing 52 successes out of 100 attempts bolster our confidence in the point null? ROPE, equivalence tests, and interval estimation methods cannot answer this question.
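For concreteness, here is a minimal sketch of how a Bayes factor answers the “how much exactly?” question for these data; the uniform Beta(1,1) prior under the alternative is an illustrative assumption, not a recommendation:

```python
from scipy.stats import binom

n, k = 100, 52

# Marginal likelihood under the point null H_0: theta = 1/2
m_0 = binom.pmf(k, n, 0.5)

# Marginal likelihood under H_1: theta ~ Beta(1,1); integrating the binomial
# likelihood against a uniform prior yields 1 / (n + 1) for every value of k
m_1 = 1.0 / (n + 1)

bf_01 = m_0 / m_1
print(f"BF_01 = {bf_01:.2f}")  # about 7.4
```

Under these choices, observing 52 successes out of 100 attempts raises the plausibility of the point null by a factor of roughly 7.4, a number that ROPE, equivalence tests, and interval estimation methods cannot deliver.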

Also problematic are Bayesian methods that depend on the alternative hypothesis having advance access to the data, since such advance access allows the alternative hypothesis to mimic the point null, creating a non-diagnostic test whenever the data are consistent with the point null (see the sketch below). Should we despair? Are researchers who wish to quantify evidence in favor of a point null hypothesis doomed to compute a Bayes factor by specifying a concrete alternative hypothesis and assigning point mass to the null? In a recent paper I outline all of the known alternatives to the Bayes factor and discuss their pros and cons. The ultimate goal is to provide the practitioner with a better impression of the different statistical tools that are available to quantify evidence in favor of a point null hypothesis. A preprint is available here.
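To make the mimicry problem mentioned above concrete, here is a deliberately extreme sketch in which the alternative hypothesis is a point mass placed at the observed proportion after the data are in; this is an illustrative worst case of my own, not a method anyone advocates:

```python
from scipy.stats import binom

n, k = 100, 52

m_0 = binom.pmf(k, n, 0.5)     # point null: theta = 1/2
m_1 = binom.pmf(k, n, k / n)   # alternative that peeked at the data: theta = 0.52

print(f"BF_01 = {m_0 / m_1:.2f}")  # about 0.92, close to 1
```

Because the alternative has molded itself to the data, the Bayes factor hovers near 1 whenever the data are consistent with the point null, and the test can no longer discriminate between the hypotheses.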

References

Wagenmakers, E.-J. (2019). A comprehensive overview of methods to quantify evidence in favor of a point null hypothesis: Alternatives to the Bayes factor. Preprint.

About The Author

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is a professor in the Psychological Methods Group at the University of Amsterdam.
