WARNING: This post starts with two chess studies. They are both magnificent, but if you don’t play chess you might want to skip them. I thank Ulrike Fischer for creating the awesome LaTeX package “chessboard”. NB. The idea discussed here also occurs in Haaf et al. (2019), the topic of a previous post.
The game of chess is at once an art, a science, and a sport. In practical over-the-board play, the element of art usually takes a backseat to practical concerns such as opening preparation and positional evaluation. In endgame study composition, on the other hand, the art aspect reigns supreme. One of my favorite themes in endgame study composition is the Bristol clearance. Here is the study from 1861 that gave the theme its name:
(more…)
Some time ago I ran a twitter poll to determine what people believe is the best statistics book of all time. This is the result:
The first thing to note about this poll is that there are only 26 votes. My disappointment at this low number intensified after I ran a control poll, which received more than double the votes:
WARNING: This is a Bayesian perspective on a frequentist procedure. Consequently, hard-core frequentists may protest and argue that, for the goals that they pursue, everything makes perfect sense. Bayesians will remain befuddled. Also, I’d like to thank Richard Morey for insightful, critical, and constructive comments.
In an unlikely alliance, Deborah Mayo and Richard Morey (henceforth: M&M) recently produced an interesting and highly topical preprint “A poor prognosis for the diagnostic screening critique of statistical tests”. While reading it, I stumbled upon the following remarkable statement-of-fact (see also Casella & Berger, 1987):
“Let our goal be to test the hypotheses:
$\mathcal{H}_0: \theta \leq 0$
against
$\mathcal{H}_1: \theta > 0$.
The test is the same if we’re testing
$\mathcal{H}_0: \theta = 0$
against
$\mathcal{H}_1: \theta > 0$.”
Wait, what? This equivalence may be defensible from a frequentist point of view (e.g., if you reject $\theta = 0$ against $\theta > 0$, then you will also reject negative values of $\theta$), but it violates common sense: the hypotheses “$\theta \leq 0$” and “$\theta = 0$” are not the same; they make different predictions and therefore ought to receive different support from the data.
As a demonstration, below I will discuss three concrete data scenarios.
To prevent confusion, the hypothesis “$\theta \leq 0$” is denoted by $\mathcal{H}_-$, the point-null hypothesis “$\theta = 0$” is denoted by $\mathcal{H}_0$, and the hypothesis that “$\theta > 0$” is denoted by $\mathcal{H}_+$.
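Before turning to the scenarios, here is a minimal numerical sketch of why the distinction matters. It is my own illustration, not taken from the post or from M&M: the observed value, its standard error, and the truncated standard-normal priors are all assumptions chosen only to show that $\mathcal{H}_-$, $\mathcal{H}_0$, and $\mathcal{H}_+$ predict the data differently.

```python
# Sketch (illustrative assumptions throughout): marginal likelihood of one
# observed effect under H- (theta <= 0), H0 (theta = 0), and H+ (theta > 0).
import numpy as np
from scipy import stats
from scipy.integrate import quad

y_obs, se = 0.3, 0.15          # hypothetical observed effect and its standard error

def likelihood(theta):
    # probability density of the observation given a true effect theta
    return stats.norm.pdf(y_obs, loc=theta, scale=se)

# H0: all prior mass on theta = 0
m_null = likelihood(0.0)

# H- and H+: a standard-normal prior truncated to theta <= 0 or theta > 0;
# each half carries mass 1/2, so the truncated density is 2 * phi(theta)
m_minus = quad(lambda t: likelihood(t) * 2 * stats.norm.pdf(t), -np.inf, 0)[0]
m_plus  = quad(lambda t: likelihood(t) * 2 * stats.norm.pdf(t), 0, np.inf)[0]

print(f"p(data | H-) = {m_minus:.4f}")
print(f"p(data | H0) = {m_null:.4f}")
print(f"p(data | H+) = {m_plus:.4f}")
print(f"BF(H0 vs H-) = {m_null / m_minus:.2f}")  # the two 'nulls' are not interchangeable
```

Because $p(\text{data} \mid \mathcal{H}_-)$ and $p(\text{data} \mid \mathcal{H}_0)$ generally differ, the data lend different degrees of support to the two “nulls” — which is exactly the point at issue.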
(more…)
An often-voiced concern about p-value null hypothesis testing is that p-values cannot be used to quantify evidence in favor of the point null hypothesis. This is particularly worrisome if you conduct a replication study, if you perform an assumption check, if you hope to show empirical support for a theory that posits an invariance, or if you wish to argue that the data show “evidence of absence” instead of “absence of evidence”.
Researchers interested in quantifying evidence in favor of the point null hypothesis can of course turn to the Bayes factor, which compares predictive performance of any two rival models. Crucially, the null hypothesis does not receive a special status — from the Bayes factor perspective, the null hypothesis is just another data-predicting device whose relative accuracy can be determined from the observed data. However, Bayes factors are not for everyone. Because Bayes factors assess predictive performance, they depend on the specification of prior distributions. Detractors argue that if these prior distributions are manifestly silly or if one is unable to specify a model such that it makes predictions that are even remotely plausible, then the Bayes factor is a suboptimal tool. But what are the concrete alternatives to Bayes factors when it comes to quantifying evidence in favor of a point null hypothesis?
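For reference, the simplest version of the Bayes-factor route fits in a few lines. The sketch below is my own illustration rather than anything from the post: it uses the Savage–Dickey density ratio for a binomial rate, with a hypothetical data set and an assumed Beta(1, 1) prior under the alternative.

```python
# Sketch (illustrative assumptions): Bayes factor for a point null on a
# binomial rate via the Savage-Dickey density ratio.
from scipy import stats

k, n = 49, 100            # hypothetical successes and trials
theta0 = 0.5              # point null: theta = 0.5
a, b = 1, 1               # Beta prior on theta under the alternative

prior_at_null     = stats.beta.pdf(theta0, a, b)              # p(theta0 | H1)
posterior_at_null = stats.beta.pdf(theta0, a + k, b + n - k)   # p(theta0 | data, H1)

# Savage-Dickey: BF01 = posterior density at theta0 / prior density at theta0
bf01 = posterior_at_null / prior_at_null
print(f"BF01 = {bf01:.2f}  (evidence for H0 relative to H1)")
```

Swapping in a different prior under the alternative (say, Beta(10, 10)) changes the Bayes factor, which is precisely the dependence on prior specification that the detractors point to.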