
A Galton Board Demonstration of Why All Statistical Models are Misspecified

The Galton board, or quincunx, is a fascinating device that provides a compelling demonstration of one of the main laws of statistics. In the device, balls are dropped from above onto a series of pegs organized in rows of increasing width. Whenever a ball hits a particular peg, it can drop either to the right or to the left, presumably with a chance of 50% (but more about this later). When many balls are dropped, most of them end up somewhere near the middle of the device, but a few balls experience a run of successive movements in the same direction and therefore drift off to the sides.
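
As an illustration, here is a minimal simulation sketch (not code from the post, and the numbers of balls and rows are chosen arbitrarily) of the mechanism just described: each ball bounces left or right with probability 0.5 at every row of pegs, so its final position is the sum of many independent ±1 steps, and the counts pile up near the middle with a few stragglers in the tails.

```python
# Toy Galton board simulation: each ball takes a +1 (right) or -1 (left) step
# at every row of pegs, each with probability 0.5; the sum of steps is the
# final bin, which follows a binomial distribution centred on the middle.
import numpy as np

rng = np.random.default_rng(2024)
n_balls, n_rows = 10_000, 12

steps = rng.choice([-1, 1], size=(n_balls, n_rows))
positions = steps.sum(axis=1)

# Crude text histogram: most balls land near 0, a few drift to the extremes
values, counts = np.unique(positions, return_counts=True)
for v, c in zip(values, counts):
    print(f"{v:+3d}: {'#' * (60 * c // counts.max())}")
```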


The Single Most Prevalent Misinterpretation of Bayes’ Rule

We thank Alexander Ly for constructive comments on an earlier draft of this post.

Bayes’ rule tells us how to learn from experience, that is, by updating our knowledge about the world using relative predictive performance: hypotheses that predicted the data relatively well receive a boost in credibility, whereas hypotheses that predicted the data relatively poorly suffer a decline (e.g., Wagenmakers et al., 2016). This predictive updating principle holds for propositions, hypotheses, models, and parameters: every time, our uncertainty is updated using the same mathematical operation. Take for instance the learning process involving just two models, \mathcal{M}_1 and \mathcal{M}_2 (but note that these may equally well refer to parameter values, say \theta_1 and \theta_2, within a single model). The odds form of Bayes’ rule yields

\underbrace{ \frac{p(\mathcal{M}_1 \mid \text{data})}{p(\mathcal{M}_2 \mid \text{data})}}_{\substack{\text{Posterior uncertainty}\\ \text{about the world}}} = \underbrace{ \frac{p(\mathcal{M}_1)}{p(\mathcal{M}_2)}}_{\substack{\text{Prior uncertainty}\\ \text{about the world}}} \times \underbrace{ \frac{p(\text{data} \mid \mathcal{M}_1)}{p(\text{data} \mid \mathcal{M}_2)}}_{\substack{\text{Predictive}\\ \text{updating factor}}}.
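
To make the updating rule concrete, here is a toy numerical sketch (the predictive probabilities below are invented purely for illustration): starting from even prior odds, the posterior odds are obtained by multiplying with the ratio of the two models' predictive performance for the observed data.

```python
# Odds form of Bayes' rule with made-up numbers: posterior odds equal
# prior odds times the predictive updating factor (the Bayes factor).
prior_odds = 1.0            # p(M1) / p(M2): both models start out equally credible
p_data_given_m1 = 0.08      # how well M1 predicted the observed data (assumed value)
p_data_given_m2 = 0.02      # how well M2 predicted the observed data (assumed value)

bayes_factor = p_data_given_m1 / p_data_given_m2   # predictive updating factor
posterior_odds = prior_odds * bayes_factor

print(f"Bayes factor BF12 = {bayes_factor:.1f}")
print(f"posterior odds p(M1 | data) / p(M2 | data) = {posterior_odds:.1f}")
```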


Preprint: Multiple Perspectives on Inference for Two Simple Statistical Scenarios

Abstract

When data analysts operate within different statistical frameworks (e.g., frequentist versus Bayesian, emphasis on estimation versus emphasis on testing), how does this impact the qualitative conclusions that are drawn for real data? To study this question empirically, we selected from the literature two simple scenarios (involving a comparison of two proportions and a Pearson correlation) and asked four teams of statisticians to provide a concise analysis and a qualitative interpretation of the outcome. The results showed considerable overall agreement; nevertheless, this agreement did not appear to diminish the intensity of the subsequent debate over which statistical framework is more appropriate to address the questions at hand.
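
For readers who want a feel for how different frameworks can be brought to bear on one of these scenarios, below is a minimal, hypothetical sketch (not the teams' actual analyses; the counts are made up) that contrasts a frequentist test with a simple Bayesian estimation analysis for a comparison of two proportions.

```python
# Illustrative contrast for a two-proportion comparison with invented data:
# a frequentist chi-square test versus Bayesian estimation of the difference
# in proportions under independent Beta(1, 1) priors.
import numpy as np
from scipy import stats

s1, n1 = 28, 40   # hypothetical successes / trials in group 1
s2, n2 = 18, 40   # hypothetical successes / trials in group 2

# Frequentist: chi-square test of independence on the 2x2 table
table = np.array([[s1, n1 - s1], [s2, n2 - s2]])
chi2, p, _, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Bayesian estimation: sample from the two Beta posteriors and summarise
# the posterior distribution of the difference in proportions
rng = np.random.default_rng(1)
theta1 = rng.beta(s1 + 1, n1 - s1 + 1, size=100_000)
theta2 = rng.beta(s2 + 1, n2 - s2 + 1, size=100_000)
delta = theta1 - theta2
lo, hi = np.percentile(delta, [2.5, 97.5])
print(f"posterior mean difference = {delta.mean():.3f}, 95% credible interval = [{lo:.3f}, {hi:.3f}]")
```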



Rejoinder: More Limitations of Bayesian Leave-One-Out Cross-Validation

In a recent article for Computational Brain & Behavior, we discussed several limitations of Bayesian leave-one-out cross-validation (LOO) for model selection. Our contribution attracted three thought-provoking commentaries by (1) Vehtari, Simpson, Yao, and Gelman, (2) Navarro, and (3) Shiffrin and Chandramouli. We just submitted a rejoinder in which we address each of the commentaries and identify several additional limitations of LOO-based methods such as Bayesian stacking. We focus on the differences between LOO-based methods and approaches that consistently use Bayes’ rule for both parameter estimation and model comparison.
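
As a rough illustration of that contrast (a toy example, not material from the rejoinder), the sketch below scores the same simple Beta(1, 1)-Bernoulli model on made-up binary data in two ways: via the marginal likelihood, which is what Bayes-factor model comparison consistently derived from Bayes' rule would use, and via a LOO-style sum of log posterior-predictive densities with each observation held out in turn.

```python
# Toy contrast between marginal-likelihood scoring and Bayesian LOO scoring
# for a Beta(1, 1)-Bernoulli model on invented binary data.
import numpy as np
from scipy.special import betaln

data = np.array([1, 1, 0, 1, 0, 1, 1, 1])  # made-up binary observations
a, b = 1.0, 1.0                            # Beta prior hyperparameters

def log_marginal_likelihood(y, a, b):
    # Beta-Bernoulli marginal likelihood: B(a + k, b + n - k) / B(a, b)
    k, n = y.sum(), len(y)
    return betaln(a + k, b + n - k) - betaln(a, b)

def loo_log_score(y, a, b):
    # Sum of log predictive densities, each computed from the posterior
    # obtained after leaving the corresponding observation out
    total = 0.0
    for i in range(len(y)):
        rest = np.delete(y, i)
        k, n = rest.sum(), len(rest)
        p_one = (a + k) / (a + b + n)   # posterior predictive P(y_i = 1 | rest)
        total += np.log(p_one if y[i] == 1 else 1.0 - p_one)
    return total

print(f"log marginal likelihood = {log_marginal_likelihood(data, a, b):.3f}")
print(f"LOO log score           = {loo_log_score(data, a, b):.3f}")
```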


