
The Story of a Lost Researcher Trying to Learn About Disruptive Behaviors During the Full Moon

Karoline Huth is a first-year student in our Psychology Research Master at the University of Amsterdam. This blog post describes her presentation for the recent course on Bayesian Inference for Psychology. The assignment was to conduct a Bayesian reanalysis of existing data. Kudos to Karoline for being brave enough to share her work here! [EJ]

Presentation Intro and Background

Bayesian statistics is a trending topic in psychological research. Even though its benefits have been widely discussed (e.g., Marsman & Wagenmakers, 2017; Wagenmakers et al., 2018), many researchers still don’t use it. I argue that these researchers fall into one of four subgroups:

  1. Those who are resistant: This group of researchers is the most difficult to address. They know exactly what the Bayesian approach is and how it works, but for whatever reason they remain resistant and prefer the common frequentist approach.
  2. Those who are oblivious: There are those, unfortunately, who have never heard of the Bayesian approach. How is this possible? This group most likely consists of, on the one hand, inactive researchers and, on the other, students who are still taught only the frequentist approach.
  3. Those who are lazy: Even though it has been their New Year’s resolution for the last five years, these researchers haven’t managed to learn more about the Bayesian approach and how to implement it. Consequently, they are not sufficiently confident to adopt the Bayesian framework.
  4. Those who are lost: Last of all, this group has heard of the Bayesian approach, is aware of its benefits, and knows the mathematical background. These researchers would like to apply the Bayesian approach but do not know which statistical software to use. In the end they resort to the common frequentist analysis, saving themselves hours of programming.

A Galton Board Demonstration of Why All Statistical Models are Misspecified

The Galton board or quincunx is a fascinating device that provides a compelling demonstration of one of the main laws of statistics, the central limit theorem. In the device, balls are dropped from above onto a series of pegs that are organized in rows of increasing width. Whenever a ball hits a particular peg, it can drop either to the right or to the left, presumably with a chance of 50% (but more about this later). When many balls are dropped, most of them remain somewhere near the middle of the device, but a few balls experience a successive run of movements in the same direction and therefore drift off to the sides.
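To make this concrete, here is a minimal simulation sketch in Python. The parameters (12 rows of pegs, 10,000 balls, a fair 50% bounce at every peg) are assumptions for illustration, not measurements of a physical board:

```python
import numpy as np

# Minimal Galton board simulation; parameters are illustrative assumptions.
rng = np.random.default_rng(seed=1)
n_rows, n_balls = 12, 10_000

# A ball's final bin equals its number of rightward bounces,
# so the bin index follows a Binomial(n_rows, 0.5) distribution.
bins = rng.binomial(n=n_rows, p=0.5, size=n_balls)

# Crude text histogram: most balls pile up near the middle bins.
counts = np.bincount(bins, minlength=n_rows + 1)
for k, c in enumerate(counts):
    print(f"bin {k:2d} | {'#' * (c // 50)}")
```

The bell shape that emerges is exactly the binomial distribution converging to the normal as the number of rows grows.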


The Single Most Prevalent Misinterpretation of Bayes’ Rule

We thank Alexander Ly for constructive comments on an earlier draft of this post.

Bayes’ rule tells us how to learn from experience, that is, by updating our knowledge about the world using relative predictive performance: hypotheses that predicted the data relatively well receive a boost in credibility, whereas hypotheses that predicted the data relatively poorly suffer a decline (e.g., Wagenmakers et al., 2016). This predictive updating principle holds for propositions, hypotheses, models, and parameters: every time, our uncertainty is updated using the same mathematical operation. Take for instance the learning process involving just two models, \mathcal{M}_1 and \mathcal{M}_2 (but note that these may equally well refer to parameter values, say \theta_1 and \theta_2, within a single model). The odds form of Bayes’ rule yields

\underbrace{\frac{p(\mathcal{M}_1 \mid \text{data})}{p(\mathcal{M}_2 \mid \text{data})}}_{\substack{\text{Posterior uncertainty}\\ \text{about the world}}} = \underbrace{\frac{p(\mathcal{M}_1)}{p(\mathcal{M}_2)}}_{\substack{\text{Prior uncertainty}\\ \text{about the world}}} \times \underbrace{\frac{p(\text{data} \mid \mathcal{M}_1)}{p(\text{data} \mid \mathcal{M}_2)}}_{\substack{\text{Predictive}\\ \text{updating factor}}}.
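As a minimal numeric illustration of this updating rule, consider the following sketch; the prior odds and the two predictive densities are hypothetical numbers, made up purely for illustration:

```python
# Odds form of Bayes' rule with hypothetical numbers.
prior_odds = 1.0        # p(M1) / p(M2): both models equally credible a priori
p_data_m1 = 0.08        # p(data | M1): how well M1 predicted the data
p_data_m2 = 0.02        # p(data | M2): how well M2 predicted the data

bayes_factor = p_data_m1 / p_data_m2          # predictive updating factor
posterior_odds = prior_odds * bayes_factor    # posterior uncertainty

print(f"Bayes factor:   {bayes_factor:.1f}")
print(f"Posterior odds: {posterior_odds:.1f}")
# With only M1 and M2 in play, odds convert to a posterior probability:
print(f"p(M1 | data):   {posterior_odds / (1 + posterior_odds):.3f}")
```

Here M1 predicted the data four times better than M2, so its odds receive a fourfold boost regardless of where the prior odds started.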


Preprint: Multiple Perspectives on Inference for Two Simple Statistical Scenarios

Abstract

When data analysts operate within different statistical frameworks (e.g., frequentist versus Bayesian, emphasis on estimation versus emphasis on testing), how does this impact the qualitative conclusions that are drawn for real data? To study this question empirically we selected from the literature two simple scenarios (a comparison of two proportions and a Pearson correlation) and asked four teams of statisticians to provide a concise analysis and a qualitative interpretation of the outcome. The results showed considerable overall agreement; nevertheless, this agreement did not appear to diminish the intensity of the subsequent debate over which statistical framework is more appropriate to address the questions at hand.
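For readers who want a feel for how the two frameworks approach the first scenario, here is a hedged sketch in Python. The counts are made up for illustration (they are not the data analyzed in the preprint), and the Bayesian route shown is just one simple choice, a beta-binomial analysis with uniform priors:

```python
import numpy as np
from scipy import stats

# Hypothetical counts (successes / trials in two groups).
s1, n1 = 60, 100
s2, n2 = 45, 100

# Frequentist route: chi-square test of independence on the 2x2 table.
table = np.array([[s1, n1 - s1],
                  [s2, n2 - s2]])
chi2, p_value, dof, _ = stats.chi2_contingency(table)

# One Bayesian route: independent Beta(1, 1) priors yield
# Beta(s + 1, n - s + 1) posteriors; compare the groups via posterior draws.
rng = np.random.default_rng(seed=1)
post1 = rng.beta(s1 + 1, n1 - s1 + 1, size=100_000)
post2 = rng.beta(s2 + 1, n2 - s2 + 1, size=100_000)
prob_greater = np.mean(post1 > post2)

print(f"chi-square p-value:        {p_value:.3f}")
print(f"P(theta1 > theta2 | data): {prob_greater:.3f}")
```

The two outputs answer different questions (a tail probability under the null versus a posterior probability about the parameters), which is precisely the kind of contrast the four teams debated.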



Rejoinder: More Limitations of Bayesian Leave-One-Out Cross-Validation

In a recent article for Computational Brain & Behavior, we discussed several limitations of Bayesian leave-one-out cross-validation (LOO) for model selection. Our contribution attracted three thought-provoking commentaries by (1) Vehtari, Simpson, Yao, and Gelman, (2) Navarro, and (3) Shiffrin and Chandramouli. We just submitted a rejoinder in which we address each of the commentaries and identify several additional limitations of LOO-based methods such as Bayesian stacking. We focus on the differences between LOO-based methods and approaches that consistently use Bayes’ rule for both parameter estimation and model comparison.
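For intuition about what is being compared, here is a minimal sketch in Python using a toy Bernoulli problem of our own devising (not an example from the article): the Bayes factor is built from marginal likelihoods, whereas LOO scores each model by its leave-one-out predictive density.

```python
import numpy as np
from math import lgamma

# Toy Bernoulli data (made up for illustration).
x = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
n, s = len(x), int(x.sum())

# M1 fixes theta = 0.5; M2 assigns theta a uniform prior on (0, 1).

# Log marginal likelihoods, as used by the Bayes factor.
log_ml_1 = n * np.log(0.5)
log_ml_2 = lgamma(s + 1) + lgamma(n - s + 1) - lgamma(n + 2)  # log B(s+1, n-s+1)

# LOO log predictive densities. Under M1 every held-out point has
# probability 0.5; under M2 the uniform prior implies the add-one
# (Laplace) rule: P(x_i = 1 | x_without_i) = (s_rest + 1) / (n + 1).
loo_1 = n * np.log(0.5)
loo_2 = 0.0
for xi in x:
    p_one = (s - xi + 1) / (n + 1)
    loo_2 += np.log(p_one if xi == 1 else 1 - p_one)

print(f"log Bayes factor (M2 vs M1): {log_ml_2 - log_ml_1:.3f}")
print(f"LOO difference   (M2 - M1):  {loo_2 - loo_1:.3f}")
```

Note that for the fixed-theta model the marginal likelihood and the LOO score coincide, so any disagreement between the two rows of output is driven entirely by how each method scores the model with the free parameter.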



