
Preprint: Practical Challenges and Methodological Flexibility in Prior Elicitation

This post is an extended synopsis of Stefan, A. M., Evans, N. J., & Wagenmakers, E.-J. (2019). Practical challenges and methodological flexibility in prior elicitation. Manuscript submitted for publication. Preprint available on PsyArXiv: https://psyarxiv.com/d42xb/

Abstract

It is a well-known fact that Bayesian analyses require the specification of a prior distribution, and that different priors can lead to different quantitative, or even qualitative, conclusions. Because the prior distribution can be so influential, one of the most frequently asked questions about the Bayesian statistical framework is: How should I specify the prior distributions? Here, we take a closer look at prior elicitation — a subjective Bayesian method for specifying (informed) prior distributions based on expert knowledge — and examine the practical challenges researchers may face when implementing this approach for specifying their prior distributions. Specifically, our review of the literature suggests that there is a high degree of methodological flexibility within current prior elicitation techniques. This means that the results of a prior elicitation effort are not solely determined by the expert’s knowledge, but also heavily depend on the methodological decisions a researcher makes in the prior elicitation process. Thus, it appears that prior elicitation does not completely solve the issue of prior specification, but instead shifts influential decisions to a different level. We demonstrate the potential variability resulting from different methodological choices within the prior elicitation process in several examples, and make recommendations for how the variability in prior elicitation can be managed in future prior elicitation efforts.
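To give a flavor of the flexibility at issue, here is a minimal sketch in Python (my own illustration with made-up quantile judgments, not an example from the paper): the same three elicited quantiles are fitted with two different parametric families, and the resulting priors can disagree noticeably in the tail probability they assign to large effects.

```python
# Minimal sketch: fit two parametric families to the same elicited quantiles.
# The quantile judgments below are hypothetical, purely for illustration.
import numpy as np
from scipy import stats, optimize

probs = np.array([0.25, 0.50, 0.75])    # elicited cumulative probabilities
quants = np.array([0.20, 0.40, 0.70])   # hypothetical expert judgments

def loss_normal(params):
    mu, log_sigma = params               # log-scale keeps sigma positive
    implied = stats.norm.ppf(probs, loc=mu, scale=np.exp(log_sigma))
    return np.sum((implied - quants) ** 2)

def loss_gamma(params):
    log_shape, log_scale = params        # log-scale keeps parameters positive
    implied = stats.gamma.ppf(probs, np.exp(log_shape), scale=np.exp(log_scale))
    return np.sum((implied - quants) ** 2)

fit_n = optimize.minimize(loss_normal, x0=[0.4, np.log(0.3)], method="Nelder-Mead")
fit_g = optimize.minimize(loss_gamma, x0=[0.0, 0.0], method="Nelder-Mead")

mu, sigma = fit_n.x[0], np.exp(fit_n.x[1])
shape, scale = np.exp(fit_g.x)

# Both fitted priors honor the elicited quantiles (approximately), yet they
# can assign different probabilities to effects the expert was never asked about:
print("P(theta > 1) under normal prior:", 1 - stats.norm.cdf(1, mu, sigma))
print("P(theta > 1) under gamma prior: ", 1 - stats.gamma.cdf(1, shape, scale=scale))
```

The choice of parametric family is exactly the kind of methodological decision the paper highlights: the expert supplied identical information in both cases, but the researcher's modeling choice determines what the prior says beyond the elicited summaries.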


A Breakdown of “Preregistration is Redundant, at Best”

In this sentence-by-sentence breakdown of the paper “Preregistration is Redundant, at Best”, I argue that preregistration is a pragmatic tool to combat biases that invalidate statistical inference. In a perfect world, strong theory sufficiently constrains the analysis process, and/or Bayesian robots can update beliefs based on fully reported data. In the real world, however, even astrophysicists require a firewall between the analyst and the data. Nevertheless, preregistration should not be glorified. Although I disagree with the title of the paper, I found myself agreeing with almost all of the authors’ main arguments.


How to Evaluate a Subjective Prior Objectively

The Misconception

Gelman and Hennig (2017, p. 989) argue that subjective priors cannot be evaluated by means of the data:

“However, priors in the subjectivist Bayesian conception are not open to falsification (…), because by definition they must be fixed before observation. Adjusting the prior after having observed the data to be analysed violates coherence. The Bayesian system as derived from axioms such as coherence (…) is designed to cover all aspects of learning from data, including model selection and rejection, but this requires that all potential later decisions are already incorporated in the prior, which itself is not interpreted as a testable statement about yet unknown observations. In particular this means that, once a coherent subjectivist Bayesian has assessed a set-up as exchangeable a priori, he or she cannot drop this assumption later, whatever the data are (think of observing 20 0s, then 20 1s, and then 10 further 0s in a binary experiment)”
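To see why the exchangeability example bites, consider a minimal sketch (my illustration, not part of the quoted passage): under an exchangeable beta-binomial model, the marginal probability of a binary sequence depends only on the counts of 0s and 1s, so the conspicuous run structure of the 20-0s, 20-1s, 10-0s sequence is invisible to the model.

```python
# Minimal sketch: order-invariance under an exchangeable beta-binomial model.
import numpy as np

def seq_log_prob(x, a=1.0, b=1.0):
    """Log marginal probability of a binary sequence under a Beta(a, b) prior."""
    log_p, ones, zeros = 0.0, 0, 0
    for xi in x:
        p_one = (a + ones) / (a + b + ones + zeros)  # posterior predictive
        log_p += np.log(p_one) if xi == 1 else np.log(1.0 - p_one)
        ones += xi
        zeros += 1 - xi
    return log_p

patterned = [0] * 20 + [1] * 20 + [0] * 10   # the sequence from the quote
shuffled = np.random.default_rng(2019).permutation(patterned)

print(seq_log_prob(patterned))  # identical to ...
print(seq_log_prob(shuffled))   # ... the shuffled version: order is ignored
```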
Similar claims have been made in the scholarly review paper by Consonni et al. (2018, p. 628): “The above view of “objectivity” presupposes that a model has a different theoretical status relative to the prior: it is the latter which encapsulates the subjective uncertainty of the researcher, while the model is less debatable, possibly because it can usually be tested through data.”

The Correction

Statistical models are a combination of likelihood and prior that together yield predictions for observed data (Box, 1980; Evans, 2015). The adequacy of these predictions can be rigorously assessed using Bayes factors (Wagenmakers, 2017; but see the blog post by Christian Robert, further discussed below). In order to evaluate the empirical success of a particular subjective prior distribution, we require multiple subjective Bayesians, or a single “schizophrenic” subjective Bayesian who is willing to entertain several different priors.
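As a minimal sketch of how such an evaluation could look (hypothetical priors and data, not taken from the post): two subjective Bayesians place different Beta priors on a binomial success rate, and the Bayes factor between them is simply the ratio of the marginal likelihoods that their priors assign to the observed data.

```python
# Minimal sketch: comparing two subjective Beta priors by their Bayes factor.
# Priors and data are hypothetical, purely for illustration.
import numpy as np
from scipy.special import betaln, gammaln

def log_marginal(s, n, a, b):
    """Log marginal likelihood of s successes in n trials under Beta(a, b)."""
    log_binom = gammaln(n + 1) - gammaln(s + 1) - gammaln(n - s + 1)
    return log_binom + betaln(a + s, b + n - s) - betaln(a, b)

s, n = 9, 10                                   # hypothetical data
log_bf = (log_marginal(s, n, 1, 1)             # Bayesian A: uniform Beta(1, 1)
          - log_marginal(s, n, 20, 20))        # Bayesian B: skeptical Beta(20, 20)
print(f"BF_AB = {np.exp(log_bf):.2f}")         # > 1: A's prior predicted better
```

The data adjudicate between the two priors through their predictions, which is precisely the sense in which a subjective prior can be evaluated objectively.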


Did Alan Turing Invent the Bayes Factor?

The otherwise excellent article by Consonni et al. (2018), discussed last week, makes the following claim:

“…the initial use of the BF can be attributed both to Jeffreys and Turing who introduced it independently around the same time (Kass & Raftery, 1995)” (Consonni et al., 2018, p. 638)

This claim recently resurfaced on Twitter as well.

But is this really true?


“Prior Distributions for Objective Bayesian Analysis”

The purpose of this blog post is to call attention to the paper “Prior Distributions for Objective Bayesian Analysis”, authored by Guido Consonni, Dimitris Fouskakis, Brunero Liseo, and Ioannis Ntzoufras (NB: Ioannis is a member of the JASP advisory board!). The paper, published in the journal “Bayesian Analysis”, provides a comprehensive overview of objective Bayesian analysis, with an emphasis on model selection and linear regression.


