
Preprint: A Conceptual Introduction to Bayesian Model Averaging

 

Preprint: doi:10.31234/osf.io/wgb64

Abstract

“Many statistical scenarios initially involve several candidate models that describe the data-generating process. Analysis often proceeds by first selecting the best model according to some criterion, and then learning about the parameters of this selected model. Crucially however, in this approach the parameter estimates are conditioned on the selected model, and any uncertainty about the model selection process is ignored. An alternative is to learn the parameters for all candidate models, and then combine the estimates according to the posterior probabilities of the associated models. The result is known as Bayesian model averaging (BMA). BMA has several important advantages over all-or-none selection methods, but has been used only sparingly in the social sciences. In this conceptual introduction we explain the principles of BMA, describe its advantages over all-or-none model selection, and showcase its utility for three examples: ANCOVA, meta-analysis, and network analysis.”
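As a toy illustration of the combination step described in the abstract: the model-averaged estimate weights each model's estimate by that model's posterior probability, rather than keeping only the single best model. All numbers below are made up for illustration.

```python
# Hypothetical numbers: three candidate models, their posterior model
# probabilities p(M_k | data), and each model's estimate of a slope beta.
posterior_model_probs = [0.50, 0.35, 0.15]  # must sum to 1
beta_estimates        = [0.42, 0.38, 0.05]  # E[beta | data, M_k]

# Bayesian model averaging: combine the per-model estimates according
# to the posterior probabilities of the associated models.
beta_bma = sum(p * b for p, b in zip(posterior_model_probs, beta_estimates))
# beta_bma is approximately 0.35, closer to the high-probability models
```

Note how model uncertainty propagates into the result: the low-probability third model pulls the average down only slightly, whereas all-or-none selection would have ignored it entirely.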


Jeffreys’s Oven

Recently I was involved in an email exchange in which someone claimed that Bayes factors always involve a point null hypothesis, and that the point null is never true; hence, Bayes factors are useless, QED. Previous posts on this blog here and here discussed the scientific relevance (or even inevitability?) of the point null hypothesis, but the deeper problem with the argument is that its premise is false. Bayes factors compare the predictive performance of any two models. One of the models may be a point-null hypothesis, if this is deemed desirable, interesting, or scientifically relevant; but instead of the point null you can just as well specify a Tukey peri-null hypothesis, an interval-null hypothesis, a directional hypothesis, or a nonnested hypothesis. The only precondition for computing a Bayes factor between two models is that both models must make predictions (see also Lee & Vanpaemel, 2018).
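In standard notation (not specific to this post), the Bayes factor comparing any two models $\mathcal{M}_1$ and $\mathcal{M}_2$ is the ratio of their marginal likelihoods, that is, of their prior-weighted average predictive performance for the observed data $y$; nothing in this definition requires either model to be a point null:

```latex
\mathrm{BF}_{12}
  = \frac{p(y \mid \mathcal{M}_1)}{p(y \mid \mathcal{M}_2)}
  = \frac{\int p(y \mid \theta_1, \mathcal{M}_1)\, p(\theta_1 \mid \mathcal{M}_1)\, \mathrm{d}\theta_1}
         {\int p(y \mid \theta_2, \mathcal{M}_2)\, p(\theta_2 \mid \mathcal{M}_2)\, \mathrm{d}\theta_2}
```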

I have encountered a similar style of reasoning before, and wondered how to classify this fallacy. So I ran the following poll on Twitter:




Preprint: Five Bayesian Intuitions for the Stopping Rule Principle

Preprint: https://psyarxiv.com/5ntkd

Abstract

“Is it statistically appropriate to monitor evidence for or against a hypothesis as the data accumulate, and stop whenever this evidence is deemed sufficiently compelling? Researchers raised in the tradition of frequentist inference may intuit that such a practice will bias the results and may even lead to “sampling to a foregone conclusion”. In contrast, the Bayesian formalism entails that the decision on whether or not to terminate data collection is irrelevant for the assessment of the strength of the evidence. Here we provide five Bayesian intuitions for why the rational updating of beliefs ought not to depend on the decision when to stop data collection, that is, for the Stopping Rule Principle.”
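One of these intuitions can be made concrete in a few lines: in a binomial setting the stopping rule contributes only a multiplicative constant to the likelihood, and that constant cancels from the Bayes factor. The sketch below (my own illustration, not taken from the preprint) compares H0: θ = 1/2 against H1: θ ~ Uniform(0, 1) for 7 successes in 10 trials, once under fixed-n sampling and once under "stop after the 7th success" sampling:

```python
from math import comb, factorial

def beta_int(a, b):
    """Beta function B(a, b) for positive integer arguments."""
    return factorial(a - 1) * factorial(b - 1) / factorial(a + b - 1)

def bf10(k, n, stopping_constant):
    """Bayes factor for H1: theta ~ Uniform(0, 1) vs H0: theta = 0.5,
    given k successes in n trials. The stopping-rule constant multiplies
    both marginal likelihoods and therefore cancels in the ratio."""
    m1 = stopping_constant * beta_int(k + 1, n - k + 1)  # p(y | H1)
    m0 = stopping_constant * 0.5 ** n                    # p(y | H0)
    return m1 / m0

k, n = 7, 10
bf_fixed_n   = bf10(k, n, comb(n, k))           # binomial: n fixed in advance
bf_stop_at_k = bf10(k, n, comb(n - 1, k - 1))   # negative binomial: stop at kth success
# The two Bayes factors are identical: the evidence does not depend on
# whether the researcher intended to stop at n trials or at k successes.
```

This is exactly the cancellation that frequentist error rates do not enjoy, which is why optional stopping inflates Type I error for p-values but leaves the Bayes factor untouched.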


