
Laplace’s Demon

if there could be any mortal who could observe with his mind the interconnection of all causes, nothing indeed would escape him. For he who knows the causes of things that are to be necessarily knows all the things that are going to be. (…) For the things which are going to be do not come into existence suddenly, but the passage of time is like the unwinding of a rope, producing nothing new but unfolding what was there at first.
– Cicero, de Divinatione, 44 BC


A deterministic universe consists of causal chains that link past, present, and future in an unbreakable bond; the fact that you are reading these words right now is an inevitable consequence of domino-like cause-and-effect relationships that date back all the way to the Big Bang. If this is true, Laplace argued, then complete knowledge of the universe at any particular time allows one to perfectly predict the future and flawlessly retrace the past. The hypothetical intelligence that would possess such complete knowledge has become known as ‘Laplace’s demon’.


Limitations of Bayesian Leave-One-Out Cross-Validation for Model Selection

This post is an extended synopsis of a preprint that is available on PsyArXiv.

“[…] if you can’t do simple problems, how can you do complicated ones?” — Dennis Lindley (1985, p. 65)

Cross-validation (CV) is increasingly popular as a generic method to adjudicate between mathematical models of cognition and behavior. To measure model generalizability, CV quantifies out-of-sample predictive performance, and the CV preference goes to the model that best predicted the out-of-sample data. The advantages of CV include theoretical simplicity and practical feasibility. Despite its prominence, however, the limitations of CV are often underappreciated. We demonstrate with three concrete examples how Bayesian leave-one-out cross-validation (referred to as LOO) can yield conclusions that appear undesirable.
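To make the LOO procedure concrete, here is a minimal sketch, not the method from the preprint: each observation is held out in turn, the model is fit to the remaining data, and the held-out point is scored by its log predictive density; the model with the higher summed score wins. For simplicity this uses plug-in (maximum-likelihood) predictive densities rather than full Bayesian posterior predictives, and the two toy models (a fixed-mean "null" Gaussian versus a free-mean Gaussian) are illustrative assumptions, not the models analyzed in the paper.

```python
import math

def normal_logpdf(x, mu, sigma):
    """Log density of x under a Normal(mu, sigma) distribution."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def loo_score(data, fit, logpdf):
    """Leave-one-out score: each point is predicted from a model
    fit to the remaining n-1 points; return the summed log density."""
    total = 0.0
    for i, x in enumerate(data):
        rest = data[:i] + data[i + 1:]
        params = fit(rest)          # refit without observation i
        total += logpdf(x, *params) # score the held-out observation
    return total

# Toy model 0 ("null"): mean fixed at 0, sd fixed at 1.
fit_null = lambda d: (0.0, 1.0)
# Toy model 1: mean estimated from the training fold, sd fixed at 1.
fit_free_mean = lambda d: (sum(d) / len(d), 1.0)

# Hypothetical data generated near zero, where the null model should do well.
data = [0.1, -0.3, 0.2, 0.05, -0.1, 0.15]
score_null = loo_score(data, fit_null, normal_logpdf)
score_free = loo_score(data, fit_free_mean, normal_logpdf)
```

For this near-zero sample, `score_null` exceeds `score_free`: the extra flexibility of estimating the mean costs predictive accuracy on held-out points. Bayesian LOO works on the same principle but replaces the plug-in density with the posterior predictive density averaged over the posterior.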

