
The Lab’s First Compelling Replication of a Counterintuitive Result

The small plastic dome containing a die in the popular game “Mens Erger Je Niet!” (“Don’t Get So Annoyed!”) causes a bias: the die tends to land on the side opposite to the one it started on. This was not our initial hypothesis, however…

The 106-year-old game “Mens Erger Je Niet!” (a German invention) involves players tossing a die and then moving a set of tokens around the board. The winner is the first player to bring all of their tokens home. The English version is known as Ludo, and the American versions are Parcheesi and Trouble. The exclamation “Mens Erger Je Niet!” translates to “Don’t Get So Annoyed!”, because the game is genuinely frustrating: your token may not even be able to enter the game (because you fail to throw the required 6 to start), or it may be almost home, only to be “hit” by someone else’s token and sent all the way back to its starting position.

Some modern versions of the game come with a “die machine”: instead of throwing the die, players hit a small plastic dome, which makes the die inside jump up, bounce against the dome, spin around, and land. But is this dome-die fair? One of us (EJ), who had experience with this machine, felt that although the pips may come up about equally often, there would be a sequential dependency in the outcomes. EJ’s original hypothesis was motivated by the observation that the dome sometimes misfires: it is depressed but the die does not jump. In that case the outcome simply repeats, so a “1” should be more likely to be followed by a “1” than by a different number, a “2” by a “2”, and so on. Some of this action can be seen in the gif below:
(more…)
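For readers who want a feel for the hypothesis, here is a minimal R sketch of how one could tabulate first-order transitions and check whether outcomes tend to repeat. The rolls below are simulated placeholders (not the data discussed in the full post), and the classical chi-square test is only a quick first look, not the Bayesian analysis we would normally advocate.

# Sketch with simulated placeholder rolls (not the lab's actual data)
set.seed(2020)
rolls <- sample(1:6, size = 300, replace = TRUE)

# Count how often each outcome is followed by each other outcome
transitions <- table(previous = head(rolls, -1), current = tail(rolls, -1))
transitions

# Under the "no sequential dependency" hypothesis every row should follow the
# same distribution; a chi-square test of independence gives a rough check
chisq.test(transitions)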


Bayesian Scepsis About SWEPIS: Quantifying the Evidence That Early Induction of Labour Prevents Perinatal Deaths

To paraphrase Mark Twain: “to someone with a hammer, everything looks like a nail”. And so, having implemented the Bayesian A/B test (Kass & Vaidyanathan, 1992) in R and in JASP (Gronau, Raj, & Wagenmakers, 2019), we have been on a mission to apply the methodology to various clinical trials. In contrast to most psychology experiments, lives are actually on the line in clinical trials, and we believe our Bayesian A/B test offers insights over and above the usual “p<.05, the treatment effect is present” and “p>.05, the treatment effect is absent”. A collection of these brief Bayesian reanalyses can be found here.
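As an illustration of what such a reanalysis looks like in practice, the sketch below uses the abtest R package that accompanies Gronau, Raj, & Wagenmakers (2019). The event counts are invented, and the argument names are quoted from memory, so the package documentation should be consulted before relying on them.

# Hedged sketch of a Bayesian A/B test on hypothetical trial counts
# install.packages("abtest")   # package accompanying Gronau, Raj, & Wagenmakers (2019)
library(abtest)

# y = number of events, n = group size; the numbers are invented for illustration
data <- list(y1 = 12, n1 = 1000,   # control group
             y2 = 5,  n2 = 1000)   # treatment group

fit <- ab_test(data = data)   # default normal prior on the log odds ratio
print(fit)                    # Bayes factors / posterior model probabilities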

Apart from the merits and demerits of our specific analysis, it strikes us as undesirable that important clinical trials are analyzed in only one way — that is, based on the efforts of a single data analyst, who operates within a single statistical framework, using a single statistical test, drawing a specific set of all-or-none conclusions. Instead, it seems prudent to present, alongside the original article, a series of brief comments that contain alternative statistical analyses; if these confirm the original result, this inspires trust in the conclusion; if these alternative analyses contradict the original result, this is grounds for caution and a deeper reflection on what the data tell us. Either way, we learn something important that we did not know before.
(more…)


This Statement by Sir Ronald Fisher Will Shock You

Sir Ronald Aylmer Fisher (1890-1962) was one of the greatest statisticians of all time. However, Fisher was also stubborn, belligerent, and a eugenicist. When it comes to shocking remarks, one does not need to dig deep:

  1. In a dissenting opinion on the 1950 UNESCO report “The race question”, Fisher argued that “Available scientific knowledge provides a firm basis for believing that the groups of mankind differ in their innate capacity for intellectual and emotional development”.
  2. Fisher strongly, repeatedly, and persistently opposed the conclusion that smoking is a cause of lung cancer.
  3. Fisher felt that “The theory of inverse probability [i.e., Bayesian statistics] is founded upon an error, and must be wholly rejected.” (for details see Aldrich, 2008).
  4. In The Design of Experiments, Fisher argued that “it should be noted that the null hypothesis is never proved or established, but is possibly disproved, in the course of experimentation. Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis.” (1935, p. 16). This confession should be shocking, because it means that we cannot quantify evidence for a scientific law. As Jeffreys (1961, p. 377) pointed out, in Fisher’s procedure the law (i.e., the null hypothesis) “is merely something set up like a coconut to stand until it is hit”.

The next section discusses another shocking statement, one that has been conveniently forgotten and flies in the face of current statistical practice.
(more…)


Preprint: Robust Bayesian Meta-Analysis: Addressing Publication Bias with Model-Averaging

This post is a teaser for Maier, Bartoš, & Wagenmakers (2020). Robust Bayesian meta-analysis: Addressing publication bias with model-averaging. Preprint available on PsyArXiv: https://psyarxiv.com/u4cns

 

Abstract

“Meta-analysis is an important quantitative tool for cumulative science, but its application is frustrated by publication bias. In order to test and adjust for publication bias, we extend model-averaged Bayesian meta-analysis with selection models. The resulting Robust Bayesian Meta-analysis (RoBMA) methodology does not require all-or-none decisions about the presence of publication bias, is able to quantify evidence in favor of the absence of publication bias, and performs well under high heterogeneity. By model-averaging over a set of 12 models, RoBMA is relatively robust to model misspecification, and simulations show that it outperforms existing methods. We demonstrate that RoBMA finds evidence for the absence of publication bias in Registered Replication Reports and reliably avoids false positives. We provide an implementation in R and JASP so that researchers can easily apply the new methodology to their own data.”
(more…)
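For readers who want to try the method on their own data, here is a minimal sketch using the RoBMA R package mentioned in the abstract. The effect sizes are invented, and the argument names reflect my recollection of the package interface, so the documentation should be checked before use.

# Hedged sketch: robust Bayesian meta-analysis on made-up effect sizes
# install.packages("RoBMA")
library(RoBMA)

d  <- c(0.30, 0.15, 0.42, 0.05, 0.26)   # invented standardized effect sizes
se <- c(0.12, 0.10, 0.15, 0.09, 0.11)   # invented standard errors

fit <- RoBMA(d = d, se = se, seed = 1)  # model-averages over effect, heterogeneity, and bias models
summary(fit)                            # evidence for the effect, heterogeneity, and publication bias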


Struggling with de Finetti’s Representation Theorem

De Finetti’s Representation Theorem is among the most celebrated results in Bayesian statistics. As I mentioned in an earlier post, I have never really understood its significance. A host of excellent writers have tried to explain why the result is so important [e.g., Lindley (2006, pp. 107-109), Diaconis & Skyrms (2018, pp. 122-125), and the various works by Zabell], but their words just went over my head. Yes, I understand that for an exchangeable series, the probability of the data can be viewed as a weighted mixture over a prior distribution, but this just seemed like an application of Bayes’ rule: you integrate out the parameter to obtain the result. So what’s the big deal?
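For concreteness, the standard statement for an infinite exchangeable sequence of 0/1 observations is that there exists a (unique) mixing distribution μ on [0, 1] such that, for every n and every binary sequence,

\[
P(X_1 = x_1, \ldots, X_n = x_n) \;=\; \int_0^1 \theta^{\sum_{i=1}^n x_i}\,(1-\theta)^{\,n-\sum_{i=1}^n x_i}\,\mathrm{d}\mu(\theta).
\]

That is, the joint distribution behaves as if the observations were conditionally i.i.d. Bernoulli(θ) with θ drawn from the prior μ, and the same μ works for every n.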

Recently I stumbled across a 2004 article by Phil Dawid, one of the most reputable (and original) Bayesian statisticians. In his article, Dawid provides a relatively accessible introduction to the importance of de Finetti’s theorem. In the section “Exchangeability”, Dawid writes:
(more…)


PROBABILITY DOES NOT EXIST (Part V): De Finetti’s 1974 Preface (Part III)

This is the third and final post on the 1974 preface of Bruno de Finetti’s masterpiece “Theory of Probability”, which is missing from the reprint of the 1970 book. Below, the use of italics is always as in the original text.

De Finetti’s Preface Continued [Annotated]

“It would be impossible, even if space permitted, to trace back the possible development of my ideas, and their relationships with more or less similar positions held by other authors, both past and present. A brief survey is better than nothing, however (even though there is an inevitable arbitrariness in the selection of names to be mentioned).
     I am convinced that my basic ideas go back to the years of High School as a result of my preference for the British philosophers Locke, Berkeley and, above all, Hume! I do not know to what extent the Italian school textbooks and my own interpretations were valid: I believe that my work based on exchangeability corresponds to Hume’s ideas, but some other scholars do not agree. I was also favourably impressed, a few years later, by the ideas of Pragmatism, and the related notions of operational definitions in Physics. I particularly liked the Pragmatism of Giovanni Vailati—who somehow ‘Italianized’ James and Peirce—and, as for operationalism, I was very much struck by Einstein’s relativity of ‘simultaneity’, and by Mach and (later) Bridgman.
     As far as Probability is concerned, the first book I encountered was that of Czuber. (Before 1950—my first visit to the USA—I did not know any English, but only German and French.) For two or three years (before and after the ‘Laurea’ in Mathematics, and some application of probability to research on Mendelian heredity), I attempted to find valid foundations for all the theories mentioned, and I reached the conclusion that the classical and frequentist theories admitted no sensible foundation, whereas the subjectivistic one was fully justified on a normative-behaviouristic basis.”

(more…)

