Powered by JASP

Book Review of “Bayesian Probability for Babies”

“Bayesian Probability for Babies” is a book that explains Bayes’ rule through a simple story about cookies. I battle-tested the book on my two-year-old son Theo (admittedly no longer a baby), and he seemed somewhat intrigued by the idea of candy-covered cookies, although the more subtle points of the story must have eluded him. Theo gives the book three out of five stars: the cookies are a good idea, but the book has no dinosaurs.


Progesterone in Women with Bleeding in Early Pregnancy: Absence of Evidence, Not Evidence of Absence

Available at https://psyarxiv.com/etk7g/, this is a comment on a recent article in the New England Journal of Medicine (Coomarasamy et al., 2019). A response from the authors will follow at a later date.

A recent trial assessed the effectiveness of progesterone in preventing miscarriages. The number of live births was 74.7% (1513/2025) in the progesterone group and 72.5% (1459/2013) in the placebo group (p=.08). The authors concluded: “The incidence of adverse events did not differ significantly between the groups.”
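As a rough check on these numbers, an unadjusted two-proportion z-test can be sketched as follows (a minimal sketch using only the counts above; the trial’s reported p=.08 presumably reflects its own analysis model and need not match exactly):

```python
from math import sqrt, erfc

# Live-birth counts reported in the trial: (progesterone, placebo)
successes = (1513, 1459)
totals = (2025, 2013)

p1, p2 = successes[0] / totals[0], successes[1] / totals[1]
pooled = sum(successes) / sum(totals)  # pooled proportion under the null

# Standard error of the difference in proportions under the pooled null
se = sqrt(pooled * (1 - pooled) * (1 / totals[0] + 1 / totals[1]))
z = (p1 - p2) / se
p_two_sided = erfc(abs(z) / sqrt(2))  # P(|Z| > z) for a standard normal

print(f"z = {z:.2f}, two-sided p = {p_two_sided:.3f}")
```

This unadjusted sketch gives z ≈ 1.61 and p ≈ .11, in the same ballpark as the reported p=.08: a non-significant result, which, as the comment’s title stresses, is absence of evidence rather than evidence of absence.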


Preprint: Laypeople Can Predict Which Social Science Studies Replicate

This post is an extended synopsis of Hoogeveen, S., Sarafoglou, A., & Wagenmakers, E.-J. (2019). Laypeople Can Predict Which Social Science Studies Replicate. Preprint available on PsyArXiv: https://psyarxiv.com/egw9d.

Abstract

Large-scale collaborative projects recently demonstrated that several key findings from the social science literature could not be replicated successfully. Here we assess the extent to which a finding’s replication success relates to its intuitive plausibility. Each of 27 high-profile social science findings was evaluated by 233 people without a PhD in psychology. Results showed that these laypeople predicted replication success with above-chance performance (i.e., 58%). In addition, when laypeople were informed about the strength of evidence from the original studies, this boosted their prediction performance to 67%. We discuss the prediction patterns and apply signal detection theory to disentangle detection ability from response bias. Our study suggests that laypeople’s predictions contain useful information for assessing the probability that a given finding will replicate successfully.
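The signal detection decomposition mentioned in the abstract separates how well raters discriminate replicable from non-replicable findings (sensitivity, d′) from their overall tendency to respond “will replicate” (criterion, c). A minimal sketch of that computation, using hypothetical hit and false-alarm rates rather than the values reported in the preprint:

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion) for given hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # standard-normal quantile function
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Hypothetical rates: 70% of replicable studies correctly predicted to
# replicate, 55% of non-replicable studies also predicted to replicate.
d, c = sdt_measures(0.70, 0.55)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

With these illustrative rates, d′ is modestly positive (above-chance discrimination) and c is negative (a bias toward predicting replication); the preprint applies the same decomposition to the actual laypeople ratings.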


Book Review of “Thinking in Bets”, Part 2 of 2

This week’s post continues the review of Thinking in Bets by Annie Duke. As I indicated last week, the book is fun and informative, and I gave it 4 out of 5 stars. Consider for instance the following footnote (p. 90):


“I lifted these [absurd reasons people give for their car accidents – EJ] from an article by Robert MacCoun (described in the following paragraph) and repeat them without guilt. First, they are incredibly amusing and informative; the greater crime would be not sharing them. Second, MacCoun acknowledged that he got them from the book Anguished English, written by my father, Richard Lederer.”

Last week I also outlined three traps that the book presents for the unaware reader. To this list I want to add only a single trap, and it relates to the author’s conceptualization of hindsight bias.


Book Review of “Thinking in Bets”, Part 1 of 2

Written by Annie “The Duchess of Poker” Duke, Thinking in Bets is a national bestseller, and for good reason. The writing style is direct and to-the-point, and the advice is motivated by concrete examples taken from the author’s own experience. For instance, one anecdote concerns a bet among a group of friends on whether or not one of them, “Ira the Whale”, could eat 100 White Castle burgers in a single sitting. David Grey, one of the author’s friends, bet $200 on the Whale:


Redefine Statistical Significance XVIII: A Shockingly Honest Counterargument

Background: the 2018 article “Redefine Statistical Significance” suggested that it is prudent to treat p-values just below .05 with a grain of salt, as such p-values provide only weak evidence against the null. By threatening the status quo, this modest proposal ruffled some feathers and elicited a number of counterarguments. As discussed in this series of posts, none of these counterarguments pass muster. Recently, however, Johnson et al. (in press, Injury) presented an empirical counterargument that we believe is new. This counterargument is brutally honest and somewhat shocking (to us, anyway).

Johnson and colleagues start off by defining the p-value and its purpose:

“The primary purpose of using a P value is to minimize type I errors — erroneous conclusions made about differences between groups when no such difference truly exists. The type I error rate is often specified a priori at 0.05, meaning that there is a 1 in 20 chance — or a 5% risk — that the difference detected is because of chance rather than attributed to the effects of the intervention.”



