A Personal Impression of the ASA Symposium on Statistical Inference: A World Beyond p<.05


I (Alex Etz) recently attended the American Statistical Association’s “Symposium on Statistical Inference” (SSI) in Bethesda, Maryland. In this post I will summarize its contents and share some of my personal highlights from the SSI.

The purpose of the SSI was to follow up on the historic ASA statement on p-values and statistical significance. The ASA statement on p-values was written by a relatively small group of influential statisticians and lays out a series of principles regarding what they see as the current consensus about p-values. Notably, there were mainly “don’ts” in the ASA statement. For instance: “P-values do not measure the probability that the studied hypothesis is true, nor the probability that the data were produced by random chance alone”; “Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold”; “A p-value, or statistical significance, does not measure the size of an effect or the importance of a result” (emphasis mine).


An Interactive App for Designing Informative Experiments

Bayesian inference offers the pragmatic researcher a series of perks (Wagenmakers, Morey, & Lee, 2016). For instance, Bayesian hypothesis tests can quantify support in favor of a null hypothesis, and they allow researchers to track evidence as data accumulate (e.g., Rouder, 2014).

However, Bayesian inference also confronts researchers with new challenges, for instance concerning the planning of experiments. Within the Bayesian paradigm, is there a procedure that resembles a frequentist power analysis? (yes, there is!)
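The Bayesian analogue of a power analysis can be sketched by simulation: generate many hypothetical data sets at a candidate sample size and check how often the Bayes factor reaches a desired evidence threshold. The snippet below is a minimal illustrative sketch, not the procedure from the app discussed in the post; it uses the BIC approximation to the Bayes factor for a one-sample test (a rough stand-in for a full default Bayesian t-test), and the function names, the effect size, and the threshold of 10 are all my own assumptions.

```python
import math
import random

def bic_bf10(xs):
    """BIC approximation to the Bayes factor BF10 for testing mu = 0
    against mu != 0 with normal data (a crude stand-in for a JZS test)."""
    n = len(xs)
    mean = sum(xs) / n
    var0 = sum(x * x for x in xs) / n            # ML variance under H0: mu = 0
    var1 = sum((x - mean) ** 2 for x in xs) / n  # ML variance under H1
    # Shared constants (n*log(2*pi) + n) cancel in the BIC difference.
    bic0 = n * math.log(var0) + 1 * math.log(n)  # H0 estimates sigma only
    bic1 = n * math.log(var1) + 2 * math.log(n)  # H1 estimates mu and sigma
    return math.exp((bic0 - bic1) / 2)

def design_analysis(n, effect, n_sims=2000, threshold=10, seed=1):
    """Fraction of simulated experiments of size n whose Bayes factor
    reaches `threshold` when the true standardized effect is `effect`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        xs = [rng.gauss(effect, 1.0) for _ in range(n)]
        if bic_bf10(xs) >= threshold:
            hits += 1
    return hits / n_sims

# How often would n = 80 observations yield BF10 >= 10 if the true effect is 0.5?
print(design_analysis(n=80, effect=0.5))
```

Repeating this for a grid of sample sizes gives a curve analogous to a frequentist power curve, with the Bayes factor threshold playing the role that alpha and power play in the classical calculation.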


Redefine Statistical Significance Part X: Why the Point-Null Will Never Die

In our previous post, we discussed the paper “Abandon Statistical Significance”, which is a response to the paper “Redefine Statistical Significance” that has dominated the contents of this blog so far. The Abandoners include Andrew Gelman and Christian Robert, and on their own blogs they’ve each posted a reaction to our Bayesian Spectacles post. Below is a short response to their reaction to the discussion of the reply to the original paper. 🙂


Redefine Statistical Significance Part IX: Gelman and Robert Join the Fray, But Are Quickly Chased by Two Kangaroos

Andrew Gelman and Christian Robert are two of the most opinionated and influential statisticians in the world today. Fear and anguish strike into the hearts of the luckless researchers who find the fruits of their labor discussed on the pages of the duo’s blogs: how many fatal mistakes will be uncovered, how many flawed arguments will be exposed? Personally, we celebrate every time our work is put through the Gelman-grinder or meets the Robert-razor and, after a thorough evisceration, receives the label “not completely wrong”, or –thank the heavens– “Meh”. Whenever this occurs, friends send us enthusiastic emails along the lines of “Did you see that? Your work is discussed on the Gelman/Robert blog and he did not hate it!” (true story).


Redefine Statistical Significance Part VIII: How 88 Authors Overlooked a Giraffe and Sailed Straight into an Iceberg

The key point of the paper “Redefine Statistical Significance” is that p-just-below-.05 results should be approached with care. They should perhaps evoke curiosity, but they should not receive the blanket endorsement that is implicit in the bold claim “we reject the null hypothesis”. The statistical argument is straightforward and has been known for over half a century: for p-just-below-.05 results, the alternative hypothesis does not convincingly outpredict the null hypothesis, not even when we cheat and cherry-pick the alternative hypothesis that is inspired by the data.

The claim that p-just-below-.05 results are evidentially weak was recently echoed by the American Statistical Association when they stated that “a p-value near 0.05 taken by itself offers only weak evidence against the null hypothesis” (Wasserstein and Lazar, 2016, p. 132). Extensive mathematical arguments are provided in Berger and Delampady, 1987; Berger & Sellke, 1987; Edwards, Lindman, and Savage, 1963; Johnson, 2013; and Sellke, Bayarri, and Berger, 2001 — these papers are relevant and influential; in our opinion, anybody who critiques or praises the p-value ought to be intimately aware of their contents.
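The core quantitative point from Sellke, Bayarri, and Berger (2001) can be checked in a few lines: for p < 1/e, the Bayes factor against the null hypothesis is bounded above by 1 / (-e · p · ln p), no matter how favorably the alternative is chosen. The sketch below simply evaluates that bound; the function name is my own.

```python
import math

def bf_bound(p):
    """Upper bound on the Bayes factor against H0 over all alternatives
    (Sellke, Bayarri & Berger, 2001); valid for p < 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("bound holds only for 0 < p < 1/e")
    return 1.0 / (-math.e * p * math.log(p))

# Even the most favorable, data-dredged alternative cannot beat this:
print(round(bf_bound(0.05), 2))  # roughly 2.46
```

So a p-value of .05 corresponds to, at best, odds of about 2.5 to 1 against the null, which is exactly the sense in which p-just-below-.05 results are evidentially weak.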


Redefine Statistical Significance Part VII: Bursting the Bubble

The paper Redefine Statistical Significance reveals an inconvenient truth: p-values near .05 are evidentially weak. Such p-values should not be used “for sanctification, for the preservation of conclusions from all criticism, for the granting of an imprimatur.” (Tukey, 1962, p. 13 — NB: Tukey was referring to statistical procedures in general, not to p-values or p-just-below-.05 results specifically).

Unfortunately, in the current academic environment, a p<.05 result is meant to accomplish exactly this: sanctification. After all, as a field, we have agreed that p-values below .05 are “significant”, and that in such cases “the null hypothesis can be rejected”. How rude then, how inappropriate, that some critics still wish to dispute the findings! Do they think that they are above the law?


