
Dennis Lindley’s Second Paradox

What is commonly referred to as “Lindley’s paradox” exposed a deep philosophical divide between frequentist and Bayesian testing, namely that, regardless of the prior distribution used, high-N data that show a significant p-value may at the same time indicate strong evidence in favor of the null hypothesis (Lindley, 1957). This “paradox” is due to Dennis Lindley, one of the most brilliant and influential scholars in statistics.
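To see the divide in action, consider a minimal numerical sketch (a toy setup of my own, not Lindley’s original example): test H0: mu = 0 against H1: mu ~ N(0, 1) for normal data with known unit variance. Because the sample mean is sufficient, the Bayes factor can be computed from it directly.

```python
import math
from scipy import stats

# Toy Lindley setup (assumption, not from the post): y_i ~ N(mu, 1),
# H0: mu = 0 versus H1: mu ~ N(0, 1).
n = 100_000
z = 2.1                              # test statistic just past "significance"
ybar = z / math.sqrt(n)              # sample mean producing this z

p_value = 2 * stats.norm.sf(z)       # two-sided p-value, about .036

# Marginal density of the (sufficient) sample mean under each hypothesis:
# under H0, ybar ~ N(0, 1/n); under H1, ybar ~ N(0, 1/n + 1).
log_m0 = stats.norm.logpdf(ybar, loc=0.0, scale=math.sqrt(1 / n))
log_m1 = stats.norm.logpdf(ybar, loc=0.0, scale=math.sqrt(1 / n + 1))
bf01 = math.exp(log_m0 - log_m1)     # evidence in favor of H0

print(p_value)  # about 0.036: significant at the .05 level
print(bf01)     # about 35: strong evidence FOR the null
```

With n = 100,000 and z = 2.1 a frequentist rejects H0 at the .05 level, while the Bayes factor favors H0 by roughly 35 to 1.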

Lindley was thoroughly and irrevocably a Bayesian, and he never passed up an opportunity to be polemical. For example, he argued that “the only good statistics is Bayesian statistics” (Lindley, 1975), and he quipped that Bradley Efron, who had just received a major prize, may have been “falling over all those bootstraps lying around” (Lindley, 1986). He also trashed Taleb’s Black Swan in great style. Somewhat surprisingly, he also took issue with the Bayes factor.


A Short Writing Checklist for Students

A number of years ago I compiled a writing checklist for students. Its primary purpose was to make my life easier, but the list was meant to be helpful for the students as well. The checklist is here.

My pet peeves: (1) abrupt changes of topic; (2) poorly designed figures; (3) tables and figures that are not described properly in the main text; and (4) ambiguous referents (“this”, “it”).

For more detailed advice I highly recommend the guidelines from Dan Simons. Also, you may enjoy the article I wrote for the APS Observer a decade ago (Wagenmakers, 2009).



Prediction is Easy, Especially About the Past: A Critique of Posterior Bayes Factors

The Misconception

Posterior Bayes factors are a good idea: they provide a measure of evidence but are relatively unaffected by the shape of the prior distribution.

The Correction

Posterior Bayes factors use the data twice, effectively biasing the outcome in favor of the more complex model.

The Explanation

The standard Bayes factor is the ratio of predictive performance between two rival models. For each model M_i, its predictive performance p(y | M_i) is computed as the likelihood for the observed data, averaged over the prior distribution for the model parameters \theta | M_i. Suppressing the dependence on the model, this yields p(y) = \int p(y | \theta) p(\theta) \, \text{d}\theta. Note that, as the words imply, “predictions” are generated from the “prior”. The consequence, of course, is that the shape of the prior distribution influences the predictions, and thereby the Bayes factor. Some consider this prior dependence to be a severe limitation; indeed, it would be more convenient if the observed data could be used to assist the models in making predictions — after all, it is easier to make “predictions” about the past than about the future.
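Numerically, the double use of the data is easy to expose. The sketch below (a toy normal-mean example of my own, with H0: theta = 0 versus H1: theta ~ N(0, 1)) estimates the ordinary Bayes factor by averaging the likelihood over the prior, and the posterior Bayes factor in Aitkin’s style by averaging it over the posterior; the posterior-averaged likelihood is never smaller, so the comparison tilts toward the more complex model.

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

# Toy example (assumption, not from the post): y_i ~ N(theta, 1),
# H0: theta = 0 versus H1: theta ~ N(0, 1).
rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, size=50)          # data generated under H0
n, m = y.size, 100_000                     # m = number of Monte Carlo draws

def log_lik(theta):
    """Log likelihood of the data for each value in the array theta."""
    return stats.norm.logpdf(y[:, None], loc=theta, scale=1.0).sum(axis=0)

log_m0 = stats.norm.logpdf(y, 0.0, 1.0).sum()

# Ordinary Bayes factor: average the likelihood over the PRIOR.
theta_prior = rng.normal(0.0, 1.0, size=m)
log_m1_prior = logsumexp(log_lik(theta_prior)) - np.log(m)
bf10 = np.exp(log_m1_prior - log_m0)

# Posterior Bayes factor: average the likelihood over the POSTERIOR,
# which for this conjugate model is N(sum(y)/(n + 1), 1/(n + 1)).
theta_post = rng.normal(y.sum() / (n + 1), np.sqrt(1 / (n + 1)), size=m)
log_m1_post = logsumexp(log_lik(theta_post)) - np.log(m)
pbf10 = np.exp(log_m1_post - log_m0)

print(bf10, pbf10)  # the posterior Bayes factor is the larger of the two
```

The inflation is systematic, not an accident of this data set: by the Cauchy-Schwarz inequality the posterior-averaged likelihood always exceeds the prior-averaged likelihood, because the posterior concentrates exactly where the likelihood is high.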



Is Polya’s Fundamental Principle Fundamentally Flawed?

One of the famous fallacies in deductive logic is known as “affirming the consequent”. Here is an example of a syllogism gone wrong:

General statement
    When Socrates rises early in the morning, he always has a foul mood.
Specific statement
    Socrates has a foul mood.
Deduction (invalid)
    Socrates has risen early in the morning.

The deduction is invalid because Socrates may be in a foul mood at other times of the day as well. The fallacy takes the general statement “A -> B” (A implies B) and interprets it as “B -> A” (B implies A).
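The invalidity can also be checked mechanically: enumerate every truth assignment consistent with the general statement and look for a world where B holds but A fails. A short sketch:

```python
from itertools import product

# Worlds consistent with the general statement "A implies B".
worlds = [(a, b) for a, b in product([False, True], repeat=2)
          if (not a) or b]

# If "B, therefore A" were valid, no consistent world could have B without A.
counterexamples = [(a, b) for a, b in worlds if b and not a]
print(counterexamples)  # [(False, True)]: B holds, yet A is false
```

The single counterexample is the morning on which Socrates slept in but is in a foul mood anyway.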


Yes, Psychologists Must Change the Way They Analyze Their Data

Back in 2011, Daryl Bem shocked the academic world by publishing an article in which he claimed to present empirical evidence for the existence of precognition, that is, the ability of people to “look into the future” (Bem, 2011). Particularly shocking was the fact that Bem had managed to publish this claim in the flagship journal of social psychology, the Journal of Personality and Social Psychology (JPSP).

After learning about the Bem paper, my colleagues at the Psychological Methods Unit and I wrote a reply titled “Why psychologists must change the way they analyze their data: The case of psi” (Wagenmakers et al., 2011). In this reply, we pointed to the exploratory elements in Bem’s article, and we showed with a Bayesian re-analysis that p-values just below .05 offer little evidence against the null hypothesis (a message recently repeated in the paper “Redefine Statistical Significance”, which is the topic of an ongoing series of posts on this blog).
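One general way to appreciate this point (a bound that holds across a wide class of priors, not the specific re-analysis from our reply) is the Vovk-Sellke maximum p-ratio: for p < 1/e, a p-value can yield odds against the null of at most 1/(-e p log p) (Sellke, Bayarri, & Berger, 2001).

```python
import math

def vovk_sellke_bound(p):
    """Maximum odds against H0 obtainable from a p-value:
    1 / (-e * p * log(p)), valid for 0 < p < 1/e."""
    if not 0 < p < 1 / math.e:
        raise ValueError("bound requires 0 < p < 1/e")
    return 1.0 / (-math.e * p * math.log(p))

print(vovk_sellke_bound(0.049))  # about 2.5: weak evidence at best
```

For p = .049 the bound is only about 2.5 to 1, far short of what most researchers imagine “p < .05” delivers.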


The Story of a Lost Researcher Trying to Learn About Disruptive Behaviors During the Full Moon

Karoline Huth is a first-year student in our Psychology Research Master at the University of Amsterdam. This blog post describes her presentation for the recent course on Bayesian Inference for Psychology. The assignment was to conduct a Bayesian reanalysis of existing data. Kudos to Karoline for being brave enough to share her work here! [EJ]

Presentation Intro and Background

Bayesian statistics is a trending topic in psychological research. Even though its benefits have been widely discussed (e.g., Marsman & Wagenmakers, 2017; Wagenmakers et al., 2018), many researchers still don’t use it. I argue that these researchers fall into one of four subgroups:

  1. Those that are resistant: This group of researchers is the most difficult to address. They know exactly what the Bayesian approach is and how it works. But for whatever reason they stay resistant and prefer the common, frequentist approach.
  2. Those that are oblivious: There are those, unfortunately, who have never heard of the Bayesian approach. How is this possible? This group most likely consists of inactive researchers on the one hand, and, on the other, students who are still taught only the frequentist approach.
  3. Those that are lazy: Even though it has been their New-Year’s Resolution for the last five years, these researchers haven’t managed to learn more about the Bayesian approach and how to implement it. Consequently, they are not sufficiently confident to adopt the Bayesian framework.
  4. Those that are lost: Last of all, this group has heard of the Bayesian approach, is aware of its benefits, and knows the mathematical background. These researchers would like to apply the Bayesian approach but do not know what statistical software to use. In the end they resort to using the common frequentist analysis, saving them hours of programming.

