Powered by JASP

Posted on Jan 17th, 2019

One of the famous fallacies in deductive logic is known as “affirming the consequent”. Here is an example of a syllogism gone wrong:

General statement

When Socrates rises early in the morning, he always has a foul mood.

Specific statement

Socrates has a foul mood.

Deduction (invalid)

Socrates has risen early in the morning.

The deduction is invalid because Socrates may be in a foul mood at other times of the day as well. The fallacy takes the general statement “A -> B” (A implies B) and interprets it as “B -> A” (B implies A).
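The invalidity can also be checked mechanically. Here is a minimal truth-table sketch in Python (an illustration, not code from the post): it enumerates all truth assignments and collects the cases where “A -> B” holds but “B -> A” fails.

```python
from itertools import product

def implies(p, q):
    # Material implication: "p -> q" is false only when p is true and q is false.
    return (not p) or q

# Find assignments where "A -> B" is true but "B -> A" is false.
counterexamples = [(a, b) for a, b in product([False, True], repeat=2)
                   if implies(a, b) and not implies(b, a)]

print(counterexamples)
```

The single counterexample is A false, B true: Socrates is in a foul mood (B) without having risen early (A), exactly the case the fallacy overlooks.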


Posted on Jan 10th, 2019

Back in 2011, Daryl Bem shocked the academic world by publishing an article in which he claimed to present empirical evidence for the existence of *precognition*, that is, the ability of people to “look into the future” (Bem, 2011). Particularly shocking was the fact that Bem had managed to publish this claim in the flagship journal of social psychology, the *Journal of Personality and Social Psychology* (JPSP).

After learning about the Bem paper, together with several colleagues at the Psychological Methods Unit we wrote a reply titled “Why psychologists must change the way they analyze their data: The case of psi” (Wagenmakers et al., 2011). In this reply, we pointed to the exploratory elements in Bem’s article, and we showed with a Bayesian re-analysis how *p*-values just below .05 offer little evidence against the null hypothesis (a message recently repeated in the paper “Redefine Statistical Significance”, which is the topic of an ongoing series of posts on this blog).

Posted on Jan 3rd, 2019

*Karoline Huth is a first-year student in our Psychology Research Master at the University of Amsterdam. This blog post describes her presentation for the recent course on Bayesian Inference for Psychology. The assignment was to conduct a Bayesian reanalysis of existing data. Kudos to Karoline for being brave enough to share her work here! [EJ]*

Bayesian statistics is a trending topic in psychological research. Even though its benefits have been widely discussed (e.g., Marsman & Wagenmakers, 2017; Wagenmakers et al., 2018), many researchers still don’t use it. I argue these researchers fall in one of four subgroups:

*Those that are resistant*: This group of researchers is the most difficult to address. They know exactly what the Bayesian approach is and how it works. But for whatever reason they stay resistant and prefer the common, frequentist approach.

*Those that are oblivious*: There are those, unfortunately, that have never heard of the Bayesian approach. How is this possible? This group most likely consists of, on the one hand, inactive researchers and, on the other, students that are still solely taught the frequentist approach.

*Those that are lazy*: Even though it has been their New Year’s resolution for the last five years, these researchers haven’t managed to learn more about the Bayesian approach and how to implement it. Consequently, they are not sufficiently confident to adopt the Bayesian framework.

*Those that are lost*: Last of all, this group has heard of the Bayesian approach, is aware of its benefits, and knows the mathematical background. These researchers would like to apply the Bayesian approach but do not know what statistical software to use. In the end they resort to using the common frequentist analysis, saving them hours of programming.

Posted on Dec 27th, 2018

The Galton board or quincunx is a fascinating device that provides a compelling demonstration of one of the main laws of statistics. In the device, balls are dropped from above onto a series of pegs that are organized in rows of increasing width. Whenever a ball hits a particular peg, it can drop either to the right or to the left, presumably with a chance of 50% (but more about this later). When many balls are dropped, most of them remain somewhere near the middle of the device, but a few balls experience a successive run of movements in the same direction and therefore drift off to the sides.
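The board’s behaviour is easy to sketch in a few lines of Python (a hypothetical simulation, not code from the post): a ball’s final bin is simply the number of rightward bounces it experienced, a Binomial(rows, 0.5) draw, which is why most balls pile up near the middle.

```python
import random
from collections import Counter

def galton(n_balls=10_000, n_rows=12, p_right=0.5, seed=1):
    """Simulate a Galton board: each ball bounces right with probability
    p_right at each of n_rows pegs; its final bin is the count of rightward
    bounces (a Binomial(n_rows, p_right) draw)."""
    rng = random.Random(seed)
    return Counter(
        sum(rng.random() < p_right for _ in range(n_rows))
        for _ in range(n_balls)
    )

counts = galton()
for bin_index in range(13):
    print(bin_index, "#" * (counts[bin_index] // 100))
```

With a fair 50% bounce probability the histogram of bins approximates the bell shape that Galton used the device to demonstrate; changing `p_right` skews the pile toward one side.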

Posted on Dec 20th, 2018

*We thank Alexander Ly for constructive comments on an earlier draft of this post.*

Bayes’ rule tells us how to learn from experience, that is, by updating our knowledge about the world using *relative predictive performance*: hypotheses that predicted the data relatively well receive a boost in credibility, whereas hypotheses that predicted the data relatively poorly suffer a decline (e.g., Wagenmakers et al., 2016). This predictive updating principle holds for propositions, hypotheses, models, and parameters: every time, our uncertainty is updated using the same mathematical operation. Take for instance the learning process involving just two models, $\mathcal{M}_1$ and $\mathcal{M}_2$ (but note that these may equally well refer to parameter values, say $\theta_1$ and $\theta_2$, within a single model). The odds form of Bayes’ rule yields

$$
\frac{p(\mathcal{M}_1 \mid \text{data})}{p(\mathcal{M}_2 \mid \text{data})} = \frac{p(\mathcal{M}_1)}{p(\mathcal{M}_2)} \times \frac{p(\text{data} \mid \mathcal{M}_1)}{p(\text{data} \mid \mathcal{M}_2)},
$$

that is, posterior odds equal prior odds times the ratio of predictive performance (the Bayes factor).

Posted on Dec 10th, 2018

When data analysts operate within different statistical frameworks (e.g., frequentist versus Bayesian, emphasis on estimation versus emphasis on testing), how does this impact the qualitative conclusions that are drawn for real data? To study this question empirically we selected from the literature two simple scenarios (one involving a comparison of two proportions, the other a Pearson correlation) and asked four teams of statisticians to provide a concise analysis and a qualitative interpretation of the outcome. The results showed considerable overall agreement; nevertheless, this agreement did not appear to diminish the intensity of the subsequent debate over which statistical framework is more appropriate to address the questions at hand.