
The Man Who Rewrote Conditional Probability

The universal notation for “the probability of A given B” is p(A | B). We were surprised to learn that the vertical stroke was first introduced by none other than… Sir Harold Jeffreys! At least Jeffreys himself seems to think so, and the sentiment is echoed on the website “Earliest Uses of Symbols in Probability and Statistics”. Specifically, on page 15 of his brilliant book “Scientific Inference” (1931), Jeffreys introduces the vertical stroke notation:

And on page 25 of the even more brilliant book “Theory of Probability” (1939), Jeffreys explains the history of the notation in more detail:

So the man who invented the Bayesian hypothesis test (together with Dorothy Wrinch; but see Etz & Wagenmakers, 2017), the man who inferred that the earth’s core was not solid, and the man who first proposed the vertical stroke notation for conditional probability are one and the same. For more background on Harold Jeffreys’s contributions to probability theory we recommend the riveting book by Howie (2002).

Someone Badly Needs to Fix the Wikipedia Entry for Harold Jeffreys

As an aside, the Wikipedia entry on Jeffreys hardly does justice to his groundbreaking contributions in astronomy, geophysics, and statistics. The second sentence of the Wiki entry (accessed January 28, 2019) reads: “The book that he and Bertha Swirles wrote Theory of Probability, which first appeared in 1939, played an important role in the revival of the Bayesian view of probability.” This is incorrect: Sir Harold and Lady Jeffreys co-authored the 1946 book Methods of Mathematical Physics, but not Theory of Probability.

The Wiki entry, which is embarrassingly short, later mentions: “The textbook Probability Theory: The Logic of Science, written by the physicist and probability theorist Edwin T. Jaynes, is dedicated to Jeffreys. The dedication reads, ‘Dedicated to the memory of Sir Harold Jeffreys, who saw the truth and preserved it.’” A nice tidbit, no doubt, but does it really warrant mention in a one-page entry on one of the most impressive scientists of the past century? Moreover, a substantial portion of the entry is spent bemoaning Jeffreys’s reluctance to accept the continental drift hypothesis (e.g., Jeffreys, 1976, pp. 481-492), which again is hardly representative of the many contributions he made throughout his career.

One last example. The Wiki entry states: “It is only through an appendix to the third edition of Jeffreys’ book Scientific Inference that we know about Mary Cartwright’s method of proving that the number π is irrational.” Again, a nice tidbit, but surely not worth more than a bare mention even in an exhaustive biography. If tidbits need to be included at all, we respectfully suggest mentioning that it was Jeffreys who introduced the vertical stroke notation for conditional probability.

References

Etz, A., & Wagenmakers, E.-J. (2017). J. B. S. Haldane’s contribution to the Bayes factor hypothesis test. Statistical Science, 32, 313-329.

Howie, D. (2002). Interpreting probability: Controversies and developments in the early twentieth century. Cambridge: Cambridge University Press.

Jeffreys, H. (1931). Scientific Inference. Cambridge: Cambridge University Press.

Jeffreys, H. (1939). Theory of Probability. Oxford: Oxford University Press.

Jeffreys, H. (1976). The Earth: Its Origin, History and Physical Constitution. Cambridge: Cambridge University Press.

About The Authors

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam.

Maarten Marsman

Maarten Marsman is assistant professor at the Psychological Methods Group at the University of Amsterdam.


Dennis Lindley’s Second Paradox

What is commonly referred to as “Lindley’s paradox” exposed a deep philosophical divide between frequentist and Bayesian testing: for any fixed prior distribution, data from a sufficiently large sample can yield a significant p-value and at the same time constitute strong evidence in favor of the null hypothesis (Lindley, 1957). This “paradox” is due to Dennis Lindley, one of the most brilliant and influential scholars in statistics.1
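To make the tension concrete, here is a minimal numerical sketch (toy numbers of our own choosing, not taken from Lindley, 1957): a test of θ = 0.5 for a binomial proportion with a very large sample, where under H1 the rate is assigned a uniform prior.

```python
# A minimal sketch of the Jeffreys-Lindley paradox with invented numbers:
# a "significant" p-value coexisting with strong evidence for the null.
from scipy import stats

n = 100_000          # hypothetical sample size
k = 50_316           # hypothetical number of successes, chosen so that p ~ .046

# Two-sided p-value from the normal approximation to the binomial test.
z = (k - n * 0.5) / (n * 0.5 * 0.5) ** 0.5
p_value = 2 * (1 - stats.norm.cdf(abs(z)))

# Marginal likelihoods: point null theta = 0.5 versus a uniform Beta(1, 1)
# prior on theta; the latter integrates to 1 / (n + 1) for every k.
m0 = stats.binom.pmf(k, n, 0.5)
m1 = 1 / (n + 1)
bf01 = m0 / m1       # Bayes factor in favor of the null hypothesis

print(f"p = {p_value:.3f}, BF01 = {bf01:.1f}")
# -> p is "significant" (~ .046), yet BF01 ~ 34: strong evidence FOR the null.
```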

Lindley was thoroughly and irrevocably a Bayesian, never passing up an opportunity to be polemical. For example, he argued that “the only good statistics is Bayesian statistics” (Lindley, 1975) and suggested that Bradley Efron, who had just received a big prize, may have been “falling over all those bootstraps lying around” (Lindley, 1986). He also trashed Taleb’s Black Swan in great style. Somewhat surprisingly, he also took issue with the Bayes factor.2

Specifically, in 1997 Lindley argued that Bayes factor proponents could find themselves simultaneously saying that (a) it is more likely to be a tall man than a tall woman, (b) it is more likely to be a short man than a short woman, and (c) it is more likely to be a woman than a man. With his characteristic wit, he concludes that “one hardly advances the respect with which statisticians are held in society by making such declarations.” He further points out that this scenario is conceptually related to Simpson’s paradox.
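To see how statements (a), (b), and (c) can hold at the same time, here is a minimal sketch with invented numbers (ours, not Lindley’s or the comment’s), reading “tall” and “short” as two of three height categories and the statements as joint probabilities in a population:

```python
# Hypothetical joint probabilities P(height category, sex) for a population in
# which men's heights are more spread out and women's are concentrated in the
# middle. The numbers are invented purely for illustration.
joint = {
    ("tall",   "man"): 0.20, ("tall",   "woman"): 0.10,
    ("medium", "man"): 0.05, ("medium", "woman"): 0.35,
    ("short",  "man"): 0.20, ("short",  "woman"): 0.10,
}

p_man   = sum(p for (h, s), p in joint.items() if s == "man")    # 0.45
p_woman = sum(p for (h, s), p in joint.items() if s == "woman")  # 0.55

assert joint[("tall",  "man")] > joint[("tall",  "woman")]   # (a) tall man  > tall woman
assert joint[("short", "man")] > joint[("short", "woman")]   # (b) short man > short woman
assert p_woman > p_man                                        # (c) woman     > man
```

On this reading the three statements are mutually consistent: men’s heights are simply more spread out, whereas women dominate the middle category. This is in line with the conclusion below that the seemingly paradoxical property is in fact intuitive.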

In a short commentary (Dablander, van den Bergh, & Wagenmakers, 2018), we illustrate Lindley’s critique with a simple example. We conclude that Lindley’s phrasing is imprecise, and that the paradoxical property he points out is actually intuitive. All told, it seems the Bayes factor is quite alright.

References

Dablander, F., van den Bergh, D., & Wagenmakers, E.-J. (2018). Another paradox? A comment on Lindley (1997). PsyArXiv preprint.

DeGroot, M. H. (1982). Comment. Journal of the American Statistical Association, 77(378), 336-339.

Lindley, D. V. (1957). A statistical paradox. Biometrika, 44(1/2), 187-192.

Lindley, D. V. (1975). The future of statistics: a Bayesian 21st century. Advances in Applied Probability, 7, 106-115.

Lindley, D. V. (1986). Comment. The American Statistician, 40(1), 6-7.

Lindley, D. V. (1991). Making Decisions. John Wiley & Sons.

Lindley, D. V. (1997). Some comments on Bayes factors. Journal of Statistical Planning and Inference, 61(1), 181-189.

Lindley, D. V. (2006). Understanding uncertainty. John Wiley & Sons.

Footnotes

1 As per Stigler’s law of eponymy, the “paradox” was discovered earlier by Harold Jeffreys. Bartlett pointed out an (inconsequential) error in Lindley’s exposition, and it is thus sometimes called the “Jeffreys-Lindley-Bartlett” paradox.

2 An interview with Dennis Lindley is available online. He wrote two accessible books on probability and decision making (Lindley, 1991, 2006).

About The Authors

Fabian Dablander

Fabian Dablander is a PhD candidate at the Psychological Methods Group of the University of Amsterdam. You can find him on Twitter @fdabl.

Don van den Bergh

Don van den Bergh is a PhD candidate at the Psychological Methods Group of the University of Amsterdam.

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam.

A Short Writing Checklist for Students

A number of years ago I compiled a writing checklist for students. Its primary purpose was to make my life easier, but the list was meant to be helpful for the students as well. The checklist is here.

My pet peeves: (1) abrupt changes of topic; (2) poorly designed figures; (3) tables and figures that are not described properly in the main text; and (4) ambiguous referents (“this”, “it”).

For more detailed advice I highly recommend the guidelines from Dan Simons. Also, you may enjoy the article I wrote for the APS Observer a decade ago (Wagenmakers, 2009).

About The Author

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam.


Prediction is Easy, Especially About the Past: A Critique of Posterior Bayes Factors

The Misconception

Posterior Bayes factors are a good idea: they provide a measure of evidence but are relatively unaffected by the shape of the prior distribution.

The Correction

Posterior Bayes factors use the data twice, effectively biasing the outcome in favor of the more complex model.

The Explanation

The standard Bayes factor is the ratio of predictive performance between two rival models. For each model M_i, its predictive performance p(y | M_i) is computed as the likelihood of the observed data, averaged over the prior distribution p(\theta | M_i) of the model parameters. Suppressing the dependence on the model, this yields p(y) = \int p(y | \theta) \, p(\theta) \, \text{d}\theta. Note that, as the words imply, “predictions” are generated from the “prior”. The consequence, of course, is that the shape of the prior distribution influences the predictions, and thereby the Bayes factor. Some consider this prior dependence to be a severe limitation; indeed, it would be more convenient if the observed data could be used to assist the models in making predictions. After all, it is easier to make “predictions” about the past than about the future.
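As a concrete illustration of the contrast, here is a sketch based on a toy binomial example of our own (it assumes the usual definition of the posterior Bayes factor, in which the likelihood is averaged over the posterior rather than over the prior):

```python
# Sketch (toy setup, not from the original post): compare the standard marginal
# likelihood, which averages the likelihood over the PRIOR, with the posterior
# Bayes factor ingredient, which averages it over the POSTERIOR and therefore
# uses the data twice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.binomial(1, 0.5, size=50)      # hypothetical data, true rate 0.5
k, n = y.sum(), y.size

# M0: point null theta = 0.5.  M1: theta ~ Beta(1, 1), i.e., a uniform prior.
m0 = stats.binom.pmf(k, n, 0.5)

theta_prior = rng.beta(1, 1, size=100_000)              # draws from the prior
m1_prior = stats.binom.pmf(k, n, theta_prior).mean()    # standard marginal likelihood

theta_post = rng.beta(1 + k, 1 + n - k, size=100_000)   # draws from the posterior
m1_post = stats.binom.pmf(k, n, theta_post).mean()      # posterior-averaged likelihood

print("standard  BF01:", m0 / m1_prior)  # genuine prior prediction
print("posterior BF01:", m0 / m1_post)   # data used twice: tilted toward M1
```

Because the posterior concentrates on parameter values that fit the data well, the posterior-averaged value for M1 is never smaller than the prior-averaged one, so the resulting “Bayes factor” is tilted toward the more complex model, exactly the double use of the data flagged above.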

(more…)


Is Polya’s Fundamental Principle Fundamentally Flawed?

One of the famous fallacies in deductive logic is known as “affirming the consequent”. Here is an example of a syllogism gone wrong:

General statement: When Socrates rises early in the morning, he always has a foul mood.
Specific statement: Socrates has a foul mood.
Deduction (invalid): Socrates has risen early in the morning.

The deduction is invalid because Socrates may be in a foul mood at other times of the day as well. The fallacy takes the general statement “A -> B” (A implies B) and interprets it as “B -> A” (B implies A).
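A tiny truth-table check makes the point explicit (illustration only; A stands for “Socrates rose early”, B for “Socrates has a foul mood”):

```python
# "A implies B" being true does not make "B implies A" true; the single
# counterexample is A false, B true.
from itertools import product

def implies(p, q):
    # Material implication: "p -> q" is false only when p is true and q is false.
    return (not p) or q

for a, b in product([False, True], repeat=2):
    print(f"A={a!s:5} B={b!s:5}  A->B={implies(a, b)!s:5}  B->A={implies(b, a)!s:5}")
# The row A=False, B=True has A->B true but B->A false: inferring A from
# "A -> B" together with B (affirming the consequent) is therefore invalid.
```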
(more…)


Yes, Psychologists Must Change the Way They Analyze Their Data

Back in 2011, Daryl Bem shocked the academic world by publishing an article in which he claimed to present empirical evidence for the existence of precognition, that is, the ability of people to “look into the future” (Bem, 2011). Particularly shocking was the fact that Bem had managed to publish this claim in the flagship journal of social psychology, the Journal of Personality and Social Psychology (JPSP).

After learning about the Bem paper, together with several colleagues at the Psychological Methods Unit, we wrote a reply titled “Why psychologists must change the way they analyze their data: The case of psi” (Wagenmakers et al., 2011). In this reply, we pointed to the exploratory elements in Bem’s article, and we showed with a Bayesian re-analysis how p-values just below .05 offer little evidence against the null hypothesis (a message recently repeated in the paper “Redefine Statistical Significance”, which is the topic of an ongoing series of posts on this blog).

(more…)

