Did Alan Turing Invent the Bayes factor?

The otherwise excellent article by Consonni et al. (2018), discussed last week, makes the following claim:

“…the initial use of the BF can be attributed both to Jeffreys and Turing who introduced it independently around the same time (Kass & Raftery, 1995)” (Consonni et al., 2018, p. 638)

This claim recently resurfaced on Twitter as well.

But is this really true?
First we consult the rightly famous Kass & Raftery paper, and we find the following statement:

“The terminology [i.e., Bayes factor — EJ] is apparently due to Good 1958, who attributed the method to Turing in addition to, and independently of, Jeffreys at about the same time; see Good 1983.”

Jack Good was a disciple of Alan Turing at Bletchley Park, and Good has emphasized Turing’s use of the Bayes factor in several publications. However, the claim that Turing ought to be credited with the invention of the Bayes factor appears to be flat-out wrong, for at least two reasons. As stated in Etz & Wagenmakers (2017):

“When the hypotheses in question are simple point hypotheses, the Bayes factor reduces to a likelihood ratio, a method of measuring evidential strength which dates back as far as Johann Lambert in 1760 (Lambert and DiLaura, 2001) and Daniel Bernoulli in 1777 (Kendall et al., 1961; see Edwards, 1974 for a historical review); C. S. Peirce had specifically called it a measure of ‘weight of evidence’ as far back as 1878 (Peirce, 1878; see Good, 1979). Alan Turing also independently developed likelihood ratio tests using Bayes’ theorem, deriving decibans to describe the intensity of the evidence, but this approach was again based on the comparison of simple versus simple hypotheses. For example, Turing used decibans when decrypting the Enigma codes to infer the identity of a given letter in German military communications during World War II (Turing, 1941/2012). As Good (1979) notes, Jeffreys’s Bayes factor approach to testing hypotheses “is especially ‘Bayesian’ [because] either [hypothesis] is composite” (p. 393).”

In other words, Turing did not use a “real” Bayes factor, which involves an averaging process over the prior distribution; that is the first reason. Second, Turing used his tests quite some years after Haldane (1932) and Jeffreys (1935), with conceptual work by Wrinch & Jeffreys done as early as 1921. Again from Etz & Wagenmakers (2017):

“Turing started his Maths Tripos at King’s College in 1931, graduated BA in 1934, and was a Fellow of King’s College from 1935–1936. Anthony (A. W. F.) Edwards speculates that Turing might have attended some of Jeffreys’s lectures while at Cambridge, where he would have learned about details of Bayes’ theorem (Edwards, 2015, personal communication). According to the college’s official record of lecture lists, Jeffreys’s lectures ’Probability’ started in 1935 (or possibly Easter Term 1936), and in the year 1936 they were in the Michaelmas (i.e., Fall) Term. Turing would have had the opportunity of attending them in the Easter Term or the Michaelmas Term in 1936 (Edwards, 2015, personal communication). Jack (I. J.) Good has also provided speculation about their potential connection, “Turing and Jeffreys were both Cambridge dons, so perhaps Turing had seen Jeffreys’s use of Bayes factors; but, if he had, the effect on him must have been unconscious for he never mentioned this influence and he was an honest man. He had plenty of time to tell me the influence if he was aware of it, for I was his main statistical assistant for a year” (Good, 1980, p. 26). Later, in an interview with David Banks, Good remarks that “Turing might have seen [Wrinch and Jeffreys’s] work, but probably he thought of [his likelihood ratio tests] independently” (Banks, 1996, p. 11). Of course, Turing could have learned about Bayes’ theorem from any of the standard probability books at the time, such as Todhunter (1858), but the potential connection is of interest. For more detail on Turing’s work on cryptanalysis, see Zabell (2012).”

The conclusion is that Turing did not use “real” Bayes factors, and employed his likelihood ratios some time after articles promoting Bayes factors had appeared in the statistical literature.
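The distinction between Turing’s simple-versus-simple likelihood ratio and a “real” Bayes factor, in which the likelihood is averaged over a prior distribution, can be made concrete with a small sketch. The coin-flipping numbers below are purely illustrative and are not taken from any of the papers discussed:

```python
import math

# Hypothetical data for illustration: n coin tosses, k heads.
n, k = 10, 7

def binom_lik(theta, n, k):
    """Binomial likelihood of k heads in n tosses for success probability theta."""
    return math.comb(n, k) * theta**k * (1 - theta)**(n - k)

# Turing-style simple-vs-simple comparison: two point hypotheses,
# theta = 0.7 versus theta = 0.5. No prior distribution is involved.
likelihood_ratio = binom_lik(0.7, n, k) / binom_lik(0.5, n, k)

# Jeffreys-style Bayes factor: the composite alternative assigns theta a
# uniform prior, so its marginal likelihood AVERAGES the likelihood over
# the prior (approximated here by a simple grid average).
grid = [i / 10000 for i in range(1, 10000)]
marginal_h1 = sum(binom_lik(t, n, k) for t in grid) / len(grid)

# With a uniform prior the exact marginal likelihood is 1 / (n + 1).
bayes_factor_10 = marginal_h1 / binom_lik(0.5, n, k)
```

The averaging step is what makes the comparison “especially Bayesian” in Good’s phrase: the composite hypothesis is scored by how well it predicts the data on average across its prior, not at a single best-case parameter value.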

However, let’s continue our historical excavation and examine Good (1983), as referenced in Kass & Raftery (1995). This book contains the inspiring paper “Explicativity, Corroboration, and the Relative Odds of Hypotheses (#846)” (pp. 149-170; the paper was originally published in Synthese in 1975, see references). On page 159, Good introduces the Bayes factor as follows:

“In succinct modern notation, the weight of evidence in favor of H provided by E given G is

W(H:E|G) = log [O(H|E ∙ G) / O(H|G)],

the logarithm of the factor by which the odds of H are multiplied when E is observed, given G as background information. Turing (1941) called this factor “the factor in favor of H.” Since, by two applications of Bayes’ theorem, this factor is seen to equal P(E|H ∙ G)/P(E|not-H ∙ G) it may also be called the Bayes factor in favor of H (provided by E given G), or perhaps the Bayes-Jeffreys-Turing factor. The theorem that this factor was equal to the probability ratio, or simple likelihood ratio, was mentioned by Wrinch and Jeffreys (1921, p. 387).”
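Good’s identity, that the odds-updating factor equals the simple likelihood ratio, can be verified numerically. The probabilities below are hypothetical, chosen only to illustrate the bookkeeping; the deciban conversion follows Turing’s convention of ten times the base-10 logarithm:

```python
import math

# Hypothetical numbers: two simple hypotheses H and not-H, an observation E,
# with background knowledge G left implicit.
p_h = 0.3                  # prior probability P(H)
p_e_given_h = 0.8          # P(E | H)
p_e_given_not_h = 0.2      # P(E | not-H)

# Posterior probability of H via Bayes' theorem.
p_h_given_e = (p_e_given_h * p_h) / (
    p_e_given_h * p_h + p_e_given_not_h * (1 - p_h))

prior_odds = p_h / (1 - p_h)
posterior_odds = p_h_given_e / (1 - p_h_given_e)

# The factor by which the odds of H are multiplied when E is observed ...
odds_factor = posterior_odds / prior_odds
# ... equals the simple likelihood ratio, exactly as the quoted passage states.
likelihood_ratio = p_e_given_h / p_e_given_not_h

# Weight of evidence is the log of that factor; a deciban is 10 * log10 of it.
weight_of_evidence = math.log(odds_factor)
decibans = 10 * math.log10(odds_factor)
```

With these numbers the odds of H are multiplied by 4, about 6 decibans of evidence in favor of H, regardless of the prior probability chosen.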

It appears, therefore, that Good used the name “Bayes factor” because the updating factor follows immediately from Bayes’ theorem. This is of course a recurrent theme on BayesianSpectacles; see for instance the post “Bayes factors for those who hate Bayes factors”.

References

Consonni, G., Fouskakis, D., Liseo, B., & Ntzoufras, I. (2018). Prior distributions for objective Bayesian analysis. Bayesian Analysis, 13, 627-679.

Etz, A., & Wagenmakers, E.-J. (2017). J. B. S. Haldane’s contribution to the Bayes factor hypothesis test. Statistical Science, 32, 313-329.

Good, I. J. (1958). Significance tests in parallel and in series. Journal of the American Statistical Association, 53, 799-813.

Good, I. J. (1975). Explicativity, corroboration, and the relative odds of hypotheses. Synthese, 30, 39-73.

Good, I. J. (1983). Good thinking: The foundations of probability and its applications. Minneapolis: University of Minnesota Press.

Haldane, J. B. S. (1932). A note on inverse probability. Mathematical Proceedings of the Cambridge Philosophical Society, 28, 55-61.

Jeffreys, H. (1935). Some tests of significance, treated by the theory of probability. Proceedings of the Cambridge Philosophical Society, 31, 203-222.

Kass, R. E., & Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association, 90, 773-795.

Wrinch, D., & Jeffreys, H. (1921). On certain fundamental principles of scientific inquiry. Philosophical Magazine, 42, 369-390.

About The Author

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam.