Rosenkrantz on Severity and the Problem of Old Evidence

In the previous post I discussed the problem of old evidence and wrote “It is highly likely that my argument is old, or even beside the point. I am not an expert on this particular problem.” Sure enough, Andrew Fowlie kindly alerted me to the following book chapter by Roger Rosenkrantz:

Rosenkrantz, R. D. (1983). Why Glymour is a Bayesian. In Earman, J. (Ed.), Testing Scientific Theories (pp. 69-98). Minneapolis: University of Minnesota Press.

This chapter is an amusing, relevant, and well-written response to Clark Glymour’s chapter “Why I am not a Bayesian”:

Glymour, C. (1981). Why I am not a Bayesian. In Theory and Evidence (pp. 63-93). Chicago: University of Chicago Press.

Rosenkrantz starts as follows:
“In the third chapter of his book Theory and Evidence, Clark Glymour explains why he is not a Bayesian. I shall attempt to show, on the contrary, that he is a Bayesian, more so than many who march under that banner.”

From here, the chapter only gets better. Rosenkrantz dissects and defangs the problem of old evidence, and I find myself in complete agreement. Moreover, Rosenkrantz describes how the concept of a severe test can be accommodated within a Bayesian framework. This is exactly the point of a recent paper that I have co-authored with Noah van Dongen and Jan Sprenger:

van Dongen, N., Wagenmakers, E.-J., & Sprenger, J. (in press). A Bayesian perspective on severity: Risky predictions and specific hypotheses. Psychonomic Bulletin & Review.

Had we realized the importance of the Rosenkrantz chapter, we would certainly have given it due credit. To underscore the relevance of Rosenkrantz’s work and to give an impression of his writing style, I offer the following example:

“It is unfortunate that a misreading of my (possibly obscure) 1976 paper on simplicity prevented Glymour from appreciating that Bayesian analysis delivers precisely what his own intuitions demand. He says that I fail to show “that in curve-fitting the average likelihood of a linear hypothesis is greater than the average likelihood of a quadratic or higher degree hypothesis.” But of course I don’t want to show that, for it isn’t true! What can be shown is that the average likelihood of the quadratic family will be higher than that of the linear family when the data fit the quadratic hypothesis sufficiently better than the linear one, whereas the latter will enjoy higher average likelihood when the two families fit equally well.” (Rosenkrantz, 1983, p. 82)
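For readers who would like to see the logic of this passage in action, here is a minimal sketch (my own illustration, not Rosenkrantz’s analysis) that compares the marginal or “average” likelihoods of a linear and a quadratic regression model. The sample size, the known noise standard deviation, the zero-mean Gaussian priors on the coefficients, and the simulated data sets are all assumed for the sake of the example.

```python
# Illustrative sketch: marginal ("average") likelihoods of a linear vs. a
# quadratic polynomial model, with zero-mean Gaussian priors on the
# coefficients and a known noise standard deviation (all assumed values).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2024)
n, sigma, tau = 30, 1.0, 5.0                 # sample size, noise sd, prior sd
x = np.linspace(-1, 1, n)

def log_marginal_likelihood(y, degree):
    """Log marginal likelihood of a degree-d polynomial model: integrating
    out the coefficients gives y ~ N(0, sigma^2 I + tau^2 X X^T)."""
    X = np.vander(x, degree + 1, increasing=True)   # columns 1, x, ..., x^degree
    cov = sigma**2 * np.eye(n) + tau**2 * X @ X.T
    return multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)

# Data set 1: generated from a straight line, so both families fit about equally well.
y_line = 0.5 + 2.0 * x + rng.normal(0.0, sigma, n)
# Data set 2: clear curvature, so the quadratic family fits sufficiently better.
y_curve = 0.5 + 2.0 * x + 6.0 * x**2 + rng.normal(0.0, sigma, n)

for label, y in [("straight-line data", y_line), ("curved data", y_curve)]:
    print(f"{label}: log ML linear = {log_marginal_likelihood(y, 1):.1f}, "
          f"log ML quadratic = {log_marginal_likelihood(y, 2):.1f}")
```

With settings along these lines, the quadratic family pays a penalty for its extra flexibility unless the data reward that flexibility with a sufficiently better fit, which is precisely the pattern Rosenkrantz describes.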

I tried to find additional information on Rosenkrantz but came up short. There are no images, no relevant pages, nothing. The only old evidence that supports the hypothesis of Rosenkrantz’s existence appears to be his scientific output… 

About the author

Eric-Jan Wagenmakers


Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam.