
Replaying the Tape of Life

In his highly influential book ‘Wonderful Life’, Harvard paleontologist Stephen Jay Gould proposed that evolution is an unpredictable process that can be characterized as

“a staggeringly improbable series of events, sensible enough in retrospect and subject to rigorous explanation, but utterly unpredictable and quite unrepeatable. Wind back the tape of life to the early days of the Burgess Shale; let it play again from an identical starting point, and the chance becomes vanishingly small that anything like human intelligence would grace the replay.” (Gould, 1989, p. 45)

According to Gould himself, the Gedankenexperiment of ‘replaying life’s tape’ addresses “the most important question we can ask about the history of life” (p. 48):

“You press the rewind button and, making sure you thoroughly erase everything that actually happened, go back to any time and place in the past–say, to the seas of the Burgess Shale. Then let the tape run again and see if the repetition looks at all like the original. If each replay strongly resembles life’s actual pathway, then we must conclude that what really happened pretty much had to occur. But suppose that the experimental versions all yield sensible results strikingly different from the actual history of life? What could we then say about the predictability of self-conscious intelligence? or of mammals?” (Gould, 1989, p. 48)



A Bayesian Decalogue: Introduction

With apologies to Bertrand Russell.

John Tukey famously stated that the collective noun for a group of statisticians is a quarrel, and I. J. Good argued that there are at least 46,656 qualitatively different interpretations of Bayesian inference (Good, 1971). With so much Bayesian quarrelling, outsiders may falsely conclude that the field is in disarray. In order to provide a more balanced perspective, here we present a Bayesian decalogue, a list of ten commandments that every Bayesian subscribes to — correction (lest we violate our first commandment): that every Bayesian is likely to subscribe to. The list below is not intended to be comprehensive, and we have tried to steer away from technicalities and to focus instead on the conceptual foundations. In a series of upcoming blog posts we will elaborate on each commandment in turn. Behold our Bayesian decalogue:



A 171-Year-Old Suggestion to Promote Open Science

TL;DR In 1847, Augustus De Morgan suggested that researchers could avoid overselling their work if, every time they made a key claim, they reminded the reader (and themselves) of how confident they were in making that claim. In 1971, Eric Minturn went further and proposed that such confidence could be expressed as a wager, with beneficial side-effects: “Replication would be encouraged. Graduate students would have a new source of money. Hypocrisy would be unmasked.”


The main principles of Open Science are modest: “don’t hide stuff” and “be completely honest”. Indeed, these principles are so fundamental that the term “Open Science” should be considered a pleonasm: openness is a defining characteristic of science, without which peers cannot properly judge the validity of the claims that are presented.

Unfortunately, in actual research practice, there are papers and careers on the line, making it difficult even for well-intentioned researchers to display the kind of scientific integrity that could very well torpedo their academic future. In other words, even though most if not all researchers will agree that it is crucial to be honest, it is not clear how such honesty can be expected, encouraged, and accepted.



Error Rate Schmerror Rate

“All is fair in love and war” — this saying also applies to the eternal struggle between frequentists (those who draw conclusions based on the performance of their procedures in repeated use) and Bayesians (those who quantify uncertainty for the case at hand). One argument that frequentists have hurled at the Bayesian camp is that “Bayesian procedures do not control error rate”. This sounds like a pretty serious accusation, and it may dissuade researchers who are on the fence from learning more about Bayesian inference. “Perhaps,” these researchers argue, “perhaps the Bayesian method for updating knowledge is somehow deficient. After all, it does not control error rate. This sounds pretty scary.”

The purpose of this post is twofold. First, we will show that Bayesian inference does something much better than “controlling error rate”: it provides the probability that you are making an error for the experiment that you actually care about. Second, we will show that Bayesian inference can be used to “control error rate”. Bayesian methods usually do not strive to control error rate, but this is not because of some internal limitation; instead, Bayesians believe that it is simply more relevant to know the probability of making an error for the case at hand than for imaginary alternative scenarios. That is, for inference, Bayesians adopt a “post-data” perspective in which one conditions on what is known. But it is perfectly possible to set up a Bayesian procedure and control error rate at the same time.
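The second point can be made concrete with a minimal simulation sketch: a Bayesian decision rule, applied repeatedly, has a long-run error rate that can be computed and tuned. Everything below is hypothetical and chosen only for illustration — the two point hypotheses, the sample size, and the evidence threshold are not from the post.

```python
import random

random.seed(0)

def bayes_factor_10(k, n, theta1=0.75, theta0=0.5):
    # Likelihood ratio for two simple (point) hypotheses -- a simple
    # stand-in for a Bayes factor; both hypotheses are hypothetical.
    l1 = theta1 ** k * (1 - theta1) ** (n - k)
    l0 = theta0 ** k * (1 - theta0) ** (n - k)
    return l1 / l0

n, threshold, reps = 20, 3.0, 10_000
false_alarms = 0
for _ in range(reps):
    # Generate data under H0 (theta = 0.5), then apply the Bayesian rule:
    # "conclude H1 whenever the evidence exceeds the threshold".
    k = sum(random.random() < 0.5 for _ in range(n))
    if bayes_factor_10(k, n) > threshold:
        false_alarms += 1

rate = false_alarms / reps
print(rate)  # estimated long-run false-alarm rate of the Bayesian rule
```

Raising or lowering the threshold tunes this long-run rate to any desired level — which is precisely the sense in which a Bayesian procedure can also “control error rate”, even though its primary output remains the probability of error for the case at hand.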



The Frequentist Chef

Over the past year or so I’ve been working on a book provisionally titled “Bayesian bedtime stories”. Below is a part of the preface. This post continues the cooking analogy from the previous post.

Like cooking, reasoning under uncertainty is not always easy, particularly when the ingredients leave something to be desired. But unlike cooking, reasoning under uncertainty can be executed like the gods: flawlessly. The only sacrifice required is that one respect the laws of probability. Why would anybody want to do anything else?

This is not the place to dwell on the historical accidents that resulted in the rise, the fall, and the revival of Bayesian inference. But it is important to mention that Bayesian inference, godlike in its purity and elegance, is not the only game in town. In fact, researchers in empirical disciplines –psychology, biology, medicine, economics– predominantly use a different method to draw conclusions from data.



The Bayesian Chef

Over the past year or so I’ve been working on a book provisionally titled “Bayesian bedtime stories”. Below is a part of the preface. The next post continues the cooking analogy by introducing the frequentist chef.

Even though the book [Bayesian Bedtime Stories] addresses a large variety of questions, the method of reasoning is always based on the same principle: contradictions and internal inconsistencies are not allowed. For instance, the propositions ‘Linda is a bank teller’ and ‘Linda is a feminist’ are each necessarily more plausible than the conjunction ‘Linda is a feminist bank teller’. Any method of reasoning that leads to a different conclusion is seriously deficient.
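The conjunction rule invoked here can be illustrated with a toy calculation. The numbers below are hypothetical, chosen only to show that a conjunction can never be more probable than either of its parts:

```python
# Hypothetical probabilities for the Linda example (illustration only).
p_teller = 0.05                 # P(Linda is a bank teller)
p_feminist_given_teller = 0.60  # P(Linda is a feminist, given bank teller)

# Product rule: P(teller and feminist) = P(teller) * P(feminist | teller).
p_feminist_teller = p_teller * p_feminist_given_teller

# The conjunction cannot exceed its parts, whatever numbers are chosen,
# because it is the product of a probability and a number at most 1.
assert p_feminist_teller <= p_teller
print(round(p_feminist_teller, 4))  # 0.03
```

Any intuitive judgment that ranks “feminist bank teller” above “bank teller” therefore violates the product rule, no matter what the individual probabilities are.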

In order for our reasoning to be reasonable, we therefore need to relentlessly exclude from consideration all methods, however beguiling or familiar, that produce internal inconsistencies. When we remove the debris, only a single method remains. This method, known as Bayesian inference, stipulates that when we reason with uncertainty, we should obey the laws of probability theory. Simple and elegant, these laws lay the foundation for a reasoning process that cannot be improved upon; it is perfect — the reasoning process of the gods.1

‘Thou shalt not contradict thyself’. The equation in the clouds shows Bayes’ rule: the only way to reason with uncertainty while not contradicting yourself. Bayes’ rule states that our prior opinions are updated by data in proportion to predictive success: opinions that predicted the data better than average receive a boost in credibility, whereas opinions that predicted the data worse than average suffer a decline (Wagenmakers et al., 2016). In other words, the learning process is driven by relative prediction errors. CC-BY: Artwork by Viktor Beekman, concept by Eric-Jan Wagenmakers.

As may be expected, adopting the reasoning process of a god brings several advantages. One of these advantages is that only the ingredients of the reasoning process are up for debate; that is, one may discuss how exactly a particular model of the world is to be constructed — how ideas are translated to numbers and equations. The proper design of statistical models is an art that requires both training and talent. One may also discuss what data are relevant for the model. But once the ingredients –model and data– are in place, the reasoning process itself unfolds in a single unique way. No discussion about that process is possible. Given the model of the world and the data available, the gods’ method of reasoning is unwavering and will infallibly lead to the same conclusion. That conclusion is misleading only to the extent that the ingredients were misleading.

Let’s emphasize this important advantage by further exploiting the cooking analogy. Suppose that, given particular ingredients, there exists a single unique way of preparing the best meal. You may have poor ingredients at your disposal –six ounces of half-rotten meat, two old potatoes, and a moldy piece of cheese– but given these ingredients, you can follow a few simple rules and create the single best meal, a meal that even Andhrimnir, the Norse god of cooking, could not improve upon. What chef would deviate from these rules and willingly create an inferior dish?2

The gods’ reasoning process is named after the Reverend Thomas Bayes, who first discovered the main theorem. Bayes’ theorem (henceforth Bayes’ rule) outlines how prior (pre-data) uncertainties and beliefs are to be updated to posterior (post-data) uncertainties and beliefs; in short, Bayes’ rule tells us how we ought to learn from experience.

All living creatures learn from experience, and this must be done by updating knowledge in light of prediction errors: gross prediction errors necessitate large adjustments in knowledge, whereas small prediction errors necessitate only minor adjustments.

In general terms, we then have the following rule for learning from experience:

Present uncertainty about the world = Past uncertainty about the world × Predictive updating factor
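This verbal rule can be sketched numerically. The sketch below uses made-up prior probabilities and likelihoods for two hypothetical hypotheses, H1 and H2; it illustrates updating in proportion to predictive success, nothing more:

```python
# Minimal numerical sketch of the verbal updating rule above.
# All numbers, and the hypotheses H1 and H2, are hypothetical.
priors = {"H1": 0.5, "H2": 0.5}        # past uncertainty about the world
likelihoods = {"H1": 0.8, "H2": 0.2}   # how well each hypothesis predicted the data

# Average predictive success across hypotheses: P(data).
p_data = sum(priors[h] * likelihoods[h] for h in priors)

# Predictive updating factor for each hypothesis: likelihood / P(data).
# (Rounded for clean display.)
posteriors = {h: round(priors[h] * likelihoods[h] / p_data, 10) for h in priors}

# H1 predicted the data better than average, so its credibility rises;
# H2 predicted worse than average, so its credibility declines.
print(posteriors)  # {'H1': 0.8, 'H2': 0.2}
```

Note how the update depends only on *relative* predictive success: a hypothesis whose likelihood equals the average P(data) would keep exactly its prior credibility.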

The bottom line is that Bayes’ rule allows its followers to use the power of probability theory to learn about the things they are unsure of. Nevertheless, Bayesian inference is not without serious competition. In the next post, we will examine the frequentist chef (uh-oh, indigestion alert) and compare the two.

Footnotes

1 As documented in many science fiction stories, the universe ceases to exist at the exact moment when its creator becomes aware of an internal inconsistency.

2 We purposefully ignore the fact that Andhrimnir only prepares a single dish. At Godchecker, the entry on Andhrimnir states: “He’s an Aesir chef with only one house special. He takes the Cosmic Boar. He kills it. He cooks it. The Gods eat it. It returns to life in the night ready for use in the next set meal. It’s a real pig of a life for the boar. A little variety in the kitchen would work wonders.”

References

Wagenmakers, E.-J., Morey, R. D., & Lee, M. D. (2016). Bayesian benefits for the pragmatic researcher. Current Directions in Psychological Science, 25, 169-176.

About The Author

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is a professor at the Psychological Methods Group at the University of Amsterdam.

