# Error Rate Schmerror Rate

The saying “all is fair in love and war” also applies to the eternal struggle between frequentists (those who draw conclusions based on the performance of their procedures in repeated use) and Bayesians (those who quantify uncertainty for the case at hand). One argument that frequentists have hurled at the Bayesian camp is that “Bayesian procedures do not control error rate”. This sounds like a serious accusation, and it may dissuade researchers who are on the fence from learning more about Bayesian inference. “Perhaps,” these researchers reason, “the Bayesian method for updating knowledge is somehow deficient. After all, it does not control error rate. That sounds pretty scary.”

The purpose of this post is twofold. First, we will show that Bayesian inference does something much better than “controlling error rate”: it provides the probability that you are making an error for the experiment that you actually care about. Second, we will show that Bayesian inference can be used to “control error rate”. Bayesian methods usually do not strive to control error rate, but this is not because of some internal limitation; instead, Bayesians believe that it is simply more relevant to know the probability of making an error for the case at hand than for imaginary alternative scenarios. That is, for inference, Bayesians adopt a “post-data” perspective in which one conditions on what is known. But it is perfectly possible to set up a Bayesian procedure and control error rate at the same time.
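To make the second point concrete, here is a minimal simulation sketch. The hypotheses, sample size, and 5% threshold are all made up for illustration: a point null H0 (rate = 0.5) and a uniform alternative H1 are given equal prior probability, and we “reject H0” whenever its posterior probability drops below 5%. Among such rejections, the fraction that are mistaken then stays below 5%, so this Bayesian rule also controls an error rate.

```python
import random
from math import comb

random.seed(1)

def post_h0(k, n):
    """Posterior probability of H0 given k successes in n trials.

    H0: rate = 0.5; H1: rate ~ Uniform(0, 1); equal prior odds.
    Under H1 the beta-binomial marginal likelihood is exactly 1 / (n + 1).
    """
    m0 = comb(n, k) * 0.5 ** n   # likelihood of the data under H0
    m1 = 1.0 / (n + 1)           # marginal likelihood under H1
    return m0 / (m0 + m1)

n, reps = 50, 20000
errors = decisions = 0
for _ in range(reps):
    h0_true = random.random() < 0.5
    theta = 0.5 if h0_true else random.random()
    k = sum(random.random() < theta for _ in range(n))
    if post_h0(k, n) < 0.05:     # decision rule: reject H0 below 5%
        decisions += 1
        errors += h0_true        # an error: we rejected a true H0
print(errors / decisions)        # stays below the 0.05 bound
```

Because the posterior probability here is the actual probability that H0 is true given the data (the simulation matches the generative model), the long-run fraction of mistaken rejections cannot exceed the 5% threshold.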

# The Frequentist Chef

Over the past year or so I’ve been working on a book provisionally titled “Bayesian bedtime stories”. Below is a part of the preface. This post continues the cooking analogy from the previous post.

Like cooking, reasoning under uncertainty is not always easy, particularly when the ingredients leave something to be desired. But unlike cooking, reasoning under uncertainty can be executed like the gods: flawlessly. The sacrifice that is required is only that one respects the laws of probability. Why would anybody want to do anything else?

This is not the place to recount the historical accidents that resulted in the rise, the fall, and the revival of Bayesian inference. But it is important to mention that Bayesian inference, godlike in its purity and elegance, is not the only game in town. In fact, researchers in empirical disciplines (psychology, biology, medicine, economics) predominantly use a different method to draw conclusions from data.

# The Bayesian Chef

Over the past year or so I’ve been working on a book provisionally titled “Bayesian bedtime stories”. Below is a part of the preface. The next post continues the cooking analogy by introducing the frequentist chef.

Even though the book [Bayesian Bedtime Stories] addresses a large variety of questions, the method of reasoning is always based on the same principle: contradictions and internal inconsistencies are not allowed. For instance, the propositions ‘Linda is a bank teller’ and ‘Linda is a feminist’ are each necessarily more plausible than the conjunction ‘Linda is a feminist bank teller’. Any method of reasoning that leads to a different conclusion is seriously deficient.
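The Linda example follows directly from the product rule of probability. The numbers below are purely hypothetical, chosen only to illustrate why the conjunction can never beat a single proposition:

```python
# Hypothetical plausibilities, for illustration only.
p_teller = 0.10                   # P(Linda is a bank teller)
p_feminist_given_teller = 0.40    # P(feminist | bank teller)

# Product rule: P(teller and feminist) = P(teller) * P(feminist | teller)
p_both = p_teller * p_feminist_given_teller

# Since P(feminist | teller) <= 1, the conjunction can never be
# more plausible than 'Linda is a bank teller' on its own.
assert p_both <= p_teller
print(round(p_both, 2))   # 0.04, versus 0.10 for the single proposition
```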

In order for our reasoning to be reasonable we therefore need to exclude relentlessly from consideration all methods, however beguiling or familiar, that produce internal inconsistencies. When we remove the debris only a single method remains. This method, known as Bayesian inference, stipulates that when we reason with uncertainty, we should obey the laws of probability theory. Simple and elegant, these laws lay the foundation for a reasoning process that cannot be improved upon; it is perfect — the reasoning process of the gods.1

‘Thou shalt not contradict thyself’. The equation in the clouds shows Bayes’ rule: the only way to reason with uncertainty while not contradicting yourself. Bayes’ rule states that our prior opinions are updated by data in proportion to predictive success: opinions that predicted the data better than average receive a boost in credibility, whereas opinions that predicted the data worse than average suffer a decline (Wagenmakers et al., 2016). In other words, the learning process is driven by relative prediction errors. CC-BY: Artwork by Viktor Beekman, concept by Eric-Jan Wagenmakers.

As may be expected, adopting the reasoning process of a god brings several advantages. One of these advantages is that only the ingredients of the reasoning process are up for debate; that is, one may discuss how exactly a particular model of the world is to be constructed — how ideas are translated to numbers and equations. The proper design of statistical models is an art that requires both training and talent. One may also discuss what data are relevant for the model. But once the ingredients –model and data– are in place, the reasoning process itself unfolds in a single unique way. No discussion about that process is possible. Given the model of the world and the data available, the gods’ method of reasoning is unwavering and will infallibly lead to the same conclusion. That conclusion is misleading only to the extent that the ingredients were misleading.

Let’s emphasize this important advantage by further exploiting the cooking analogy. Suppose that, given particular ingredients, there exists a single unique way of preparing the best meal. You may have poor ingredients at your disposal –six ounces of half-rotten meat, two old potatoes, and a moldy piece of cheese– but given these ingredients, you can follow a few simple rules and create the single best meal, a meal that even Andhrimnir, the Norse god of cooking, could not improve upon. What chef would deviate from these rules and willingly create an inferior dish?2

The gods’ reasoning process is named after the reverend Thomas Bayes, who first discovered the main theorem. What Bayes’ theorem (henceforth Bayes’ rule) accomplishes is to outline how prior (pre-data) uncertainties and beliefs should be updated to posterior (post-data) uncertainties and beliefs; in short, Bayes’ rule tells us how we ought to learn from experience.

All living creatures learn from experience, and this must be done by updating knowledge in light of prediction errors: gross prediction errors necessitate large adjustments in knowledge, whereas small prediction errors necessitate only minor adjustments.

In general terms, we then have the following rule for learning from experience:

Posterior beliefs = Prior beliefs × Predictive updating factor
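This updating factor is each opinion’s predictive success relative to the average predictive success across all opinions. A tiny numerical sketch, with made-up prior plausibilities and likelihoods:

```python
# Two rival opinions about the world; all numbers are hypothetical.
prior = {"H1": 0.5, "H2": 0.5}     # prior plausibility of each opinion
lik   = {"H1": 0.25, "H2": 0.10}   # how well each opinion predicted the data

# Average predictive success, weighted by prior plausibility
avg = sum(prior[h] * lik[h] for h in prior)

# Posterior = prior * updating factor, where the updating factor
# is predictive success relative to the average
posterior = {h: prior[h] * (lik[h] / avg) for h in prior}

# H1 predicted better than average and gains credibility;
# H2 predicted worse than average and loses credibility.
print(posterior["H1"] > prior["H1"], posterior["H2"] < prior["H2"])  # True True
```

Note that the posterior plausibilities automatically sum to one: the average predictive success in the denominator is exactly the normalizing constant of Bayes’ rule.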

The bottom line is that Bayes’ rule allows its followers to use the power of probability theory to learn about the things they are unsure of. Nevertheless, Bayesian inference is not without serious competition. In the next post, we will examine the frequentist chef (uh-oh, indigestion alert) and compare the two.

#### Footnotes

1 As documented in many science fiction stories, the universe ceases to exist at the exact moment when its creator becomes aware of an internal inconsistency.

2 We purposefully ignore the fact that Andhrimnir only prepares a single dish. At Godchecker, the entry on Andhrimnir states: “He’s an Aesir chef with only one house special. He takes the Cosmic Boar. He kills it. He cooks it. The Gods eat it. It returns to life in the night ready for use in the next set meal. It’s a real pig of a life for the boar. A little variety in the kitchen would work wonders.”

#### References

Wagenmakers, E.-J., Morey, R. D., & Lee, M. D. (2016). Bayesian benefits for the pragmatic researcher. Current Directions in Psychological Science, 25, 169-176.

### Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam.

# Let’s Poke a Pizza: A New Cartoon to Explain the Strength of Evidence in a Bayes Factor

In a previous post we discussed the interpretation of the strength of evidence coming from a Bayes factor. For concreteness, let’s take a binomial data set and suppose we have encountered 62 successes and 38 failures in a sample of 100 trials. We can easily enter these data directly using the JASP Summary Stats Module:
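While JASP reports the Bayes factor directly, the underlying calculation can be sketched in a few lines. The sketch assumes a point null of 0.5 under H0 and a uniform Beta(1,1) prior on the success rate under H1; the value JASP reports may differ depending on the prior settings chosen.

```python
from math import comb

k, n = 62, 100   # 62 successes in 100 trials

# Marginal likelihood under H0: success rate fixed at 0.5
m0 = comb(n, k) * 0.5 ** n

# Marginal likelihood under H1: integrating the binomial likelihood
# over a uniform Beta(1,1) prior on the rate gives exactly 1 / (n + 1)
m1 = 1.0 / (n + 1)

bf10 = m1 / m0   # Bayes factor for H1 over H0; roughly 2.2 here
print(bf10)
```

A Bayes factor of roughly 2.2 means the data are about twice as likely under H1 as under H0: evidence in favor of H1, but only weak evidence.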