
Rejoinder: More Limitations of Bayesian Leave-One-Out Cross-Validation

In a recent article for Computational Brain & Behavior, we discussed several limitations of Bayesian leave-one-out cross-validation (LOO) for model selection. Our contribution attracted three thought-provoking commentaries by (1) Vehtari, Simpson, Yao, and Gelman, (2) Navarro, and (3) Shiffrin and Chandramouli. We just submitted a rejoinder in which we address each of the commentaries and identify several additional limitations of LOO-based methods such as Bayesian stacking. We focus on the differences between LOO-based methods and approaches that use Bayes’ rule consistently for both parameter estimation and model comparison.
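To make the contrast concrete, here is a toy sketch (not taken from the article; the conjugate-normal setup, model names, and all function names are illustrative assumptions) that compares two models of a Gaussian mean both ways: via the marginal likelihood, as Bayes’ rule prescribes, and via the exact leave-one-out log predictive density that LOO estimates.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(0.3, 1.0, size=20)          # toy data; sigma = 1 assumed known

# M0: fixed mean 0.  M1: unknown mean mu with a N(0, 1) prior.
grid = np.linspace(-5.0, 5.0, 2001)        # grid over mu for numerical integration
dx = grid[1] - grid[0]
prior = stats.norm.pdf(grid, 0.0, 1.0)

def log_marginal_m1(data):
    """log p(data | M1): integrate the likelihood over the prior on mu."""
    loglik = stats.norm.logpdf(data[:, None], grid[None, :], 1.0).sum(axis=0)
    return np.log(np.sum(np.exp(loglik) * prior) * dx)

def loo_elpd_m1(data):
    """Exact leave-one-out log predictive density under M1."""
    total = 0.0
    for i in range(len(data)):
        rest = np.delete(data, i)
        loglik = stats.norm.logpdf(rest[:, None], grid[None, :], 1.0).sum(axis=0)
        post = np.exp(loglik) * prior
        post /= np.sum(post) * dx          # posterior over mu given the other points
        pred = np.sum(stats.norm.pdf(data[i], grid, 1.0) * post) * dx
        total += np.log(pred)              # density of the held-out point
    return total

# M0 has no free parameters, so its marginal likelihood and its LOO score
# both reduce to the plain log-likelihood of the data.
log_m0 = stats.norm.logpdf(y, 0.0, 1.0).sum()
print("log Bayes factor (M1 over M0):", log_marginal_m1(y) - log_m0)
print("LOO difference   (M1 over M0):", loo_elpd_m1(y) - log_m0)
```

The two comparisons need not agree: the marginal likelihood conditions only on the prior, whereas each LOO term conditions on n−1 data points, which is one source of the asymptotic inconsistency discussed in the article.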


“Don’t Interfere with my Art”: On the Disputed Role of Preregistration in Exploratory Model Building

Recently the 59th annual meeting of the Psychonomic Society in New Orleans played host to an interesting series of talks on how statistical methods should interact with the practice of science. Some speakers discussed exploratory model building, suggesting that this activity may not benefit much, if at all, from preregistration. On Twitter, reports of these talks provoked an interesting discussion between supporters and detractors of preregistration for the purpose of model building. Below we describe the most relevant presentations, point to some interesting threads on Twitter, and then provide our own perspective.

The debate started when Twitter got wind of the fact that my [EJ] mentor and intellectual giant Rich Shiffrin was arguing against preregistration (his slides have also been made available, thanks to both Rich and Trish Van Zandt). Here is the abstract of his talk “Science Should Govern the Practice of Statistics”:

“Although there are two sides to these complex issues, this talk will make the case for the scientific judgment side of the ledger. I will argue that statistics should serve science and should be consistent with scientific judgment that historically has produced progress. I argue against one-size-fits-all statistical criteria, against the view that a fundamental scientific goal should be reproducibility, and against the suppression of irreproducible results. I note that replications should on average produce smaller sized effects than initial reports, even when science is done as well as possible. I make a case that science is post hoc and that most progress occurs when unexpected results are found (and hence against the case for general use of pre-registration). I argue that much scientific progress is often due to production of causal accounts of processes underlying observed data, often instantiated as quantitative models, but aimed at explaining qualitative patterns across many conditions, in contrast to well defined descriptive statistical models.”


Transparency and The Need for Short Sentences

Recently I came across an article by Morton Ann Gernsbacher, entitled “Writing empirical articles: Transparency, reproducibility, clarity, and memorability” (preprint). The author covers a lot of ground and makes a series of good points. Also, as one would hope and expect, the article itself is a joy to read. Here is a fragment from the section “Recommendations for Clarity” — subsection “Write short sentences”:


“Every writing guide, from Strunk and White’s (1959) venerable Elements of Style to the prestigious journal Nature’s (2014) guide, admonishes writers to use shorter, rather than longer, sentences. Shorter sentences are not only easier to understand, but also better at conveying complex information (Flesch, 1948).

The trick to writing short sentences is to restrict each sentence to one and only one idea. Resist the temptation to embed multiple clauses or parentheticals, which challenge comprehension. Instead, break long, rambling sentences into crisp, more concise ones. For example, write the previous three short sentences rather than the following long sentence: The trick to writing short sentences is to restrict each sentence to one and only one idea by breaking long, rambling sentences into crisp, more concise ones while resisting the temptation to embed multiple clauses or parentheticals, which challenge comprehension.”


“Bayesian Inference Without Tears” at CIRM

Today I am presenting a lecture for the “Masterclass in Bayesian Statistics” that takes place from October 22 to 26, 2018, at CIRM (Centre International de Rencontres Mathématiques) in Marseille, France. The slides of my talk, “Bayesian Inference Without Tears”, are here. Unfortunately, the slides cannot convey the JASP demos, but the presentations are recorded, so I hope to be able to provide a video link at some later point in time.


A Bayesian Perspective on the Proposed FDA Guidelines for Adaptive Clinical Trials

The frequentist Food and Drug Administration (FDA) has circulated a draft version of new guidelines for adaptive designs, with the explicit purpose of soliciting comments. The draft is titled “Adaptive designs for clinical trials of drugs and biologics: Guidance for industry” and you can find it here. As summarized on the FDA webpage, this draft document


“(…) addresses principles for designing, conducting and reporting the results from an adaptive clinical trial. An adaptive design is a type of clinical trial design that allows for planned modifications to one or more aspects of the design based on data collected from the study’s subjects while the trial is ongoing. The advantage of an adaptive design is the ability to use information that was not available at the start of the trial to improve efficiency. An adaptive design can provide a greater chance to detect the true effect of a product, often with a smaller sample size or in a shorter timeframe. Additionally, an adaptive design can reduce the number of patients exposed to an unnecessary risk of an ineffective investigational treatment. Patients may even be more willing to enroll in these types of trials, as they can increase the probability that subjects will be assigned to the more effective treatment.”


Bayesian Advantages for the Pragmatic Researcher: Slides from a Talk in Frankfurt

This Monday in Frankfurt I presented a keynote lecture for the 51st Kongress der Deutschen Gesellschaft fuer Psychologie. I resisted the temptation to impress upon the audience the notion that they were all Statistical Sinners for not yet having renounced the p-value. Instead I outlined five concrete Bayesian data-analysis projects that my lab had conducted in recent years. So no p-bashing, but only Bayes-praising, and mostly by directly demonstrating the practical benefits in concrete applications.

The talk itself went well, although at the beginning I believe the audience feared that I would just drone on and on about the theory underlying Bayes’ rule. Perhaps I’m just too much in love with the concept. Anyway, the audience seemed thankful when I switched to the concrete examples. I was able to show a new cartoon by Viktor Beekman (“The Two Faces of Bayes’ Rule”, also in our Library; concept by myself and Quentin Gronau), and I showed two pictures of my son Theo (I’m not sure whether the audience realized that, but it was not important anyway).


