The Case for Radical Transparency in Statistical Reporting

Today I am giving a lecture at the Replication and Reproducibility Event II: Moving Psychological Science Forward, organised by the British Psychological Society. The lecture is similar to the one I gave a few months ago at an ASA meeting in Bethesda, and it makes the case for radical transparency in statistical reporting. The talking points, in order:

  1. The researcher who has devised a theory and conducted an experiment is probably the galaxy’s most biased analyst of the outcome.
  2. In the current academic climate, the galaxy’s most biased analyst is allowed to conduct analyses behind closed doors, often without being required or even encouraged to share data and analysis code.
  3. So data are analyzed with no accountability, by the person who is easiest to fool, often with limited statistical training, who has every incentive imaginable to produce p < .05. This is not good.
  4. The result is publication bias, fudging, and HARKing. These in turn yield overconfident claims and spurious results that do not replicate. In general, researchers abhor uncertainty, and this needs to change.
  5. There are several cures for uncertainty-allergy, including:
    • preregistration
    • outcome-independent publishing
    • sensitivity analysis (e.g., multiverse analysis and crowdsourcing; see the sketch after this list)
    • data sharing
    • data visualization
    • inclusive inferential analyses
  6. Transparency is mental hygiene: the scientific equivalent of brushing your teeth, or washing your hands after visiting the restroom. It needs to become part of our culture, and it needs to be encouraged by funders, editors, and institutions.
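The multiverse-style sensitivity analysis from point 5 deserves a concrete illustration. Below is a minimal sketch in Python; the simulated data, the outlier cutoffs, and the transforms are all hypothetical stand-ins, and scipy's t-test is merely one example of an analysis that could be run down every defensible path. The point is that the full distribution of outcomes is reported, rather than a single hand-picked p-value.

```python
# Minimal multiverse-analysis sketch (hypothetical data and choices).
# The same test is run under every defensible combination of analysis
# decisions, and all resulting p-values are reported.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
group_a = rng.normal(loc=0.0, scale=1.0, size=40)  # simulated control scores
group_b = rng.normal(loc=0.3, scale=1.0, size=40)  # simulated treatment scores

# Two arbitrary-but-defensible analysis decisions: outlier cutoff (in SDs)
# and transformation of the cleaned scores.
cutoffs = [2.0, 2.5, 3.0]
transforms = {"raw": lambda x: x,
              "log": lambda x: np.log(x - x.min() + 1.0)}

def clean(x, cutoff, transform):
    """Drop observations beyond `cutoff` SDs, then apply `transform`."""
    z = (x - x.mean()) / x.std()
    return transform(x[np.abs(z) < cutoff])

for cutoff, (name, transform) in itertools.product(cutoffs, transforms.items()):
    t_stat, p_value = stats.ttest_ind(clean(group_a, cutoff, transform),
                                      clean(group_b, cutoff, transform))
    print(f"cutoff = {cutoff} SD, transform = {name}: p = {p_value:.3f}")
```

If the conclusion survives all six analysis paths, it is robust; if it flips sign or significance depending on arbitrary choices, that fragility is itself the finding.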

The complete PDF of the presentation is here.


Replication Studies: A Report from the Royal Netherlands Academy of Arts and Sciences

For the past 18 months I have served on a committee tasked with writing a report on how to improve the replicability of the empirical sciences. The report came out this Monday, and you can find it here. Apart from the advice to conduct more replication studies, the committee’s general recommendations are as follows (pp. 47-48 of the report):


“Researchers should conduct research more rigorously by strengthening standardisation, quality control, evidence-based guidelines and checklists, validation studies and internal replications. Institutions should provide researchers with more training and support for rigorous study design, research practices that improve reproducibility, and the appropriate analysis and interpretation of the results of studies.

Funding agencies and journals should require preregistration of hypothesis-testing studies. Journals should issue detailed evidence-based guidelines and checklists for reporting studies and ensure compliance with them. Journals and funding agencies should require storage of study data and methods in accessible repositories.

Journals should be more open to publishing studies with null results and incentivise researchers to report such results. Rather than reward researchers mainly for ‘high-impact’ publications, ‘innovative’ studies and inflated claims, institutions, funding agencies and journals should also offer them incentives for conducting rigorous studies and producing reproducible research results.”


Origin of the Texas Sharpshooter

The picture of the Texas sharpshooter is taken from an illustration by Dirk-Jan Hoek (CC-BY).

The infamous Texas sharpshooter fires randomly at a barn door and then paints the targets around the bullet holes, creating the false impression of being an excellent marksman. The sharpshooter symbolizes the dangers of post-hoc theorizing, that is, of finding your hypothesis in the data.

The Texas sharpshooter is commonly introduced without a reference to its progenitor.

For instance, Thompson (2009, pp. 257-258) states:

“The Texas sharpshooter fallacy is the name epidemiologists have given to the tendency to assign unwarranted significance to random data by viewing it post hoc in an unduly narrow context (Gawande, 1999). The name is derived from the story of a legendary Texan who fired his rifle randomly into the side of a barn and then painted a target around each of the bullet holes. When the paint dried, he invited his neighbours to see what a great shot he was. The neighbours were impressed: they thought it was extremely improbable that the rifleman could have hit every target dead centre unless he was indeed an extraordinary marksman, and they therefore declared the man to be the greatest sharpshooter in the state. Of course, their reasoning was fallacious. Because the sharpshooter was able to fix the targets after taking the shots, the evidence of his accuracy was far less probative than it appeared. The kind of post hoc target fixing illustrated by this story has also been called painting the target around the arrow.”
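The statistical analogue of the sharpshooter is easy to simulate. The sketch below (hypothetical, not from Thompson or the original post) generates pure-noise datasets with 20 outcome measures, then "paints the target" around whichever outcome happens to correlate with the predictor. With 20 independent null tests, the chance of at least one p < .05 is 1 − 0.95^20 ≈ 64%, and the simulation recovers roughly that rate.

```python
# Sharpshooter / HARKing simulation: all true effects are zero, yet
# reporting only the best-looking outcome yields p < .05 most of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
n_simulations, n_outcomes, n_subjects = 1000, 20, 30
false_alarms = 0

for _ in range(n_simulations):
    predictor = rng.normal(size=n_subjects)
    outcomes = rng.normal(size=(n_outcomes, n_subjects))  # pure noise
    p_values = [stats.pearsonr(predictor, y)[1] for y in outcomes]
    if min(p_values) < 0.05:  # paint the target around the smallest p-value
        false_alarms += 1

print(f"At least one p < .05 in {false_alarms / n_simulations:.0%} of null datasets")
```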

