
The Creativity-Verification Cycle in Psychological Science: New Methods to Combat Old Idols, Part I

The promised post on Einstein will follow next week.

More and more psychologists are registering their hypotheses, predictions, and analysis plans prior to data collection. Will such preregistration be the death knell for creativity and serendipity? Gilles Dutilh, Alexandra Sarafoglou, and I recently wrote an article for Perspectives on Psychological Science that provides a historical perspective on this question. In the article, we describe the origin and development of “the empirical cycle”, that is, the modern perspective on how scientists can learn from data. In the course of our historical investigations, we came across several interesting anecdotes that lack of space prevented us from including. But we can include them in this series of blog posts. Here is the first story, courtesy of Cornelis Menke.

(more…)


Cicero and the Greeks on Necessity and Fortune

Cicero eloquently summarized the philosophical position that the universe is deterministic – all events are preordained, either by nature or by divinity. Although “ignorance of causes” may create the illusion of Fortune, in reality there is only Necessity.

Cicero Citatus, Glans Inflatus?

The male academic who cites Cicero generally lacks the insight that, instead of imbuing his writing with gravitas, he inevitably conveys the impression of being a pompous dickhead (‘glans inflatus’). Particularly damaging to a writer’s reputation are Cicero quotations that occur at the start of an article; for, as Horace reminds us, “parturiunt montes, nascetur ridiculus mus” (the mountains are in labour, and a ridiculous mouse will be born). Indeed, the only academics who seem to get away with citing Cicero are those who study Cicero’s work professionally.

(more…)


The Merovingian, or Why Probability Belongs Wholly to the Mind

Summary: When Bayesians speak of probability, they mean plausibility.

The famous Matrix trilogy is set in a dystopian future where most of mankind has been enslaved by a computer network, and the few rebels who remain find themselves on the brink of extinction. Just when the situation seems beyond salvation, a messiah –called Neo– is awakened and proceeds to free humanity from its silicon overlord. Rather than turn the other cheek, Neo’s main purpose seems to be the physical demolition of his digital foes (‘agents’), a task that he engages in with increasing gusto and efficiency. Aside from the jaw-dropping fight scenes, the Matrix movies also contain numerous references to religious themes and philosophical dilemmas. Two particularly prominent themes are the concept of free will and the nature of probability.

(more…)


Redefine Statistical Significance Part XV: Do 72+88=160 Researchers Agree on P?

In an earlier blog post we discussed a response (co-authored by 88 researchers) to the paper “Redefine Statistical Significance” (RSS; co-authored by 72 researchers). Recall that RSS argued that p-values near .05 should be interpreted with caution, and proposed that a threshold of .005 is more in line with the kind of evidence that warrants strong claims such as “reject the null hypothesis”. The response (“bring your own alpha”, BYOA) argued that researchers should pick their own alpha, informed by the context at hand. Recently, the BYOA response was covered in Science, and this prompted us to read the revised, final version (hat tip to Brian Nosek, who alerted us to the change in content; for another critique of the BYOA paper see this preprint by JP de Ruiter).
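
One way to appreciate the RSS position is to ask how much evidence a given p-value can possibly provide. A well-known upper bound on the odds in favour of the alternative hypothesis is the Vovk–Sellke bound, 1/(−e·p·ln p). The Python sketch below is our own illustration of this bound; it is not a calculation taken from the RSS or BYOA papers.

```python
import math

def vovk_sellke_bound(p):
    """Upper bound on the odds in favour of H1 implied by a p-value.

    For p < 1/e the bound equals 1 / (-e * p * ln p); otherwise it is 1.
    """
    if p >= 1 / math.e:
        return 1.0
    return 1.0 / (-math.e * p * math.log(p))

for p in (0.05, 0.005):
    print(f"p = {p:.3f}: evidence for H1 is at most {vovk_sellke_bound(p):.1f} : 1")
```

Under this bound, p = .05 corresponds to odds of at most about 2.5 : 1 in favour of the alternative, whereas p = .005 corresponds to at most about 14 : 1 – roughly the level of evidence that RSS deems appropriate for a claim of statistical significance.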

(more…)


Redefine Statistical Significance XIV: “Significant” does not Necessarily Mean “Interesting”

This is a guest post by Scott Glover.

In a recent blog post, Eric-Jan and Quentin helped themselves to some more barbecued chicken.

The paper in question reported a p-value of 0.028 as “clear evidence” for an effect of ego depletion on attention control. Using Bayesian analyses, Eric-Jan and Quentin showed how weak such evidence actually is. In none of the scenarios they examined did the Bayes factor exceed 3.5:1 in favour of the effect. An analysis of these data using my own preferred method of likelihood ratios (Dixon, 2003; Glover & Dixon, 2004; Goodman & Royall, 1988) gives a similar answer – an AIC-adjusted (Akaike, 1973) value of λadj = 4.1 (calculation provided here) – meaning the data are only about four times more likely given that the effect exists than given no effect. This is consistent with the Bayesian conclusion that such data hardly deserve the description “clear evidence.” Rather, these demonstrations serve to highlight the greatest single problem with the p-value – it is simply not a transparent index of the strength of the evidence.
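
The precise numbers behind λadj = 4.1 are in the linked calculation; the sketch below illustrates only the general recipe for an AIC-adjusted likelihood ratio (in the spirit of Glover & Dixon, 2004) for a two-group comparison with normal errors, applied to made-up data rather than the data from the ego-depletion paper: form the ratio of maximized likelihoods of the one-mean and two-mean models, then penalize the richer model for its extra parameter.

```python
import numpy as np

def lambda_adj(y, group):
    """AIC-adjusted likelihood ratio for a two-group comparison.

    Null model: one common mean; alternative model: one mean per group.
    With normal errors the maximized likelihood depends only on the
    residual sum of squares, so lambda = (SSE_null / SSE_alt) ** (n / 2);
    the AIC adjustment then multiplies by exp(-(number of extra parameters)).
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    sse_null = np.sum((y - y.mean()) ** 2)                    # one overall mean
    sse_alt = sum(np.sum((y[group == g] - y[group == g].mean()) ** 2)
                  for g in np.unique(group))                  # one mean per group
    lam = (sse_null / sse_alt) ** (n / 2)
    extra_params = len(np.unique(group)) - 1                  # extra means in the alternative
    return lam * np.exp(-extra_params)

# Made-up example: two groups of 20 observations with a small mean difference.
rng = np.random.default_rng(1)
group = np.repeat([0, 1], 20)
y = rng.normal(loc=np.where(group == 1, 0.3, 0.0), scale=1.0)
print(f"adjusted likelihood ratio = {lambda_adj(y, group):.2f}")
```

A value of λadj near 1 means the extra parameter buys the alternative model essentially nothing; values in the single digits, as in the ego-depletion example, constitute only modest evidence.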

Beyond this issue, however, is another equally troublesome problem, one inherent to null hypothesis significance testing (NHST): an effect of any size can be coaxed into statistical significance by increasing the sample size (Cohen, 1994; Greenland et al., 2016; Rozeboom, 1960). In the ego depletion case, a tiny effect of 0.7% is found to be significant thanks to a sample size in the hundreds.
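
To make this concrete, the sketch below (our illustration with a hypothetical standardized effect size of d = 0.1, not the data from the ego-depletion paper) computes the two-sample t-test p-value obtained when one and the same observed effect is collected at ever larger sample sizes.

```python
from math import sqrt
from scipy import stats

d = 0.1  # one fixed, small standardized effect (hypothetical value)

# Two-sample t-test in which the observed effect size always equals d:
# only the sample size changes from row to row.
for n_per_group in (50, 200, 800, 3200):
    t = d * sqrt(n_per_group / 2)       # t statistic for equal group sizes
    df = 2 * n_per_group - 2
    p = 2 * stats.t.sf(t, df)           # two-sided p-value
    print(f"n per group = {n_per_group:4d}: t = {t:.2f}, p = {p:.4f}")
```

The effect itself never changes, yet the p-value falls from about .62 at 50 per group to well below .005 at 3,200 per group – significance is purchased with sample size alone.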

(more…)


The Case for Radical Transparency in Statistical Reporting

Today I am giving a lecture at the Replication and Reproducibility Event II: Moving Psychological Science Forward, organised by the British Psychological Society. The lecture is similar to the one I gave a few months ago at an ASA meeting in Bethesda, and it makes the case for radical transparency in statistical reporting. The talking points, in order:

  1. The researcher who has devised a theory and conducted an experiment is probably the galaxy’s most biased analyst of the outcome.
  2. In the current academic climate, the galaxy’s most biased analyst is allowed to conduct analyses behind closed doors, often without being required or even encouraged to share data and analysis code.
  3. So data are analyzed with no accountability, by the person who is easiest to fool, often with limited statistical training, who has every incentive imaginable to produce p < .05. This is not good.
  4. The result is publication bias, fudging, and HARKing (hypothesizing after the results are known). These in turn yield overconfident claims and spurious results that do not replicate. In general, researchers abhor uncertainty, and this needs to change.
  5. There are several cures for uncertainty-allergy, including:
    • preregistration
    • outcome-independent publishing
    • sensitivity analysis (e.g., multiverse analysis and crowdsourcing)
    • data sharing
    • data visualization
    • inclusive inferential analyses
  6. Transparency is mental hygiene: the scientific equivalent of brushing your teeth, or washing your hands after visiting the restroom. It needs to become part of our culture, and it needs to be encouraged by funders, editors, and institutes.

The complete pdf of the presentation is here.

(more…)

