
Redefine Statistical Significance XIII: The Case of Ego Depletion

The previous blog post discussed the preprint “Ego depletion reduces attentional control: Evidence from two high-powered preregistered experiments”. Recall the preprint abstract:

“Two preregistered experiments with over 1000 participants in total found evidence of an ego depletion effect on attention control. Participants who exercised self-control on a writing task went on to make more errors on Stroop tasks (Experiment 1) and the Attention Network Test (Experiment 2) compared to participants who did not exercise self-control on the initial writing task. The depletion effect on response times was non-significant. A mini meta-analysis of the two experiments found a small (d = 0.20) but significant increase in error rates in the controlled writing condition, thereby providing clear evidence of poorer attention control under ego depletion. These results, which emerged from large preregistered experiments free from publication bias, represent the strongest evidence yet of the ego depletion effect.”



Two Pitfalls of Preregistration: The Case of Ego Depletion

Several researchers have proposed that the capacity for mental control is a limited resource, one that can be temporarily depleted after having engaged in a taxing cognitive activity. This hypothetical phenomenon — called ego depletion — has been hotly debated, and its very existence has been called into question. We ourselves are in the midst of a multi-lab collaborative research effort to address the issue. This is why we were particularly intrigued when, just a few days ago, the Twitterverse alerted us to the preprint “Ego depletion reduces attentional control: Evidence from two high-powered preregistered experiments”. The abstract of the manuscript reads as follows:

“Two preregistered experiments with over 1000 participants in total found evidence of an ego depletion effect on attention control. Participants who exercised self-control on a writing task went on to make more errors on Stroop tasks (Experiment 1) and the Attention Network Test (Experiment 2) compared to participants who did not exercise self-control on the initial writing task. The depletion effect on response times was non-significant. A mini meta-analysis of the two experiments found a small (d = 0.20) but significant increase in error rates in the controlled writing condition, thereby providing clear evidence of poorer attention control under ego depletion. These results, which emerged from large preregistered experiments free from publication bias, represent the strongest evidence yet of the ego depletion effect.”



Redefine Statistical Significance Part XII: A BITSS debate with Simine Vazire and Daniel Lakens

This Tuesday, one of us [EJ] participated in a debate about (you guessed it) the α = .005 recommendation from the paper ‘Redefine Statistical Significance’. The debate was organized as part of the Annual Meeting of the Berkeley Initiative for Transparency in the Social Sciences (BITSS), and the two other discussants were Simine Vazire and Daniel Lakens.

The debate was live-streamed and recorded, so you can watch it back: the debate runs from about 32:30 to 1:40:30, and the discussion starts at around 1:13:00.



How to Prevent Your Dog from Getting Stuck in the Dishwasher

This week, Dorothy Bishop visited Amsterdam to present a fabulous lecture on a topic that has not (yet) received the attention it deserves: “Fallibility in Science: Responsible Ways to Handle Mistakes”. Her slides are available here.

As Dorothy presented her series of punch-in-the-gut, spine-tingling examples, I was reminded of a presentation that my Research Master students had given a few days earlier. The students described ethical dilemmas in science — hypothetical scenarios that can ensnare researchers, particularly early in their careers, when they lack the power to make executive decisions. And for every scenario, the students asked the class, ‘What would you do?’ Consider, for example, the following situation:

SCENARIO: You are a junior researcher who works in a large team that studies risk-seeking behavior in children with attention-deficit disorder. You have painstakingly collected the data, and a different team member (an experienced statistical modeler) has conducted the analyses. After some back-and-forth, the statistical results come out exactly as the team would have hoped. The team celebrates and prepares to submit a manuscript to Nature Human Behaviour. However, you suspect that multiple analyses have been tried, and only the most favorable one is reported in the manuscript.



Redefine Statistical Significance Part XI: Dr. Crane Forcefully Presents…a Red Herring?

The paper “Redefine Statistical Significance” continues to make people uncomfortable. This, of course, was exactly the goal: to have researchers realize that a p-just-below-.05 outcome is evidentially weak. This insight can be painful, as many may prefer the statistical blue pill (‘believe whatever you want to believe’) over the statistical red pill (‘stay in Wonderland and see how deep the rabbit hole goes’). Consequently, a spirited discussion has ensued.



Bayes Factors for Stan Models without Tears

For Christian Robert’s blog post about the bridgesampling package, click here.

Bayesian inference is conceptually straightforward: we start with prior uncertainty and then use Bayes’ rule to learn from the data and update our beliefs. The result of this learning process is known as posterior uncertainty. Quantities of interest can be parameters (e.g., effect size) within a single statistical model, or different competing models (e.g., a regression model with three predictors vs. a regression model with four predictors). When the focus is on models, a convenient way of comparing two models M1 and M2 is to consider the posterior model odds, which are obtained by updating the prior odds with the Bayes factor:

 

    \begin{equation*} \label{eq:post_model_odds}
    \underbrace{\frac{p(\mathcal{M}_1 \mid \text{data})}{p(\mathcal{M}_2 \mid \text{data})}}_{\text{posterior odds}}
    = \underbrace{\frac{p(\text{data} \mid \mathcal{M}_1)}{p(\text{data} \mid \mathcal{M}_2)}}_{\text{Bayes factor } \text{BF}_{12}}
    \times \underbrace{\frac{p(\mathcal{M}_1)}{p(\mathcal{M}_2)}}_{\text{prior odds}}.
    \end{equation*}
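
As a rough illustration of how this works in practice, here is a minimal sketch in R that estimates the two marginal likelihoods for a pair of Stan models with the bridgesampling package and combines them into a Bayes factor. The data, the two toy models, and the priors below are illustrative placeholders, not the models discussed in the post.

# Minimal illustrative sketch (not from the post): Bayes factor for two
# hypothetical Stan models via the bridgesampling package.
library(rstan)
library(bridgesampling)

set.seed(1)
y <- rnorm(50, mean = 0.3, sd = 1)            # hypothetical data
stan_data <- list(n = length(y), y = y)

# M1: null model with the mean fixed at zero
model_1 <- "
data {
  int<lower=1> n;
  vector[n] y;
}
parameters {
  real<lower=0> sigma;
}
model {
  // target += ..._lpdf keeps all normalizing constants (needed for bridge sampling)
  target += lognormal_lpdf(sigma | 0, 1);
  target += normal_lpdf(y | 0, sigma);
}
"

# M2: alternative model with a free mean and a normal prior on it
model_2 <- "
data {
  int<lower=1> n;
  vector[n] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  target += normal_lpdf(mu | 0, 1);
  target += lognormal_lpdf(sigma | 0, 1);
  target += normal_lpdf(y | mu, sigma);
}
"

# Bridge sampling benefits from many posterior draws
fit_1 <- stan(model_code = model_1, data = stan_data,
              iter = 20000, warmup = 1000, chains = 4)
fit_2 <- stan(model_code = model_2, data = stan_data,
              iter = 20000, warmup = 1000, chains = 4)

# Estimate the log marginal likelihoods p(data | M1) and p(data | M2)
bridge_1 <- bridge_sampler(fit_1, silent = TRUE)
bridge_2 <- bridge_sampler(fit_2, silent = TRUE)

# Bayes factor BF_21 = p(data | M2) / p(data | M1)
bf(bridge_2, bridge_1)

# Posterior model probabilities, assuming equal prior odds
post_prob(bridge_1, bridge_2)

Note that the Stan code in this sketch uses the target += ..._lpdf syntax rather than the ~ sampling statements, because bridge sampling requires that all normalizing constants of the likelihood and the priors are retained, as the bridgesampling documentation emphasizes.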

 



