
Follow-up: A Bayesian Perspective on the FDA Guidelines for Adaptive Clinical Trials

In September 2018, the U.S. Food and Drug Administration (FDA) issued a draft version of its industry guidance on “Adaptive Designs for Clinical Trials of Drugs and Biologics”. In an earlier blog post we provided some comments from a Bayesian perspective, which we also submitted as feedback to the FDA. Two months ago, the FDA released the final version of the guidance, and of course we were curious to see whether and how our feedback was incorporated.

Was Anything Changed?

Overall, the final version of the FDA industry guidance “Adaptive Designs for Clinical Trials of Drugs and Biologics” is very similar to the draft. Only a few sections underwent a major rewrite, and section B, “Bayesian Adaptive Designs”, was one of them. It now includes a new paragraph that reads as follows:

“Trial designs that use Bayesian adaptive features may rely on frequentist or Bayesian inferential procedures to support conclusions of drug effectiveness. Frequentist inference is characterized by hypothesis tests performed with known power and Type I error probabilities and is often used along with Bayesian computational techniques that rely on non-informative prior distributions. Bayesian inference is characterized by drawing conclusions based directly on posterior probabilities that a drug is effective and has important differences from frequentist inference (Berger and Wolpert 1988). For trials that use Bayesian inference with informative prior distributions, such as trials that explicitly borrow external information, Bayesian statistical properties are more informative than Type I error probability. FDA’s draft guidance for industry Interacting with the FDA on Complex Innovative Clinical Trial Designs for Drugs and Biological Products (September 2019) provides recommendations on what information should be submitted to FDA to facilitate the review of trial design proposals that use Bayesian inference.”

This new paragraph addresses some of the points we raised in our earlier blog post. Even though it remains vague on the details, the document now acknowledges that control of the Type I error rate is not the holy grail of every statistical method. It also clearly states that Bayesian inference differs from frequentist inference – a distinction that seems trivial but was not acknowledged in the first version of the guidance document. With the insertion of the new paragraph, the FDA also deleted a misguided statement about the use of conjugate prior distributions and a confusing fragment on Type I error probability simulations, both of which we mentioned in our blog post. The deletion of these passages certainly improves the quality of the guidance.

However, we believe the first two sentences of the new paragraph need additional clarification. Statistical approaches for clinical trials that combine Bayesian and frequentist properties do exist (see, for example, Psioda & Ibrahim, 2019; Pericchi & Pereira, 2016). However, these are very specific analysis methods, so the broad claim that “designs that use Bayesian adaptive features may rely on frequentist or Bayesian inferential procedures” seems misleading. In fact, relying on frequentist inferential procedures (i.e., p-values) without taking the flexibility of the adaptive design into account can lead to highly inflated error rates, as the guidance document itself repeatedly states. In this context, we consider it essential to note that the Bayesian analysis of sequential designs calls for no corrections or adjustments whatsoever – it proceeds in exactly the same manner as if the data had been collected in a fixed-N design.
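This invariance to the stopping rule can be demonstrated with a minimal sketch. The example below uses a conjugate Beta-Binomial model with made-up interim data (the batch sizes and counts are purely illustrative, not taken from any actual trial): updating the posterior at every interim look yields exactly the same posterior as a single fixed-N update with the pooled data.

```python
def update_beta(a, b, successes, failures):
    """Conjugate Beta-Binomial update: Beta(a, b) prior -> Beta(a + s, b + f) posterior."""
    return a + successes, b + failures

# Hypothetical trial data: (successes, failures) observed at three interim looks.
batches = [(7, 3), (12, 8), (18, 12)]

# Sequential analysis: update the posterior at every interim look.
a_seq, b_seq = 1, 1  # uniform Beta(1, 1) prior
for s, f in batches:
    a_seq, b_seq = update_beta(a_seq, b_seq, s, f)

# Fixed-N analysis: a single update with the pooled data.
total_s = sum(s for s, _ in batches)
total_f = sum(f for _, f in batches)
a_fix, b_fix = update_beta(1, 1, total_s, total_f)

# The two posteriors are identical, so no adjustment for interim looks is needed.
assert (a_seq, b_seq) == (a_fix, b_fix)
print(f"Posterior: Beta({a_seq}, {b_seq})")  # Posterior: Beta(38, 24)
```

The same point holds for any Bayesian model, not just conjugate ones: the posterior depends on the data only through the likelihood, which is unaffected by when one chose to stop sampling.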

Down the Rabbit Hole of FDA Guidances

Section B of the guidance document now also refers industry experts to another FDA guidance document: “Interacting with the FDA on Complex Innovative Clinical Trial Designs for Drugs and Biological Products” (it can be found here). This brief guideline advises companies to seek direct communication with the FDA as early as possible in the drug testing process whenever non-standard design or analysis methods are used. It contains several recommendations for the use and reporting of Bayesian methods, such as providing a justification for prior distributions and clearly specifying Bayesian outcome evaluation criteria. However, the vagueness of these recommendations and the general emphasis on direct communication strongly suggest that the FDA will evaluate proposals on a case-by-case basis without formal rules. This means that industry experts have to rely on the competence and goodwill of the FDA agent who is handling their case.

Interestingly, the guidance for “Interacting with the FDA on Complex Innovative Clinical Trial Designs” also links to yet another FDA document on Bayesian statistics: the “Guidance for the Use of Bayesian Statistics in Medical Device Clinical Trials” (it can be found here). Unlike the other two guidances, this document provides an in-depth review of Bayesian methods and much more detailed recommendations for their use in FDA-regulated clinical trials. Even though it, too, lacks a focus on Bayesian hypothesis testing, it was clearly written by Bayesian experts and does not fall prey to methodological misunderstandings.

Regulatory Uncertainty: Different Stats Guidelines for Different FDA Divisions

The existence of a well-written and statistically sound FDA guidance document on Bayesian statistics immediately raises the question of why the FDA did not use this resource to inform the newer guidance documents. Without knowing the particulars of the FDA's inner workings, we can only suspect that intra-organizational knowledge exchange regarding Bayesian statistical methods is far from ideal. This notion is supported by the fact that the first two guidance documents were released by a different FDA division than the “Guidance for the Use of Bayesian Statistics in Medical Device Clinical Trials” (see here for an organizational chart with the different divisions of the FDA).

Each FDA division is responsible for regulating clinical trials in a different industry sector. This means that, typically, the guidance documents issued by an FDA division only apply to regulatory processes in the respective field. Therefore, the well-written “Guidance for the Use of Bayesian Statistics in Medical Device Clinical Trials”, issued by the FDA Center for Devices and Radiological Health, cannot simply be referenced by stakeholders who are working with another FDA division. These stakeholders either have to deal with a complete lack of guidance or with the vague recommendations given in the two guidance documents discussed above. Whether or not the clear guidelines issued by the FDA Center for Devices and Radiological Health can be applied is left to the discretion of the FDA agent handling the case in question. Therefore, using Bayesian adaptive designs – or even Bayesian methods in general – in any field other than medical devices brings considerable uncertainty for industry experts.

What Does This Mean for Practitioners?

We fear that the FDA’s reluctance to commit to clear guidances for Bayesian adaptive clinical trials will effectively discourage industry experts from applying these designs in practice. The lack of clear regulations causes uncertainty for industry sponsors because they need to rely on the competence and goodwill of the FDA agents handling their case. The increased need for communication will also slow down the regulatory process, which means that the potential efficiency gains of innovative trial designs might easily be outweighed by the costs.

Conclusion and Recommendations

Even though the FDA improved the guidance document after the round of comments, it still seems hesitant to propose clear standards for Bayesian adaptive clinical trials. Given the relative novelty of these approaches, this is understandable. However, if the FDA really wants to encourage industry partners to use efficient clinical trial designs, it is not enough to merely dip a toe in the water or pay lip service to the general idea of conducting a Bayesian analysis. We therefore reiterate our earlier recommendation: if the FDA wants to provide concrete, high-quality guidelines on Bayesian adaptive clinical trials, it needs to involve Bayesian statisticians. With transparent guidelines and statistically sound recommendations, we believe it would be only a matter of time until Bayesian adaptive designs are broadly adopted.

References

Pericchi, L., & Pereira, C. (2016). Adaptative significance levels using optimal decision rules: Balancing by weighting the error probabilities. Brazilian Journal of Probability and Statistics, 30(1), 70–90. https://doi.org/10.1214/14-BJPS257

Psioda, M. A., & Ibrahim, J. G. (2019). Bayesian clinical trial design using historical data that inform the treatment effect. Biostatistics, 20(3), 400–415. https://doi.org/10.1093/biostatistics/kxy009

About The Authors

Angelika Stefan

Angelika is a PhD candidate at the Psychological Methods Group of the University of Amsterdam.

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam.

Quentin F. Gronau

Quentin is a PhD candidate at the Psychological Methods Group of the University of Amsterdam.

Gilles Dutilh

Gilles is a statistician at the Clinical Trial Unit of the University Hospital in Basel, Switzerland. He is responsible for statistical analyses and methodological advice for clinical research.

Felix Schönbrodt

Felix Schönbrodt is Principal Investigator at the Department of Quantitative Methods at Ludwig-Maximilians-Universität (LMU) Munich.
