Jonathan Kimmelman, Dr. <jonathan.kimmelman@mcgill.ca>
5/5/15

to Frank, Churchill, Clayton, Heitman, Chang, Dan, Li 
Frank-

Thanks for starting the conversation, and for your kind words about my article.

Without any question, information provision should not be static. Part of the task of DSMBs, for example, is to update consent documents in light of new information. The question, as I see it, concerns (a) the trigger for updating the information provided to patients, and (b) the way new information might interact with study validity. And perhaps, too, item (a) needs to be considered in light of patients' ability to integrate information properly.

If patients were perfect Bayesian updaters, then immediate return of information would make a lot of sense. I suspect this is not the case: most patients drop any priors they might have had in light of new information. (E.g., they enter a study thinking they are unlikely to have a tumor response, but the two patients before them show a response; they drop their doubts and now believe their odds are close to 80% benefit, when a proper update would be far more cautious.)
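A minimal sketch, not from the original email, of what a proper update looks like under a skeptical prior (the Beta(1, 9) prior with mean 0.10 is my assumption, chosen to match "unlikely to have tumor response"):

# Hypothetical beta-binomial update (the prior is an assumption, not Kimmelman's):
# a skeptical Beta(1, 9) prior (mean 0.10), updated after the two preceding
# patients both respond.
a <- 1; b <- 9                      # assumed prior: Beta(1, 9), mean 0.10
responses <- 2; n <- 2              # both of the two preceding patients respond
post_mean <- (a + responses) / (a + b + n)
post_mean                           # 3/12 = 0.25, far below the naive ~0.80

Even after two consecutive responses, the posterior mean rises only to 0.25, nowhere near the 80% that the patients in the anecdote jump to.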

I’m reminded of a paper published by Mark Ratain (in IRB) describing an approach, “cohort-specific consent”, in a phase 1 dose-escalation cancer trial. Basically, patients were given information on past patients' responses and offered the choice of which dose to receive. The trial was a disaster (well, at least in my opinion). One patient in a high-dose group had a big response, and all the subsequent patients piled into that dose cohort. The actual recommended dose for phase 2 turned out to be lower, and the study ended up enrolling more patients than needed to resolve the question of dosing for phase 2.

So this strikes me as an instance where ‘TMI’ probably harmed patients and frustrated an efficient trial process.

I’m working on deadline for something else now (and am in Berlin; it's midnight here), so I apologize for the brevity and superficiality of the response, but I welcome further conversation.

—Jonathan


----
Jonathan Kimmelman
Associate Professor
Biomedical Ethics Unit / Experimental Medicine / Social Studies of Medicine
McGill University
3647 Peel St.
Montreal, QC  H3A 1X1

ph: 514.398.3306
fax: 514.398.8349
email: jonathan.kimmelman@mcgill.ca

Web site:
www.translationalethics.com


NOTE: On sabbatical Aug ’14 to Aug '15

c/o Angelika Borkowski
Prof. Dr. Kimmelman
Centrum für Schlaganfallforschung Berlin
Charité -Universitätsmedizin
Campus Mitte
Charitéplatz 1
10117 Berlin
Germany

ph: 0049 30 450 560657


On May 5, 2015, at 10:56 PM, Frank Harrell <f.harrell@vanderbilt.edu> wrote:

Dear Dr Kimmelman,

Your Clinical Trials article is very well written and convincing.  I wanted to start a discussion on one topic that relates to it if you don't mind.

I believe there is an issue related to the real definition of "informed" in informed consent. Does it treat information as static, i.e., should the patient be informed about the therapies that existed at the time the study was designed? At the time the very first patient was enrolled? Or should it mean all the information available up to the minute, including the outcomes of patients previously enrolled in the current study?

I think that we need a better definition of "informed" and "information" before we can be fully persuaded against the use of outcome-adaptive clinical trials.  I would love to hear the opinions of you and Dr. Hey.  I have cc'd my Vanderbilt ethicist and biostatistician colleagues who are very interested in this issue.

With regards,
Frank

--------------------------------------------


From: Louis, Thomas
Sent: Tuesday, November 27, 2018 12:53 PM
To: Nie, Lei <Lei.Nie@fda.hhs.gov>
Cc: Harrell, Frank * <Frank.Harrell@fda.hhs.gov>; Louis, Thomas <Thomas.Louis@fda.hhs.gov>
Subject: RE: follow up regarding to your comments about adaptive design guidance

Dear Lei,

Thanks for asking that I elaborate.  

As a general principle, rigid adherence to frequentist goals will eliminate or at least attenuate Bayesian benefits that are assessed using scenarios that are inconsistent with the prior distribution being used in conduct and analysis.  For example,

- If you insist on controlling type I error at the usual levels, and if the prior distribution gives relatively little mass to a region around the null, then forcing adherence will considerably reduce the ability to stop early, the ability to adapt treatment assignments, etc. For example, if the type I error for the Bayesian procedure is 0.15 and you want to force it to 0.05, you'll need to extend the stopping time, sacrifice power, or give up on something else.
- Similar considerations apply to producing confidence intervals/credible intervals. And, importantly, for all but the most basic adaptations it is difficult if not impossible to produce well-calibrated frequentist CIs.
- If you adhere to frequentist unbiasedness of an estimate, then you won't be able to take advantage of 'borrowing information', because the estimate (CI) will be biased. Of course, the estimate may have good MSE, a far more desirable property.
- Similar comments apply to testing; in addition, even as a frequentist, if you do a two-sided test but allocate the type I error other than 50/50, your test will be biased, so you don't have to be a Bayesian to give up on unbiasedness.

Many of the foregoing issues and more result from evaluating parameter scenarios that are probabilistically incompatible with the prior; those that are compatible look just fine.  So, if you trust the prior, you will trust the results; if you don’t trust the prior, you shouldn’t use it!
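A minimal sketch in R of the first point above (the design and all its parameters are my assumptions, not Dr. Louis's): the frequentist type I error of a simple Bayesian sequential rule, estimated by simulation under the null.

# Assumed design: stop for efficacy when P(mu > 0 | data) > 0.95, flat prior
# on mu, known sigma = 1, interim looks after every 20 of up to 100 subjects.
set.seed(1)
looks  <- seq(20, 100, by = 20)
nsim   <- 10000
reject <- logical(nsim)
for (i in seq_len(nsim)) {
  y <- rnorm(100)                          # data generated under the null, mu = 0
  post_prob <- sapply(looks, function(n)   # flat prior => P(mu > 0 | y_1..y_n)
    pnorm(mean(y[1:n]) * sqrt(n)))         #   = pnorm(ybar * sqrt(n) / sigma)
  reject[i] <- any(post_prob > 0.95)
}
mean(reject)                               # roughly 0.12-0.14, well above 0.05

Forcing this overall error down to 0.05 means raising the 0.95 threshold, reducing the number of looks, or extending the trial, which is exactly the sacrifice described in the first bullet.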
Thomas A. Louis, PhD
FDA/CDER Expert Statistical Consultant
Personal mobile: 202-494-9331
Hopkins email: tlouis@jhu.edu

------------------------------------------------------------------------------
The ASA Biopharmaceutical Software Working Group has launched a YouTube channel to share videos:

https://www.youtube.com/channel/UC3Tkg5b3QR8fXWsTBYerCng

Kyle Wathen, a member of the working group, has started working on a 10-part video series on simulating adaptive clinical trials. He has recently posted the first video to the channel: https://www.youtube.com/watch?v=JQ-PcPG_nJU&t=2s. This is a work in progress rather than a completed course; Kyle plans to release the other videos over time. His course outline is below. For those interested, this may provide one way to learn more about simulating clinical trials with R.

Title: Simulating Adaptive Clinical Trials – Where to start and how to expand  

Description: This series introduces the concept of clinical trial simulation and custom R source code development. Through a sequence of examples, the series will begin with a simple clinical trial and progress to complex design concepts, including platform trials. The videos will alternate between explanations of key concepts and hands-on R development using RStudio. Each example will introduce a new concept or skill to increase the viewer's understanding of clinical trial simulation.

The series would consist of several parts, each made up of short videos ranging from 5 to 30 minutes; a minimal sketch of the kind of fixed-sample simulation covered in Section 2 appears after the outline.

Section 1: Introduction to Clinical Trial Simulation in R

Section 2: Start Simple – Development of a fixed sample design in R

Section 3: Extending the fixed design to a sequential design with outcome adaptive randomization

Section 4: Expanding the design to include early stopping for futility or success

Section 5: Sensitivity simulations – understanding the difference between the analysis model and the simulation model

Section 6: Utilization of patient subgroups 

Section 7: Introduction to JAGS and the MCMC R package

Section 8: Bayesian Go/No-Go framework

Section 9: Bayesian Predictive Probabilities – I have explained this concept so many times that I might as well make a video on it

Section 10: Simulating Platform Trials – Utilizing the PlatformTrialSimulator package built in R

Supplementary Section:

Introduction to testing and test-driven development in R – Utilizing testthat

S3 classes and how to write generic simulations
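
As a taste of what Section 2 covers, here is a minimal sketch of a fixed-sample simulation in R (all design parameters are my assumptions, not from Kyle's course):

# Assumed design: two-arm trial with a binary outcome, 100 patients per arm,
# control response rate 0.30 vs. treatment 0.45, two-sided test at alpha = 0.05.
set.seed(1)
simulate_trial <- function(n = 100, p_control = 0.30, p_treat = 0.45) {
  control <- rbinom(1, n, p_control)       # number of control responders
  treat   <- rbinom(1, n, p_treat)         # number of treatment responders
  prop.test(c(treat, control), c(n, n))$p.value
}
pvals <- replicate(10000, simulate_trial())
mean(pvals < 0.05)                         # estimated power, roughly 0.55-0.60

The later sections of the course layer sequential looks, response-adaptive randomization, and futility/success stopping onto this same simulation skeleton.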

Paul H. Schuette, PhD
Mathematical Statistician, Scientific Computing Coordinator

Center for Drug Evaluation and Research (CDER)
Office of Biostatistics
U.S. Food and Drug Administration
Tel: 301-796-3838
Paul.Schuette@fda.hhs.gov

--------------- Lisa LaVange 2020-10-05
I have an action item from Friday’s ACTIV Statistics Subgroup Meeting to share some information about the DMC/DSMB’s role in an adaptive trial. One resource that might be of interest is a 2014 book chapter, with Paul Gallo and Dave DeMets, on considerations for interim analyses.

A more recent document that also discusses this issue is the 2019 FDA Guidance on Adaptive Designs (see section on trial integrity): https://www.fda.gov/media/78495/download.

As you can see from both documents, a case can be made for either one interim decision-recommending body or two (DMC plus a separate adaptation review body). If the choice is for one, then for IND studies, adaptations would need to be carefully prescribed with little leeway for the DMC to modify, so that the potential for bias due to the DMC having already reviewed unblinded interim results would be at a minimum. Additionally, simulation results may assume binding stopping rules, and if so, that needs to be taken into consideration by the DMC.

https://www.bmj.com/content/369/bmj.m115