- It was nice to see sufficient time provided for all speakers to
present their views and recommendations and to answer a good many
questions from the audience
- All the talks by the speakers (excluding mine, which only others
can judge) were excellent
Michael Rosenbaum and Joshua Betz: Covariate Adjustment for Marginal Estimates
- The advocacy of covariate adjustment to increase efficiency and
lower sample sizes is great to see
- Alternative (and admittedly fussy) views appear here
- I think of a marginal treatment effect as being more applicable when
the response variable Y has an essentially unlimited range, e.g., when
assessing the difference in systolic blood pressure. When Y has a
restricted range, e.g., when estimating the absolute risk of a clinical
outcome, and the clinical trial sample has a different average absolute
risk than the intended target population, the marginal effect (sample
average treatment effect) will not be useful for estimating a
population quantity. “Risk magnification” is a large effect driven by
the distribution of baseline variables: for a fixed relative effect,
the absolute risk reduction grows with a patient’s baseline risk (a
small numeric sketch follows this item). Differences in mean blood
pressure, by contrast, do not shift nearly as much as a function of
mean baseline blood pressure.
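Here is that sketch, using an assumed constant treatment odds ratio of 0.7
applied across a hypothetical range of baseline risks: the relative effect is
fixed, yet the absolute risk reduction varies more than tenfold.

```python
import numpy as np

odds_ratio = 0.7                                       # assumed constant relative effect
base_risk = np.array([0.02, 0.05, 0.10, 0.20, 0.40])   # hypothetical baseline risks

base_odds = base_risk / (1 - base_risk)
treated_odds = base_odds * odds_ratio
treated_risk = treated_odds / (1 + treated_odds)
arr = base_risk - treated_risk                         # absolute risk reduction

for r, a in zip(base_risk, arr):
    print(f"baseline risk {r:.2f} -> absolute risk reduction {a:.3f}")
```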
- I like absolute risk reduction but feel that to be maximally useful
we need to account for risk magnification by presenting a separate
absolute risk reduction estimate for every patient in the trial. An
example is here, which pushes the idea of avoiding one-number summaries
of treatment effects in general. Model-free methods are unlikely to
provide competitive mean squared errors in estimating these
patient-type-specific absolute risk reductions; a sketch of obtaining
patient-specific estimates from a fitted model follows this item.
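Here is a hedged sketch of that idea on simulated data rather than any real
trial: fit a logistic model with treatment and a baseline covariate (the
covariate, coefficients, and sample size below are illustrative assumptions),
then contrast each patient’s predicted risks under control and under
treatment.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(60, 10, n)                  # hypothetical baseline covariate
tx = rng.integers(0, 2, n)                   # 1:1 randomized treatment indicator
lin = -6 + 0.08 * age - 0.5 * tx             # assumed true log-odds model
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = np.column_stack([np.ones(n), tx, age])   # intercept, treatment, covariate
fit = sm.Logit(y, X).fit(disp=False)

# Predicted risk for every patient under each counterfactual treatment assignment
X0 = np.column_stack([np.ones(n), np.zeros(n), age])
X1 = np.column_stack([np.ones(n), np.ones(n), age])
arr_i = fit.predict(X0) - fit.predict(X1)    # patient-specific absolute risk reduction

print(f"ARR varies across patients from {arr_i.min():.3f} to {arr_i.max():.3f}")
```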
- Marginal odds ratios may not apply to anyone in the trial or in the
population; a small numeric sketch of this non-collapsibility appears
below
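To make the preceding point concrete with hypothetical numbers: two
equal-sized strata share the same conditional odds ratio, yet pooling them
yields a smaller marginal odds ratio that describes neither stratum.

```python
import numpy as np

def treated_risk(p0, odds_ratio):
    """Treated risk implied by a control risk p0 and a conditional odds ratio."""
    odds = p0 / (1 - p0) * odds_ratio
    return odds / (1 + odds)

conditional_or = 9.0                  # assumed, identical in both strata
p0 = np.array([0.20, 0.50])           # hypothetical control risks in strata A and B
p1 = treated_risk(p0, conditional_or)

m0, m1 = p0.mean(), p1.mean()         # pool the two equal-sized strata
marginal_or = (m1 / (1 - m1)) / (m0 / (1 - m0))
print(f"conditional OR = {conditional_or}, marginal OR = {marginal_or:.2f}")  # about 7.3
```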
- There are some important issues when categorizing the Rankin stroke
scale or analyzing change from
baseline in an ordinal outcome (especially floor and ceiling effects
and regression to the mean)
- The link between the Wilcoxon estimand and the odds ratio in a
proportional odds model is very tight even under extreme
non-proportional odds; a special marginal odds ratio is not
needed
- This illustrates
Michael’s super important point about reducing sample size when
adjusting for outcome heterogeneity (i.e., adjusting for covariates):
it is a 30,000-patient trial with a binary endpoint (death) that
occurs in 0.07 of the patients. Ordinary covariate adjustment with a
logistic model yields the same power as an unadjusted analysis but with
17% fewer patients randomized.
- With standard covariate adjustment in linear models, the residual
variance goes down by a factor of \(1 - R^{2}\), where \(R^{2}\) is the
proportion of outcome heterogeneity explained by the covariates, so the
efficiency of an unadjusted analysis relative to an adjusted one is
\(1 - R^{2}\). What is the corresponding calculation for the marginal
approach? I see that Joshua provided this link, which sounds relevant.
A back-of-the-envelope sample size calculation for the linear case
follows this item.
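Here is that back-of-the-envelope calculation, using an assumed \(R^{2}\) of
0.30 and ignoring degrees-of-freedom corrections: the covariate-adjusted
analysis needs roughly \(1 - R^{2}\) times the unadjusted sample size for the
same precision.

```python
r2 = 0.30            # assumed proportion of outcome heterogeneity explained by covariates
n_unadjusted = 1000  # hypothetical sample size for the unadjusted analysis

efficiency_unadjusted = 1 - r2           # efficiency relative to the adjusted analysis
n_adjusted = n_unadjusted * (1 - r2)     # sample size giving roughly the same precision
print(f"unadjusted n = {n_unadjusted}, covariate-adjusted n = {n_adjusted:.0f}")
```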
- Lots of useful software was made available
- Marginal estimates are for the sample average treatment effect. This
will not estimate the population average treatment effect unless the
clinical trial is done on a random sample from the population. I’ve
never seen a clinical trial that used random sampling. See this.
Kelley Kidwell: SMART Trials
- When trying to find which subgroup has the best results for a given
treatment regimen, I’m especially interested in the width of uncertainty
intervals for both subgroup-specific treatment effects and for
differential treatment effects (interactions); a quick calculation of
those relative widths follows this item
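Here is a quick look at those widths for a continuous outcome, assuming equal
allocation, a common residual SD, and two subgroups (all illustrative
assumptions): the subgroup-specific effect has a standard error about 1.4
times that of the overall treatment effect, and the interaction about twice,
so roughly four times the sample size is needed to estimate the interaction
with the same precision.

```python
import numpy as np

sigma, n = 1.0, 400   # assumed residual SD and total sample size, equal allocation
se_overall     = sigma * np.sqrt(2 / (n / 2))   # treatment effect, two arms of n/2
se_subgroup    = sigma * np.sqrt(2 / (n / 4))   # treatment effect within one of two subgroups
se_interaction = sigma * np.sqrt(4 / (n / 4))   # difference of the two subgroup effects

print(se_subgroup / se_overall, se_interaction / se_overall)   # ~1.41 and 2.0
```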
- What is the limit of how chronic the disease can be and how
long-acting a given treatment can be before a SMART design is no longer
possible?
- I wish I had had the opportunity to work on a SMART trial
- Change from baseline in pain severity is highly problematic
- It’s nice to have the RShiny sample size calculator for the most
common SMART designs, plus a number of practical references
- I’d enjoy seeing some work on how Bayesian and frequentist methods
compare for SMART
- Wonderful description of the essence of various classes of study
designs
- Described randomization module embedded inside REDCap, and its
limitations
- New REDCap tools released last week with multiple features
- multiple randomizations, blinding, randomization numbers (must be
translated to treatment by unblinded personnel)
- randomization metadata (e.g. who randomized?)
- tools for decentralized trials, randomization override or offline
use
- randomization using customized APIs (e.g. to allow response-adaptive
randomization), with “outside” randomizations stored in the main REDCap
database; a minimal sketch of such allocation logic follows this list
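As a purely hypothetical illustration of the kind of response-adaptive
allocation logic such an external API might implement (this is not REDCap
code; the Thompson-sampling rule, function name, and counts are assumptions):

```python
import random

def thompson_assign(successes, failures, rng=random.Random(2024)):
    """Choose the next patient's arm by Thompson sampling over per-arm Beta posteriors.

    successes/failures: observed counts per arm, tracked by the external service.
    Returns the index of the arm to assign.
    """
    draws = [rng.betavariate(1 + s, 1 + f) for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=lambda k: draws[k])

# Example: arm 1 is currently doing better (12/20 vs 8/20), so it tends to be favored
print(thompson_assign(successes=[8, 12], failures=[12, 8]))
```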
- Raised tough questions about adaptive trials, with emphasis on trial
efficiency
- Note that Stephen Senn has written about problems with precision of
treatment effect estimates when adaptation is done
- We can have more discussion about baseline imbalances
- Nice discussion of “fit for purpose”
- Animated discussion of simple vs. covariate-adjusted analyses