Break the ties; the more levels of Y the better (fharrell.com/post/ordinal-info): maximum power when there is only one patient at each level (continuous Y)
22.1.2 What is a Fundamental Outcome Assessment?
In a given week or day what is the severity of the worst thing that happened to the patient?
Expert clinician consensus of outcome ranks
Spacing of outcome categories irrelevant
Avoids defining additive weights for multiple events on same week
Events can be graded & can code common co-occurring events as worse event
Can translate an ordinal longitudinal model to obtain a variety of estimates
time until a condition
expected time in state (see the formula after this list)
Bayesian partial proportional odds model can compute the probability that the treatment affects mortality differently than it affects nonfatal outcomes
Model also elegantly handles partial information: at each day/week the ordinal Y can be left, right, or interval censored when a range of the scale was not measured
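Both derived estimates rest on state occupancy probabilities. For a follow-up of \(T\) days, the expected number of days spent in state \(s\) is the sum of the daily occupancy probabilities:
\[ E[\text{days in state } s] = \sum_{t=1}^{T} \Pr(Y(t) = s \,|\, X) \]
so, for example, expected days unwell is \(\sum_{t} [1 - \Pr(Y(t) = \text{home})]\). This identity is used repeatedly in the case study below.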
22.1.3 Examples of Longitudinal Ordinal Outcomes
0=alive 1=dead
censored at 3w: 000
death at 2w: 01
longitudinal binary logistic model OR \(\approx\) HR
0=at home 1=hospitalized 2=MI 3=dead
hospitalized at 3w, rehosp at 7w, MI at 8w & stays in hosp, f/u ends at 10w: 0010001211
0-6 QOL excellent–poor, 7=MI 8=stroke 9=dead
QOL varies, not assessed in 3w but pt event free, stroke at 8w, death 9w: 12[0-6]334589
MI status unknown at 7w: 12[0-6]334[5-7]89 (better: treat the outcome as being in one of the two non-contiguous values {5,7} instead of [5-7], but no software is currently available for this)
Can make first 200 levels be a continuous response variable and the remaining values represent clinical event overrides
22.3 Case Study For 4-Level Ordinal Longitudinal Outcome
VIOLET: randomized clinical trial of seriously ill adults in ICUs to study the effectiveness of vitamin D vs. placebo
Daily ordinal outcomes assessed for 28 days with very little missing data
Fitted a frequentist first-order Markov partial proportional odds (PO) model to 1352 VIOLET patients using the R VGAM package to simulate 250,000 patient longitudinal records with daily assessments up to 28d: hbiostat.org/data/repo/simlongord.html
Simulation inserted an odds ratio of 0.75 for tx=1 : tx=0 (log OR = -0.288)
Case study uses the first 500 simulated patients
13203 records
average of 26.4 records per patient out of a maximum of 28, due to deaths
4-level outcomes:
patient at home
in hospital or other health facility
on ventilator or diagnosed with acute respiratory distress syndrome (ARDS)
dead
Death is an absorbing state
only possible previous states are the first 3
at baseline no one was at home
a patient who dies has Dead as the status on their final record, with no “deaths carried forward”
later we will carry deaths forward just to be able to look at empirical state occupancy probabilities (SOPs) vs. model estimates
Frequentist modeling using the VGAM package allows us to use the unconstrained partial PO (PPO) model with regard to time, but does not allow us to compute uncertainty intervals for derived parameters (e.g., SOPs and mean time in states)
Can use the bootstrap to obtain approximate confidence limits (as below)
Bayesian analysis using the rmsb package provides exact uncertainty intervals for derived parameters but at present rmsb only implements the constrained PPO model when getting predicted values
PPO for time allows mix of outcomes to change over time (which occurred in the real data)
Model specification:
For day \(t\) let \(Y(t)\) denote the ordinal outcome for a patient: \(\Pr(Y \geq y \,|\, X, Y(t-1)) = \text{expit}(\alpha_{y} + X\beta + g(Y(t-1), t))\)
\(g\) contains regression coefficients for the previous state \(Y(t-1)\) effect, the absolute time \(t\) effect, and any \(y\)-dependency on the \(t\) effect (non-PO for \(t\))
Baseline covariates: age, SOFA score (a measure of organ function), treatment (tx)
Time-dependent covariate: previous state (yprev, 3 levels)
Time trend: linear spline with knot at day 2 (handles exception at day 1 when almost no one was sent home)
Changing mix of outcomes over time
effect of time on transition ORs for different cutoffs of Y
2 time components (one slope change) \(\times\) 3 Y cutoffs = 6 parameters related to day (written out after this list)
Reverse coding of Y so that higher levels are worse
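Putting the specification together: with the linear spline knot at day 2, \((t-2)_+\) denoting the positive part of \(t-2\), and indicator terms for previous state, the linear predictor of the model fitted below can be written as

\[ \text{logit} \Pr(Y \geq y \,|\, X, Y(t-1)) = \alpha_y + \gamma_1 [Y(t-1)=\text{In Hospital/Facility}] + \gamma_2 [Y(t-1)=\text{Vent/ARDS}] + \tau_{1y}\, t + \tau_{2y}\, (t-2)_+ + \beta_1 \text{age} + \beta_2 \text{sofa} + \beta_3 \text{tx} \]

Only \(\tau_{1y}\) and \(\tau_{2y}\) depend on the cutoff \(y\) (non-PO for time), which is where the 6 day-related parameters come from; all other coefficients are shared across the 3 cutoffs.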
22.3.1 Descriptives
all state transitions from one day to the next
SOPs estimated by proportions (need to carry death forward)
Code
require(rms)
require(data.table)
require(VGAM)
getHdata(simlongord500)
d <- simlongord500
setDT(d, key='id')
d[, y     := factor(y,     levels=rev(levels(y)))]
d[, yprev := factor(yprev, levels=rev(levels(yprev)))]
setnames(d, 'time', 'day')
# Show descriptive statistics for baseline data
describe(d[day == 1, .(yprev, age, sofa, tx)], 'Baseline Variables')
Baseline Variables

4 Variables   500 Observations

yprev
       n  missing  distinct
     500        0         2

Value       In Hospital/Facility  Vent/ARDS
Frequency                    340        160
Proportion                  0.68       0.32

(describe output for age, sofa, and tx not shown)
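Code
require(ggplot2)
propsTrans(y ~ day + id, data=d, maxsize=4, arrow='->') +
  theme(axis.text.x=element_text(angle=90, hjust=1))

Show state occupancy proportions by creating a data table with death carried forward.

Code
w <- d[day < 28 & y == 'Dead', ]
w[, if(.N > 1) stop('Error: more than one death record'), by=id]
w <- w[, .(day = (day + 1) : 28, y = y, tx = tx), by=id]
u <- rbind(d, w, fill=TRUE)
setkey(u, id)
u[, Tx := paste0('tx=', tx)]
propsPO(y ~ day + Tx, data=u) +
  guides(fill=guide_legend(title='Status')) +
  theme(legend.position='bottom',
        axis.text.x=element_text(angle=90, hjust=1))

22.3.2 Model Fitting
Fit the PPO first-order Markov model without assuming PO for the time effect
Also fit a model that has linear splines with more knots to add flexibility in how time and baseline covariates are transformed
Disregard the terrible statistical practice of using asterisks to denote “significant” results

Code
f <- vglm(ordered(y) ~ yprev + lsp(day, 2) + age + sofa + tx,
          cumulative(reverse=TRUE, parallel=FALSE ~ lsp(day, 2)), data=d)
summary(f)
g <- vglm(ordered(y) ~ yprev + lsp(day, c(2, 4, 8, 15)) +
            lsp(age, c(35, 60, 75)) + lsp(sofa, c(2, 6, 10)) + tx,
          cumulative(reverse=TRUE, parallel=FALSE ~ lsp(day, c(2, 4, 8, 15))),
          data=d)
summary(g)
lrtest(g, f)
AIC(f); AIC(g)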
We will use the simpler model, which has the better (smaller) AIC. Check the PO assumption on time by comparing the simpler model’s AIC to the AIC from a fully PO model.
Code
h <- vglm(ordered(y) ~ yprev + lsp(day, 2) + age + sofa + tx,
          cumulative(reverse=TRUE, parallel=TRUE), data=d)
lrtest(f, h)
AIC(f); AIC(h)
The model allowing for non-PO in time is better. Now show Wald tests on the parameters.
Code
wald <- function(f) {
  se <- sqrt(diag(vcov(f)))
  s  <- round(cbind(beta=coef(f), SE=se, Z=coef(f) / se), 3)
  a  <- c('>= in hospital/facility', '>= vent/ARDS', 'dead',
          'previous state in hospital/facility',
          'previous state vent/ARDS',
          'initial slope for day, >= hospital/facility',
          'initial slope for day, >= vent/ARDS',
          'initial slope for day, dead',
          'slope increment, >= hospital/facility',
          'slope increment, >= vent/ARDS',
          'slope increment, dead',
          'baseline age linear effect',
          'baseline SOFA score linear effect',
          'treatment log OR')
  rownames(s) <- a
  s
}
wald(f)
beta SE Z
>= in hospital/facility -5.030 0.629 -7.996
>= vent/ARDS -13.193 0.578 -22.827
dead -19.580 0.936 -20.914
previous state in hospital/facility 8.773 0.266 33.008
previous state vent/ARDS 15.138 0.322 47.022
initial slope for day, >= hospital/facility -1.431 0.284 -5.036
initial slope for day, >= vent/ARDS -0.707 0.257 -2.752
initial slope for day, dead 0.137 0.469 0.293
slope increment, >= hospital/facility 1.484 0.286 5.191
slope increment, >= vent/ARDS 0.746 0.261 2.857
slope increment, dead -0.120 0.476 -0.252
baseline age linear effect 0.011 0.003 4.054
baseline SOFA score linear effect 0.064 0.013 5.088
treatment log OR -0.111 0.085 -1.317
The treatment coefficient points toward a benefit, though the Wald test is far from definitive. Compute the treatment transition OR and approximate 0.95 confidence interval.
Code
lor <- coef(f)['tx']
se  <- sqrt(vcov(f)['tx', 'tx'])
b   <- exp(lor + qnorm(0.975) * se * c(0, -1, 1))
names(b) <- c('OR', 'Lower', 'Upper')
round(b, 3)
OR Lower Upper
0.895 0.758 1.056
The maximum likelihood estimate of the OR is somewhat at odds with the true OR of 0.75 on which the simulations were based.
22.3.3 Covariate Effects
Most interesting covariate effect is effect of time since randomization
Code
w   <- d[day == 1]
dat <- expand.grid(yprev='In Hospital/Facility', age=median(w$age),
                   sofa=median(w$sofa), tx=0, day=1:28)
ltrans <- function(fit, mod) {
  p <- predict(fit, dat)
  u <- data.frame(day=as.vector(row(p)), y=as.vector(col(p)),
                  logit=as.vector(p))
  u$y   <- factor(u$y, 1:3, paste('>=', levels(d$y)[-1]))
  u$mod <- mod
  u
}
u <- rbind(ltrans(f, 'model with few knots'),
           ltrans(g, 'model with more knots'))
ggplot(u, aes(x=day, y=logit, color=y)) + geom_line() +
  facet_wrap(~ mod, ncol=2) +
  xlab('Day') + ylab('Log Odds') +
  labs(caption='Relative log odds of transitioning from in hospital/facility to indicated status')
22.3.4 Correlation Structure
The data were simulated under a first-order Markov process so it doesn’t make sense to check correlation pattern assumptions for our model
When the simulated data were created, the within-patient correlation pattern was checked against the pattern from the fitted model by simulating a large trial from the model fit and comparing correlations in the simulated data to those in the real data
It showed excellent agreement
Let’s compute the Spearman \(\rho\) correlation matrix on the 500 patient dataset and show the matrix from the real data next to it
Delete day 28 from the new correlation matrix to conform with correlation matrix computed on real data
Also show correlation matrix from 10,000 patient sample
Heights of bars are proportional to Spearman \(\rho\)
Code
# Tall and thin -> short and wide data table
w <- dcast(d[, .(id, day, y=as.numeric(y))], id ~ day, value.var='y')
r <- cor(as.matrix(w[, -1]), use='pairwise.complete.obs',
         method='spearman')[-28, -28]
p <- plotCorrM(r, xangle=90)
p[[1]] + theme(legend.position='none') +
  labs(caption='Spearman correlation matrix from 500 patient dataset')
Code
vcorr <- readRDS('markov-vcorr.rds')
ra    <- vcorr$r.actual
plotCorrM(ra, xangle=90)[[1]] + theme(legend.position='none') +
  labs(caption='Spearman correlation matrix from actual data')
rc    <- vcorr$r.simulated
plotCorrM(rc, xangle=90)[[1]] + theme(legend.position='none') +
  labs(caption='Spearman correlation matrix from 10,000 simulated patients')
Estimating the whole correlation matrix from 500 patients is noisy
Compute the mean absolute difference between two correlation matrices (the first on 10,000 simulated patients assuming a first-order Markov process and the second from the real data)
Compute means of mean absolute differences stratified by the day involved
Code
ad <- abs(rc[-28, -28] - ra)
round(apply(ad, 1, mean), 2)
Actual and simulated within-patient correlations agree well except when day 1 is involved.
Code
p[[2]]
Usual serial correlation declining pattern; outcome status values become less correlated within patient as time gap increases
Also see non-isotropic pattern: correlations depend also on absolute time, not just gap
Formal Goodness of Fit Assessments for Correlation Structure
Data simulation model \(\rightarrow\) we already know that the first-order Markov process has to fit
Do two formal assessments to demonstrate how this can be done in general. Both make the correlation structure more versatile.
Add patient-specific intercepts to see if a compound symmetry structure adds anything to the first-order Markov structure
Add a dependency on state before last in addition to our model’s dependency on the last state to see if a second-order Markov process fits better
Add Random Effects
Bayesian models handle random effects more naturally than frequentist models \(\rightarrow\) use a Bayesian partial PO first-order Markov model (R rmsb package)
Use cmdstan software in place of the default of rstan
Code
require(rmsb)
cmdstanr::set_cmdstan_path(cmdstan.loc)   # cmdstan.loc is defined in ~/.Rprofile
options(prType='html')
# Use all but one core
options(mc.cores=parallel::detectCores() - 1, rmsb.backend='cmdstan')
seed <- 2   # The following took 15m using 4 cores
b <- blrm(y ~ yprev + lsp(day, 2) + age + sofa + tx + cluster(id),
          ~ lsp(day, 2), data=d, file='markov-bppo.rds')
stanDx(b)
Iterations: 2000 on each of 4 chains, with 4000 posterior distribution samples saved
For each parameter, n_eff is a crude measure of effective sample size
and Rhat is the potential scale reduction factor on split chains
(at convergence, Rhat=1)
Checking sampler transitions treedepth.
Treedepth satisfactory for all transitions.
Checking sampler transitions for divergences.
6 of 4000 (0.15%) transitions ended with a divergence.
These divergent transitions indicate that HMC is not fully able to explore the posterior distribution.
Try increasing adapt delta closer to 1.
If this doesn't remove all divergences, try to reparameterize the model.
Checking E-BFMI - sampler transitions HMC potential energy.
E-BFMI satisfactory.
Effective sample size satisfactory.
Split R-hat values satisfactory all parameters.
Processing complete.
Divergent samples: 0 0 3 3
EBFMI: 0.452 0.415 0.49 0.35
Parameter Rhat ESS bulk ESS tail
1 alpha[1] 1.002 2707 2238
2 alpha[2] 1.003 1596 1793
3 alpha[3] 1.005 1242 1893
4 beta[1] 1.003 2257 2264
5 beta[2] 1.001 2583 2543
6 beta[3] 1.013 469 895
7 beta[4] 1.000 3453 3040
8 beta[5] 1.001 3199 2700
9 beta[6] 1.005 750 1432
10 beta[7] 1.002 3410 1888
11 sigmag[1] 1.029 174 443
12 tau[1,1] 1.002 4497 3196
13 tau[2,1] 1.000 3716 3278
14 tau[1,2] 1.003 3880 2937
15 tau[2,2] 1.002 4488 3212
Code
b
Bayesian Partial Proportional Odds Ordinal Logistic Model
Dirichlet Priors With Concentration Parameter 0.455 for Intercepts
blrm(formula = y ~ yprev + lsp(day, 2) + age + sofa + tx + cluster(id),
ppo = ~lsp(day, 2), data = d, file = "markov-bppo.rds")
Frequencies of Responses

 Home  In Hospital/Facility  Vent/ARDS  Dead
 8141                  3887        913    51

Obs 12992   Draws 4000   Chains 4   Time 211.4s   p 7
Cluster on id   Clusters 500   σγ 0.3913 [0.0071, 0.6328]

Mixed Calibration/Discrimination Indexes: B 0.028 [0.028, 0.029]
Discrimination Indexes: g 5.358 [5.134, 5.616]   gp 0.453 [0.45, 0.456]   EV 0.876 [0.864, 0.886]   v 26.633 [24.406, 28.992]   vp 0.205 [0.202, 0.208]
Rank Discrim. Indexes: C 0.983 [0.982, 0.983]   Dxy 0.965 [0.965, 0.966]
                              Mean β   Median β     S.E.     Lower     Upper  Pr(β>0)  Symmetry
y≥In Hospital/Facility       -4.8317    -4.8527   0.6449   -6.1173   -3.6028   0.0000      1.07
y≥Vent/ARDS                 -13.1432   -13.1439   0.5720  -14.2189  -12.0002   0.0000      0.99
y≥Dead                      -19.8810   -19.8297   0.9875  -21.8128  -18.0560   0.0000      0.89
yprev=In Hospital/Facility    8.6424     8.6395   0.2791    8.0763    9.1543   1.0000      1.05
yprev=Vent/ARDS              15.0620    15.0566   0.3321   14.4023   15.7044   1.0000      1.05
day                          -1.4992    -1.4899   0.2876   -2.0645   -0.9417   0.0000      0.92
day'                          1.5420     1.5325   0.2886    0.9892    2.1160   1.0000      1.09
age                           0.0125     0.0124   0.0032    0.0064    0.0192   1.0000      1.02
sofa                          0.0764     0.0759   0.0164    0.0473    0.1122   1.0000      1.12
tx                           -0.1156    -0.1126   0.0976   -0.2888    0.0875   0.1192      0.99
day:y≥Vent/ARDS               0.7327     0.7301   0.3784   -0.0151    1.4705   0.9748      1.02
day':y≥Vent/ARDS             -0.7466    -0.7425   0.3828   -1.5146   -0.0157   0.0240      0.97
day:y≥Dead                    1.6845     1.6666   0.5684    0.6111    2.7775   0.9998      1.13
day':y≥Dead                  -1.7186    -1.6939   0.5754   -2.8060   -0.6102   0.0005      0.89
Note that blrm parameterizes the partial PO parameters differently than vglm.
Posterior median of the standard deviation of the random effects \(\sigma_\gamma\) is 0.39
This is fairly small on the logit scale in which most of the action takes place in \([-4, 4]\)
Random intercepts add an inconsequential improvement in the fit, justifying the Markov process’ conditional (on prior state) independence assumption
Another useful analysis would entail comparing the SD of the posterior distributions for the main parameters with and without inclusion of the random effects
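A minimal sketch of that comparison, assuming a hypothetical second fit without cluster(id) (the output file name is made up) and that rmsb stores posterior samples in the fit's draws component:

Code
# Sketch: refit without patient random effects (file name hypothetical)
b2 <- blrm(y ~ yprev + lsp(day, 2) + age + sofa + tx,
           ~ lsp(day, 2), data=d, file='markov-bppo-nore.rds')
# Posterior SDs of parameters common to both fits,
# computed from the posterior draws matrices
sd.re   <- apply(b$draws,  2, sd)
sd.nore <- apply(b2$draws, 2, sd)
common  <- intersect(names(sd.re), names(sd.nore))
round(cbind('with RE'=sd.re[common], 'without RE'=sd.nore[common]), 3)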
Second-order Markov Process
On follow-up days 2-28
Frequentist partial PO model
Code
# Derive time-before-last states (lag-1 `yprev`)
h <- d[, yprev2 := shift(yprev), by=id]
h <- h[day > 1, ]
# Fit first-order model ignoring day 1 so can compare to second-order
# We have to make time linear since no day 1 data
f1 <- vglm(ordered(y) ~ yprev + day + age + sofa + tx,
           cumulative(reverse=TRUE, parallel=FALSE ~ day), data=h)
f2 <- vglm(ordered(y) ~ yprev + yprev2 + day + age + sofa + tx,
           cumulative(reverse=TRUE, parallel=FALSE ~ day), data=h)
lrtest(f2, f1)
Likelihood ratio test
Model 1: ordered(y) ~ yprev + yprev2 + day + age + sofa + tx
Model 2: ordered(y) ~ yprev + day + age + sofa + tx
#Df LogLik Df Chisq Pr(>Chisq)
1 37463 -2094.5
2 37465 -2095.0 2 1.1236 0.5702
Code
AIC(f1); AIC(f2)
[1] 4212.051
[1] 4214.928
First-order model has better fit “for the money” by AIC
Formal chunk test of second-order terms not impressive
22.3.5 Computing Derived Quantities
From the fitted Markov state transition model, compute for one covariate setting and two treatments:
state occupancy probabilities
mean time in state
differences between treatments in mean time in state
To specify covariate setting:
most common initial state is In Hospital/Facility so use that
within that category look at relationship between the two covariates
they have no correlation so use the individual medians
Code
istate <- 'In Hospital/Facility'
w <- d[day == 1 & yprev == istate, ]
w[, cor(age, sofa, method='spearman')]
x <- w[, lapply(.SD, median), .SDcols=Cs(age, sofa)]
adjto <- x[, paste0('age=', x[, age], ' sofa=', x[, sofa],
                    ' initial state=', istate)]
# Expand to cover both treatments and initial state
x <- cbind(tx=0:1, yprev=istate, x)
x
tx yprev age sofa
<int> <char> <num> <num>
1: 0 In Hospital/Facility 56 5
2: 1 In Hospital/Facility 56 5
Compute all SOPs for each treatment. soprobMarkovOrdm is in Hmisc.
Code
S <- z <- NULL
for(Tx in 0:1) {
  s <- soprobMarkovOrdm(f, x[tx == Tx, ], times=1:28, ylevels=levels(d$y),
                        absorb='Dead', tvarname='day')
  S <- rbind(S, cbind(tx=Tx, s))
  u <- data.frame(day=as.vector(row(s)), y=as.vector(col(s)), p=as.vector(s))
  u$tx <- Tx
  z <- rbind(z, u)
}
z$y <- factor(z$y, 1:4, levels(d$y))
revo <- function(z) {
  z <- as.factor(z)
  factor(z, levels=rev(levels(as.factor(z))))
}
ggplot(z, aes(x=factor(day), y=p, fill=revo(y))) +
  facet_wrap(~ paste('Treatment', tx), nrow=1) +
  geom_col() + xlab('Day') + ylab('Probability') +
  guides(fill=guide_legend(title='Status')) +
  labs(caption=paste0('Estimated state occupancy probabilities for\n', adjto)) +
  theme(legend.position='bottom',
        axis.text.x=element_text(angle=90, hjust=1))
Compute by treatment the mean time unwell (expected number of days not at home). Expected days in state is simply the sum over days of daily probabilities of being in that state.
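Code
mtu  <- tapply(1. - S[, 'Home'], S[, 'tx'], sum)
dmtu <- diff(mtu)
w <- c(mtu, dmtu)
names(w) <- c('tx=0', 'tx=1', 'Days Difference')
w

Do a similar calculation for the expected number of days alive out of 28 days (similar to restricted mean survival time).

Code
mta <- tapply(1. - S[, 'Dead'], S[, 'tx'], sum)
w <- c(mta, diff(mta))
names(w) <- c('tx=0', 'tx=1', 'Days Difference')
w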
tx=0 tx=1 Days Difference
27.65094449 27.74380920 0.09286472
22.3.6 Bootstrap Confidence Interval for Difference in Mean Time Unwell
Need to sample with replacement from patients, not records
code taken from rms package’s bootcov function
sampling patients entails including some patients multiple times and omitting others
save all the record numbers, group them by patient ID, sample from these IDs, then use all the original records whose record numbers correspond to the sampled IDs
Use the basic bootstrap (recapped after this list) to get 0.95 confidence intervals
Speed up the model fit by having each bootstrap fit use as starting parameter estimates the values from the original data fit
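For reference, the basic bootstrap interval reflects the bootstrap quantiles around the original estimate: with \(\hat{\theta}\) the full-sample estimate and \(\theta^{*}_{(q)}\) the \(q\) quantile of the bootstrap estimates, the 0.95 interval is

\[ \left[\, 2\hat{\theta} - \theta^{*}_{(0.975)},\ \ 2\hat{\theta} - \theta^{*}_{(0.025)} \,\right] \]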
Code
B <- 500                           # number of bootstrap resamples
recno <- split(1 : nrow(d), d$id)  # record numbers grouped by patient id
npt   <- length(recno)             # 500
startbeta <- coef(f)
seed <- 3
if(file.exists('markov-boot.rds')) {
  z        <- readRDS('markov-boot.rds')
  betas    <- z$betas
  diffmean <- z$diffmean
} else {
  betas <- diffmean <- numeric(B)
  ylev  <- levels(d$y)
  for(i in 1 : B) {
    j <- unlist(recno[sample(npt, npt, replace=TRUE)])
    g <- vglm(ordered(y) ~ yprev + lsp(day, 2) + age + sofa + tx,
              cumulative(reverse=TRUE, parallel=FALSE ~ lsp(day, 2)),
              coefstart=startbeta, data=d[j, ])
    betas[i] <- coef(g)['tx']
    s0 <- soprobMarkovOrdm(g, x[tx == 0, ], times=1:28, ylevels=ylev,
                           absorb='Dead', tvarname='day')
    s1 <- soprobMarkovOrdm(g, x[tx == 1, ], times=1:28, ylevels=ylev,
                           absorb='Dead', tvarname='day')
    # P(not at home) = 1 - P(home); sum these probs to get E[days]
    mtud <- sum(1. - s1[, 'Home']) - sum(1. - s0[, 'Home'])
    diffmean[i] <- mtud
  }
  saveRDS(list(betas=betas, diffmean=diffmean), 'markov-boot.rds',
          compress='xz')
}
See how bootstrap treatment log ORs relate to differences in days unwell.
Code
ggfreqScatter(betas, diffmean,
              xlab='Log OR', ylab='Difference in Mean Days Unwell')
Compute basic bootstrap 0.95 confidence interval for OR and differences in mean time
Code
# bootBCa is in the rms package and uses the boot package
clb <- exp(bootBCa(coef(f)['tx'], betas, seed=seed, n=npt, type='basic'))
clm <- bootBCa(dmtu, diffmean, seed=seed, n=npt, type='basic')
a   <- round(c(clb, clm), 3)[c(1, 3, 2, 4)]
data.frame(Quantity=c('OR', 'Difference in mean days unwell'),
           Lower=a[1:2], Upper=a[3:4])
Quantity Lower Upper
1 OR 0.942 1.375
2 Difference in mean days unwell -0.542 2.781
22.3.7 Notes on Inference
The difference between treatments in mean time in state(s) is zero if and only if the treatment OR is 1
note agreement in bootstrap estimates
will not necessarily be true if PO is relaxed for treatment
inference about any treatment effect is the same for all covariate settings that do not interact with treatment
\(\rightarrow p\)-values are the same for the two metrics, and Bayesian posterior probabilities are also identical
Bayesian posterior probabilities for mean time in state \(> \epsilon\), for \(\epsilon > 0\), will vary with covariate settings (sicker patients at baseline have more room to move)
Rohde, M. D., French, B., Stewart, T. G., & Harrell, F. E. (2024). Bayesian transition models for ordinal longitudinal outcomes. Statistics in Medicine, 43(18), 3539–3561. https://doi.org/10.1002/sim.10133