Some good general references on longitudinal data analysis are Davis (2002), Pinheiro & Bates (2000), Diggle et al. (2002), Venables & Ripley (2003), Hand & Crowder (1996), Verbeke & Molenberghs (2000), and Lindsey (1997).
7.1 Notation
* \(N\) subjects
* Subject \(i\) (\(i=1,2,\ldots,N\)) has \(n_{i}\) responses measured at times \(t_{i1}, t_{i2}, \ldots, t_{in_{i}}\)
* Response at time \(t\) for subject \(i\): \(Y_{it}\)
* Subject \(i\) has baseline covariates \(X_{i}\)
* Generally the response measured at time \(t_{i1}=0\) is a covariate in \(X_{i}\) instead of being the first measured response \(Y_{i0}\)
* Time trend in response is modeled with \(k\) parameters so that the time "main effect" has \(k\) d.f.
* Let the basis functions modeling the time effect be \(g_{1}(t), g_{2}(t), \ldots, g_{k}(t)\)
7.2 Model Specification for Effects on \(E(Y)\)
7.2.1 Common Basis Functions
* \(k\) dummy variables for \(k+1\) unique times (assumes no functional form for time but may spend many d.f.)
* \(k=1\) for linear time trend, \(g_{1}(t)=t\)
* \(k\)–order polynomial in \(t\)
* \(k+1\)–knot restricted cubic spline (one linear term, \(k-1\) nonlinear terms)
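As a concrete illustration (not part of the original text), the restricted cubic spline basis can be generated directly; a minimal sketch assuming the Hmisc package:

```r
require(Hmisc)
times <- 0:16                      # measurement times in weeks
# 4-knot restricted cubic spline: g1(t) = t plus k - 1 = 2 nonlinear terms
B <- rcspline.eval(times, nk=4, inclx=TRUE)
dim(B)                             # 17 x 3 basis matrix
```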
7.2.2 Model for Mean Profile
* A model for the mean time-response profile without interactions between time and any \(X\): \(E[Y_{it} | X_{i}] = X_{i}\beta + \gamma_{1}g_{1}(t) + \gamma_{2}g_{2}(t) + \ldots + \gamma_{k}g_{k}(t)\)
* Model with interactions between time and some \(X\)'s: add product terms for desired interaction effects
* Example: To allow the mean time trend for subjects in group 1 (reference group) to be arbitrarily different from the time trend for subjects in group 2, have a dummy variable for group 2, a time "main effect" curve with \(k\) d.f., and all \(k\) products of these time components with the dummy variable for group 2 (see the sketch after this list)
* Time should be modeled using indicator variables only when time is really discrete, e.g., when time is in weeks and subjects were followed at exactly the intended weeks. In general time should be modeled continuously (and nonlinearly if there are more than 2 follow-up times) using actual visit dates instead of intended dates (Donohue et al., n.d.).
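A hypothetical sketch of the group \(\times\) time specification in rms notation (the names y, group, t, and d are made up; the within-subject correlation is ignored here and taken up in Section 7.5):

```r
require(rms)
# Dummy variable for group 2, k = 3 d.f. for time (4-knot spline),
# and all k products of the time components with the group dummy
f <- ols(y ~ group * rcs(t, 4), data=d)
```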
7.2.3 Model Specification for Treatment Comparisons
* In studies comparing two or more treatments, a response is often measured at baseline (pre-randomization)
* Analyst has the option to use this measurement as \(Y_{i0}\) or as part of \(X_{i}\)
Comments from Jim Rochon
For RCTs, I draw a sharp line at the point when the intervention begins. The LHS [left hand side of the model equation] is reserved for something that is a response to treatment. Anything before this point can potentially be included as a covariate in the regression model. This includes the “baseline” value of the outcome variable. Indeed, the best predictor of the outcome at the end of the study is typically where the patient began at the beginning. It drinks up a lot of variability in the outcome; and, the effect of other covariates is typically mediated through this variable.
I treat anything after the intervention begins as an outcome. In the western scientific method, an “effect” must follow the “cause” even if by a split second.
Note that an RCT is different than a cohort study. In a cohort study, “Time 0” is not terribly meaningful. If we want to model, say, the trend over time, it would be legitimate, in my view, to include the “baseline” value on the LHS of that regression model.
Now, even if the intervention, e.g., surgery, has an immediate effect, I would still reserve the LHS for anything that might legitimately be considered as the response to the intervention. So, if we cleared a blocked artery and then measured the MABP, then that would still be included on the LHS.
Now, it could well be that most of the therapeutic effect occurred by the time that the first repeated measure was taken, and then levels off. Then, a plot of the means would essentially be two parallel lines and the treatment effect is the distance between the lines, i.e., the difference in the intercepts.
If the linear trend from baseline to Time 1 continues beyond Time 1, then the lines will have a common intercept but the slopes will diverge. Then, the treatment effect will be the difference in slopes.
One point to remember is that the estimated intercept is the value at time 0 that we predict from the set of repeated measures post randomization. In the first case above, the model will predict different intercepts even though randomization would suggest that they would start from the same place. This is because we were asleep at the switch and didn’t record the “action” from baseline to time 1. In the second case, the model will predict the same intercept values because the linear trend from baseline to time 1 was continued thereafter.
More importantly, there are considerable benefits to including it as a covariate on the RHS. The baseline value tends to be the best predictor of the outcome post-randomization, and this maneuver increases the precision of the estimated treatment effect. Additionally, any other prognostic factors correlated with the outcome variable will also be correlated with the baseline value of that outcome, and this has two important consequences. First, this greatly reduces the need to enter a large number of prognostic factors as covariates in the linear models. Their effect is already mediated through the baseline value of the outcome variable. Secondly, any imbalances across the treatment arms in important prognostic factors will induce an imbalance across the treatment arms in the baseline value of the outcome. Including the baseline value thereby reduces the need to enter these variables as covariates in the linear models.
Senn (2006) states that temporally and logically, a “baseline cannot be a response to treatment”, so baseline and response cannot be modeled in an integrated framework.
… one should focus clearly on ‘outcomes’ as being the only values that can be influenced by treatment and examine critically any schemes that assume that these are linked in some rigid and deterministic view to ‘baseline’ values. An alternative tradition sees a baseline as being merely one of a number of measurements capable of improving predictions of outcomes and models it in this way.
The final reason that baseline cannot be modeled as the response at time zero is that many studies have inclusion/exclusion criteria that include cutoffs on the baseline variable. In other words, the baseline measurement comes from a truncated distribution. In general it is not appropriate to model the baseline with the same distributional shape as the follow-up measurements. Thus the approaches recommended by Liang & Zeger (2000) and Liu et al. (2009) are problematic^1.
1 In addition to this, one of the paper’s conclusions that analysis of covariance is not appropriate if the population means of the baseline variable are not identical in the treatment groups is not correct (Senn, 2006). See Kenward et al. (2010) for a rebuke of Liu et al. (2009).
7.3 Modeling Within-Subject Dependence
* Random effects and mixed effects models have become very popular
* Disadvantages:
    + Induced correlation structure for \(Y\) may be unrealistic
    + Numerically demanding
    + Require complex approximations for distributions of test statistics
* Conditional random effects vs. (subject-) marginal models:
    + Random effects are subject-conditional
    + Random effects models are needed to estimate responses for individual subjects
    + Models without random effects are marginalized with respect to subject-specific effects
    + They are natural when the interest is in group-level (i.e., covariate-specific but not patient-specific) parameters (e.g., overall treatment effect)
    + Random effects are natural when there is clustering at more than the subject level (multi-level models)
* Extended linear model (marginal; with no random effects) is a logical extension of the univariate model (e.g., few statisticians use subject random effects for univariate \(Y\))
* This was known as growth curve models and generalized least squares (Goldstein, 1989; Potthoff & Roy, 1964) and was developed long before mixed effect models became popular
* Pinheiro and Bates (2000, Section 5.1.2) state that "in some applications, one may wish to avoid incorporating random effects in the model to account for dependence among observations, choosing to use the within-group component \(\Lambda_{i}\) to directly model variance-covariance structure of the response."
* We will assume that \(Y_{it} | X_{i}\) has a multivariate normal distribution with mean given above and with variance-covariance matrix \(V_{i}\), an \(n_{i}\times n_{i}\) matrix that is a function of \(t_{i1}, \ldots, t_{in_{i}}\)
* We further assume that the diagonals of \(V_{i}\) are all equal
* Procedure can be generalized to allow for heteroscedasticity over time or with respect to \(X\) (e.g., males may be allowed to have a different variance than females; see the sketch after the assumptions below)
* This extended linear model has the following assumptions:
    + all the assumptions of OLS at a single time point including correct modeling of predictor effects and univariate normality of responses conditional on \(X\)
    + the distribution of two responses at two different times for the same subject, conditional on \(X\), is bivariate normal with a specified correlation coefficient
    + the joint distribution of all \(n_{i}\) responses for the \(i^{th}\) subject is multivariate normal with the given correlation pattern (which implies the previous two distributional assumptions)
    + responses from any times for any two different subjects are uncorrelated
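For the heteroscedasticity generalization mentioned above, nlme variance functions can be used; a sketch with hypothetical names:

```r
require(nlme)
# Allow a separate residual variance for each sex while leaving the
# mean model unchanged
f <- gls(y ~ trt * week, data=d,
         weights=varIdent(form = ~ 1 | sex))
```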
Table: What Methods To Use for Repeated Measurements / Serial Data?^2 ^3

| | Repeated Measures ANOVA | GEE | Mixed Effects Models | GLS | Markov | LOCF | Summary Statistic^4 |
|---|---|---|---|---|---|---|---|
| Assumes normality | × | | × | × | | | |
| Assumes independence of measurements within subject | ×^5 | ×^6 | | | | | |
| Assumes a correlation structure^7 | × | ×^8 | × | × | × | | |
| Requires same measurement times for all subjects | × | | | | | ? | |
| Does not allow smooth modeling of time to save d.f. | × | | | | | | |
| Does not allow adjustment for baseline covariates | × | | | | | | |
| Does not easily extend to non-continuous Y | × | | | × | | | |
| Loses information by not using intermediate measurements | | | | | | ×^9 | × |
| Does not allow widely varying # observations per subject | × | ×^10 | | | | × | ×^11 |
| Does not allow for subjects to have distinct trajectories^12 | × | × | | × | × | × | |
| Assumes subject-specific effects are Gaussian | | | × | | | | |
| Badly biased if non-random dropouts | ? | × | | | | × | |
| Biased in general | | | | | | × | |
| Harder to get tests & CLs | | | ×^13 | | | ×^14 | |
| Requires large # subjects/clusters | | × | | | | | |
| SEs are wrong | ×^15 | | | | | × | |
| Assumptions are not verifiable in small samples | × | N/A | × | × | | × | |
| Does not extend to complex settings such as time-dependent covariates and dynamic^16 models | × | | × | × | | × | ? |
2 Thanks to Charles Berry, Brian Cade, Peter Flom, Bert Gunter, and Leena Choi for valuable input.
3 GEE: generalized estimating equations; GLS: generalized least squares; LOCF: last observation carried forward.
4 E.g., compute within-subject slope, mean, or area under the curve over time. Assumes that the summary measure is an adequate summary of the time profile and assesses the relevant treatment effect.
5 Unless one uses the Huynh-Feldt or Greenhouse-Geisser correction
6 For full efficiency, if using the working independence model
7 Or requires the user to specify one
8 For full efficiency of regression coefficient estimates
9 Unless the last observation is missing
10 The cluster sandwich variance estimator used to estimate SEs in GEE does not perform well in this situation, and neither does the working independence model because it does not weight subjects properly.
11 Unless one knows how to properly do a weighted analysis
12 Or uses population averages
13 Unlike GLS, does not use standard maximum likelihood methods yielding simple likelihood ratio \(\chi^2\) statistics. Requires high-dimensional integration to marginalize random effects, using complex approximations, and if using SAS, unintuitive d.f. for the various tests.
14 Because there is no correct formula for SE of effects; ordinary SEs are not penalized for imputation and are too small
15 If correction not applied
16 E.g., a model with a predictor that is a lagged value of the response variable
* Markov models use ordinary univariate software and are very flexible
* They apply the same way to binary, ordinal, nominal, and continuous Y
* They require post-fitting calculations to get probabilities, means, and quantiles that are not conditional on the previous Y value

Gardiner et al. (2009) compared several longitudinal data models, especially with regard to assumptions and how regression coefficients are estimated. Peters et al. (2012) have an empirical study confirming that the "use all available data" approach of likelihood-based longitudinal models makes imputation of follow-up measurements unnecessary.
7.4 Parameter Estimation Procedure
* Generalized least squares
    + Like weighted least squares but uses a covariance matrix that is not diagonal
    + Each subject can have her own shape of \(V_{i}\) due to each subject being measured at a different set of times
* Maximum likelihood
    + Newton-Raphson or other trial-and-error methods used for estimating parameters
    + For small numbers of subjects, there are advantages in using REML (restricted maximum likelihood) instead of ordinary MLE (Diggle et al., 2002, Section 5.3; Pinheiro & Bates, 2000, Chapter 5; Goldstein, 1989), especially to get a less biased estimate of the covariance matrix
* When imbalances are not severe, OLS fitted ignoring subject identifiers may be efficient
    + But OLS standard errors will be too small as they don't take intra-cluster correlation into account
    + May be rectified by substituting a covariance matrix estimated from the Huber-White cluster sandwich estimator or from the cluster bootstrap (see the sketch after this list)
* When imbalances are severe and intra-subject correlations are strong, OLS is not expected to be efficient because it gives equal weight to each observation
    + a subject contributing two distant observations receives \(\frac{1}{5}\) the weight of a subject having 10 tightly-spaced observations
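A sketch of the sandwich/bootstrap fix-up mentioned in the list above, with hypothetical names:

```r
require(rms)
# OLS ignoring subject identifiers; x=TRUE, y=TRUE retained for robcov
f <- ols(y ~ trt * rcs(week, 3), data=d, x=TRUE, y=TRUE)
g <- robcov(f, cluster=d$uid)    # Huber-White cluster sandwich SEs
# b <- bootcov(f, cluster=d$uid) # cluster bootstrap alternative
```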
7.5 Common Correlation Structures
* Usually restrict ourselves to isotropic correlation structures: the correlation between responses within subject at two times depends only on a measure of distance between the two times, not the individual times
* We simplify further and assume the correlation depends only on \(|t_{1} - t_{2}|\)
* Can speak interchangeably of correlations of residuals within subjects or correlations between responses measured at different times on the same subject, conditional on covariates \(X\)
* Assume that the correlation coefficient for \(Y_{it_{1}}\) vs. \(Y_{it_{2}}\) conditional on baseline covariates \(X_{i}\) for subject \(i\) is \(h(|t_{1} - t_{2}|, \rho)\), where \(\rho\) is a vector (usually a scalar) set of fundamental correlation parameters
* Some commonly used structures when times are continuous and are not equally spaced (Pinheiro & Bates, 2000, sec. 5.3.3) (nlme correlation function names are at the right if the structure is implemented in nlme):
Table 7.1: Some longitudinal data correlation structures

| Structure | nlme Function |
|-----------|---------------|
| Compound symmetry: \(h = \rho\) if \(t_{1} \neq t_{2}\), 1 if \(t_{1}=t_{2}\)^17 | corCompSymm |
| Autoregressive-moving average lag 1: \(h = \rho^{\lvert t_{1} - t_{2}\rvert} = \rho^{s}\) where \(s = \lvert t_{1}-t_{2}\rvert\) | corCAR1 |
| Exponential: \(h = \exp(-s/\rho)\) | corExp |
| Gaussian: \(h = \exp[-(s/\rho)^{2}]\) | corGaus |
| Linear: \(h = (1 - s/\rho)[s < \rho]\) | corLin |
| Rational quadratic: \(h = 1 - (s/\rho)^{2}/[1+(s/\rho)^{2}]\) | corRatio |
| Spherical: \(h = [1-1.5(s/\rho)+0.5(s/\rho)^{3}][s < \rho]\) | corSpher |
| Linear exponent AR(1): \(h = \rho^{d_{min} + \delta\frac{s - d_{min}}{d_{max} - d_{min}}}\), 1 if \(t_{1}=t_{2}\) | Simpson et al. (2010) |

17 Essentially what two-way ANOVA assumes

The structures 3–7 use \(\rho\) as a scaling parameter, not as something restricted to be in \([0,1]\).
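In nlme these structures are supplied through the correlation argument of gls; a sketch with hypothetical names for the continuous-time AR(1) structure:

```r
require(nlme)
# h = rho^s within subject, s = |t1 - t2| in weeks
f <- gls(y ~ trt * week, data=d,
         correlation=corCAR1(form = ~ week | id))
```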
7.6 Checking Model Fit
* Constant variance assumption: usual residual plots
* Normality assumption: usual QQ residual plots
* Correlation pattern: variogram
    + Estimate correlations of all possible pairs of residuals at different time points
    + Pool all estimates at the same absolute time difference \(s\)
    + Variogram is a plot of \(y = 1 - \hat{h}(s, \rho)\) vs. \(s\) on the \(x\)-axis
    + Superimpose the theoretical variogram assumed by the model
7.7 R Software
* Nonlinear mixed effects model package of Pinheiro & Bates
* For linear models, fitting functions are
    + lme for mixed effects models
    + gls for generalized least squares without random effects
* For this version the rms package has Gls so that many features of rms can be used:
    + anova: all partial Wald tests, test of linearity, pooled tests
    + summary: effect estimates (differences in \(\hat{Y}\)) and confidence limits, can be plotted
    + plot, ggplot, plotp: continuous effect plots
    + nomogram: nomogram
    + Function: generate R function code for fitted model
    + latex: LaTeX representation of fitted model
* In addition, Gls has a bootstrap option (hence you do not use rms's bootcov for Gls fits). To get the regular gls functions named anova (for likelihood ratio tests, AIC, etc.) or summary, use anova.gls or summary.gls
* The nlme package has many graphics and fit-checking functions
* Several functions will be demonstrated in the case study
7.8 Case Study
Consider the dataset in Table 6.9 of Davis (2002, pp. 161–163), from a multi-center, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.
* Randomized to placebo (\(N=36\)), 5000 units of BotB (\(N=36\)), 10,000 units of BotB (\(N=37\))
* Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)
* TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began
* Dataset cdystonia from web site
7.8.1 Graphical Exploration of Data
Code
```r
require(rms)
require(data.table)
options(prType='html')    # for model print, summary, anova, validate
getHdata(cdystonia)
setDT(cdystonia)          # convert to data.table
cdystonia[, uid := paste(site, id)]   # unique subject ID
# Tabulate patterns of subjects' time points
g <- function(w) paste(sort(unique(w)), collapse=' ')
cdystonia[, table(tapply(week, uid, g))]
```
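Model with \(Y_{i0}\) as Baseline Covariate

The merge summary printed below is produced by the following dataset-construction step (reproduced from the chapter's source code), which reshapes the data so that the week-0 TWSTRS score becomes the baseline covariate twstrs0:

Code

```r
baseline <- cdystonia[week == 0]
baseline[, week := NULL]
setnames(baseline, 'twstrs', 'twstrs0')
followup <- cdystonia[week > 0, .(uid, week, twstrs)]
setkey(baseline, uid)
setkey(followup, uid, week)
both <- Merge(baseline, followup, id = ~ uid)
```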
Vars Obs Unique IDs IDs in #1 IDs not in #1
baseline 7 109 109 NA NA
followup 3 522 108 108 0
Merged 9 523 109 109 0
Number of unique IDs in any data frame : 109
Number of unique IDs in all data frames: 108
Code
```r
# Remove person with no follow-up record
both <- both[!is.na(week)]
dd <- datadist(both)
options(datadist='dd')
```
7.8.2 Using Generalized Least Squares
We stay with baseline adjustment and use a variety of correlation structures, with constant variance. Time is modeled as a restricted cubic spline with 3 knots, because there are only 3 unique interior values of week.
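The fits behind the AIC comparison discussed next are not shown above; a sketch of how they could be computed, using the same model formula as the later contrasts (the list of candidate nlme structures is an assumption):

```r
require(nlme)
# Fit the same GLS model under several candidate correlation structures
cp <- list(corCAR1, corExp, corCompSymm, corLin, corGaus, corSpher)
z  <- vector('list', length(cp))
for(k in 1:length(cp))
  z[[k]] <- gls(twstrs ~ treat * rcs(week, 3) + rcs(twstrs0, 3) +
                  rcs(age, 4) * sex, data=both,
                correlation=cp[[k]](form = ~ week | uid))
# Compare the fits; the AIC column is the basis for the discussion below
anova(z[[1]], z[[2]], z[[3]], z[[4]], z[[5]], z[[6]])
```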
The AIC computed above is set up so that smaller values are best. From this, the continuous-time AR1 and exponential structures are tied for the best. For the remainder of the analysis we use corCAR1, using Gls. Keselman et al. (1998) did a simulation study of the reliability of AIC for selecting the correct covariance structure in repeated measurement models. In choosing from among 11 structures, AIC selected the correct structure 47% of the time. Gurka et al. (2011) demonstrated that fixed effects in a mixed effects model can be biased, independent of sample size, when the specified covariance matrix is more restricted than the true one.
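The corCAR1 fit referenced below as a is likewise not shown; a sketch consistent with the contrast() calls that follow:

```r
a <- Gls(twstrs ~ treat * rcs(week, 3) + rcs(twstrs0, 3) +
           rcs(age, 4) * sex, data=both,
         correlation=corCAR1(form = ~ week | uid))
a
```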
\(\hat{\rho} = 0.8672\), the estimate of the correlation between two measurements taken one week apart on the same subject. The estimated correlation for measurements 10 weeks apart is \(0.8672^{10} = 0.24\).
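A quick arithmetic check of this decay under the fitted AR(1)-type structure:

```r
rho <- 0.8672
rho ^ c(1, 5, 10)   # estimated correlations at 1-, 5-, and 10-week gaps
```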
Code
```r
v <- Variogram(a, form = ~ week | uid)
plot(v)
```
Check constant variance and normality assumptions with the usual residual plots. Now get hypothesis tests, estimates, and graphically interpret the model:

Code
```r
# To get results for week 8 for a different reference group
# for treatment, use e.g. summary(a, week=4, treat='Placebo')
# Compare low dose with placebo, separately at each time
k1 <- contrast(a, list(week=c(2,4,8,12,16), treat='5000U'),
                  list(week=c(2,4,8,12,16), treat='Placebo'))
options(width=80)
print(k1, digits=3)
```
week twstrs0 age sex Contrast S.E. Lower Upper Z Pr(>|z|)
1 2 46 56 F -6.31 2.10 -10.43 -2.186 -3.00 0.0027
2 4 46 56 F -5.91 1.82 -9.47 -2.349 -3.25 0.0011
3 8 46 56 F -4.90 2.01 -8.85 -0.953 -2.43 0.0150
4* 12 46 56 F -3.07 1.75 -6.49 0.361 -1.75 0.0795
5* 16 46 56 F -1.02 2.10 -5.14 3.092 -0.49 0.6260
Redundant contrasts are denoted by *
Confidence intervals are 0.95 individual intervals
Code
```r
# Compare high dose with placebo
k2 <- contrast(a, list(week=c(2,4,8,12,16), treat='10000U'),
                  list(week=c(2,4,8,12,16), treat='Placebo'))
print(k2, digits=3)
```
week twstrs0 age sex Contrast S.E. Lower Upper Z Pr(>|z|)
1 2 46 56 F -6.89 2.07 -10.96 -2.83 -3.32 0.0009
2 4 46 56 F -6.64 1.79 -10.15 -3.13 -3.70 0.0002
3 8 46 56 F -5.49 2.00 -9.42 -1.56 -2.74 0.0061
4* 12 46 56 F -1.76 1.74 -5.17 1.65 -1.01 0.3109
5* 16 46 56 F 2.62 2.09 -1.47 6.71 1.25 0.2099
Redundant contrasts are denoted by *
Confidence intervals are 0.95 individual intervals
Although multiple d.f. tests such as total treatment effects or treatment \(\times\) time interaction tests are comprehensive, their increased degrees of freedom can dilute power. In a treatment comparison, treatment contrasts at the last time point (single d.f. tests) are often of major interest. Such contrasts are informed by all the measurements made by all subjects (up until dropout times) when a smooth time trend is assumed.
7.8.3 Bayesian Proportional Odds Random Effects Model
* Develop a \(y\)-transformation invariant longitudinal model
* Proportional odds model with no grouping of TWSTRS scores
* Bayesian random effects model
* Random effects Gaussian with exponential prior distribution for its SD, with mean 1.0
* Compound symmetry correlation structure
* Demonstrates a large amount of patient-to-patient intercept variability
Code
```r
require(rmsb)
options(mc.cores=parallel::detectCores() - 1, rmsb.backend='cmdstan')
bpo <- blrm(twstrs ~ treat * rcs(week, 3) + rcs(twstrs0, 3) +
              rcs(age, 4) * sex + cluster(uid),
            data=both, file='bpo.rds')
# file= means that after the first time the model is run, it will not
# be re-run unless the data, fitting options, or underlying Stan code change
stanDx(bpo)
```
For each posterior draw compute the difference in means and get exact (to within simulation error) 0.95 highest posterior density intervals for these differences.
Code
```r
wks <- c(2, 4, 8, 12, 16)   # follow-up weeks (defined earlier in the source)
M <- Mean(bpo)              # create R function that computes mean Y from X*beta
k <- contrast(bpo, list(week=wks, treat='10000U'),
                   list(week=wks, treat='Placebo'),
              fun=M, cnames=paste('Week', wks))
plot(k, which='diff') + theme(legend.position='bottom')
```
7.8.4 Bayesian Markov Semiparametric Model

* First-order Markov model
* Serial correlation induced by the Markov model is similar to AR(1), which we already know fits these data
* Markov model is more likely to fit the data than the random effects model, which induces a compound symmetry correlation structure
* Models state transitions
* PO model at each visit, with Y from the previous visit conditioned upon just like any covariate
* Need to uncondition (marginalize) on previous Y to get the time-response profile we usually need
* Semiparametric model is especially attractive because one can easily "uncondition" a discrete Y model, and the distribution of Y for control subjects can be any shape
* Let measurement times be \(t_{1}, t_{2}, \dots, t_{m}\), and the measurement for a subject at time \(t\) be denoted \(Y(t)\)
* First-order Markov model:
\[\Pr(Y(t_{i}) \geq y \mid X, Y(t_{i-1})) = \mathrm{expit}(\alpha_{y} + X\beta + g(Y(t_{i-1}), t_{i}, t_{i} - t_{i-1}))\]
* \(g\) involves any number of regression coefficients for a main effect of \(t\), the main effect of time gap \(t_{i} - t_{i-1}\) if this is not collinear with absolute time, a main effect of the previous state, and interactions between these
* Examples of how the previous state may be modeled in \(g\):
    + linear in numeric codes for \(Y\)
    + spline function in same
    + discontinuous bi-linear relationship where there is a slope for in-hospital outcome severity, a separate slope for outpatient outcome severity, and an intercept jump at the transition from inpatient to outpatient (or vice versa)
* Markov model is quite flexible in handling time trends and serial correlation patterns
* Can allow for irregular measurement times: hbiostat.org/stat/irreg.html

Fit the model and run standard Stan diagnostics.

Code
```r
# Create a new variable to hold previous value of Y for the subject
# For week 2, previous value is the baseline value
setDT(both, key=c('uid', 'week'))
both[, ptwstrs := shift(twstrs), by=uid]
both[week == 2, ptwstrs := twstrs0]
dd <- datadist(both)
bmark <- blrm(twstrs ~ treat * rcs(week, 3) + rcs(ptwstrs, 4) +
                rcs(age, 4) * sex,
              data=both, file='bmark.rds')
# When adding partial PO terms for week and ptwstrs, z=-1.8, 5.04
stanDx(bmark)
```
Frequencies of Missing Values Due to Each Variable
twstrs treat week ptwstrs age sex
0 0 0 5 0 0
| | Mixed Calibration/ Discrimination Indexes | Discrimination Indexes | Rank Discrim. Indexes |
|---|---|---|---|
| Obs 517 | LOO log L -1785±22.32 | g 3.272 [2.938, 3.527] | C 0.828 [0.825, 0.83] |
| Draws 4000 | LOO IC 3570.01±44.63 | gp 0.416 [0.403, 0.429] | Dxy 0.656 [0.651, 0.661] |
| Chains 4 | Effective p 88.87±4.69 | EV 0.533 [0.497, 0.572] | |
| Time 3.8s | B 0.117 [0.113, 0.121] | v 8.433 [6.855, 9.889] | |
| p 18 | | vp 0.133 [0.124, 0.143] | |
| | Mode β | Mean β | Median β | S.E. | Lower | Upper | Pr(β>0) | Symmetry |
|---|---|---|---|---|---|---|---|---|
| treat=5000U | 0.2210 | 0.2114 | 0.1986 | 0.5710 | -0.8544 | 1.3681 | 0.6428 | 0.97 |
| treat=Placebo | 1.8312 | 1.8294 | 1.8324 | 0.5832 | 0.7605 | 3.0076 | 0.9992 | 1.03 |
| week | 0.4864 | 0.4866 | 0.4865 | 0.0850 | 0.3234 | 0.6503 | 1.0000 | 1.04 |
| week' | -0.2878 | -0.2877 | -0.2864 | 0.0905 | -0.4690 | -0.1224 | 0.0003 | 0.96 |
| ptwstrs | 0.1997 | 0.2013 | 0.2011 | 0.0263 | 0.1475 | 0.2505 | 1.0000 | 1.02 |
| ptwstrs' | -0.0621 | -0.0659 | -0.0654 | 0.0620 | -0.1855 | 0.0568 | 0.1465 | 0.97 |
| ptwstrs'' | 0.5325 | 0.5492 | 0.5490 | 0.2468 | 0.0945 | 1.0520 | 0.9862 | 1.04 |
| age | -0.0295 | -0.0282 | -0.0279 | 0.0309 | -0.0920 | 0.0290 | 0.1792 | 1.04 |
| age' | 0.1236 | 0.1206 | 0.1203 | 0.0863 | -0.0543 | 0.2788 | 0.9215 | 0.98 |
| age'' | -0.5069 | -0.4969 | -0.4969 | 0.3407 | -1.1953 | 0.1406 | 0.0707 | 1.02 |
| sex=M | -0.4635 | -0.3775 | -0.3346 | 2.2919 | -4.8029 | 3.9812 | 0.4368 | 1.01 |
| treat=5000U × week | -0.0341 | -0.0323 | -0.0306 | 0.1113 | -0.2482 | 0.1786 | 0.3955 | 0.98 |
| treat=Placebo × week | -0.2719 | -0.2715 | -0.2699 | 0.1137 | -0.4858 | -0.0439 | 0.0088 | 0.98 |
| treat=5000U × week' | -0.0340 | -0.0355 | -0.0377 | 0.1209 | -0.2646 | 0.2043 | 0.3818 | 1.00 |
| treat=Placebo × week' | 0.1197 | 0.1193 | 0.1161 | 0.1222 | -0.1216 | 0.3474 | 0.8423 | 1.04 |
| age × sex=M | 0.0112 | 0.0092 | 0.0085 | 0.0553 | -0.0965 | 0.1149 | 0.5645 | 1.00 |
| age' × sex=M | -0.0510 | -0.0465 | -0.0448 | 0.1577 | -0.3571 | 0.2585 | 0.3812 | 1.00 |
| age'' × sex=M | 0.2614 | 0.2475 | 0.2399 | 0.6125 | -1.0222 | 1.3862 | 0.6540 | 1.02 |
Code
```r
a <- anova(bpo)
a
```
Relative Explained Variation for twstrs. Approximate total model Wald χ² used in denominators of REV: 247.6 [205.4, 322.2].
| | REV | Lower | Upper | d.f. |
|---|---|---|---|---|
| treat (Factor+Higher Order Factors) | 0.132 | 0.067 | 0.236 | 6 |
| All Interactions | 0.094 | 0.035 | 0.178 | 4 |
| week (Factor+Higher Order Factors) | 0.572 | 0.429 | 0.683 | 6 |
| All Interactions | 0.094 | 0.035 | 0.178 | 4 |
| Nonlinear (Factor+Higher Order Factors) | 0.022 | 0.000 | 0.078 | 3 |
| twstrs0 | 0.660 | 0.504 | 0.751 | 2 |
| Nonlinear | 0.017 | 0.000 | 0.056 | 1 |
| age (Factor+Higher Order Factors) | 0.027 | 0.010 | 0.093 | 6 |
| All Interactions | 0.016 | 0.001 | 0.063 | 3 |
| Nonlinear (Factor+Higher Order Factors) | 0.023 | 0.004 | 0.075 | 4 |
| sex (Factor+Higher Order Factors) | 0.020 | 0.002 | 0.073 | 4 |
| All Interactions | 0.016 | 0.001 | 0.063 | 3 |
| treat × week (Factor+Higher Order Factors) | 0.094 | 0.035 | 0.178 | 4 |
| Nonlinear | 0.008 | 0.000 | 0.041 | 2 |
| Nonlinear Interaction : f(A,B) vs. AB | 0.008 | 0.000 | 0.041 | 2 |
| age × sex (Factor+Higher Order Factors) | 0.016 | 0.001 | 0.063 | 3 |
| Nonlinear | 0.014 | 0.000 | 0.052 | 2 |
| Nonlinear Interaction : f(A,B) vs. AB | 0.014 | 0.000 | 0.052 | 2 |
| TOTAL NONLINEAR | 0.059 | 0.033 | 0.157 | 8 |
| TOTAL INTERACTION | 0.110 | 0.055 | 0.206 | 7 |
| TOTAL NONLINEAR + INTERACTION | 0.144 | 0.099 | 0.274 | 11 |
| TOTAL | 1.000 | 1.000 | 1.000 | 17 |
Code
```r
plot(a)
```
Let’s add subject-level random effects to the model. Smallness of the standard deviation of the random effects provides support for the assumption of conditional independence that we like to make for Markov models and allows us to simplify the model by omitting random effects.
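The fitting code for this model is not shown above; a sketch consistent with the earlier blrm calls (the object name bmarkre matches its use below):

```r
bmarkre <- blrm(twstrs ~ treat * rcs(week, 3) + rcs(ptwstrs, 4) +
                  rcs(age, 4) * sex + cluster(uid),
                data=both, file='bmarkre.rds')
stanDx(bmarkre)
```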
Frequencies of Missing Values Due to Each Variable

twstrs treat week ptwstrs age sex cluster(uid)
0 0 0 5 0 0 0
| | Mixed Calibration/ Discrimination Indexes | Discrimination Indexes | Rank Discrim. Indexes |
|---|---|---|---|
| Obs 517 | LOO log L -1786.85±22.38 | g 3.248 [2.924, 3.526] | C 0.828 [0.825, 0.83] |
| Draws 4000 | LOO IC 3573.7±44.77 | gp 0.415 [0.402, 0.428] | Dxy 0.656 [0.65, 0.661] |
| Chains 4 | Effective p 94.31±5 | EV 0.531 [0.492, 0.569] | |
| Time 4.7s | B 0.117 [0.114, 0.122] | v 8.304 [6.61, 9.697] | |
| p 18 | | vp 0.133 [0.123, 0.142] | |
| Cluster on uid | | | |
| Clusters 108 | | | |
| σγ 0.1173 [0, 0.3792] | | | |
| | Mean β | Median β | S.E. | Lower | Upper | Pr(β>0) | Symmetry |
|---|---|---|---|---|---|---|---|
| treat=5000U | 0.2228 | 0.2169 | 0.5674 | -0.8585 | 1.3290 | 0.6558 | 1.02 |
| treat=Placebo | 1.8342 | 1.8358 | 0.5554 | 0.7210 | 2.9318 | 0.9998 | 0.99 |
| week | 0.4860 | 0.4853 | 0.0821 | 0.3324 | 0.6511 | 1.0000 | 1.01 |
| week' | -0.2847 | -0.2849 | 0.0870 | -0.4602 | -0.1260 | 0.0000 | 1.01 |
| ptwstrs | 0.2003 | 0.2003 | 0.0265 | 0.1445 | 0.2487 | 1.0000 | 0.98 |
| ptwstrs' | -0.0655 | -0.0642 | 0.0614 | -0.1853 | 0.0512 | 0.1398 | 0.96 |
| ptwstrs'' | 0.5453 | 0.5431 | 0.2431 | 0.0664 | 1.0149 | 0.9888 | 1.06 |
| age | -0.0283 | -0.0291 | 0.0320 | -0.0904 | 0.0336 | 0.1920 | 1.00 |
| age' | 0.1225 | 0.1232 | 0.0885 | -0.0439 | 0.2947 | 0.9120 | 1.04 |
| age'' | -0.5068 | -0.5066 | 0.3506 | -1.1973 | 0.1543 | 0.0648 | 0.97 |
| sex=M | -0.4046 | -0.3879 | 2.3564 | -5.1096 | 3.9917 | 0.4323 | 0.99 |
| treat=5000U × week | -0.0325 | -0.0342 | 0.1095 | -0.2554 | 0.1770 | 0.3812 | 1.00 |
| treat=Placebo × week | -0.2717 | -0.2714 | 0.1064 | -0.4792 | -0.0567 | 0.0065 | 1.01 |
| treat=5000U × week' | -0.0374 | -0.0350 | 0.1179 | -0.2735 | 0.1817 | 0.3725 | 0.97 |
| treat=Placebo × week' | 0.1176 | 0.1187 | 0.1150 | -0.1102 | 0.3456 | 0.8462 | 1.02 |
| age × sex=M | 0.0097 | 0.0089 | 0.0569 | -0.0990 | 0.1204 | 0.5620 | 1.01 |
| age' × sex=M | -0.0493 | -0.0461 | 0.1641 | -0.3710 | 0.2604 | 0.3832 | 0.97 |
| age'' × sex=M | 0.2610 | 0.2596 | 0.6432 | -0.9998 | 1.4963 | 0.6558 | 1.06 |
The random effects SD is only 0.11 on the logit scale. Also, the standard deviations of all the regression parameter posterior distributions are virtually unchanged with the addition of random effects:
Code
```r
plot(sqrt(diag(vcov(bmark))), sqrt(diag(vcov(bmarkre))),
     xlab='Posterior SDs in Conditional Independence Markov Model',
     ylab='Posterior SDs in Random Effects Markov Model')
abline(a=0, b=1, col=gray(0.85))
```
So we will use the model omitting random effects.
Show the partial effects of all the predictors, including the effect of the previous measurement of TWSTRS. Also compute high dose:placebo treatment contrasts on these conditional estimates.
Code
```r
ggplot(Predict(bmark))
```
Code
```r
ggplot(Predict(bmark, week, treat))
```
Code
```r
k <- contrast(bmark, list(week=wks, treat='10000U'),
                     list(week=wks, treat='Placebo'),
              cnames=paste('Week', wks))
k
```
week Contrast S.E. Lower Upper Pr(Contrast>0)
1 Week 2 2 -1.2863661 0.3894213 -2.0003197 -0.4892969 0.0005
2 Week 4 4 -0.7433413 0.2582757 -1.2744474 -0.2785185 0.0010
3 Week 8 8 0.2234072 0.3485378 -0.4682498 0.8784669 0.7408
4* Week 12 12 0.7129521 0.2582684 0.2521144 1.2348809 0.9960
5* Week 16 16 1.0831961 0.3989422 0.2746078 1.8176953 0.9965
Redundant contrasts are denoted by *
Intervals are 0.95 highest posterior density intervals
Contrast is the posterior mean
Using posterior means for parameter values, compute the probability that at a given week twstrs will be \(\geq 40\) when at the previous visit it was 40. Also show the conditional mean twstrs when it was 40 at the previous visit.
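A hypothetical sketch of these calculations using the predict.blrm cell probabilities described below; the covariate settings, the posterior.summary argument, and the assumed column ordering are assumptions, not code from the original:

```r
# Previous-visit TWSTRS fixed at 40; age and sex set to typical values
d40 <- data.frame(treat='10000U', week=wks, ptwstrs=40, age=56, sex='F')
p   <- predict(bmark, d40, type='fitted.ind',
               posterior.summary='mean')    # cell probabilities by week
ylev <- sort(unique(both$twstrs))           # assumed column order of p
probGE40 <- rowSums(p[, ylev >= 40])        # Pr(TWSTRS >= 40 | prev = 40)
condMean <- as.vector(p %*% ylev)           # conditional mean TWSTRS
cbind(week=wks, probGE40, condMean)
```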
* Turn to marginalized (unconditional on previous twstrs) quantities
* Capitalize on the PO model being a multinomial model, just with PO restrictions
* Manipulations of conditional probabilities to get the unconditional probability that twstrs=y don't need to know about PO
* Compute all cell probabilities and use the law of total probability recursively (sketched after this list): \[\Pr(Y_{t} = y | X) = \sum_{j=1}^{k} \Pr(Y_{t} = y | X, Y_{t-1} = j) \Pr(Y_{t-1} = j | X)\]
* predict.blrm method with type='fitted.ind' computes the needed conditional cell probabilities, optionally for all posterior draws at once
* Easy to get highest posterior density intervals for derived parameters such as unconditional probabilities or unconditional means
* Hmisc package soprobMarkovOrdm function (in version 4.6) computes an array of all the state occupancy probabilities for all the posterior draws
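The recursion itself is easy to express directly; a self-contained schematic with made-up transition matrices (not the Hmisc/rmsb implementation):

```r
# trans[[t]][j, y] = Pr(Y_t = y | Y_{t-1} = j); three states, two steps
trans <- list(matrix(c(.7, .2, .1,
                       .3, .5, .2,
                       .1, .3, .6), 3, 3, byrow=TRUE),
              matrix(c(.6, .3, .1,
                       .2, .6, .2,
                       .1, .2, .7), 3, 3, byrow=TRUE))
occ <- c(1, 0, 0)                 # baseline state known: state 1
for(tr in trans)
  occ <- as.vector(occ %*% tr)    # law of total probability, one step
occ                               # Pr(Y_t = y | X) at the final time
```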
Code
```r
# d: covariate-settings data frame built earlier, with baseline twstrs set to 42
# For each dose, get all the posterior draws for all state occupancy
# probabilities for all visits
ylev <- sort(unique(both$twstrs))
tlev <- c('Placebo', '10000U')
R <- list()
for(trt in tlev) {   # separately by treatment
  d$treat <- trt
  u <- soprobMarkovOrdm(bmark, d, wks, ylev,
                        tvarname='week', pvarname='ptwstrs')
  R[[trt]] <- u
}
dim(R[[1]])   # posterior draws x times x distinct twstrs values
```
[1] 4000 5 62
Code
```r
# For each posterior draw, treatment, and week compute the mean TWSTRS
# Then compute posterior mean of means, and HPD interval
Rmean <- Rmeans <- list()
for(trt in tlev) {
  r <- R[[trt]]
  # Mean Y at each week and posterior draw (mean from a discrete distribution)
  m <- apply(r, 1:2, function(x) sum(ylev * x))
  Rmeans[[trt]] <- m
  # Posterior mean and median and HPD interval over draws
  u <- apply(m, 2, f)   # f: summary function defined earlier in the source
  u <- rbind(week=as.numeric(colnames(u)), u)
  Rmean[[trt]] <- u
}
r <- lapply(Rmean, function(x) as.data.frame(t(x)))
for(trt in tlev) r[[trt]]$treat <- trt
r <- do.call(rbind, r)
ggplot(r, aes(x=week, y=Mean, color=treat)) + geom_line() +
  geom_ribbon(aes(ymin=Lower, ymax=Upper), alpha=0.2, linetype=0)
```
* Use the same posterior draws of unconditional probabilities of all values of TWSTRS to get the posterior distribution of differences in mean TWSTRS between high and low dose
* Get posterior mean of all cell probability estimates at week 12
    + Distribution of TWSTRS conditional on high dose, median age, modal sex
    + Not conditional on week 8 value
Code
```r
p <- R$`10000U`[, '12', ]   # 4000 x 62
pmean <- apply(p, 2, mean)
yvals <- as.numeric(names(pmean))
plot(yvals, pmean, type='l', xlab='TWSTRS', ylab='')
```
7.9 Study Questions
Section 7.2
When should one model the time-response profile using discrete time?
Section 7.3
What makes generalized least squares and mixed effect models relatively robust to non-completely-random dropouts?
What does the last observation carried forward method always violate?
Section 7.4
Which correlation structure do you expect to fit the data when there are rapid repetitions over a short time span? When the follow-up time span is very long?
Section 7.8
What can go wrong if many correlation structures are tested in one dataset?
In a longitudinal intervention study, what is the most typical comparison of interest? Is it best to borrow information in estimating this contrast?
Davis, C. S. (2002). Statistical Methods for the Analysis of Repeated Measurements. Springer.
Diggle, P. J., Heagerty, P., Liang, K.-Y., & Zeger, S. L. (2002). Analysis of Longitudinal Data (second). Oxford University Press.
Donohue, M. C., Langford, O., Insel, P. S., van Dyck, C. H., Petersen, R. C., Craft, S., Sethuraman, G., Raman, R., Aisen, P. S., & the Alzheimer's Disease Neuroimaging Initiative. (n.d.). Natural cubic splines for the analysis of Alzheimer's clinical trials. Pharmaceutical Statistics. https://doi.org/10.1002/pst.2285
Gardiner, J. C., Luo, Z., & Roman, L. A. (2009). Fixed effects, random effects and GEE: What are the differences? Stat Med, 28, 221–239.
nice comparison of models; econometrics; different use of the term "fixed effects model"
Goldstein, H. (1989). Restricted unbiased iterative generalized least-squares estimation. Biometrika, 76(3), 622–623.
Gurka, M. J., Edwards, L. J., & Muller, K. E. (2011). Avoiding bias in mixed model inference for fixed effects. Stat Med, 30(22), 2696–2707. https://doi.org/10.1002/sim.4293
Hand, D., & Crowder, M. (1996). Practical Longitudinal Data Analysis. Chapman & Hall.
Kenward, M. G., White, I. R., & Carpenter, J. R. (2010). Should baseline be a covariate or dependent variable in analyses of change from baseline in clinical trials? (Letter to the editor). Stat Med, 29, 1455–1456.
Sharp rebuke of Liu et al. (2009).
Keselman, H. J., Algina, J., Kowalchuk, R. K., & Wolfinger, R. D. (1998). A comparison of two approaches for selecting covariance structures in the analysis of repeated measurements. Comm Stat - Sim Comp, 27, 591–604.
Use of AIC and BIC for selecting the covariance structure in repeated measurements (serial, longitudinal data); when choosing from 11 covariance patterns, AIC selected the correct structure 0.47 of the time and BIC 0.35 of the time.
Liang, K.-Y., & Zeger, S. L. (2000). Longitudinal data analysis of continuous and discrete responses for pre-post designs. Sankhyā, 62, 134–148.
Makes an error in assuming the baseline variable will have the same univariate distribution as the response except for a shift; baseline may have, for example, a truncated distribution based on a trial's inclusion criteria. If the correlation between baseline and response is zero, ANCOVA will be twice as efficient as simple analysis of change scores; if the correlation is one, they may be equally efficient.
Lindsey, J. K. (1997). Models for Repeated Measurements. Clarendon Press.
Liu, G. F., Lu, K., Mogg, R., Mallick, M., & Mehrotra, D. V. (2009). Should baseline be a covariate or dependent variable in analyses of change from baseline in clinical trials? Stat Med, 28, 2509–2530.
Seems to miss several important points, such as the fact that the baseline variable is often part of the inclusion/exclusion criteria and so has a truncated distribution that is different from that of the follow-up measurements; see Kenward et al. (2010) for a sharp rebuke.
Peters, S. A., Bots, M. L., den Ruijter, H. M., Palmer, M. K., Grobbee, D. E., Crouse, J. R., O’Leary, D. H., Evans, G. W., Raichlen, J. S., Moons, K. G., Koffijberg, H., & METEOR study group. (2012). Multiple imputation of missing repeated outcome measurements did not add to linear mixed-effects models. J Clin Epi, 65(6), 686–695. https://doi.org/10.1016/j.jclinepi.2011.11.012
Pinheiro, J. C., & Bates, D. M. (2000). Mixed-Effects Models in S and S-PLUS. Springer.
Potthoff, R. F., & Roy, S. N. (1964). A generalized multivariate analysis of variance model useful especially for growth curve problems. Biometrika, 51, 313–326.
included an AR1 example
Senn, S. (2006). Change from baseline and analysis of covariance revisited. Stat Med, 25, 4334–4344.
Shows that the claim that in a 2-arm study ANCOVA requires the population means at baseline to be identical is not true; refutes some claims of Liang & Zeger (2000); problems with counterfactuals; temporal additivity ("amounts to supposing that despite the fact that groups are different at baseline they would show the same evolution over time"); causal additivity; it is difficult to design trials for which simple analysis of change scores is unbiased, ANCOVA is biased, and a causal interpretation can be given. Temporally and logically, a "baseline cannot be a _response_ to treatment", so baseline and response cannot be modeled in an integrated framework as Laird and Ware's model has been used. "One should focus clearly on 'outcomes' as being the only values that can be influenced by treatment and examine critically any schemes that assume that these are linked in some rigid and deterministic view to 'baseline' values. An alternative tradition sees a baseline as being merely one of a number of measurements capable of improving predictions of outcomes and models it in this way." "You cannot establish necessary conditions for an estimator to be valid by nominating a model and seeing what the model implies unless the model is universally agreed to be impeccable. On the contrary it is appropriate to start with the estimator and see what assumptions are implied by valid conclusions." This is in distinction to Liang & Zeger (2000).
Simpson, S. L., Edwards, L. J., Muller, K. E., Sen, P. K., & Styner, M. A. (2010). A linear exponent AR(1) family of correlation structures. Stat Med, 29, 1825–1838.
Venables, W. N., & Ripley, B. D. (2003). Modern Applied Statistics with S (Fourth). Springer-Verlag.
Verbeke, G., & Molenberghs, G. (2000). Linear Mixed Models for Longitudinal Data. Springer.
```{r include=FALSE}require(Hmisc)options(qproject='rms', prType='html')require(qreport)getRs('qbookfun.r')hookaddcap()knitr::set_alias(w = 'fig.width', h = 'fig.height', cap = 'fig.cap', scap ='fig.scap')knitr::read_chunk('long.R')```# Modeling Longitudinal Responses using Generalized Least Squares {#sec-long}Some good general references on longitudinal data analysis are @davis-repmeas, @pinheiro-bates, @diggle-longit, @VR, @hand-crowder, @ver00lin, @lin97mod## Notation`r mrg(sound("gls-1"))`* $N$ subjects `r ipacue()`* Subject $i$ ($i=1,2,\ldots,N$) has $n_{i}$ responses measured at times $t_{i1}, t_{i2}, \ldots, t_{in_{i}}$* Response at time $t$ for subject $i$: $Y_{it}$* Subject $i$ has baseline covariates $X_{i}$* Generally the response measured at time $t_{i1}=0$ is a covariate in $X_{i}$ instead of being the first measured response $Y_{i0}$* Time trend in response is modeled with $k$ parameters so that the time "main effect" has $k$ d.f.* Let the basis functions modeling the time effect be $g_{1}(t), g_{2}(t), \ldots, g_{k}(t)$## Model Specification for Effects on $E(Y)$`r mrg(sound("gls-2"))`### Common Basis Functions* $k$ dummy variables for $k+1$ unique times (assumes no `r ipacue()` functional form for time but may spend many d.f.)* $k=1$ for linear time trend, $g_{1}(t)=t$* $k$--order polynomial in $t$* $k+1$--knot restricted cubic spline (one linear term, $k-1$ nonlinear terms)### Model for Mean Profile* A model for mean time-response profile without interactions between time `r ipacue()` and any $X$: <br> $E[Y_{it} | X_{i}] = X_{i}\beta + \gamma_{1}g_{1}(t) + \gamma_{2}g_{2}(t) + \ldots + \gamma_{k}g_{k}(t)$* Model with interactions between time and some $X$'s: add product terms for desired interaction effects* Example: To allow the mean time trend for subjects in group 1 (reference group) to be arbitrarily different from time trend for subjects in group 2, have a dummy variable for group 2, a time "main effect" curve with $k$ d.f. and all $k$ products of these time components with the dummy variable for group 2* Time should be modeled using indicator variables only when time is really discrete, e.g., when time is in weeks and subjects were followed at exactly the intended weeks. In general time should be modeled continuously (and nonlinearly if there are more than 2 followup times) using actual visit dates instead of intended dates [@donnat].### Model Specification for Treatment Comparisons`r mrg(sound("gls-3"))`* In studies comparing two or more treatments, a response is often `r ipacue()` measured at baseline (pre-randomization)* Analyst has the option to use this measurement as $Y_{i0}$ or as part of $X_{i}$::: {.callout-note collapse="true"}# Comments from Jim RochonFor RCTs, I draw a sharp line at the point when the intervention begins. The LHS [left hand side of the model equation] is reserved for something that is a response to treatment. Anything before this point can potentially be included as a covariate in the regression model. This includes the "baseline" value of the outcome variable. Indeed, the best predictor of the outcome at the end of the study is typically where the patient began at the beginning. It drinks up a lot of variability in the outcome; and, the effect of other covariates is typically mediated through this variable. I treat anything after the intervention begins as an outcome. In the western scientific method, an "effect" must follow the "cause" even if by a split second. Note that an RCT is different than a cohort study. 
In a cohort study, "Time 0" is not terribly meaningful. If we want to model, say, the trend over time, it would be legitimate, in my view, to include the "baseline" value on the LHS of that regression model. Now, even if the intervention, e.g., surgery, has an immediate effect, I would include still reserve the LHS for anything that might legitimately be considered as the response to the intervention. So, if we cleared a blocked artery and then measured the MABP, then that would still be included on the LHS. Now, it could well be that most of the therapeutic effect occurred by the time that the first repeated measure was taken, and then levels off. Then, a plot of the means would essentially be two parallel lines and the treatment effect is the distance between the lines, i.e., the difference in the intercepts. If the linear trend from baseline to Time 1 continues beyond Time 1, then the lines will have a common intercept but the slopes will diverge. Then, the treatment effect will the difference in slopes. One point to remember is that the estimated intercept is the value at time 0 that we predict from the set of repeated measures post randomization. In the first case above, the model will predict different intercepts even though randomization would suggest that they would start from the same place. This is because we were asleep at the switch and didn't record the "action" from baseline to time 1. In the second case, the model will predict the same intercept values because the linear trend from baseline to time 1 was continued thereafter. More importantly, there are considerable benefits to including it as a covariate on the RHS. The baseline value tends to be the best predictor of the outcome post-randomization, and this maneuver increases the precision of the estimated treatment effect. Additionally, any other prognostic factors correlated with the outcome variable will also be correlated with the baseline value of that outcome, and this has two important consequences. First, this greatly reduces the need to enter a large number of prognostic factors as covariates in the linear models. Their effect is already mediated through the baseline value of the outcome variable. Secondly, any imbalances across the treatment arms in important prognostic factors will induce an imbalance across the treatment arms in the baseline value of the outcome. Including the baseline value thereby reduces the need to enter these variables as covariates in the linear models.:::@sen06cha states that temporally and logically, a"baseline cannot be a _response_ to treatment", so baseline andresponse cannot be modeled in an integrated framework.`r quoteit('... one should focus clearly on \'outcomes\' as being the only values that can be influenced by treatment and examine critically any schemes that assume that these are linked in some rigid and deterministic view to \'baseline\' values. An alternative tradition sees a baseline as being merely one of a number of measurements capable of improving predictions of outcomes and models it in this way.')`The final reason that baseline cannot be modeled as the response at `r ipacue()`time zero is that many studies have inclusion/exclusion criteria thatinclude cutoffs on the baseline variable. In other words, thebaseline measurement comes from a truncated distribution. In generalit is not appropriate to model the baseline with the samedistributional shape as the follow-up measurements. 
Thus the approachesrecommended by @lia00lon and@liu09sho are problematic^[In addition to this, one of the paper's conclusions that analysis of covariance is not appropriate if the population means of the baseline variable are not identical in the treatment groups is not correct [@sen06cha]. See @ken10sho for a rebuke of @liu09sho.].## Modeling Within-Subject Dependence`r mrg(sound("gls-4"))`* Random effects and mixed effects models have become very popular `r ipacue()`* Disadvantages: + Induced correlation structure for $Y$ may be unrealistic + Numerically demanding + Require complex approximations for distributions of test statistics* Conditional random effects vs. (subject-) marginal models: + Random effects are subject-conditional + Random effects models are needed to estimate responses for individual subjects + Models without random effects are marginalized with respect to subject-specific effects + They are natural when the interest is on group-level (i.e., covariate-specific but not patient-specific) parameters (e.g., overall treatment effect) + Random effects are natural when there is clustering at more than the subject level (multi-level models)* Extended linear model (marginal; with no random effects) is a logical extension of the univariate model (e.g., few statisticians use subject random effects for univariate $Y$)* This was known as growth curve models and generalized least squares [@pot64gen; @gol89res] and was developed long before mixed effect models became popular* Pinheiro and Bates (Section~5.1.2) state that "in some applications,one may wish to avoid incorporating random effects in the model toaccount for dependence among observations, choosing to use thewithin-group component $\Lambda_{i}$ to directly modelvariance-covariance structure of the response."* We will assume that $Y_{it} | X_{i}$ has a multivariate normal `r ipacue()` distribution with mean given above and with variance-covariance matrix $V_{i}$, an $n_{i}\times n_{i}$ matrix that is a function of $t_{i1}, \ldots, t_{in_{i}}$* We further assume that the diagonals of $V_{i}$ are all equal* Procedure can be generalized to allow for heteroscedasticity over time or with respect to $X$ (e.g., males may be allowed to have a different variance than females)* This _extended linear model_ has the following assumptions: `r ipacue()` + all the assumptions of OLS at a single time point including correct modeling of predictor effects and univariate normality of responses conditional on $X$ + the distribution of two responses at two different times for the same subject, conditional on $X$, is bivariate normal with a specified correlation coefficient + the joint distribution of all $n_{i}$ responses for the $i^{th}$ subject is multivariate normal with the given correlation pattern (which implies the previous two distributional assumptions) + responses from any times for any two different subjects are uncorrelated| | Repeated Measures ANOVA | GEE | Mixed Effects Models | GLS | Markov | LOCF | Summary Statistic^[E.g., compute within-subject slope, mean, or area under the curve over time. Assumes that the summary measure is an adequate summary of the time profile and assesses the relevant treatment effect.] 
||-----------------|--|--|--|--|--|--|--|| Assumes normality | × | | × | × | | | || Assumes independence of measurements within subject | ×^[Unless one uses the Huynh-Feldt or Greenhouse-Geisser correction] | ×^[For full efficiency, if using the working independence model] | | | | | || Assumes a correlation structure^[Or requires the user to specify one] | × | ×^[For full efficiency of regression coefficient estimates]| × | × | × | | || Requires same measurement times for all subjects | × | | | | | ? | || Does not allow smooth modeling of time to save d.f. | × | | | | | | || Does not allow adjustment for baseline covariates | × | | | | | | || Does not easily extend to non-continuous $Y$ | × | | | × | | | || Loses information by not using intermediate measurements | | | | | | ×^[Unless the last observation is missing] | × || Does not allow widely varying # observations per subject | × | ×^[The cluster sandwich variance estimator used to estimate SEs in GEE does not perform well in this situation, and neither does the working independence model because it does not weight subjects properly.] | | | | × | ×^[Unless one knows how to properly do a weighted analysis] || Does not allow for subjects to have distinct trajectories^[Or users population averages] | × | × | | × | × | × | || Assumes subject-specific effects are Gaussian | | | × | | | | || Badly biased if non-random dropouts | ? | × | | | | × | || Biased in general | | | | | | × | || Harder to get tests & CLs | | | ×^[Unlike GLS, does not use standard maximum likelihood methods yielding simple likelihood ratio $\chi^2$ statistics. Requires high-dimensional integration to marginalize random effects, using complex approximations, and if using SAS, unintuitive d.f. for the various tests.]| | |×^[Because there is no correct formula for SE of effects; ordinary SEs are not penalized for imputation and are too small] | || Requires large # subjects/clusters | | × | | | | | || SEs are wrong | ×^[If correction not applied] | | | | | × | || Assumptions are not verifiable in small samples | × | N/A | × | × | | × | || Does not extend to complex settings such as time-dependent covariates and dynamic ^[E.g., a model with a predictor that is a lagged value of the response variable] models| × | | × | × | | × | ? |: What Methods To Use for Repeated Measurements / Serial Data? ^[Thanks to Charles Berry, Brian Cade, Peter Flom, Bert Gunter, and Leena Choi for valuable input.] ^[GEE: generalized estimating equations; GLS: generalized least squares; LOCF: last observation carried forward.]* Markov models use ordinary univariate software and are very flexible `r ipacue()`* They apply the same way to binary, ordinal, nominal, and continuous Y* They require post-fitting calculations to get probabilities, means, and quantiles that are not conditional on the previous Y value@gar09fix compared several longitudinal data `r ipacue()`models, especially with regard to assumptions and how regressioncoefficients are estimated. 
@pet12mul have anempirical study confirming that the "use all available data"approach of likelihood--based longitudinal models makes imputation offollow-up measurements unnecessary.## Parameter Estimation Procedure`r mrg(sound("gls-5"))`* Generalized least squares `r ipacue()`* Like weighted least squares but uses a covariance matrix that is not diagonal* Each subject can have her own shape of $V_{i}$ due to each subject being measured at a different set of times* Maximum likelihood* Newton-Raphson or other trial-and-error methods used for estimating parameters* For small number of subjects, advantages in using REML (restricted maximum likelihood) instead of ordinary MLE [@diggle-longit, Section~5.3],[@pinheiro-bates, Chapter~5], @gol89res (esp. to get more unbiased estimate of the covariance matrix)* When imbalances are not severe, OLS fitted ignoring subject `r ipacue()` identifiers may be efficient + But OLS standard errors will be too small as they don't take intra-cluster correlation into account + May be rectified by substituting covariance matrix estimated from Huber-White cluster sandwich estimator or from cluster bootstrap* When imbalances are severe and intra-subject correlations are `r ipacue()` strong, OLS is not expected to be efficient because it gives equal weight to each observation + a subject contributing two distant observations receives $\frac{1}{5}$ the weight of a subject having 10 tightly-spaced observations## Common Correlation Structures`r mrg(sound("gls-6"))`* Usually restrict ourselves to _isotropic_ correlation structures `r ipacue()`--- correlation between responses within subject at two times dependsonly on a measure of distance between the two times, not theindividual times* We simplify further and assume depends on $|t_{1} - t_{2}|$* Can speak interchangeably of correlations of residuals within subjects or correlations between responses measured at different times on the same subject, conditional on covariates $X$* Assume that the correlation coefficient for $Y_{it_{1}}$ vs. 
$Y_{it_{2}}$ conditional on baseline covariates $X_{i}$ for subject $i$ is $h(|t_{1} - t_{2}|, \rho)$, where $\rho$ is a vector (usually a scalar) set of fundamental correlation parameters* Some commonly used structures when times are continuous and are `r ipacue()` not equally spaced [@pinheiro-bates, Section 5.3.3] (`nlme` correlation function names are at the right if the structure is implemented in `nlme`):| Structure | `nlme` Function ||------------------------------------|-----|| **Compound symmetry**: $h = \rho$ if $t_{1} \neq t_{2}$, 1 if $t_{1}=t_{2}$ ^[Essentially what two-way ANOVA assumes] | `corCompSymm` || **Autoregressive-moving average lag 1**: $h = \rho^{|t_{1} - t_{2}|} = \rho^s$ where $s = |t_{1}-t_{2}|$ | `corCAR1` || **Exponential**: $h = \exp(-s/\rho)$ | `corExp` || **Gaussian**: $h = \exp[-(s/\rho)^2]$ | `corGaus` || **Linear**: $h = (1 - s/\rho)[s < \rho]$ | `corLin` || **Rational quadratic**: $h = 1 - (s/\rho)^{2}/[1+(s/\rho)^{2}]$ | `corRatio` || **Spherical**: $h = [1-1.5(s/\rho)+0.5(s/\rho)^{3}][s < \rho]$ | `corSpher` || **Linear exponent AR(1)**: $h = \rho^{d_{min} + \delta\frac{s - d_{min}}{d_{max} - d_{min}}}$, 1 if $t_{1}=t_{2}$ | @sim10lin |: Some longitudinal data correlation structures {#tbl-long-structures}The structures 3-7 use $\rho$ as a scaling parameter, not assomething restricted to be in $[0,1]$## Checking Model Fit`r mrg(sound("gls-7"))`* Constant variance assumption: usual residual plots `r ipacue()`* Normality assumption: usual qq residual plots* Correlation pattern: **Variogram** + Estimate correlations of all possible pairs of residuals at different time points + Pool all estimates at same absolute difference in time $s$ + Variogram is a plot with $y = 1 - \hat{h}(s, \rho)$ vs. $s$ on the $x$-axis + Superimpose the theoretical variogram assumed by the model## `R` Software`r mrg(sound("gls-8"))`* Nonlinear mixed effects model package of Pinheiro \& Bates `r ipacue()`* For linear models, fitting functions are + `lme` for mixed effects models + `gls` for generalized least squares without random effects* For this version the rms package has `Gls` so that many features of rms can be used: + **`anova`**: all partial Wald tests, test of linearity, pooled tests + **`summary`**: effect estimates (differences in $\hat{Y}$) and confidence limits, can be plotted + **`plot, ggplot, plotp`**: continuous effect plots + **`nomogram`**: nomogram + **`Function`**: generate `R` function code for fitted model + **`latex`**: \LaTeX\ representation of fitted modelIn addition, `Gls` has a bootstrap option (hence you do not use rms's `bootcov` for `Gls` fits).<br>To get regular `gls` functions named `anova` (for likelihood ratio tests, AIC, etc.) or `summary` use`anova.gls` or `summary.gls`* `nlme` package has many graphics and fit-checking functions* Several functions will be demonstrated in the case study## Case Study`r mrg(sound("gls-9"))`Consider the dataset in Table~6.9 ofDavis[davis-repmeas, pp. 161-163] from a multi-center, randomizedcontrolled trial of botulinum toxin type B (BotB) in patients with cervicaldystonia from nine U.S. 
sites.* Randomized to placebo ($N=36$), 5000 units of BotB ($N=36$), `r ipacue()` 10,000 units of BotB ($N=37$)* Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)* TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began* Dataset `cdystonia` from web site### Graphical Exploration of Data```{r spaghetti,h=5,w=7,cap='Time profiles for individual subjects, stratified by study site and dose'}#| label: fig-long-spaghettirequire(rms)require(data.table)options(prType='html') # for model print, summary, anova, validategetHdata(cdystonia)setDT(cdystonia) # convert to data.tablecdystonia[, uid := paste(site, id)] # unique subject ID# Tabulate patterns of subjects' time pointsg <- function(w) paste(sort(unique(w)), collapse=' ')cdystonia[, table(tapply(week, uid, g))]# Plot raw data, superposing subjectsxl <- xlab('Week'); yl <- ylab('TWSTRS-total score')ggplot(cdystonia, aes(x=week, y=twstrs, color=factor(id))) + geom_line() + xl + yl + facet_grid(treat ~ site) + guides(color=FALSE)``````{r quartiles,cap='Quartiles of `TWSTRS` stratified by dose',w=5,h=4}#| label: fig-long-quartiles# Show quartilesg <- function(x) { k <- as.list(quantile(x, (1 : 3) / 4, na.rm=TRUE)) names(k) <- .q(Q1, Q2, Q3) k}cdys <- cdystonia[, g(twstrs), by=.(treat, week)]ggplot(cdys, aes(x=week, y=Q2)) + xl + yl + ylim(0, 70) + geom_line() + facet_wrap(~ treat, nrow=2) + geom_ribbon(aes(ymin=Q1, ymax=Q3), alpha=0.2)``````{r bootcls,cap='Mean responses and nonparametric bootstrap 0.95 confidence limits for population means, stratified by dose',w=5,h=4}#| label: fig-long-bootcls# Show means with bootstrap nonparametric CLscdys <- cdystonia[, as.list(smean.cl.boot(twstrs)), by = list(treat, week)]ggplot(cdys, aes(x=week, y=Mean)) + xl + yl + ylim(0, 70) + geom_line() + facet_wrap(~ treat, nrow=2) + geom_ribbon(aes(x=week, ymin=Lower, ymax=Upper), alpha=0.2)```#### Model with $Y_{i0}$ as Baseline Covariate```{r}baseline <- cdystonia[week ==0]baseline[, week :=NULL]setnames(baseline, 'twstrs', 'twstrs0')followup <- cdystonia[week >0, .(uid, week, twstrs)]setkey(baseline, uid)setkey(followup, uid, week)both <-Merge(baseline, followup, id =~ uid)# Remove person with no follow-up recordboth <- both[!is.na(week)]dd <-datadist(both)options(datadist='dd')```### Using Generalized Least Squares`r mrg(sound("gls-10"))`We stay with baseline adjustment and use a variety of correlation `r ipacue()`structures, with constant variance. Time is modeled as a restrictedcubic spline with 3 knots, because there are only 3 unique interiorvalues of `week`.```{r k}```AIC computed above is set up so that smaller values are best. Fromthis the continuous-time AR1 and exponential structures are tied forthe best. For the remainder of the analysis use `corCAR1`,using `Gls`. [ @kes98com did a simulation study to study the reliability of AIC for selecting the correct covariance structure in repeated measurement models. In choosing from among 11 structures, AIC selected the correct structure 47% of the time. @gur11avo demonstrated that fixed effects in a mixed effects model can be biased, independent of sample size, when the specified covariate matrix is more restricted than the true one.]{.aside}```{r l}```$\hat{\rho} = 0.8672$, the estimate of the correlation between two `r ipacue()`measurements taken one week apart on the same subject. 
```{r l}
```

$\hat{\rho} = 0.8672$, the estimate of the correlation between two `r ipacue()` measurements taken one week apart on the same subject. The estimated correlation for measurements 10 weeks apart is $0.8672^{10} = 0.24$.

```{r fig-long-variogram,fig.cap='Variogram, with assumed correlation pattern superimposed',h=3.75,w=4.25}
#| label: fig-long-variogram
```

Check constant variance and normality assumptions: `r ipacue()`

```{r fig-long-resid,h=6,w=7.5,cap='Three residual plots to check for absence of trends in central tendency and in variability. Upper right panel shows the baseline score on the $x$-axis. Bottom left panel shows the mean $\\pm 2\\times$ SD. Bottom right panel is the QQ plot for checking normality of residuals from the GLS fit.'}
#| label: fig-long-resid
```

Now get hypothesis tests, estimates, and graphically interpret the model.

`r mrg(sound("gls-11"))`

```{r m}
```

```{r fig-long-anova,cap='Results of `anova.rms` from generalized least squares fit with continuous time AR1 correlation structure',w=5,h=4}
#| label: fig-long-anova
```

```{r fig-long-pleffects,h=5.5,w=7,cap='Estimated effects of time, baseline `TWSTRS`, age, and sex'}
#| label: fig-long-pleffects
```

```{r o}
```

```{r p}
```

```{r q}
```

```{r fig-long-contrasts,h=4,w=6,cap='Contrasts and 0.95 confidence limits from GLS fit'}
#| label: fig-long-contrasts
```

Although multiple d.f. tests such as total treatment effects or `r ipacue()` treatment $\times$ time interaction tests are comprehensive, their increased degrees of freedom can dilute power. In a treatment comparison, treatment contrasts at the last time point (single d.f. tests) are often of major interest. Such contrasts are informed by all the measurements made by all subjects (up until dropout times) when a smooth time trend is assumed.
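As a hedged illustration of such a single d.f. contrast: assuming `a` is the `Gls` fit object used in the nomogram chunk below, and assuming the `treat` factor levels are `'10000U'` and `'Placebo'`, a last-time-point comparison might be computed as follows.

```{r contrastsketch}
# Sketch: high dose vs. placebo at the last time point (week 16),
# a single d.f. contrast informed by all subjects' measurements
k <- contrast(a, list(treat='10000U',  week=16),
                 list(treat='Placebo', week=16))
k
```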
```{r nomogram,h=6.5,w=7.5,cap='Nomogram from GLS fit. Second axis is the baseline score.'}
#| label: fig-long-nomogram
n <- nomogram(a, age=c(seq(20, 80, by=10), 85))
plot(n, cex.axis=.55, cex.var=.8, lmgp=.25)
```

### Bayesian Proportional Odds Random Effects Model {#sec-long-bayes-re}

* Develop a $y$-transformation invariant longitudinal model `r ipacue()`
* Proportional odds model with no grouping of TWSTRS scores
* Bayesian random effects model
* Random effects Gaussian with exponential prior distribution for its SD, with mean 1.0
* Compound symmetry correlation structure
* Demonstrates a large amount of patient-to-patient intercept variability

```{r bayesfit}
```

```{r bayesfit2}
```

* Show the final graphic (high dose : placebo contrast as a function of time) `r ipacue()`
* Intervals are 0.95 highest posterior density intervals
* $y$-axis: log odds ratio

```{r bayesfit3}
```

```{r bayesfit4}
```

For each posterior draw compute the difference in means and get an exact (to within simulation error) 0.95 highest posterior density interval for these differences.

```{r bayesfit5,w=7,h=3.75}
```

```{r bayesfit6}
```

### Bayesian Markov Semiparametric Model {#sec-long-bayes-markov}

* First-order Markov model `r ipacue()`
* Serial correlation induced by a Markov model is similar to AR(1), which we already know fits these data
* Markov model is more likely to fit the data than the random effects model, which induces a compound symmetry correlation structure
* Models state transitions
* PO model at each visit, with Y from the previous visit conditioned upon just like any covariate
* Need to uncondition (marginalize) on previous Y to get the time-response profile we usually need
* Semiparametric model is especially attractive because one can easily "uncondition" a discrete Y model, and the distribution of Y for control subjects can be any shape
* Let measurement times be $t_{1}, t_{2}, \dots, t_{m}$, and the measurement for a subject at time $t$ be denoted $Y(t)$
* First-order Markov model:

$$
\begin{aligned}
\Pr(Y(t_{i}) \geq y | X, Y(t_{i-1})) = \mathrm{expit}(&\alpha_{y} + X\beta \\
  &+ g(Y(t_{i-1}), t_{i}, t_{i} - t_{i-1}))
\end{aligned}
$$

* $g$ involves any number of regression coefficients for a main effect of $t$, the main effect of time gap $t_{i} - t_{i-1}$ if this is not collinear with absolute time, a main effect of the previous state, and interactions between these
* Examples of how the previous state may be modeled in $g$:
    + linear in numeric codes for $Y$
    + spline function in same
    + discontinuous bi-linear relationship where there is a slope for in-hospital outcome severity, a separate slope for outpatient outcome severity, and an intercept jump at the transition from inpatient to outpatient (or _vice versa_)
* Markov model is quite flexible in handling time trends and serial correlation patterns
* Can allow for irregular measurement times:<br> [hbiostat.org/stat/irreg.html](https://hbiostat.org/stat/irreg.html)

Fit the model and run standard Stan diagnostics.

```{r mark1,h=6,w=7.5}
```

Note that posterior sampling is much more efficient without random effects.

```{r mark2}
```

Let's add subject-level random effects to the model. Smallness of the standard deviation of the random effects provides support for the assumption of conditional independence that we like to make for Markov models, and allows us to simplify the model by omitting random effects.

```{r mark3}
```

```{r mark4}
```

The random effects SD is only 0.11 on the logit scale. Also, the standard deviations of all the regression parameter posterior distributions are virtually unchanged with the addition of random effects:

```{r mark4r,w=7,h=7}
```

So we will use the model omitting random effects.

Show the partial effects of all the predictors, including the effect of the previous measurement of TWSTRS. Also compute high dose : placebo treatment contrasts on these conditional estimates.

```{r mark5}
```

```{r mark5b}
```

Using posterior means for parameter values, compute the probability that at a given week `twstrs` will be $\geq 40$ when at the previous visit it was 40. Also show the conditional mean `twstrs` when it was 40 at the previous visit.

```{r mark6}
```

```{r mark6b}
```

* Semiparametric models provide not only estimates of tendencies of Y `r ipacue()` but also estimates of the whole distribution of Y
* Estimate the entire conditional distribution of Y at week 12 for high-dose patients having `TWSTRS`=42 at week 8
* Other covariates set to median/mode
* Use posterior mean of all the cell probabilities
* Also show pointwise 0.95 highest posterior density intervals
* To roughly approximate simultaneous confidence bands, make the pointwise limits sum to 1 as the posterior means do

```{r mark6c}
```

* Repeat this, showing the variation over 5 posterior draws `r ipacue()`

```{r mark6d}
```

* Turn to marginalized (unconditional on previous `twstrs`) `r ipacue()` quantities
* Capitalize on the PO model being a multinomial model, just with PO restrictions
* Manipulation of conditional probabilities to get the unconditional probability that `twstrs` $= y$ doesn't need to know about PO
* Compute all cell probabilities and use the law of total probability recursively (see the sketch after this list):

$$\Pr(Y_{t} = y | X) = \sum_{j=1}^{k} \Pr(Y_{t} = y | X, Y_{t-1} = j) \Pr(Y_{t-1} = j | X)$$

* `predict.blrm` method with `type='fitted.ind'` computes the needed conditional cell probabilities, optionally for all posterior draws at once
* Easy to get highest posterior density intervals for derived parameters such as unconditional probabilities or unconditional means
* `Hmisc` package `soprobMarkovOrdm` function (in version 4.6) computes an array of all the state occupancy probabilities for all the posterior draws
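To make the recursion concrete, here is a hedged sketch for a single posterior draw and a single covariate setting. The names `marginalize`, `p`, and `p1` are hypothetical; in practice `Hmisc::soprobMarkovOrdm` performs this computation over all posterior draws at once.

```{r margsketch}
# Sketch of the law-of-total-probability recursion.
# p is an m x k x k array with
#   p[i, j, y] = Pr(Y(t_i) = y | X, Y(t_{i-1}) = j),
# e.g. assembled from predict(fit, ..., type='fitted.ind');
# p1 is the k-vector of probabilities at the first follow-up time,
# conditional on the (known) baseline state.
marginalize <- function(p, p1) {
  m   <- dim(p)[1]             # number of measurement times
  k   <- dim(p)[3]             # number of distinct Y values
  occ <- matrix(NA, m, k)      # occ[i, y] = Pr(Y(t_i) = y | X)
  occ[1, ] <- p1
  for(i in 2 : m)              # law of total probability
    occ[i, ] <- t(p[i, , ]) %*% occ[i - 1, ]
  occ
}
```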
```{r mark7}
```

* Use the same posterior draws of unconditional probabilities of all `r ipacue()` values of TWSTRS to get the posterior distribution of differences in mean TWSTRS between high and low dose

```{r mark8}
```

* Get posterior means of all cell probability estimates at week 12 `r ipacue()`
* Distribution of TWSTRS conditional on high dose, median age, mode sex
* Not conditional on week 8 value

```{r mark9}
```

## Study Questions

**Section 7.2**

1. When should one model the time-response profile using discrete time?

**Section 7.3**

1. What makes generalized least squares and mixed effect models relatively robust to non-completely-random dropouts?
1. What does the last observation carried forward method always violate?

**Section 7.4**

1. Which correlation structure do you expect to fit the data when there are rapid repetitions over a short time span? When the follow-up time span is very long?

**Section 7.8**

1. What can go wrong if many correlation structures are tested in one dataset?
1. In a longitudinal intervention study, what is the most typical comparison of interest? Is it best to borrow information in estimating this contrast?

```{r echo=FALSE}
saveCap('07')
```