Ordinal Markov State Transition Model

Lee and Daniels (2007) Biometrics Model

Model in General

Regarding the absorbing states, the key is to understand gamma and how it works with Delta.  The gamma.mat below makes the first state nearly, but not exactly, absorbing: in data generated from it, pr(y_{ij}=1 | y_{i,j-1}=1) is very high, around 0.98.  I think there are problems with generating a deterministic absorbing state from this model.  The marginal and conditional models have to be coherent, and if you make gamma.mat too extreme you will not be able to solve for Delta.  I played around with it a bit.


# gamma (dependence) parameters; the large values in the first column make
# state 1 nearly, but not exactly, absorbing (see note above)
gamma.mat <- rbind( c( 10,  0,     0,     0,   0),
                    c(-25,  1.5,   0,     0,   0),
                    c(-25,  0,     1.5,   0,   0),
                    c(-25,  0,     0,     1.5, 0))
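
As a rough check of that claim, one could tabulate the empirical probability of staying in state 1 from data simulated under this gamma.mat. A minimal sketch, assuming a hypothetical long-format data frame sim with columns id, day, and y (the ordinal state), sorted by id and day:

# Empirical pr(y_{ij} = 1 | y_{i,j-1} = 1), pooled over subjects and days
prev <- ave(sim$y, sim$id, FUN = function(x) c(NA, head(x, -1)))  # lag y within subject
mean(sim$y[!is.na(prev) & prev == 1] == 1)  # should be near 0.98 if state 1 is nearly absorbing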

I think we will be able to force an absorbing state when generating the data, but I will want to see how that impacts marginal model parameters.
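
One way to force it, sketched below with the same hypothetical sim data frame as above, is to overwrite every record after a subject's first visit in state 1 so that the state truly absorbs; the marginal model parameters then no longer correspond exactly to the generating values, which is the impact to be checked.

# Carry state 1 forward from the first day it is reached
force.absorbing <- function(y) {
  hit <- match(1, y)                 # first visit in state 1; NA if it never occurs
  if (!is.na(hit)) y[hit:length(y)] <- 1
  y
}
sim$y <- ave(sim$y, sim$id, FUN = force.absorbing)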

Handling of Pre-Randomization Ordinal State

Overall Modeling Issues

FH Current Global Summary

As shown in the analysis of the Violet 2 study, carrying death forward until the last planned day of follow-up for the whole study results in a huge violation of the proportional odds assumption with respect to time (study day). If one really wants to carry deaths forward (not recommended), then the partial proportional odds (PPO) model should be used. For that model one would make an exception to PO for the time variable (at least) but would still model treatment using only one parameter (assuming PO for treatment). It is instructive to consider the limiting case where the ordinal outcome is binary and death is the only event. With small time intervals, a longitudinal binary logistic analysis has these properties:
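
For the PPO option mentioned above, a minimal sketch of one possible fit using the VGAM package; the data frame d (columns y = ordinal state, trt = treatment, day = study day) is hypothetical, and PO is relaxed only for day. Note that this ignores within-subject correlation, so standard errors would still need a cluster adjustment.

require(VGAM)
# Partial PO model: a single parameter for treatment (PO assumed),
# separate per-cutoff coefficients for study day (PO relaxed)
f <- vglm(ordered(y) ~ trt + day,
          family = cumulative(parallel = FALSE ~ day, reverse = TRUE),
          data = d)
summary(f)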

Current recommendation:

Jonathan Schildcrout’s Summary of Multi-State Modeling Issues

I would like to review the question we are attempting to answer and the perceived challenges with analyses:

  1. Are we looking at a treatment by (smooth function of) time interaction? If yes, I’m interested in discussing this because, based on a note that David wrote, they are interested in a single outcome at, say, day 15. It would seem to me that we will not be able to estimate that treatment effect more accurately than he could, but we could potentially estimate it more efficiently by using the longitudinal data.
  2. Do you know how many people are going to be in this dataset? Will it be 15 days of follow-up? Will there be missing data?
  3. Will there be absorbing states? Will there be one or two absorbing states?
  4. What else is there?

I feel like I have a pretty good sense of how the optimizer will perform, although with more states, the optimizer is going to have more problems. Some things I think I’ve learned:

  1. Dropping all observations after entering an absorbing state leads to an analysis of longitudinal ordinal outcomes for people who are alive. It presupposes that those dead people could theoretically change states, and I think this is not the right approach. Both of our analysis procedures reproduced the data-generating mechanism, but that generating mechanism assumes there is a positive probability of changing states.
  2. Keeping the dead observations in the analysis leads to difficulty in understanding what the “true” parameter values are. The correlation structure is definitely very challenging in that scenario, and I also have concerns about whether the proportional odds assumption is still valid.
  3. That said, I think the plots I showed you that include the 1 and 2 absorbing states show the “truth” under the data-generating model parameters, combined with an overriding absorbing state and with the assumption that a proportional odds model is used to fit the data.
  4. I think that keeping dead observations is the right thing to do. The question then becomes how to model those data (see the sketch after this list).
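
To make the data-handling choices in items 1 and 4 concrete, a minimal sketch in base R, assuming a hypothetical long-format data frame dat with columns id, day, and y, sorted by id and day, with state 1 as the absorbing (death) state:

# Item 1: drop all records at or after a subject's first visit in state 1
dat.alive <- subset(dat, ave(y == 1, id, FUN = cumsum) == 0)

# Item 4: keep the dead observations; if the generating code already carries
# state 1 forward to the last planned day, dat itself is the analysis data set
dat.keep <- dat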

Modeling data with an absorbing state:

  1. I have major concerns about getting the correlation structure right, which is necessary to ensure that most likelihood-based estimators will be valid.

  2. Because of 1), my top three choices for analyses would be:

    1. Multistate models that are then marginalized in order to estimate the treatment effect over time. I’ve never done this, and it would take some time for me to learn, but I think it is doable (see the sketch after this list).
    2. A GEE-type estimator
    3. The marginalized transition model (a likelihood-based procedure that can have a misspecified dependence structure and still give valid parameter estimates), possibly with robust SEs since I am worried about the correlation structure. That being said, we would have to force the fitter not to allow a zero probability of leaving the dead state. It would be almost right.
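
As a very rough sketch of option 1, one might use the msm package to fit a continuous-time multistate model and then obtain marginal state-occupancy probabilities from the fitted transition probability matrix. Everything here is hypothetical: the data frame d (columns id, day, state, trt with trt coded 0/1, sorted by id and day), the choice of allowed transitions, and the use of state 1 as the absorbing (death) state.

require(msm)
# Allowed instantaneous transitions (off-diagonal 1 = allowed, 0 = not allowed);
# this adjacent-state-plus-death structure is purely illustrative
Q <- rbind(c(0, 0, 0, 0, 0),   # state 1 (death): absorbing
           c(1, 0, 1, 0, 0),   # state 2: to death or to state 3
           c(1, 1, 0, 1, 0),   # state 3: to death, 2, or 4
           c(1, 0, 1, 0, 1),   # state 4: to death, 3, or 5
           c(1, 0, 0, 1, 0))   # state 5: to death or to state 4
fit <- msm(state ~ day, subject = id, data = d, qmatrix = Q, covariates = ~ trt)

# State-occupancy probabilities at day 14 for subjects starting in state 5,
# by treatment group: one possible marginal summary of the treatment effect
p0 <- pmatrix.msm(fit, t = 14, covariates = list(trt = 0))[5, ]
p1 <- pmatrix.msm(fit, t = 14, covariates = list(trt = 1))[5, ]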

It does not seem that OMTMs are mathematically well suited for models with absorbing states (assuming we want to keep all observations that occur after the absorbing state occurs), and this is caused by the constraint that the marginal mean (probability of being in state k) must equal the marginalized conditional mean.
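
Written out in the notation used earlier, that constraint is roughly

pr(y_{ij} <= k | X_{ij}) = sum over l of pr(y_{ij} <= k | y_{i,j-1} = l, X_{ij}) * pr(y_{i,j-1} = l | X_{i,j-1}),

and with a truly absorbing state the conditional probabilities given y_{i,j-1} = 1 are degenerate (0 or 1), so the marginal probabilities are largely pinned down by the probability of having already been absorbed, leaving little or no room to solve for the Delta's that reproduce an arbitrary marginal proportional odds model.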

Thought Experiment: Longitudinal vs Cross-Sectional Analysis

Let’s say you do a cross-sectional analysis at the 28-day timepoint and compare it to a longitudinal model that includes a flexible treatment-by-time interaction and is then summarized on day 28… some questions:

FH: The two should be consistent since the time trend is flexibly modeled.  The all-times analysis needs to stop at death.  So what about the single-time analysis?  Tradition has it that death before day 28 is considered the worst outcome, so that’s effectively carrying death forward.  But is a single-time analysis comparable to a longitudinal one?  And if death were the only endpoint, you would be considering time to event and wouldn’t need to carry death forward.
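
One concrete way to run that comparison, sketched with the rms package; the long-format data frame d (columns id, day, y, and trt with hypothetical levels 'placebo' and 'active'), with records stopped at death, is assumed rather than taken from the actual study data.

require(rms)
dd <- datadist(d); options(datadist = 'dd')

# Cross-sectional PO model using only the day-28 records
f.cs <- lrm(y ~ trt, data = subset(d, day == 28))

# Longitudinal PO model with a flexible treatment-by-time interaction and a
# cluster sandwich covariance matrix for within-subject correlation
f.lg <- robcov(lrm(y ~ trt * rcs(day, 4), data = d, x = TRUE, y = TRUE),
               cluster = d$id)

# Day-28 treatment contrast (log odds ratio) from the longitudinal fit,
# to be compared with the trt coefficient from f.cs
contrast(f.lg, list(trt = 'active', day = 28), list(trt = 'placebo', day = 28))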

Use of a Continuous Model

Data

Code

Strategy and Plans

Bayes

Choice of Prior

Planned Deviations from Trial Design

Data Simulation

Checking Fit of PO Model