rmsb Package Examples Using rstan

Affiliation

Department of Biostatistics
Vanderbilt University School of Medicine
VICTR Research Methods Program

Published

March 11, 2024

1 Overview and Setup

The rmsb package is the Bayesian companion to the rms package, and uses many of its functions for post-model-fitting analysis and graphics. The sole fitting function in rmsb at present is blrm. blrm is for Bayesian binary and ordinal proportional odds logistic regression and the Peterson & Harrell (1990) partial proportional odds model that relaxes the proportional odds assumption. It is the analog and generalization of rms::lrm and for ordinal responses is intended for outcomes with up to several hundred ordinal levels. The Bayesian approach has a number of advantages over traditional frequentist models, including

  1. the use of exact calculations (to within simulation error) instead of large sample (e.g., normal theory) approximations to p-values and confidence intervals
  2. exact and more intuitive inference when random effects are included in the model
  3. the ability to make probability statements about parameters and combinations of parameters, which includes computations of the probabilities of assertions such as “the effect of \(x_1\) exceeds 1.2 and the effect of \(x_2\) exceeds 0.7”
  4. capturing more sources of uncertainty. For example, the blrm function automatically computes highest posterior density intervals on a variety of statistical indexes such as the Brier score and AUROC (\(c\)-index). Note: By default these intervals are computed using only 400 posterior draws to save time. For a blrm fit object f you can obtain more accurate intervals by specifying how many posterior draws to use, e.g., print(f, ns=2000).
  5. the ability to incorporate external information or beliefs about linear combinations of parameters using prior distributions on contrasts
  6. by using the Stan language to specify the likelihood components, one can not only do posterior distribution sampling but can also quickly compute maximum likelihood estimates (Bayesian posterior modes)

blrm uses uninformative priors for \(\beta\) parameters that are not used in contrasts appearing in the pcontrast argument. For other \(\beta\)s, priors are put on contrasts specified in pcontrast, capitalizing on Stan automatically propagating priors on linear combinations of parameters to priors on individual parameters. This allows priors to be specified on a more user-friendly scale, and allows one to put priors on specific comparisons, nonlinear effects, interactions, parts of interactions, and more. This versatility is shown in a series of examples here. One example is the common situation in which there is a time \(\times\) treatment interaction and the prior for the treatment effect needs to be applied at a specific time.
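For instance, a minimal sketch of such a prior might look like the following (all names here are hypothetical: an outcome y, a treatment treat with levels 'A' and 'B', a time variable, a data frame d, and an arbitrary prior SD of 0.5); the structure mirrors the pcontrast lists used later in this report.

```r
# Skeptical prior on the treatment contrast at time 12 rather than on raw coefficients
f <- blrm(y ~ treat * time, data=d,
          pcontrast=list(sd=0.5,
                         c1=list(treat='B', time=12),
                         c2=list(treat='A', time=12),
                         contrast=expression(c1 - c2)))
```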

When there are random effects, the analyst may specify the mean of the exponential distribution that serves as the prior for the standard deviation of the random effects, with the default mean being 1.0, a reasonable number for the logit scale. You can also use a half-\(t\) distribution for the SD of random effects.

Regarding uncertainties about statistical indexes of model performance such as AUROC, the Bayesian posterior distributions computed by rmsb account for coefficient uncertainty, i.e., for the ability of the sample at hand to uncover the true unknown data-generating coefficients and for how strongly those coefficients can predict outcomes in the sample. The posterior distributions do not pertain to sample-to-sample variability of model fit or model validation.

blrm uses Stan code written by Ben Goodrich of Columbia University and Frank Harrell. When using rstan, this code is precompiled when the rmsb package is built; you must install the rstan package to use that backend. You can instead use cmdstan and the cmdstanr R package. This is generally recommended, as cmdstan and cmdstanr make use of the latest versions of Stan. Here are the steps I used for installing these on Linux, using the latest version of cmdstan as of 2023-09-25 (2.33.1):

cd ~
sudo R
install.packages('cmdstanr', repos='https://mc-stan.org/r-packages',
                  lib='/usr/local/lib/R/site-library')
# Default location for cmdstan is in `~/.cmdstan` in the user's home directory
# This is not such a good idea
cmdstanr::install_cmdstan(cores=10, dir='/usr/local/bin')
# When running sudo, default is /root/.cmdstan which is not easily accessible

Add the following line to .zshrc or .bashrc so that cmdstanr will know where to find cmdstan:

export CMDSTAN=/usr/local/bin/cmdstan-2.34.1

Instead of that, I have the location defined in cmdstan.loc in ~/.Rprofile.

cmdstanr will try to put compiled Stan code in the same directory where cmdstan is stored. With the above setup, this location is not user-writeable, so that will not work. The blrm function by default puts compiled code in the .rmsb directory under the user's home directory. Specify options(rmsbdir='some other location') to change this. The default behavior puts all compiled code in one central location used across projects. If you desire to archive a project and reproduce the results even if the Stan code inside rmsb is updated, you may want to specify a project-specific location for rmsbdir.

Here is a typical setup code chunk for using rmsb. When using rstan, it is not recommended to use options(auto_write=TRUE); when running a series of regressions, the rstan package tends to recompile Stan code even when the code doesn't change.

```{r setup}
require(rmsb)
options(mc.cores = parallel::detectCores() - 1)   # use max # CPUs less 1
```

If you want to use cmdstan by default, avoiding having to use the backend argument to blrm, you can add this to the setup chunk:

options(rmsb.backend='cmdstan')

This vignette is run separately to use rstan instead of cmdstanr. The variable wstan below defines which system to run. So that the file= arguments to blrm can store results in different files, a little function rfile is created below. It checks wstan and when using cmdstan adds "c" to the end of the main part of the file name. rfile also automatically quotes the base file name specified by the user.

wstan <- c('cmdstan', 'rstan')[2]
rfile <- function(f)
  paste0(as.character(substitute(f)), if(wstan == 'cmdstan') 'c', '.rds')
require(rmsb)
knitrSet(lang='quarto', w=7, h=7)
options(prType='html')     # for rms output: print, anova, summary, etc.
options(mc.cores = parallel::detectCores() - 1)   # use max # CPUs less 1
if(wstan == 'cmdstan') {
  Sys.getenv('CMDSTAN')
  # cmdstanr::cmdstan_path()
  # Sometimes RStudio doesn't find the environment variable
  cat('cmdstan.loc=', cmdstan.loc, '\n')
  cmdstanr::set_cmdstan_path(cmdstan.loc)
  options(rmsb.backend='cmdstan')
}

1.1 Running Fits Only When Something Changes

You’ll see file='...' in the longer-running blrm calls below (actually the code uses the rfile function just defined to construct the file name). If the file already exists, and none of the data, the options sent to rstan/cmdstanr, nor the underlying Stan code have changed from what was used to create the fit object stored in that file (as judged by their md5 hashes), that saved fit object is returned immediately without running the Stan code, often saving a great deal of execution time. This works well with the workflow of long R markdown reports, making it so that only the portions of the Bayesian analysis that have changed are re-run. Note that the knitr cache=TRUE option does not work well here: the cache files for this script were about a GB in size, and when rstan is involved the caching system does not accurately recognize when a model fit hasn't changed and doesn't need to be re-run.

A restored fit object does not contain the rstan or cmdstanr object, saving tens of megabytes of storage. Standard Stan diagnostics are stored separately in the fit object, and it is assumed that if the user wanted to run rstan::loo, loo=TRUE would have been specified to blrm so that loo is run and its (small) result stored in the blrm fit object. If you want to run pairs to get more graphical diagnostics, instead of relying on the rstan object always being available, specify pairs=TRUE or pairs='filename.png' to blrm to graph the pairs plots. The latter is recommended, and one can put knitr::include_graphics('filename.png') in the R code chunk to render the graph in the report even if blrm was not re-run.
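As a sketch of that workflow (the variable and file names below are hypothetical), a fit that stores the loo result and writes the pairs plot to a file might look like:

```r
# Run loo at fit time and write the pairs plot to a png so that later report
# compilations can display it even when the rstan object is no longer stored
f <- blrm(y ~ x1 + x2, loo=TRUE, pairs='mymodel-pairs.png', file='mymodel.rds')
# In a later chunk, render the stored plot without re-running blrm
knitr::include_graphics('mymodel-pairs.png')
```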

When file='filename.rds' is specified to blrm and the file does not exist, or analysis inputs have changed since the file was created, the blrm fit object will be saved with saveRDS in .rds binary format in your current working directory at the completion of blrm's work. The rstan component is omitted from the saved file.

The utility function fitIf, available from here, is another way to efficiently manage Bayesian simulation workflow. The fitIf approach runs the analysis only if the file doesn't already exist. If the file exists but the data or the model have changed, the fitIf approach is not intelligent enough to re-run the analysis (unlike the file='...' approach above). Whether using fitIf or file=, you lose the ability to run rms::stanDxplot(..., rstan=TRUE) on restored objects, so the stanDxplot function tries to find an existing trace plot image file corresponding to the current R markdown chunk when the fit object no longer has the rstan component. For most purposes this doesn't matter, because running with the defaults shows the needed non-burn-in samples, which are always stored in fit objects. The file='filename.rds' approach is preferred.

1.2 Priors

It is often difficult to specify priors for parameters, especially when there are nonlinear effects (e.g., splines) and interactions in the model. We need a way to specify priors on the original \(X\) and \(Y\) scales. Fortunately Stan provides an elegant solution.

As discussed here, Stan allows one to specify priors on transformations of model parameters, and these priors propagate back to the original parameters. It is easier to specify a prior for the effect of increasing age from 30 to 60 than it is to specify a prior for the age slope. It may be difficult to specify a prior for an age \(\times\) treatment interaction, but much easier to specify a prior for how different the treatment effect is for a 30 year old and a 60 year old. By specifying priors on one or more contrasts one can easily encode outside information / information borrowing / shrinkage.

The rms contrast function provides a general way to implement contrasts up to double differences, and more details about the computations are provided in that link. The approach used for specifying priors for contrasts in rmsb::blrm uses the same process but is even more general. Both contrast and blrm compute design matrices at user-specified predictor settings, and the contrast matrices (matrices multiplied by \(\hat{\beta}\)) are simply differences in such design matrices. Thinking of contrasts as differences in predicted values frees the user from having to care about how parameters map to estimands, and allows the R predict(fit, type='x') function to do the hard work. See here for several interesting examples.

Beginning with rmsb version 1.0-0, prior standard deviations are not specified for individual parameters, and priors are only specified for contrasts through the pcontrast and npcontrast arguments. Since blrm transforms the design matrix through a QR decomposition to speed up posterior sampling, it also transforms the contrast matrices the same way so that priors for contrasts are in the right space inside the Stan code. Parameters not used in any contrast will have non-informative prior distributions.

For non-proportional odds effects in a constrained partial PO model, priors are specified through contrasts using the npcontrast argument, which is a list of the same structure as pcontrast but corresponds to the model given to the second argument to blrm.
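As a sketch of that structure (hypothetical variables y, age, and a treatment tx with levels 'A' and 'B', and an arbitrary prior SD of 0.5), a skeptical prior on the non-PO component of the treatment effect might be specified as:

```r
# Constrained partial PO model; npcontrast has the same structure as pcontrast
# but its contrasts are defined on the second (ppo) formula's design matrix
f <- blrm(y ~ tx + age, ~ tx, cppo=function(y) y,
          npcontrast=list(sd=0.5,
                          c1=list(tx='B'), c2=list(tx='A'),
                          contrast=expression(c1 - c2)))
```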

Instead of specifying prior standard deviations directly, it is convenient to solve for SDs that correspond to a specified probability for a specified interval of an effect in the model. Effects in the PO model are usually expressed as odds ratios (OR). For the case where the prior median for the OR is 1.0 (prior mean or median log(OR) = 0.0) it is useful to solve for the prior SD \(\sigma\) so that \(\Pr(\text{OR} > r) = a = \Pr(\text{OR} < \frac{1}{r})\), leading to \(\sigma = \frac{|\log(r)|}{\Phi^{-1}(1-a)}\), computed by the psigma function below. Another function . is defined as an abbreviation for list() for later use with pcontrast.

psigma <- function(r, a, inline=FALSE, pr=! inline) {
  sigma <- abs(log(r)) / qnorm(1 - a)
  dir <- if(r > 1.) '>' else '<'
  x <- if(inline) paste0('$\\Pr(\\text{OR}', dir, r, ') =', a,
                         ' \\Rightarrow \\sigma=', round(sigma, 3), '$')
  else paste0('Pr(OR ', dir, ' ', r, ') = ', a, ' ⇒ σ=', round(sigma, 3))
  if(inline) return(x)
  if(pr) {
    cat('\n', x, '\n\n', sep='')
    return(invisible(sigma))
  }
  sigma
}
. <- function(...) list(...)
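For example, the prior SD for which \(\Pr(\text{OR} > 2) = \Pr(\text{OR} < \frac{1}{2}) = 0.05\) is obtained with

```r
psigma(2, 0.05)                # prints Pr(OR > 2) = 0.05 ⇒ σ=0.421
psigma(2, 0.05, inline=TRUE)   # LaTeX string for inline use in a report
```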

1.3 Example: 10-level Ordinal Outcome

Simulate a dataset with three predictors and one 10-level ordinal outcome. Run the frequentist proportional odds model then the Bayesian one.

set.seed(1)
n <- 500
x1 <- runif(n, -1, 1)
x2 <- runif(n, -1, 1)
x3 <- sample(0 : 1, n, TRUE)
y <- x1 + 0.5 * x2 + x3 + rnorm(n)
y <- as.integer(cut2(y, g=10))
dd <- datadist(x1, x2, x3); options(datadist='dd')
f <- lrm(y ~ x1 + pol(x2, 2) + x3, eps=1e-7) # eps to check against Stan
f

Logistic Regression Model

lrm(formula = y ~ x1 + pol(x2, 2) + x3, eps = 1e-07)

Frequencies of Responses

 1  2  3  4  5  6  7  8  9 10 
50 50 50 50 50 50 50 50 50 50 
                          Model Likelihood        Discrimination         Rank Discrim.
                          Ratio Test              Indexes                Indexes
Obs               500     LR χ2      235.04       R2          0.379      C     0.740
max |∂log L/∂β| 2×10⁻¹⁴   d.f.            4       R2(4,500)   0.370      Dxy   0.480
                          Pr(>χ2)   <0.0001       R2(4,495)   0.373      γ     0.480
                                                  Brier       0.181      τa    0.433
β S.E. Wald Z Pr(>|Z|)
y≥2   2.0552  0.1951 10.53 <0.0001
y≥3   1.0832  0.1678 6.45 <0.0001
y≥4   0.4006  0.1606 2.49 0.0126
y≥5  -0.1790  0.1599 -1.12 0.2629
y≥6  -0.7246  0.1626 -4.46 <0.0001
y≥7  -1.2867  0.1689 -7.62 <0.0001
y≥8  -1.8982  0.1788 -10.62 <0.0001
y≥9  -2.6288  0.1953 -13.46 <0.0001
y≥10  -3.6408  0.2282 -15.95 <0.0001
x1   1.5694  0.1532 10.24 <0.0001
x2   0.7901  0.1368 5.78 <0.0001
x22  -0.2213  0.2578 -0.86 0.3908
x3   1.6341  0.1707 9.57 <0.0001

Before getting posterior distributions of parameters, use Stan to just get maximum likelihood estimates and compare them with those from lrm. Do this for increasingly flat priors for the \(\beta\)s associated with x2. Running method='optimizing' is a quick way to study the effect of priors on the posterior modes for non-intercepts when there are no random effects in the model.

# Define a function that creates a `pcontrast` for `blrm`
# Skepticism of prior is specified by making changes in Y be small
# as x2 goes from -1 to 0 to 1
# Pr(OR > 2) = p
con <- function(p)
  list(sd=psigma(2, p),
        c1=.(x2=0), c2=.(x2=-1), c3=.(x2=1), c4=.(x2=0),
        contrast=expression(c1-c2, c3-c4))
k <- NULL
for(p in c(.01, .05, .1, .2)) {
    g <- blrm(y ~ x1 + pol(x2, 2) + x3, method='optimizing',
              pcontrast=con(p))
    cat('-2 log likelihood:', g$deviance, '\n')
    k <- rbind(k, g$coefficients)
}

Pr(OR > 2) = 0.01 ⇒ σ=0.298

-2 log likelihood: 2051.073 

Pr(OR > 2) = 0.05 ⇒ σ=0.421

-2 log likelihood: 2048.264 

Pr(OR > 2) = 0.1 ⇒ σ=0.541

-2 log likelihood: 2047.165 

Pr(OR > 2) = 0.2 ⇒ σ=0.824

-2 log likelihood: 2046.707 
k
         y>=2     y>=3      y>=4       y>=5       y>=6      y>=7      y>=8
[1,] 1.984898 1.023570 0.3511311 -0.2194911 -0.7567008 -1.310668 -1.913568
[2,] 2.007019 1.041640 0.3652443 -0.2088649 -0.7492906 -1.306323 -1.912470
[3,] 2.020607 1.053093 0.3746705 -0.2012455 -0.7433725 -1.302028 -1.909910
[4,] 2.037102 1.067216 0.3865845 -0.1913002 -0.7352747 -1.295710 -1.905515
          y>=9     y>=10       x1        x2        x2^2       x3
[1,] -2.632588 -3.631440 1.563158 0.5561955 -0.08207806 1.611129
[2,] -2.635873 -3.639861 1.565010 0.6521535 -0.12172075 1.619725
[3,] -2.635620 -3.642300 1.565981 0.7000570 -0.14824743 1.624327
[4,] -2.633785 -3.643290 1.567788 0.7482528 -0.18206292 1.629629
# Compare with ordinary MLEs and deviance
f$deviance
[1] 2302.585 2067.549
coef(f)
      y>=2       y>=3       y>=4       y>=5       y>=6       y>=7       y>=8 
 2.0551529  1.0831900  0.4005868 -0.1790073 -0.7246249 -1.2866643 -1.8982180 
      y>=9      y>=10         x1         x2       x2^2         x3 
-2.6287996 -3.6408392  1.5694325  0.7900802 -0.2212900  1.6340800 

Fit the model with Dirichlet priors on intercepts and wide normal priors on the \(\beta\)s. Show the model fit summary. Note that the indexes of predictive discrimination/accuracy include 0.95 highest posterior density intervals. In frequentist inference we pretend that quantities such as AUROC and \(R^2\) are estimated without error, which is far from the case.

In several places you will see an index named Symmetry. This is a measure of the symmetry of a posterior distribution. Values farther from 1.0 indicate asymmetry, in which case the use of standard errors and of a normal approximation to the posterior distribution is not justified. The symmetry index is the ratio of the gap between the posterior mean and the 0.95 quantile of the posterior distribution to the gap between the 0.05 quantile and the mean.
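A minimal sketch of this computation for a vector of posterior draws (not the exact code used by rmsb) is:

```r
# Symmetry index: (0.95 quantile - mean) / (mean - 0.05 quantile)
symmetry <- function(draws) {
  q <- quantile(draws, c(0.05, 0.95))
  unname((q[2] - mean(draws)) / (mean(draws) - q[1]))
}
symmetry(rexp(4000))   # right-skewed draws give a value well above 1
```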

bs <- blrm(y ~ x1 + pol(x2, 2) + x3, file=rfile(bs))
bs

Bayesian Proportional Odds Ordinal Logistic Model

Dirichlet Priors With Concentration Parameter 0.233 for Intercepts

blrm(formula = y ~ x1 + pol(x2, 2) + x3, file = rfile(bs))

Frequencies of Responses

 1  2  3  4  5  6  7  8  9 10 
50 50 50 50 50 50 50 50 50 50 
                Mixed Calibration/               Discrimination                Rank Discrim.
                Discrimination Indexes           Indexes                       Indexes
Obs       500   LOO log L  -1046.73±13.18        g   1.586 [1.367, 1.756]      C    0.738 [0.735, 0.74]
Draws    4000   LOO IC      2093.46±26.36        gp  0.114 [0.085, 0.138]      Dxy  0.477 [0.471, 0.481]
Chains      4   Effective p    12.97±0.24        EV  0.149 [0.105, 0.189]
Time     3.3s   B       0.077 [0.076, 0.079]     v   1.932 [1.436, 2.369]
p           4                                    vp  0.014 [0.008, 0.021]
Mode β Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
y≥2   2.0552   2.0646   2.0610  0.1874   1.7124   2.4492  1.0000  1.07
y≥3   1.0832   1.0863   1.0838  0.1640   0.7806   1.4090  1.0000  1.01
y≥4   0.4006   0.4028   0.4045  0.1595   0.0943   0.7124  0.9950  0.98
y≥5  -0.1790  -0.1785  -0.1791  0.1582  -0.4864   0.1302  0.1305  0.99
y≥6  -0.7246  -0.7245  -0.7237  0.1620  -1.0614  -0.4222  0.0000  0.95
y≥7  -1.2867  -1.2896  -1.2871  0.1670  -1.6044  -0.9664  0.0000  0.97
y≥8  -1.8982  -1.9037  -1.9031  0.1761  -2.2498  -1.5744  0.0000  1.01
y≥9  -2.6288  -2.6392  -2.6370  0.1915  -3.0257  -2.2730  0.0000  0.97
y≥10  -3.6408  -3.6587  -3.6574  0.2219  -4.0876  -3.2213  0.0000  0.98
x1   1.5695   1.5784   1.5752  0.1547   1.2962   1.8932  1.0000  1.07
x2   0.7896   0.7969   0.7985  0.1385   0.5326   1.0763  1.0000  0.98
x22  -0.2213  -0.2237  -0.2184  0.2632  -0.7177   0.3120  0.1940  1.02
x3   1.6342   1.6431   1.6410  0.1674   1.3052   1.9576  1.0000  1.01
# Show more detailed analysis of model performance measures
blrmStats(bs, pl=TRUE)
Indexes computed for a random sample of 400 of 4000 posterior draws

gp, B, EV, and vp are for intercept 1 out of 9 intercepts

           Dxy     C     g    gp     B    EV     v    vp
Mean     0.477 0.738 1.579 0.114 0.077 0.148 1.916 0.014
SE       0.003 0.002 0.108 0.013 0.001 0.023 0.264 0.003
Lower    0.470 0.735 1.364 0.092 0.076 0.104 1.410 0.009
Upper    0.481 0.740 1.762 0.143 0.079 0.188 2.370 0.021
Symmetry 0.425 0.425 1.061 1.186 1.485 1.163 1.160 1.331

Dxy: 2*(C - 0.5)   C: concordance probability
g: Gini mean |difference| on linear predictor (lp)
gp: Gini on predicted probability        B: Brier score
EV: explained variation on prob. scale   v: var(lp)   vp: var(prob)

Show basic Stan diagnostics. Had stanDxplot(bs, rstan=TRUE) been used, intercepts would have been shifted from what is in bs because of subtractions of covariate means before passing data to rstan.

stanDxplot(bs)
stanDx(bs)
Iterations: 2000 on each of 4 chains, with 4000 posterior distribution samples saved

For each parameter, n_eff is a crude measure of effective sample size
and Rhat is the potential scale reduction factor on split chains
(at convergence, Rhat=1)
      n_eff  Rhat
y>=2   6811 1.000
y>=3   6444 1.001
y>=4   5832 1.001
y>=5   7082 1.000
y>=6   8185 1.000
y>=7   8546 1.000
y>=8   8629 1.000
y>=9   8109 1.000
y>=10  7082 1.000
x1     5291 1.000
x2     8495 0.999
x2^2  10105 0.999
x3     6898 1.000

Here are the posterior distributions, calculated using kernel density estimates from posterior draws. Posterior modes, shown as vertical lines, are parameter values that maximize the log posterior density (using the Stan optimizer in the original model fit) so do not necessarily coincide with the peaks of the kernel density estimates.

plot(bs)
# Also show 2-d posterior density contour for two collinear terms
plot(bs, c('x2', 'x2^2'), bivar=TRUE)   # assumes ellipse
plot(bs, c('x2', 'x2^2'), bivar=TRUE, bivarmethod='kernel')   # kernel density

# Print frequentist side-by-side with Bayesian posterior mean, median, mode

cbind(MLE=coef(f), t(bs$param))
             MLE       mode       mean     median
y>=2   2.0551529  2.0551518  2.0645680  2.0610097
y>=3   1.0831900  1.0831610  1.0863461  1.0837730
y>=4   0.4005868  0.4005549  0.4028010  0.4044707
y>=5  -0.1790073 -0.1790191 -0.1785259 -0.1791037
y>=6  -0.7246249 -0.7246384 -0.7245167 -0.7237201
y>=7  -1.2866643 -1.2866790 -1.2895615 -1.2871112
y>=8  -1.8982180 -1.8982290 -1.9036927 -1.9031421
y>=9  -2.6287996 -2.6288109 -2.6392292 -2.6369528
y>=10 -3.6408392 -3.6408434 -3.6586657 -3.6574485
x1     1.5694325  1.5695441  1.5784067  1.5751504
x2     0.7900802  0.7896154  0.7968585  0.7984516
x2^2  -0.2212900 -0.2213424 -0.2237217 -0.2184153
x3     1.6340800  1.6341991  1.6431114  1.6409941
# Compare covariance matrix of posterior draws with MLE
round(diag(vcov(f)) / diag(vcov(bs)), 2)
 y>=2  y>=3  y>=4  y>=5  y>=6  y>=7  y>=8  y>=9 y>=10    x1    x2  x2^2    x3 
 1.08  1.05  1.01  1.02  1.01  1.02  1.03  1.04  1.06  0.98  0.98  0.96  1.04 
range(vcov(f) / vcov(bs))
[1] -3.501791 15.668364

Next show frequentist and Bayesian contrasts. For the Bayesian contrast the point estimate is the posterior mean, and the 0.95 highest posterior density interval is computed. Instead of a p-value, the posterior probability that the contrast is positive is computed.

contrast(f,  list(x1=0, x3=1), list(x1=.25, x3=0))
            x2 Contrast      S.E.     Lower    Upper    Z Pr(>|z|)
1 -0.001559955 1.241722 0.1720205 0.9045679 1.578876 7.22        0

Confidence intervals are 0.95 individual intervals
k <- contrast(bs, list(x1=0:1, x3=1), list(x1=.25, x3=0))
k
            x2 Contrast      S.E.     Lower    Upper Pr(Contrast>0)
1 -0.001559955 1.248510 0.1685154 0.9345433 1.590467              1
2 -0.001559955 2.826916 0.2117025 2.4281141 3.246289              1

Intervals are 0.95 highest posterior density intervals
Contrast is the posterior mean 

For Bayesian contrasts we can also plot the posterior densities for the contrasts, and 2-d highest-density contour.

plot(k)
plot(k, bivar=TRUE)   # applicable when exactly 2 contrasts
plot(k, bivar=TRUE, bivarmethod='kernel')

Compute posterior probabilities for various assertions about unknown true parameter values. The PostF function is a function generator that effectively evaluates the assertion to a 0/1 value and computes the mean of these binary values over posterior draws. As is the case with inference about the quadratic effect of x2 below, when the assertion does not evaluate to a binary 0/1 or logical TRUE/FALSE value, it is taken as a quantity that is derived from one or more model parameters, and a posterior density is drawn for the derived parameter. We use that to get a posterior distribution on the vertex of the quadratic x2 effect.

P <- PostF(bs, pr=TRUE)   # show new short legal R names
 Original Name Short Name
 y>=2          a1        
 y>=3          a2        
 y>=4          a3        
 y>=5          a4        
 y>=6          a5        
 y>=7          a6        
 y>=8          a7        
 y>=9          a8        
 y>=10         a9        
 x1            b1        
 x2            b2        
 x2^2          b3        
 x3            b4        
P(b3 > 0 & b1 > 1.5)
[1] 0.1255
P(b3 > 0)
[1] 0.194
P(abs(b3) < 0.25)        # evidence for small |nonlinearity|
[1] 0.50875
mean(bs$draws[, 'x2^2'] > 0, na.rm=TRUE)    # longhand calculation
[1] 0.194
# Plot posterior distribution for the vertex of the quadratic x2 effect
# This distribution should be wide because the relationship is linear
# (true value of b3 is zero)
plot(P(-b2 / (2 * b3)))

# Recreate the P function using original parameter names
# (which may not be legal R name)
P <- PostF(bs, name='orig')
P(`x2^2` > 0)
[1] 0.194
P(`x2^2` > 0 & x1 > 1.5)
[1] 0.1255
# Remove rstan results from fit.  Compute  savings in object size.
# Note: this will only be accurate when running the fits for
# the first time (not when restoring shortened forms of them from
# disc)
# Result: 33.8MB before, 0.5MB after
s1 <- format(object.size(bs), 'MB')
bs$rstan <- NULL
s2 <- format(object.size(bs), 'MB')
cat('Before:', s1, '  After:', s2, '\n')
Before: 32.7 Mb   After: 0.5 Mb 

1.4 Bayesian Wilcoxon Test

Since the proportional odds (PO) ordinal logistic model is a generalization of the Wilcoxon/Kruskal-Wallis tests, one can use Bayesian proportional odds regression to get the Bayesian equivalent of the Wilcoxon test. Even when not adjusting for covariates (impossible with the Wilcoxon test) there are advantages to putting this in a modeling framework, as detailed in Section 7.6 of BBR. A major advantage is estimation ability. One can estimate group-specific means, quantiles, and exceedance probabilities. And Bayesian inference provides exact uncertainty intervals (highest posterior density intervals in what follows) for these.

The PO model does not require there to be any ties among the Y values, so it handles continuous data very well (the orm function in the rms package efficiently handles many thousands of distinct Y levels requiring many thousands of intercepts in the model). Let’s re-analyze the calprotectin data in Section 7.3.1 of BBR to mimic the frequentist PO analysis in Section 7.6.

# Fecal Calprotectin: 2500 is above detection limit
# When detection limits occur at a single value, the PO
# model easily handles this in terms of estimating group
# differences (but not for estimating the mean Y)
calpro <- c(2500, 244, 2500, 726, 86, 2500, 61, 392, 2500, 114, 1226,
            2500, 168, 910, 627, 2500, 781, 57, 483, 30, 925, 1027,
            2500, 2500, 38, 18)

# Endoscopy score: 1 = No/Mild, 2=Mod/Severe Disease
# Would have been far better to code dose as 4 ordinal levels
endo <- c(2, 1, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2,
          2, 2, 2, 2, 2, 1, 1)
endo <- factor(endo, 1 : 2,
               c("No or Mild Activity", "Moderate or Severe Activity"))

dd <- datadist(endo, calpro); options(datadist='dd')
bcalpro <- blrm(calpro ~ endo, file=rfile(bcalpro))
print(bcalpro, intercepts=TRUE)

Bayesian Proportional Odds Ordinal Logistic Model

Dirichlet Priors With Concentration Parameter 0.134 for Intercepts

blrm(formula = calpro ~ endo, file = rfile(bcalpro))

Frequencies of Responses

  18   30   38   57   61   86  114  168  244  392  483  627  726  781  910  925 
   1    1    1    1    1    1    1    1    1    1    1    1    1    1    1    1 
1027 1226 2500 
   1    1    8 
                Mixed Calibration/               Discrimination                Rank Discrim.
                Discrimination Indexes           Indexes                       Indexes
Obs        26   LOO log L    -94.31±8.76         g   1.31 [0.44, 2.127]        C    0.854 [0.854, 0.854]
Draws    4000   LOO IC       188.62±17.52        gp  0.048 [0, 0.136]          Dxy  0.708 [0.708, 0.708]
Chains      4   Effective p   31.23±3.75         EV  0.068 [0, 0.215]
Time     0.6s   B       0.037 [0.034, 0.047]     v   2.143 [0.124, 4.853]
p           1                                    vp  0.005 [0, 0.021]
Mode β Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
y≥30   2.0968   2.4982   2.3055  1.3331   0.0911   5.1442  0.9935  1.55
y≥38   1.3395   1.4489   1.3823  0.8938  -0.3217   3.1611  0.9635  1.31
y≥57   0.8678   0.8791   0.8640  0.7370  -0.5291   2.3155  0.8845  1.06
y≥61   0.4733   0.4398   0.4396  0.6835  -0.9080   1.7903  0.7342  1.05
y≥86   0.1122   0.0469   0.0597  0.6878  -1.2780   1.4044  0.5340  0.94
y≥114  -0.1956  -0.2853  -0.2630  0.6890  -1.7111   1.0057  0.3450  0.92
y≥168  -0.4710  -0.5783  -0.5556  0.6898  -1.9773   0.7235  0.1990  0.92
y≥244  -0.7653  -0.8960  -0.8753  0.7204  -2.3299   0.5024  0.0988  0.92
y≥392  -1.0953  -1.2342  -1.1978  0.7775  -2.7881   0.2582  0.0450  0.86
y≥483  -1.4155  -1.5624  -1.5257  0.8288  -3.3281  -0.0589  0.0238  0.87
y≥627  -1.6849  -1.8458  -1.8012  0.8547  -3.4735  -0.1588  0.0100  0.91
y≥726  -1.9227  -2.0941  -2.0553  0.8757  -3.8890  -0.4314  0.0048  0.91
y≥781  -2.1399  -2.3261  -2.2921  0.9007  -4.0747  -0.5265  0.0022  0.91
y≥910  -2.3439  -2.5424  -2.5073  0.9216  -4.4539  -0.8167  0.0013  0.88
y≥925  -2.5396  -2.7570  -2.7118  0.9362  -4.6796  -1.0304  0.0008  0.87
y≥1027  -2.7313  -2.9660  -2.9315  0.9467  -4.8977  -1.2027  0.0003  0.89
y≥1226  -2.9224  -3.1787  -3.1506  0.9573  -5.0870  -1.3459  0.0000  0.89
y≥2500  -3.1166  -3.4024  -3.3740  0.9668  -5.3827  -1.6208  0.0000  0.89
endo=Moderate or Severe Activity   2.7585   2.9369   2.9201  0.9739   1.2135   5.0283  0.9990  1.14
# print.blrm defaults to not showing intercepts if more than 9 of them
summary(bcalpro)
Effects   Response: calpro
Low High Δ Effect S.E. Lower 0.95 Upper 0.95
endo --- No or Mild Activity:Moderate or Severe Activity 2 1 -2.93700 0.9739 -5.02800 -1.2140
Odds Ratio 2 1 0.05303 0.00655 0.2971

One can see that the posterior probability of a positive group difference exceeds 0.99.

Now compute the posterior mean estimates of the mean and median calprotectin levels for the unknown data generating process, stratified by group, and compare them with the sample estimates.

# Sample estimates
tapply(calpro, endo, mean)
        No or Mild Activity Moderate or Severe Activity 
                    400.000                    1372.944 
tapply(calpro, endo, median)
        No or Mild Activity Moderate or Severe Activity 
                       87.5                       976.0 
# Now compute estimates and 0.95 HPD intervals assuming PO
# The first method is exact
newdata <- data.frame(endo=levels(endo))
bar <- Mean(bcalpro)
predict(bcalpro, newdata, fun=bar)
$linear.predictors
[1]  309.0384 1348.9097

$lower
[1]  80.24419 925.12332

$upper
[1]  623.564 1737.434
quant <- Quantile(bcalpro)
med <- function(lp, ...) quant(lp=lp, ...)
Predict(bcalpro, endo, fun=med)
                         endo      yhat    lower     upper
1         No or Mild Activity  93.66912  18.0000  236.3188
2 Moderate or Severe Activity 985.27466 576.0837 1588.8058

Response variable (y):  

Limits are 0.95 confidence limits

The contrast function in rms now allows one to get posterior distributions of differences in nonlinearly transformed parameters, as follows.

k <- contrast(bcalpro, list(endo=levels(endo)[1]),
                       list(endo=levels(endo)[2]), fun=bar)
k

Posterior Summaries for First X Settings

  Posterior Mean Posterior Median Lower 0.95 HPD Upper 0.95 HPD
1       309.0384         275.2051       80.24419        623.564

Posterior Summaries for Second X Settings

  Posterior Mean Posterior Median Lower 0.95 HPD Upper 0.95 HPD
1        1348.91         1347.228       925.1233       1737.434

Posterior Summaries of First - Second

  Posterior Mean Posterior Median Lower 0.95 HPD Upper 0.95 HPD
1      -1039.871        -1049.429      -1531.074      -533.5887
plot(k, which='diff')

Peeking ahead to a later section, we can use the constrained partial proportional odds model to assess the proportional odds assumption. Let's assume that departures from proportional odds (constant increments in log odds) are modeled as linear in the square root of the calprotectin level. There is only one predictor (endo), so there is only one variable that might act non-proportionally. Hence the second formula has the same right-hand side as the first formula in the blrm call.

bcalp <- blrm(calpro ~ endo, ~ endo, cppo=sqrt)
# use cppo=function(y) y for linear x by y interaction
bcalp

Bayesian Constrained Partial Proportional Odds Ordinal Logistic Model

Dirichlet Priors With Concentration Parameter 0.134 for Intercepts

blrm(formula = calpro ~ endo, ppo = ~endo, cppo = sqrt)

Frequencies of Responses

  18   30   38   57   61   86  114  168  244  392  483  627  726  781  910  925 
   1    1    1    1    1    1    1    1    1    1    1    1    1    1    1    1 
1027 1226 2500 
   1    1    8 
                Mixed Calibration/               Discrimination                Rank Discrim.
                Discrimination Indexes           Indexes                       Indexes
Obs        26   LOO log L    -86.56±9.26         g   2.028 [0.98, 3.163]       C    0.854 [0.854, 0.854]
Draws    4000   LOO IC       173.13±18.53        gp  0.06 [0, 0.158]           Dxy  0.708 [0.708, 0.708]
Chains      4   Effective p    23.7±4.95         EV  0.098 [0, 0.278]
Time     1.4s   B       0.038 [0.034, 0.051]     v   5.028 [0.256, 10.404]
p           1                                    vp  0.007 [0, 0.028]
Mode β Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
endo=Moderate or Severe Activity   4.0121   5.0094   5.0034  1.6156   1.8261  8.5293  0.9988  0.92
endo=Moderate or Severe Activity x f(y)  -0.0598  -0.0999  -0.1043  0.0607  -0.2093  0.0572  0.0620  1.63

The probability that the effect of moderate or severe activity decreases for higher levels of calprotectin is approximately 0.94 (one minus the 0.06 posterior probability that the coefficient of the f(y) term above is positive). Let's graph the logit of the group-stratified empirical CDFs to see some visual evidence for this.

Ecdf(~ calpro, group=endo, fun=qlogis)

Non-parallelism is readily seen, indicating non-proportional odds. Note that allowing for a systematic departure from proportional odds is like having unequal group variances in a 2-sample \(t\)-test, but is more general.

With the constrained partial proportional odds model one can examine for a given y the evidence for a group effect in the tail of the distribution of Y beyond y. For ordinal logistic models the group effect is captured by the relative odds that Y \(\geq\) y. We can use the rms contrast function to do the calculations for varying y. contrast will evaluate the cppo function as needed (which is a scaled and centered \(\sqrt{}\)).

ys <- seq(100, 2500, by=100)
k <- contrast(bcalp, list(endo='Moderate or Severe Activity'),
                     list(endo='No or Mild Activity'),  ycut=ys)
xl <- 'Calprotectin Cutoff'
par(mfrow=c(1,2))
with(k, plot(ycut, Contrast, xlab=xl, ylab='log OR',    type='l'))
with(k, plot(ycut, PP,       xlab=xl, ylab='P(OR > 1)', type='l'))

1.5 Binary Regression with Restricted Cubic Splines

Turn to the support dataset and fit a binary logistic model to predict the probability of in-hospital death of critically ill adults. blrm keeps posterior sampling efficient by orthonormalizing the design matrix before doing the sampling (this is done internally in the Stan code). This allows for arbitrary collinearities, for example in the basis functions used in restricted cubic splines. When there are such collinearities, expect to see some disagreements in estimates between blrm and lrm, because the latter does not do orthonormalization (only normalization to mean 0 variance 1). Collinearity implies that there are many different solutions to the equations, all giving almost the same predicted values.
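The following rough sketch conveys only the QR idea, not blrm's exact internal scaling: sampling is done on orthonormalized columns, after which coefficients can be mapped back to the original predictor space.

```r
set.seed(2)
x   <- runif(200)
X   <- cbind(x, x^2, x^3)            # highly collinear columns
qrd <- qr(scale(X, scale=FALSE))     # center, then QR-decompose
Q   <- qr.Q(qrd)                     # orthonormal basis used for sampling
R   <- qr.R(qrd)
zapsmall(crossprod(Q))               # identity: no collinearity in the sampling space
# If theta are coefficients on Q, original-scale coefficients are solve(R, theta)
```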

getHdata(support)
dd <- datadist(support); options(datadist='dd')
f <- lrm(hospdead ~ dzgroup + rcs(crea, 5) + rcs(meanbp, 5),
                 data=support, eps=1e-4, x=TRUE, y=TRUE)
specs(f)
lrm(formula = hospdead ~ dzgroup + rcs(crea, 5) + rcs(meanbp, 
    5), data = support, x = TRUE, y = TRUE, eps = 1e-04)

        Label                              Assumption
dzgroup dzgroup                            category  
crea    Serum creatinine Day 3             rcspline  
meanbp  Mean Arterial Blood Pressure Day 3 rcspline  
        Parameters                                                                      
dzgroup  ARF/MOSF w/Sepsis COPD CHF Cirrhosis Coma Colon Cancer Lung Cancer MOSF w/Malig
crea     0.59998 0.8999 1.2 1.7998 5.5996                                               
meanbp   47 65.725 78 106 128.05                                                        
        d.f.
dzgroup 7   
crea    4   
meanbp  4   
f

Logistic Regression Model

lrm(formula = hospdead ~ dzgroup + rcs(crea, 5) + rcs(meanbp, 
    5), data = support, x = TRUE, y = TRUE, eps = 1e-04)
Frequencies of Missing Values Due to Each Variable
hospdead  dzgroup     crea   meanbp 
       0        0        3        0 
                          Model Likelihood        Discrimination           Rank Discrim.
                          Ratio Test              Indexes                  Indexes
Obs               997     LR χ2      278.86       R2             0.360     C     0.820
 0                744     d.f.           15       R2(15,997)     0.233     Dxy   0.640
 1                253     Pr(>χ2)   <0.0001       R2(15,566.4)   0.372     γ     0.641
max |∂log L/∂β| 3×10⁻⁹                            Brier          0.138     τa    0.243
β S.E. Wald Z Pr(>|Z|)
Intercept   6.9789   1.5068 4.63 <0.0001
dzgroup=COPD   -0.6235   0.3382 -1.84 0.0653
dzgroup=CHF   -2.3452   0.4855 -4.83 <0.0001
dzgroup=Cirrhosis   0.3552   0.3626 0.98 0.3273
dzgroup=Coma   1.6365   0.3273 5.00 <0.0001
dzgroup=Colon Cancer   -0.9825   0.6296 -1.56 0.1187
dzgroup=Lung Cancer   0.0808   0.3424 0.24 0.8135
dzgroup=MOSF w/Malig   0.7160   0.2664 2.69 0.0072
crea   -2.9954   1.1432 -2.62 0.0088
crea'   261.5451   94.6489 2.76 0.0057
crea''  -576.5474  224.5354 -2.57 0.0102
crea'''   338.1780  149.7214 2.26 0.0239
meanbp   -0.1035   0.0231 -4.47 <0.0001
meanbp'   0.2508   0.1923 1.30 0.1922
meanbp''   -0.4008   0.7294 -0.55 0.5827
meanbp'''   0.0499   0.7022 0.07 0.9433
# Compute the apparent standard error of Dxy (not accounting for overfitting)
# for comparison with the Bayesian HPD interval for Dxy
rcorr.cens(predict(f), support$hospdead)
       C Index            Dxy           S.D.              n        missing 
  8.202431e-01   6.404862e-01   2.998483e-02   9.970000e+02   3.000000e+00 
    uncensored Relevant Pairs     Concordant      Uncertain 
  9.970000e+02   3.764640e+05   3.087920e+05   0.000000e+00 
Function(f)
function (dzgroup = "ARF/MOSF w/Sepsis", crea = 1.1999512, meanbp = 78) 
{
    6.9789403 - 0.62348171 * (dzgroup == "COPD") - 2.3451715 * 
        (dzgroup == "CHF") + 0.35516492 * (dzgroup == "Cirrhosis") + 
        1.6364993 * (dzgroup == "Coma") - 0.98246981 * (dzgroup == 
        "Colon Cancer") + 0.08076708 * (dzgroup == "Lung Cancer") + 
        0.71602209 * (dzgroup == "MOSF w/Malig") - 2.9954464 * 
        crea + 10.463337 * pmax(crea - 0.59997559, 0)^3 - 23.065276 * 
        pmax(crea - 0.89990234, 0)^3 + 13.529103 * pmax(crea - 
        1.1999512, 0)^3 - 0.90432144 * pmax(crea - 1.7998047, 
        0)^3 - 0.022843078 * pmax(crea - 5.5996094, 0)^3 - 0.10352776 * 
        meanbp + 3.8184801e-05 * pmax(meanbp - 47, 0)^3 - 6.1008621e-05 * 
        pmax(meanbp - 65.725, 0)^3 + 7.5995675e-06 * pmax(meanbp - 
        78, 0)^3 + 1.4835639e-05 * pmax(meanbp - 106, 0)^3 + 
        3.8861354e-07 * pmax(meanbp - 128.05, 0)^3
}
<environment: 0x122eb5e68>
bsup <- blrm(hospdead ~ dzgroup + rcs(crea, 5) + rcs(meanbp, 5), 
                 data=support, file=rfile(bsup))
specs(bsup)
blrm(formula = hospdead ~ dzgroup + rcs(crea, 5) + rcs(meanbp, 
    5), data = support, file = rfile(bsup))

        Label                              Assumption
dzgroup dzgroup                            category  
crea    Serum creatinine Day 3             rcspline  
meanbp  Mean Arterial Blood Pressure Day 3 rcspline  
        Parameters                                                                      
dzgroup  ARF/MOSF w/Sepsis COPD CHF Cirrhosis Coma Colon Cancer Lung Cancer MOSF w/Malig
crea     0.59998 0.8999 1.2 1.7998 5.5996                                               
meanbp   47 65.725 78 106 128.05                                                        
        d.f.
dzgroup 7   
crea    4   
meanbp  4   
Function(bsup)   # by default uses posterior mode parameter values
function (dzgroup = "ARF/MOSF w/Sepsis", crea = 1.1999512, meanbp = 78) 
{
    7.1819715 - 0.65017441 * (dzgroup == "COPD") - 2.4366506 * 
        (dzgroup == "CHF") + 0.35102057 * (dzgroup == "Cirrhosis") + 
        1.6698906 * (dzgroup == "Coma") - 1.1250499 * (dzgroup == 
        "Colon Cancer") + 0.073792661 * (dzgroup == "Lung Cancer") + 
        0.72589271 * (dzgroup == "MOSF w/Malig") - 3.0328939 * 
        crea + 10.595342 * pmax(crea - 0.59997559, 0)^3 - 23.343529 * 
        pmax(crea - 0.89990234, 0)^3 + 13.678622 * pmax(crea - 
        1.1999512, 0)^3 - 0.90697929 * pmax(crea - 1.7998047, 
        0)^3 - 0.023455737 * pmax(crea - 5.5996094, 0)^3 - 0.10686332 * 
        meanbp + 4.0017231e-05 * pmax(meanbp - 47, 0)^3 - 6.5429844e-05 * 
        pmax(meanbp - 65.725, 0)^3 + 1.0061021e-05 * pmax(meanbp - 
        78, 0)^3 + 1.5009722e-05 * pmax(meanbp - 106, 0)^3 + 
        3.4187033e-07 * pmax(meanbp - 128.05, 0)^3
}
<environment: 0x34172bb20>
# To add an intercept use e.g. Function(bsup, intercept=coef(g, 'mode')[5])
bsup

Bayesian Logistic Model

Dirichlet Priors With Concentration Parameter 0.541 for Intercepts

blrm(formula = hospdead ~ dzgroup + rcs(crea, 5) + rcs(meanbp, 
    5), data = support, file = rfile(bsup))
Frequencies of Missing Values Due to Each Variable
hospdead  dzgroup     crea   meanbp 
       0        0        3        0 
                Mixed Calibration/               Discrimination                Rank Discrim.
                Discrimination Indexes           Indexes                       Indexes
Obs       997   LOO log L  -442.22±18.37         g   1.822 [1.535, 2.113]      C    0.813 [0.808, 0.818]
 0        744   LOO IC      884.44±36.75         gp  0.249 [0.222, 0.27]       Dxy  0.626 [0.615, 0.637]
 1        253   Effective p  16.74±1.17          EV  0.292 [0.241, 0.338]
Draws    4000   B       0.14 [0.138, 0.142]      v   2.835 [2.083, 3.839]
Chains      4                                    vp  0.055 [0.046, 0.066]
Time     6.8s
p          15
Mode β Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
Intercept   6.9782   7.1820   7.1618   1.5583   4.2990   10.3221  1.0000  1.09
dzgroup=COPD   -0.6236   -0.6502   -0.6495   0.3478   -1.3350   0.0260  0.0275  0.94
dzgroup=CHF   -2.3453   -2.4367   -2.4170   0.5142   -3.4506   -1.4889  0.0000  0.89
dzgroup=Cirrhosis   0.3552   0.3510   0.3565   0.3600   -0.3639   1.0419  0.8388  0.97
dzgroup=Coma   1.6363   1.6699   1.6701   0.3263   1.0209   2.3010  1.0000  1.02
dzgroup=Colon Cancer   -0.9830   -1.1250   -1.0730   0.6780   -2.5428   0.0733  0.0325  0.82
dzgroup=Lung Cancer   0.0808   0.0738   0.0781   0.3492   -0.6174   0.7700  0.5862  0.95
dzgroup=MOSF w/Malig   0.7159   0.7259   0.7289   0.2674   0.2212   1.2507  0.9962  1.00
crea   -2.9958   -3.0329   -3.0319   1.1594   -5.3186   -0.7871  0.0060  1.02
crea'   261.5778   264.8448   263.3614   95.1322   66.8350   444.1393  0.9982  1.00
crea''  -576.6280  -583.5027  -578.9675  225.2396  -1048.4224  -155.5220  0.0040  1.00
crea'''   338.2340   341.9155   339.4421  149.8539   31.0573   622.9413  0.9890  1.01
meanbp   -0.1035   -0.1069   -0.1060   0.0237   -0.1548   -0.0635  0.0000  0.90
meanbp'   0.2507   0.2629   0.2552   0.1951   -0.1176   0.6473  0.9132  1.04
meanbp''   -0.4001   -0.4298   -0.4190   0.7403   -1.8733   1.0236  0.2838  0.98
meanbp'''   0.0493   0.0661   0.0574   0.7149   -1.2753   1.5248  0.5370  0.99
stanDx(bsup)
Iterations: 2000 on each of 4 chains, with 4000 posterior distribution samples saved

For each parameter, n_eff is a crude measure of effective sample size
and Rhat is the potential scale reduction factor on split chains
(at convergence, Rhat=1)
                     n_eff  Rhat
Intercept             3735 1.000
dzgroup=COPD          6895 1.000
dzgroup=CHF           3802 1.001
dzgroup=Cirrhosis     7144 1.000
dzgroup=Coma          6715 1.000
dzgroup=Colon Cancer  5186 0.999
dzgroup=Lung Cancer   6257 1.000
dzgroup=MOSF w/Malig  8396 1.000
crea                  6980 1.000
crea'                 6693 1.000
crea''                7176 1.000
crea'''               6357 1.000
meanbp                6578 1.000
meanbp'               5208 1.000
meanbp''              5183 1.000
meanbp'''             6688 1.000
stanDxplot(bsup)
plot(bsup)

Show approximate relative explained variation (REV) and compare this with likelihood ratio statistics from the frequentist lrm model.

plot(rexVar(bsup, support))
plot(anova(f, test='LR'), 'proportion chisq')
singular information matrix in lrm.fit (rank= 12 ).  Offending variable(s):
crea''' 

Note: To get a Bayesian equivalent of a likelihood ratio test for comparing two models use the rms function compareBmods.

Now compute odds ratios over default inter-quartile ranges for continuous predictors, based on posterior mode parameters. Also show 0.95 HPD intervals. Note that unlike the print method, the plot method for summary doesn’t actually compute HPD intervals, but approximates them by assuming normality and using the standard deviation of the posterior samples. Compare the plot with the ordinary lrm result.

s <- summary(bsup)
s
Effects   Response: hospdead
Low High Δ Effect S.E. Lower 0.95 Upper 0.95
crea 0.8999 1.9 1.00 1.30200 0.2659 0.80650 1.83100
Odds Ratio 0.8999 1.9 1.00 3.67500 2.24000 6.24000
meanbp 64.7500 107.0 42.25 -0.45050 0.2388 -0.94000 -0.01253
Odds Ratio 64.7500 107.0 42.25 0.63730 0.39060 0.98750
dzgroup --- COPD:ARF/MOSF w/Sepsis 1.0000 2.0 -0.65020 0.3478 -1.33500 0.02602
Odds Ratio 1.0000 2.0 0.52200 0.26320 1.02600
dzgroup --- CHF:ARF/MOSF w/Sepsis 1.0000 3.0 -2.43700 0.5142 -3.45100 -1.48900
Odds Ratio 1.0000 3.0 0.08745 0.03173 0.22560
dzgroup --- Cirrhosis:ARF/MOSF w/Sepsis 1.0000 4.0 0.35100 0.3600 -0.36390 1.04200
Odds Ratio 1.0000 4.0 1.42100 0.69500 2.83500
dzgroup --- Coma:ARF/MOSF w/Sepsis 1.0000 5.0 1.67000 0.3263 1.02100 2.30100
Odds Ratio 1.0000 5.0 5.31200 2.77600 9.98400
dzgroup --- Colon Cancer:ARF/MOSF w/Sepsis 1.0000 6.0 -1.12500 0.6780 -2.54300 0.07332
Odds Ratio 1.0000 6.0 0.32460 0.07864 1.07600
dzgroup --- Lung Cancer:ARF/MOSF w/Sepsis 1.0000 7.0 0.07379 0.3492 -0.61740 0.77000
Odds Ratio 1.0000 7.0 1.07700 0.53930 2.16000
dzgroup --- MOSF w/Malig:ARF/MOSF w/Sepsis 1.0000 8.0 0.72590 0.2674 0.22120 1.25100
Odds Ratio 1.0000 8.0 2.06700 1.24800 3.49300
plot(s)
plot(summary(bsup))

Draw partial effect plots with 0.95 HPD intervals. Point estimates are posterior modes (which can be easily changed).

ggplot(Predict(bsup))

Compute estimated mortality probabilities at all levels of dzgroup adjusting covariates to medians/modes. Need funint=FALSE to tell Predict that fun is a simple 1-1 function of the linear predictor (unlike Mean, Quantile, etc.).

Predict(bsup, dzgroup, fun=plogis, funint=FALSE)
            dzgroup     crea meanbp       yhat       lower      upper
1 ARF/MOSF w/Sepsis 1.199951     78 0.11580077 0.063318903 0.17507947
2              COPD 1.199951     78 0.06615031 0.024986522 0.11516369
3               CHF 1.199951     78 0.01257493 0.002459092 0.02633342
4         Cirrhosis 1.199951     78 0.15974571 0.064560039 0.26012441
5              Coma 1.199951     78 0.40594732 0.229380199 0.57465442
6      Colon Cancer 1.199951     78 0.04758856 0.003998088 0.10493481
7       Lung Cancer 1.199951     78 0.12539631 0.052144158 0.19918173
8      MOSF w/Malig 1.199951     78 0.21347303 0.116144186 0.33710536

Response variable (y):  

Adjust to: crea=1.2 meanbp=78  

Limits are 0.95 confidence limits

Draw a nomogram from posterior mode parameter values.

p <- nomogram(bsup, fun=plogis, funlabel='P(death)')
plot(p)

For comparison here is a nomogram based on maximum likelihood estimates of parameters rather than posterior modes.

plot(nomogram(f, fun=plogis, funlabel='P(death)'))

2 Partial Proportional Odds Model

The proportional odds (PO) assumption is a parallelism assumption reflecting the belief that the effect of baseline variables on, say, \(Y \geq 3\) is the same as the effect on \(Y \geq 4\). To relax that assumption, Peterson & Harrell (1990) developed the partial proportional odds (PPO) model. The blrm function accepts a second model formula in the argument named ppo that specifies the subset of predictors for which PO is not to be assumed but for which the model is effectively polytomous (multinomial). Note that for frequentist modeling the R VGAM package handles the PPO model (as will be shown below). VGAM is more flexible than blrm in allowing for all sorts of model restrictions.

The presence of the second formula triggers fitting a PPO model. The default is the unconstrained PPO model, which has \((k - 2) \times q\) extra parameters for a \(k\)-category \(Y\), where \(q\) is the number of columns in the design matrix induced by the second formula. This is normally too many parameters. More typically, the constrained PPO model (Peterson & Harrell (1990) Eq. 6) is used. This model is fitted when a function is provided as the cppo argument. Generalizing Peterson and Harrell Eq. 6, the cppo function can be a continuous function of \(Y\) as well as a discontinuous function such as an indicator variable that allows a category of \(Y\) (typically the first or last) to have a special effect of selected covariates. When \(Y\) is continuous, cppo would typically be the continuous value of \(Y\) or a monotonic function of it. This induces a model that is akin to having a treatment \(\times\) time interaction in a Cox proportional hazards model, or systematic heteroskedasticity in a linear model. When \(Y\) is very skewed, it may be more reasonable to use something like cppo = function(y) y^(1/3). Note that post-fitting functions for estimation and prediction are not implemented for the unconstrained partial PO model.

Note that the cppo function used to trigger the use of the constrained PPO model is never evaluated at the lowest value of Y. That is because in the data likelihood, the probability element for observations at the minimum Y is one minus the probability that Y is at least the second lowest value. So you can't give a special effect at the minimum Y. If for example you have a scale ordered as death, sick, well and you want to allow for a special effect of a covariate on death, reverse the order of levels in the factor variable representing Y and specify cppo=function(y) y == 'dead'.
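A sketch of that reversal, with a hypothetical data frame w containing an outcome y coded 'dead', 'sick', 'well' and a treatment tx:

```r
# Put 'dead' last, since cppo is never evaluated at the lowest category
w$y <- factor(w$y, levels=c('well', 'sick', 'dead'))
f <- blrm(y ~ tx, ~ tx, cppo=function(y) y == 'dead', data=w)
```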

2.1 Unconstrained Partial PO Model

Let’s look first at the unconstrained PPO model, which is more suitable for categorical Y with not too many categories. Consider a \(2\times 3\) table of proportions (2 treatment groups, 3 ordered outcome levels) where the treatment effect is not in PO. We will fit a PO model and see how well it tries to reconstruct the 6 proportions, then fit a PPO model. As of 2020-05-11 blrm has not implemented predict-type functions for PPO models, so you will see predictions done the long way (which better shows how PPO works). Note: the VGAM vgam function parameterizes PPO effects by \(y\)-specific covariate effects, whereas blrm, like Peterson and Harrell, parameterizes the model by estimating increments in the log odds for the covariate effect for \(Y \geq y\) over and above the effect for \(Y \geq 2\), where \(Y\) has values 1, 2, …, \(k\). Bayesian model specification is detailed here.

Not shown here is that blrm also allows for random effects with PPO models to handle longitudinal data if a compound symmetry correlation pattern is reasonable.
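A sketch of such a fit, with a hypothetical longitudinal data frame long containing columns y, tx, time, and a subject identifier id:

```r
# Random per-subject intercepts via cluster(), combined with a constrained
# partial PO (non-proportional) effect for treatment that is linear in y
f <- blrm(y ~ tx + time + cluster(id), ~ tx, cppo=function(y) y, data=long)
```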

p0 <- c(.4, .2, .4)
p1 <- c(.3, .1, .6)
m  <- 50             # observations per cell
m0 <- p0 * m         # from proportions to frequencies
m1 <- p1 * m
x  <- c(rep(0, m), rep(1, m))
y0 <- c(rep(1, m0[1]), rep(2, m0[2]), rep(3, m0[3]))
y1 <- c(rep(1, m1[1]), rep(2, m1[2]), rep(3, m1[3]))
y  <- c(y0, y1)
table(x, y)
   y
x    1  2  3
  0 20 10 20
  1 15  5 30
# A PO model cannot reproduce the original proportions
f <- lrm(y ~ x)
f

Logistic Regression Model

lrm(formula = y ~ x)
                          Model Likelihood        Discrimination         Rank Discrim.
                          Ratio Test              Indexes                Indexes
Obs               100     LR χ2        2.96       R2          0.034      C     0.574
 1                 35     d.f.            1       R2(1,100)   0.019      Dxy   0.149
 2                 15     Pr(>χ2)    0.0855       R2(1,82.9)  0.023      γ     0.290
 3                 50                             Brier       0.226      τa    0.091
max |∂log L/∂β| 9×10⁻⁹
β S.E. Wald Z Pr(>|Z|)
y≥2   0.3177  0.2725 1.17 0.2437
y≥3  -0.3177  0.2725 -1.17 0.2437
x   0.6601  0.3864 1.71 0.0876
predict(f, data.frame(x=c(0, 1)), type='fitted.ind')
        y=1       y=2       y=3
1 0.4212365 0.1575270 0.4212365
2 0.2733366 0.1418996 0.5847638
require(VGAM)
fv <- vgam(y ~ x, cumulative(reverse=TRUE, parallel=TRUE))
coef(fv)
(Intercept):1 (Intercept):2             x 
    0.3177002    -0.3176987     0.6600579 
predict(fv, data.frame(x=c(0, 1)), type='response')
          1         2         3
1 0.4212363 0.1575270 0.4212367
2 0.2733369 0.1418997 0.5847635
# Now fit a PPO model that will reproduce all cell proportions
fvppo <- vgam(y ~ x, cumulative(reverse=TRUE, parallel=FALSE))
coef(fvppo)
(Intercept):1 (Intercept):2           x:1           x:2 
    0.4054651    -0.4054651     0.4418328     0.8109302 
predict(fvppo, data.frame(x=c(0, 1)), type='response')  # perfect recovery
    1   2   3
1 0.4 0.2 0.4
2 0.3 0.1 0.6
# Function to manually compute cell probabilities
pprop <- function(co, type, centered=FALSE) {
  x <- if(centered) c(-0.5, 0.5) else 0:1
  switch(type,
         vgam = {
           pge2 <- plogis(co[1] + x * co[3])
           peq3 <- plogis(co[2] + x * co[4])
           rbind(c(1 - pge2[1], pge2[1] - peq3[1], peq3[1]),
                 c(1 - pge2[2], pge2[2] - peq3[2], peq3[2]))
         },
         blrm = {
           pge2 <- plogis(co[1] + x * co[3])
           peq3 <- plogis(co[2] + x * (co[3] + co[4]))
           rbind(c(1 - pge2[1], pge2[1] - peq3[1], peq3[1]),
                 c(1 - pge2[2], pge2[2] - peq3[2], peq3[2]))
         })
}

co <- coef(vgam(y ~ x, cumulative(reverse=TRUE, parallel=FALSE)))
pprop(co, type='vgam')
     [,1] [,2] [,3]
[1,]  0.4  0.2  0.4
[2,]  0.3  0.1  0.6
# Now try blrm
# First fit an unconstrained partial PO model
b <- blrm(y ~ x, ~ x, method='opt')
coef(b)
      y>=2       y>=3          x     x:y>=3 
 0.4053301 -0.4055359  0.4420296  0.3689901 
pprop(coef(b), type='blrm')
          [,1]      [,2]      [,3]
[1,] 0.4000324 0.1999846 0.3999830
[2,] 0.2999870 0.1000085 0.6000045
# Fit a constrained PPO model
b <- blrm(y ~ x, ~ x, cppo=function(y) y, method='opt')
coef(b)
      y>=2       y>=3          x   x x f(y) 
 0.4055066 -0.4054537 -0.2965554  0.3691441 
pprop(coef(b), type='blrm')
          [,1]      [,2]      [,3]
[1,] 0.3999900 0.2000072 0.4000027
[2,] 0.4727891 0.1096672 0.4175437
# First mimic PO model by penalizing PPO term to nearly zero
# Quickly get maximum likelihood estimates (posterior modes)
b <- blrm(y ~ x, ~x, priorsdppo=0.01, method='opt')
coef(b)
         y>=2          y>=3             x        x:y>=3 
 3.177034e-01 -3.176981e-01  6.600456e-01  1.560706e-05 
pprop(coef(b), type='blrm')
          [,1]      [,2]      [,3]
[1,] 0.4212356 0.1575276 0.4212369
[2,] 0.2733387 0.1418969 0.5847645
# Now really fit the PPO model, at first only getting the MLE;
# full posterior sampling follows below
b <- blrm(y ~ x, ~ x, priorsdppo=1000, method='opt')
coef(b)   # also the posterior mode
      y>=2       y>=3          x     x:y>=3 
 0.4053554 -0.4055614  0.4419826  0.3690735 
coef(b)[3] + coef(b)[4]
        x 
0.8110561 
bppo <- blrm(y ~ x, ~ x, priorsdppo=1000, file=rfile(bppo))
# take differences in last 2 coefficients to get our scheme
# Check recovery of proportions, using posterior mode/mean/median
pprop(coef(bppo, 'mode'),   type='blrm')
          [,1]      [,2]      [,3]
[1,] 0.4000263 0.1999968 0.3999769
[2,] 0.2999916 0.1000013 0.6000071
pprop(coef(bppo, 'mean'),   type='blrm')
          [,1]      [,2]      [,3]
[1,] 0.3939012 0.2134349 0.3926640
[2,] 0.2895964 0.1126369 0.5977667
pprop(coef(bppo, 'median'), type='blrm')
          [,1]      [,2]      [,3]
[1,] 0.3933998 0.2127767 0.3938235
[2,] 0.2920347 0.1143166 0.5936487
bppo

Bayesian Partial Proportional Odds Ordinal Logistic Model

Dirichlet Priors With Concentration Parameter 0.541 for Intercepts

blrm(formula = y ~ x, ppo = ~x, priorsdppo = 1000, file = rfile(bppo))
                Mixed Calibration/               Discrimination                Rank Discrim.
                Discrimination Indexes           Indexes                       Indexes
Obs       100   LOO log L  -101.73±4.64          g   0.254 [0.002, 0.604]      C    0.605 [0.355, 0.645]
 1         35   LOO IC      203.47±9.27          gp  0.056 [0, 0.132]          Dxy  0.209 [-0.29, 0.29]
 2         15   Effective p   4.05±0.35          EV  0.02 [0, 0.076]
 3         50   B       0.229 [0.225, 0.238]     v   0.096 [0, 0.36]
Draws    4000                                    vp  0.005 [0, 0.017]
Chains      4
Time     0.8s
p           1
Mode β Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
y≥2   0.4054   0.4309   0.4330  0.2898  -0.1075  1.0130  0.9330  1.03
y≥3  -0.4056  -0.4361  -0.4313  0.2939  -1.0415  0.1007  0.0690  0.98
x   0.4420   0.4664   0.4525  0.4222  -0.3840  1.2763  0.8740  1.02
x:y≥3   0.3691   0.3659   0.3579  0.3080  -0.2151  1.0030  0.8920  1.09

2.2 Constrained Partial PO Model

Consider the same dataset analyzed above. Specify a constrained PPO model that in this particular case is really unconstrained because it has a total of two parameters to handle the group effect, and there are only \(k=3\) levels of Y.

# cppo function specifies that there is a special effect of x for y=3
bcppo <- blrm(y ~ x, ~ x, cppo=function(y) y == 3, file=rfile(bcppo))
cppo <- bcppo$cppo
cppo
function(y) y == 3
b <- coef(bcppo, 'mode')
rbind(Mode=b, Mean=coef(bcppo))
          y>=2       y>=3         x  x x f(y)
Mode 0.4053557 -0.4055616 0.4419821 0.3690744
Mean 0.4286434 -0.4269191 0.4631795 0.3500644
# Compute the 4 cumulative probabilities using the posterior mode (MLE)
# b[3] and b[4] are multiplied x (hence drop out of the x=0 row)
L <- rbind('x=0' = c(b[1], b[2]),
           'x=1' = c(b[1] + b[3] + cppo(2) * b[4], b[2] + b[3] + cppo(3) * b[4]))
plogis(L) 
         y>=2      y>=3
x=0 0.5999737 0.3999768
x=1 0.7000084 0.6000071

Now consider the severity of nausea data from Peterson & Harrell (1990).

d0 <- data.frame(tx=0, y=c(rep(0, 43), rep(1, 39), rep(2, 13), rep(3, 22),
                           rep(4, 15), rep(5, 29)))
d1 <- data.frame(tx=1, y=c(rep(0, 7), rep(1, 7), rep(2, 3), rep(3, 12),
                           rep(4, 15), rep(5, 14)))
d <- rbind(d0, d1)
d$tx <- factor(d$tx, 0:1, c('No cisplatin', 'cisplatin'))
dd <- datadist(d); options(datadist='dd')
with(d, table(tx, y))
              y
tx              0  1  2  3  4  5
  No cisplatin 43 39 13 22 15 29
  cisplatin     7  7  3 12 15 14
# Allow for a different effect of tx at y=5
g <- function(y) y==5    # max(y)=5 and y is discrete
# Check against maximum likelihood estimates in Peterson & Harrell
f <- blrm(y ~ tx, ~ tx, cppo=g, data=d, method='opt')
# Compute the treatment effect log(OR) for y=1, 2, 3, 4, 5
k <- coef(f)
k
               y>=1                y>=2                y>=3                y>=4 
          0.9976240          -0.0128604          -0.3256193          -1.0130431 
               y>=5        tx=cisplatin tx=cisplatin x f(y) 
         -1.5466172           1.0639315          -0.6292570 
k[6] + g(1:5) * k[7]   # matches paper
[1] 1.0639315 1.0639315 1.0639315 1.0639315 0.4346745
# Now get posterior distributions of parameters
fp <- blrm(y ~ tx, ~ tx, cppo=g, data=d, file=rfile(bnausea))
rbind(coef(f), coef(fp, 'mode'), coef(fp, 'mean'))
         y>=1       y>=2       y>=3      y>=4      y>=5 tx=cisplatin
[1,] 0.997624 -0.0128604 -0.3256193 -1.013043 -1.546617     1.063931
[2,] 0.997624 -0.0128604 -0.3256193 -1.013043 -1.546617     1.063931
[3,] 1.008690 -0.0075393 -0.3241521 -1.016647 -1.573323     1.078324
     tx=cisplatin x f(y)
[1,]          -0.6292570
[2,]          -0.6292570
[3,]          -0.6546599
k <- coef(fp)          # posterior means
k[6] + g(1:5) * k[7]   # close to paper
[1] 1.0783243 1.0783243 1.0783243 1.0783243 0.4236643
dat <- data.frame(tx=levels(d$tx))
# Get posterior mean and 0.95 HPD intervals for treatment
# effects at all levels of y
contrast(fp, list(tx='cisplatin'), list(tx='No cisplatin'), ycut=1:5)
         Contrast      S.E.      Lower    Upper Pr(Contrast>0)
1   y=1 1.0783243 0.2917569  0.4996871 1.650133         1.0000
2*  y=2 1.0783243 0.2917569  0.4996871 1.650133         1.0000
3*  y=3 1.0783243 0.2917569  0.4996871 1.650133         1.0000
4*  y=4 1.0783243 0.2917569  0.4996871 1.650133         1.0000
5   y=5 0.4236643 0.3646012 -0.2515952 1.187793         0.8785

Redundant contrasts are denoted by *

Intervals are 0.95 highest posterior density intervals
Contrast is the posterior mean 
Predict(fp, tx)
            tx       yhat      lower     upper
1 No cisplatin -0.0075393 -0.2940538 0.3041612
2    cisplatin  1.0707850  0.5935530 1.6202088

Response variable (y): log odds 

Limits are 0.95 confidence limits

If the response variable is discrete and has character strings for the ordered factor levels, you can use these strings in the cppo function definition. For example, suppose that Y is a factor variable with these levels: "ok", "in pain", "stroke", or "death". To allow a treatment to have a different effect for the last two levels, while adjusting for a covariate age that is assumed to operate in proportional odds, one can code

f <- blrm(y ~ age + tx, ~ tx, cppo=function(y) y %in% c('stroke', 'death'))

There are special arguments to some of the rms functions for getting estimates or predictions from partial PO models. For the Predict function you can specify ycut or kint to specify the response variable value such that the logit or probability of \(Y\geq y\) is being estimated. For contrast (full name contrast.rms) you can specify ycut as a scalar or vector. When it is a scalar, that value of the y cutoff is used for all contrasts. When it is a vector, it is assumed either (1) to have length equal to the number of contrasts being specified, so that the appropriate value of y is used for each contrast, or (2) when only one contrast is requested, the contrast is repeated length(ycut) times to estimate the effect of varying y. This was done in the last part of the example above.

When the response variable is continuous, it is more flexible to specify ycut to Predict than to specify kint, because for ycut you can specify a value of Y that did not occur in the data (when the cppo function is continuous) to take into account the degree of non-proportional odds at that value.
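For instance, continuing with the nausea fit fp, one can estimate effects on the logit of \(Y \geq 5\), where the non-PO term is in play (a brief sketch; output not shown):

# Log odds of y >= 5 as a function of treatment, using the non-PO term for y=5
Predict(fp, tx, ycut=5)
# Treatment contrast at a single cutoff rather than a vector of cutoffs
contrast(fp, list(tx='cisplatin'), list(tx='No cisplatin'), ycut=5)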

3 Longitudinal Data Examples: Random Effects

3.1 Schizophrenia Dataset from mixor Package

The R mixor package fits frequentist random effects proportional odds models. Let’s analyze the first dataset discussed in the mixor package vignette. The outcome is a 4-level severity of illness scale. Unfortunately, mixor is no longer on CRAN, so we access the dataset on the Vanderbilt Department of Biostatistics data repository, and use the ordinal package to fit the random intercepts model.

require(ordinal)
getHdata(schizophrenia)
d <- schizophrenia
f <- clmm2(factor(imps79o) ~ sqrt(Week) * TxDrug, random=factor(id),
           link='logistic', Hess=TRUE, data=d)
summary(f)
Cumulative Link Mixed Model fitted with the Laplace approximation

Call:
clmm2(location = factor(imps79o) ~ sqrt(Week) * TxDrug, random = factor(id), 
    data = d, Hess = TRUE, link = "logistic")

Random effects:
                Var  Std.Dev
factor(id) 3.606788 1.899154

Location coefficients:
                  Estimate  Std. Error z value   Pr(>|z|)  
sqrt(Week)          -0.7678    0.0145   -52.8160 < 2.22e-16
TxDrug              -0.0681    0.0143    -4.7666 1.8733e-06
sqrt(Week):TxDrug   -1.1982    0.0146   -82.1808 < 2.22e-16

No scale coefficients

Threshold coefficients:
    Estimate  Std. Error z value  
1|2   -5.8488    0.0143  -408.0444
2|3   -2.8233    0.0144  -196.6312
3|4   -0.7143    0.0828    -8.6313

log-likelihood: -1708.111 
AIC: 3430.221 
Condition number of Hessian: 46.31894 
sqrt(diag(vcov(f)))
              1|2               2|3               3|4        sqrt(Week) 
       0.01433368        0.01435845        0.08275618        0.01453735 
           TxDrug sqrt(Week):TxDrug                   
       0.01428859        0.01458059        0.01362287 

Fit the same model using the Bayesian approach.

bmixor <- blrm(imps79o ~ sqrt(Week) * TxDrug + cluster(id), data=d,
               file=rfile(bmixor))
bmixor

Bayesian Proportional Odds Ordinal Logistic Model

Dirichlet Priors With Concentration Parameter 0.455 for Intercepts

blrm(formula = imps79o ~ sqrt(Week) * TxDrug + cluster(id), data = d, 
    file = rfile(bmixor))

Frequencies of Responses

  1   2   3   4 
190 474 412 527 
Mixed Calibration/Discrimination Indexes | Discrimination Indexes | Rank Discrim. Indexes
Obs 1603 B 0.198 [0.194, 0.202] g 1.944 [1.786, 2.134] C 0.769 [0.769, 0.771]
Draws 4000 gp 0.339 [0.321, 0.357] Dxy 0.538 [0.537, 0.541]
Chains 4 EV 0.398 [0.358, 0.443]
Time 9.8s v 2.984 [2.47, 3.548]
p 3 vp 0.094 [0.084, 0.104]
Cluster on id
Clusters 437
σγ 1.947 [1.7136, 2.183]
Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
y≥2   5.8696   5.8598  0.3398   5.2370   6.5712  1.0000  1.07
y≥3   2.8380   2.8339  0.2935   2.3052   3.4482  1.0000  1.03
y≥4   0.7203   0.7182  0.2752   0.1604   1.2310  0.9960  1.00
Week  -0.7662  -0.7664  0.1307  -1.0125  -0.5062  0.0000  1.03
TxDrug  -0.0713  -0.0650  0.3141  -0.7029   0.5157  0.4155  0.97
Week × TxDrug  -1.2049  -1.2074  0.1517  -1.4952  -0.9060  0.0000  1.00

The posterior median for \(\sigma_\gamma\) is 1.947, which compares well with the ordinal estimate of 1.899. The \(\hat{\beta}\)s also compare very well. Note that the model is parameterized differently, which makes the intercepts have different meanings (opposite signs) across packages.

3.2 Simulated Random Effects Longitudinal Data

Let’s generate some data with repeatedly measured outcome per subject where the outcome is binary and the random effects have a \(N(0, 0.25^2)\) distribution. 500 subjects have 10 measurements each.

n <- 500   # subjects
set.seed(2)
re <- rnorm(n) * 0.25   # worked fine also with rnorm(n) * 4
X <- runif(n)   # baseline covariate, will be duplicated over repeats
m <- 10         # measurements per subject

id <- rep(1 : n, each = m)
x  <- X[id]
L <- x + re[id]   # actual logit
y <- ifelse(runif(n * m) <= plogis(L), 1, 0)
f <- lrm(y ~ x, x=TRUE, y=TRUE)     # ordinary fit
f     # now use cluster sandwich covariance estimator:

Logistic Regression Model

lrm(formula = y ~ x, x = TRUE, y = TRUE)
Model Likelihood Ratio Test | Discrimination Indexes | Rank Discrim. Indexes
Obs 5000 LR χ2 93.67 R2 0.025 C 0.581
0 1908 d.f. 1 R2(1,5000) 0.018 Dxy 0.162
1 3092 Pr(>χ2) <0.0001 R2(1,3539.7) 0.026 γ 0.162
max |∂log L/∂β| 1×10⁻¹³ Brier 0.232 τa 0.076
β S.E. Wald Z Pr(>|Z|)
Intercept  0.0087  0.0566 0.15 0.8783
x  0.9432  0.0983 9.60 <0.0001
g <- robcov(f, id)  # covariance matrix adjusted for clustering
g

Logistic Regression Model

lrm(formula = y ~ x, x = TRUE, y = TRUE)
Model Likelihood Ratio Test | Discrimination Indexes | Rank Discrim. Indexes
Obs 5000 LR χ2 93.67 R2 0.025 C 0.581
0 1908 d.f. 1 R2(1,5000) 0.018 Dxy 0.162
1 3092 Pr(>χ2) <0.0001 R2(1,3539.7) 0.026 γ 0.162
Cluster on id Brier 0.232 τa 0.076
Clusters 500
max |∂log L/∂β| 1×10⁻¹³
β S.E. Wald Z Pr(>|Z|)
Intercept  0.0087  0.0613 0.14 0.8876
x  0.9432  0.1056 8.93 <0.0001

We first fit an inappropriate Bayesian model in which the random effects are omitted.

# Note: loo defaults to FALSE when n > 1000 as in this case
# Need loo for compareBmods
breo <- blrm(y ~ x, loo=TRUE, file=rfile(breo))
breo

Bayesian Logistic Model

Dirichlet Priors With Concentration Parameter 0.541 for Intercepts

blrm(formula = y ~ x, loo = TRUE, file = rfile(breo))
Mixed Calibration/Discrimination Indexes | Discrimination Indexes | Rank Discrim. Indexes
Obs 5000 LOO log L -3279.34±18.97 g 0.326 [0.261, 0.387] C 0.581 [0.581, 0.581]
0 1908 LOO IC 6558.67±37.94 gp 0.076 [0.062, 0.091] Dxy 0.162 [0.162, 0.162]
1 3092 Effective p 1.97±0.02 EV 0.019 [0.012, 0.025]
Draws 4000 B 0.232 [0.232, 0.232] v 0.081 [0.05, 0.111]
Chains 4 vp 0.004 [0.003, 0.006]
Time 21.6s
p 1
Mode β Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
Intercept  0.0087  0.0096  0.0086  0.0547  -0.0962  0.1148  0.5660  1.00
x  0.9433  0.9436  0.9422  0.0957   0.7624  1.1312  1.0000  1.02

Now use a proper Bayesian random effects model. The prior distribution for the standard deviation \(\sigma_{\gamma}\) of the random effects (\(\gamma\)s) is assumed to be exponential when psigma=2, and we will use the default mean of 1.0.

bre <- blrm(y ~ x + cluster(id), psigma=2, loo=TRUE, file=rfile(bre))
bre

Bayesian Logistic Model

Dirichlet Priors With Concentration Parameter 0.541 for Intercepts

blrm(formula = y ~ x + cluster(id), psigma = 2, loo = TRUE, file = rfile(bre))
Mixed Calibration/Discrimination Indexes | Discrimination Indexes | Rank Discrim. Indexes
Obs 5000 LOO log L -3273.21±19.25 g 0.333 [0.267, 0.405] C 0.581 [0.581, 0.581]
0 1908 LOO IC 6546.43±38.5 gp 0.077 [0.062, 0.094] Dxy 0.162 [0.162, 0.162]
1 3092 Effective p 84.32±0.67 EV 0.019 [0.012, 0.027]
Draws 4000 B 0.232 [0.232, 0.232] v 0.084 [0.054, 0.123]
Chains 4 vp 0.005 [0.003, 0.007]
Time 38.1s
p 1
Cluster on id
Clusters 500
σγ 0.294 [0.1646, 0.4233]
Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
Intercept  0.0110  0.0114  0.0629  -0.1052  0.1402  0.5792  0.99
x  0.9581  0.9578  0.1087   0.7517  1.1712  1.0000  1.00
plot(bre)

Before delving more into the random effects model, let’s compare this new model with the previous model that erroneously omitted the random effects.

compareBmods(breo, bre)
Method: stacking
------
       weight
model1 0.070 
model2 0.930 

Roughly speaking, of the two models, the one with random effects has a probability of 0.93 of being the correct one. See rstan::loo and loo::loo.array for details.

Now let’s get into more details from the random effects model fit.

# Plot distribution of the 500 estimated random effects (posterior medians)
hist(bre$gammas, xlab='Estimated Random Effects', nclass=40)

Now generate similar data except with a bimodal random effects distribution. This will fool the model, which assumes a single normal distribution for the random effects, into estimating a wider variance, but will still result in estimated random effects that are somewhat realistic.

n <- 500
set.seed(3)
re <- c(rnorm(n/2, mean=-1.75), rnorm(n/2, mean=1.75)) * 0.25
cat('SD of real random effects:', round(sd(re), 4), '\n')
SD of real random effects: 0.5115 
X <- runif(n)   # baseline covariate, will be duplicated over repeats
m <- 10         # measurements per subject

id <- rep(1 : n, each = m)
x  <- X[id]
L <- x + re[id]   # actual logit
y <- ifelse(runif(n * m) <= plogis(L), 1, 0)
breb <- blrm(y ~ x + cluster(id), file=rfile(breb))
breb

Bayesian Logistic Model

Dirichlet Priors With Concentration Parameter 0.541 for Intercepts

blrm(formula = y ~ x + cluster(id), file = rfile(breb))
Mixed Calibration/Discrimination Indexes | Discrimination Indexes | Rank Discrim. Indexes
Obs 5000 B 0.233 [0.233, 0.234] g 0.339 [0.246, 0.435] C 0.578 [0.578, 0.578]
0 1928 gp 0.078 [0.055, 0.098] Dxy 0.156 [0.156, 0.156]
1 3072 EV 0.02 [0.009, 0.03]
Draws 4000 v 0.088 [0.038, 0.133]
Chains 4 vp 0.005 [0.002, 0.007]
Time 20.5s
p 1
Cluster on id
Clusters 500
σγ 0.5835 [0.4839, 0.6798]
Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
Intercept  -0.0116  -0.0133  0.0830  -0.1656  0.1619  0.4353  1.00
x   1.0763   1.0761  0.1489   0.7915  1.3678  1.0000  0.98
par(mfrow=c(2, 2))
hist(breb$gammas, xlab='Estimated Random Effects', nclass=40, main='')
hist(re,       xlab='Real Random Effects',      nclass=40, main='')
plot(re, breb$gammas, xlab='Real', ylab='Estimated')
abline(a=0, b=1)

3.3 Absorbing State in Mixed Effects Ordinal Regression

blrm is not designed to handle this situation but let’s see how it performs.

For an ordinal outcome y=0, 1, 2, 3, 4, 5 suppose that y=5 represents an absorbing state such as death. Suppose that subjects are observed for 10 days, and if death occurs within those days, all later values of y for that subject are set to 5. Generate repeated outcomes under a \(N(0, 0.25^2)\) random effects model with two treatments: a and b. The b:a odds ratio is 0.65 and the cell probabilities are 0.3, 0.3, 0.1, 0.1, 0.1, 0.1 corresponding to y=0-5, when the random effect is zero.

# Generate data as if there is no absorbing state
n <- 1000
set.seed(6)
pa <- c(.3, .3, .1, .1, .1, .1)     # P(Y=0-5 | tx=a, random effect=0)
pb <- pomodm(p=pa, odds.ratio=0.65) # P(Y=0-5 | tx=b, re=0)   # Hmisc
round(pb, 3)
[1] 0.397 0.300 0.084 0.078 0.072 0.067
re <- rnorm(n) * 0.25
tx <- c(rep('a', n/2), rep('b', n/2))   # will be duplicated over repeats
m <- 10         # measurements per subject

id   <- rep(1 : n, each = m)
time <- rep(1 : m, n)
or   <- exp(log(0.65) * (tx[id] == 'b') + re[id])
y   <- integer(n * m)
for(j in 1 : (n * m)) {
  p    <- pomodm(p=pa, odds.ratio=or[j])
  y[j] <- sample(0:5, 1, p, replace=TRUE)
}
Tx <- tx[id]
table(Tx, y)
   y
Tx     0    1    2    3    4    5
  a 1517 1448  506  516  491  522
  b 2008 1524  424  379  320  345

The first Bayesian proportional odds model fitted is the one that exactly matches the data generation model, as we have not yet imposed an absorbing state, so that outcomes with y < 5 can appear after a y=5 outcome for the subject.

bst <- blrm(y ~ Tx + cluster(id), file=rfile(bst))
bst

Bayesian Proportional Odds Ordinal Logistic Model

Dirichlet Priors With Concentration Parameter 0.345 for Intercepts

blrm(formula = y ~ Tx + cluster(id), file = rfile(bst))

Frequencies of Responses

   0    1    2    3    4    5 
3525 2972  930  895  811  867 
Mixed Calibration/Discrimination Indexes | Discrimination Indexes | Rank Discrim. Indexes
Obs 10000 B 0.226 [0.226, 0.226] g 0.239 [0.199, 0.279] C 0.594 [0.594, 0.594]
Draws 4000 gp 0.054 [0.045, 0.063] Dxy 0.189 [0.189, 0.189]
Chains 4 EV 0.013 [0.009, 0.017]
Time 69.5s v 0.058 [0.039, 0.077]
p 1 vp 0.003 [0.002, 0.004]
Cluster on id
Clusters 1000
σγ 0.307 [0.2386, 0.3711]
Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
y≥1   0.8700   0.8700  0.0320   0.8088   0.9319  1.0000  1.01
y≥2  -0.3991  -0.3996  0.0304  -0.4559  -0.3369  0.0000  0.98
y≥3  -0.8562  -0.8566  0.0313  -0.9164  -0.7955  0.0000  1.04
y≥4  -1.4114  -1.4114  0.0337  -1.4750  -1.3427  0.0000  1.00
y≥5  -2.1776  -2.1775  0.0413  -2.2571  -2.0974  0.0000  1.04
Tx=b  -0.4768  -0.4762  0.0411  -0.5539  -0.3954  0.0000  0.96
stanDx(bst)
Iterations: 2000 on each of 4 chains, with 4000 posterior distribution samples saved

For each parameter, n_eff is a crude measure of effective sample size
and Rhat is the potential scale reduction factor on split chains
(at convergence, Rhat=1)
       n_eff  Rhat
y>=1    4000 1.000
y>=2    3392 1.000
y>=3    3128 1.000
y>=4    3224 1.000
y>=5    4474 1.000
Tx=b    3970 1.000
sigmag   918 1.005
stanDxplot(bst, 'ALL')

If time were added to the above model, you would see that its regression coefficient is very small (\(\hat{\beta}=0.009\) in this case), in alignment with the data generating model.
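That check might look like the following sketch (the fit name bstt is arbitrary and the model is not run here):

bstt <- blrm(y ~ Tx + time + cluster(id))
coef(bstt)['time']   # about 0.009 in this simulation, per the statement above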

Now assume that state y=5 is an absorbing state. Change observations after the first y=5 within subject to also have y=5.

require(data.table)
g <- function(x) if(length(x)) min(x, na.rm=TRUE) else 99L
u <- data.table(id, time, Tx, y, key='id')
# Add variable 'first' which is time of first y=5 for subject (99 if never)
w <- u[, .(first=g(time[y == 5])), by=id]
d <- u[w]

# Show distribution of first time of y=5
table(d[time == 1, first])

  1   2   3   4   5   6   7   8   9  10  99 
 82  70  66  70  50  55  39  51  43  43 431 
# Set all observations after the first y=5 to also have y=5
z <- d
z[time > first, y:=5]
table(u$y); table(d$y); table(z$y)

   0    1    2    3    4    5 
3525 2972  930  895  811  867 

   0    1    2    3    4    5 
2628 2072  652  609  558 3481 

   0    1    2    3    4    5 
2628 2072  652  609  558 3481 
bcf <- blrm(y ~ Tx + cluster(id), data=z, file=rfile(bcf))
bcf

Bayesian Proportional Odds Ordinal Logistic Model

Dirichlet Priors With Concentration Parameter 0.345 for Intercepts

blrm(formula = y ~ Tx + cluster(id), data = z, file = rfile(bcf))

Frequencies of Responses

   0    1    2    3    4    5 
2628 2072  652  609  558 3481 
Mixed Calibration/Discrimination Indexes | Discrimination Indexes | Rank Discrim. Indexes
Obs 10000 B 0.252 [0.247, 0.256] g 0.376 [0.255, 0.473] C 0.608 [0.608, 0.608]
Draws 4000 gp 0.088 [0.062, 0.112] Dxy 0.216 [0.216, 0.216]
Chains 4 EV 0.033 [0.015, 0.051]
Time 56.5s v 0.145 [0.065, 0.224]
p 1 vp 0.008 [0.004, 0.012]
Cluster on id
Clusters 1000
σγ 1.7546 [1.6425, 1.8528]
Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
y≥1   2.0829   2.0813  0.0859   1.9131   2.2447  1.0000  1.02
y≥2   0.8515   0.8510  0.0840   0.6851   1.0092  1.0000  1.02
y≥3   0.4561   0.4549  0.0833   0.2986   0.6200  1.0000  1.04
y≥4   0.0501   0.0491  0.0830  -0.1102   0.2090  0.7240  1.06
y≥5  -0.3851  -0.3855  0.0826  -0.5390  -0.2214  0.0000  1.05
Tx=b  -0.7563  -0.7569  0.1199  -0.9961  -0.5266  0.0000  1.00
hist(bcf$gammas, xlab='Estimated Random Effects', nclass=40, main='')

The regression coefficient for treatment is too large in absolute value (the true value is log(0.65) = -0.431). The standard deviation of the random effects is also too large (the true value is 0.25), reflecting the increased within-subject dependence of outcomes created by the duplication of y=5 records. However, the data being analyzed were not formally generated by a model with a treatment odds ratio of 0.65. Repeated correlated ordinal outcomes were generated with that odds ratio and with a random effect standard deviation of 0.25, but the outcomes were then overridden: after the first y=5 within a subject, all later records were set to y=5.

The histogram of estimated subject random effects (posterior medians) shows some bimodality with a heavy right tail due to the y=5 absorbing state. Let's also plot the random effects against the time of death (99 if the subject did not die, recoded here to 15).

t5 <- subset(z, time == 1)$first
t5 <- ifelse(t5 == 99, 15, t5)
plot(t5, bcf$gammas, xlab='Time of y=5', ylab='Random Effect')

What happens when time is added to this model?

bcft <- blrm(y ~ Tx + time + cluster(id), data=z, file=rfile(bcft))
bcft

Bayesian Proportional Odds Ordinal Logistic Model

Dirichlet Priors With Concentration Parameter 0.345 for Intercepts

blrm(formula = y ~ Tx + time + cluster(id), data = z, file = rfile(bcft))

Frequencies of Responses

   0    1    2    3    4    5 
2628 2072  652  609  558 3481 
Mixed Calibration/Discrimination Indexes | Discrimination Indexes | Rank Discrim. Indexes
Obs 10000 B 0.244 [0.24, 0.249] g 0.986 [0.9, 1.067] C 0.636 [0.634, 0.636]
Draws 4000 gp 0.208 [0.192, 0.221] Dxy 0.271 [0.269, 0.272]
Chains 4 EV 0.14 [0.12, 0.158]
Time 74.6s v 0.741 [0.614, 0.872]
p 2 vp 0.033 [0.028, 0.037]
Cluster on id
Clusters 1000
σγ 1.9426 [1.8385, 2.0623]
Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
y≥1   0.8522   0.8528  0.0999   0.6588   1.0402  1.0000  0.95
y≥2  -0.4566  -0.4557  0.0997  -0.6434  -0.2577  0.0000  0.94
y≥3  -0.8905  -0.8897  0.1006  -1.0900  -0.6994  0.0000  0.93
y≥4  -1.3515  -1.3498  0.1018  -1.5565  -1.1614  0.0000  0.91
y≥5  -1.8733  -1.8729  0.1037  -2.0891  -1.6856  0.0000  0.95
Tx=b  -0.8333  -0.8354  0.1296  -1.0745  -0.5629  0.0000  1.03
time   0.2614   0.2614  0.0076   0.2471   0.2763  1.0000  0.98

We see that the coefficient for time is substantial, but the treatment effect and the random effects standard deviation are still much too large in absolute value.

Look at random effects again.

hist(bcft$gammas, xlab='Estimated Random Effects', nclass=40, main='')
plot(t5, bcft$gammas, xlab='Time of y=5', ylab='Random Effect')

Next we truncate patient records so that y=5 is not carried forward.

zt <- z[time <= first]
bnc <- blrm(y ~ Tx + cluster(id), data=zt, file=rfile(bnc))
bnc

Bayesian Proportional Odds Ordinal Logistic Model

Dirichlet Priors With Concentration Parameter 0.345 for Intercepts

blrm(formula = y ~ Tx + cluster(id), data = zt, file = rfile(bnc))

Frequencies of Responses

   0    1    2    3    4    5 
2628 2072  652  609  558  569 
Mixed Calibration/Discrimination Indexes | Discrimination Indexes | Rank Discrim. Indexes
Obs 7088 B 0.231 [0.231, 0.231] g 0.239 [0.193, 0.278] C 0.592 [0.592, 0.592]
Draws 4000 gp 0.054 [0.045, 0.064] Dxy 0.185 [0.185, 0.185]
Chains 4 EV 0.013 [0.008, 0.018]
Time 48.3s v 0.058 [0.037, 0.078]
p 1 vp 0.003 [0.002, 0.004]
Cluster on id
Clusters 1000
σγ 0.3093 [0.2113, 0.4056]
Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
y≥1   0.8315   0.8306  0.0405   0.7510   0.9087  1.0000  1.05
y≥2  -0.4170  -0.4168  0.0384  -0.4932  -0.3449  0.0000  1.00
y≥3  -0.8797  -0.8797  0.0393  -0.9585  -0.8081  0.0000  1.02
y≥4  -1.4323  -1.4324  0.0425  -1.5181  -1.3526  0.0000  0.94
y≥5  -2.2179  -2.2171  0.0511  -2.3176  -2.1161  0.0000  0.95
Tx=b  -0.4758  -0.4758  0.0489  -0.5711  -0.3835  0.0000  1.01

Finally, add time to the above model.

bnct <- blrm(y ~ Tx + time + cluster(id), data=zt, file=rfile(bnct))
bnct

Bayesian Proportional Odds Ordinal Logistic Model

Dirichlet Priors With Concentration Parameter 0.345 for Intercepts

blrm(formula = y ~ Tx + time + cluster(id), data = zt, file = rfile(bnct))

Frequencies of Responses

   0    1    2    3    4    5 
2628 2072  652  609  558  569 
Mixed Calibration/Discrimination Indexes | Discrimination Indexes | Rank Discrim. Indexes
Obs 7088 B 0.231 [0.231, 0.232] g 0.252 [0.207, 0.304] C 0.548 [0.547, 0.55]
Draws 4000 gp 0.058 [0.047, 0.069] Dxy 0.096 [0.095, 0.101]
Chains 4 EV 0.013 [0.009, 0.019]
Time 49.3s v 0.059 [0.038, 0.085]
p 2 vp 0.003 [0.002, 0.004]
Cluster on id
Clusters 1000
σγ 0.3212 [0.2235, 0.4201]
Mean β Median β S.E. Lower Upper Pr(β>0) Symmetry
y≥1   0.8162   0.8171  0.0521   0.7141   0.9195  1.0000  1.02
y≥2  -0.4338  -0.4328  0.0513  -0.5330  -0.3318  0.0000  1.01
y≥3  -0.8972  -0.8972  0.0528  -0.9974  -0.7933  0.0000  0.99
y≥4  -1.4515  -1.4519  0.0550  -1.5628  -1.3486  0.0000  0.98
y≥5  -2.2379  -2.2380  0.0621  -2.3623  -2.1199  0.0000  1.00
Tx=b  -0.4804  -0.4807  0.0495  -0.5724  -0.3772  0.0000  0.97
time   0.0047   0.0047  0.0080  -0.0102   0.0204  0.7208  0.99

The time effect is very weak, and adding it did not change the already-accurate (with respect to the first part of the data generating mechanism) treatment effect posterior mean.

4 Censored Data

The blrm function handles left-, right-, and interval-censored ordinal categorical or continuous Y. This opens up numerous possibilities, for example

  • one can analyze the usual right-censored time-to-event outcome but assuming, for example, proportional odds instead of proportional hazards
  • there can be more than one lower limit of detectability for a lab measurement
  • in a longitudinal study certain ranges of the outcome scale may not be assessed on certain days

As an example of the third situation, suppose that Y is defined as follows:

Level of Y   Meaning
0            best quality of life
1            very good QOL
2            fair QOL
3            poor QOL
4            myocardial infarction
5            stroke
6            death

Suppose that Y were assessed weekly and that the clinical events of MI, stroke, or death are always known when they occur. But suppose that QOL is only assessed once per month. Instead of dealing with complex missing data methods, consider Y to be partially assessed by the use of left censoring. On weeks of non-assessment of QOL consider Y to just be known to be < 4 when the participant is event-free.

blrm uses the Ocens function (“ordinal censoring”) to handle censored Y. The notation is Ocens(a, b) where a is the lowest value that Y might be, and b is the highest value it might be for a certain observation. If a=b, the observation is uncensored. If in a given week a patient has a clinical event, that event overrides any level of QOL and will result in Y=5, for example, for stroke. For a week for which a clinical event does not occur, we know that Y < 4. When QOL is assessed, we know which of Y=0,1,2,3 pertains. When QOL is not assessed, Y = Ocens(0, 4). It should be clear that if QOL were not assessed for any participant, the dependent variable is really a 4-level outcome (3 clinical outcomes, with Y=0 denoting that no bad outcome occurred for the participant).
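Here is a minimal sketch of the notation, with hypothetical values for three observations: one uncensored at 2, one interval-censored in [3,4], and one right-censored at \(y \geq 5\) when 6 is the highest level:

lo <- c(2, 3, 5)    # lowest value Y might be for each observation
hi <- c(2, 4, 6)    # highest value Y might be; lo == hi means uncensored
Ocens(lo, hi)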

In full-likelihood models such as our extended PO model, censored data are easily handled. One just has to compute the contribution to the log-likelihood for each observation from the information it provides. An observation interval-censored with Y \(\in [3,4]\) has a likelihood equal to the model's probability that Y is between 3 and 4 inclusive. For a cumulative probability model this is easily derived from \(P(Y \geq 3) - P(Y > 4)\), which is a difference in expits.
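As a sketch of that computation for a fitted PO model, where alpha denotes the vector of intercepts and xb the linear predictor (both hypothetical placeholders):

# P(3 <= Y <= 4 | X) = P(Y >= 3) - P(Y > 4) = P(Y >= 3) - P(Y >= 5) for integer Y
lik34 <- plogis(alpha['y>=3'] + xb) - plogis(alpha['y>=5'] + xb)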

Here is an example where we simulate a dataset with a 6-level ordinal response variable with no censored values, and fit the PO model to it. Then the dataset is duplicated and all the observations with y=1 or 2 are left censored at <= 2 (which is the same as interval censoring on [1,2]), all those with y=3 or 4 are interval censored in [3,4], and all those with y=5 or 6 are right censored at y >= 5. The model is re-fitted to see if the posterior mean regression coefficients remain relatively unchanged.

set.seed(1)
n <- 500
x <- rnorm(n)
y <- as.integer(cut2(x + rnorm(n), g=6))
f <- blrm(y ~ x, file=rfile(bnocens))
m <- length(x)
y2 <- y[1 : m]
a <- b <- y2
# Left censor obs with y <= 2
i <- y2 <= 2
a[i] <- 1
b[i] <- 2
# Interval censor obs with y = 3 or 4
i <- y2 == 3 | y2 == 4
a[i] <- 3
b[i] <- 4
# Right censor obs with y = 5 or 6
i <- y2 >= 5
a[i] <- 5
b[i] <- 6
table(y2, paste0('[', a, ',', b, ']'))
   
y2  [1,2] [3,4] [5,6]
  1    84     0     0
  2    83     0     0
  3     0    83     0
  4     0    84     0
  5     0     0    83
  6     0     0    83
Y <- Ocens(c(y, a), c(y, b))
x <- c(x, x[1 : m])
g <- blrm(Y ~ x, file=rfile(bcens))
rbind('No cens:mode'=coef(f, 'mode'),
      'No cens:mean'=coef(f, 'mean'),
      'Cens:mode'   =coef(g, 'mode'),
            'Cens:mean'   =coef(g, 'mean'))
                 y>=2      y>=3         y>=4      y>=5      y>=6        x
No cens:mode 2.199356 0.9533115 -0.012488917 -1.015303 -2.288427 1.532671
No cens:mean 2.207585 0.9556180 -0.011967579 -1.017153 -2.294094 1.538793
Cens:mode    2.231574 0.9764212 -0.006143320 -1.024424 -2.313397 1.580871
Cens:mean    2.236795 0.9773123 -0.004673949 -1.025635 -2.317207 1.584381

5 Multiple Imputation

When possible, full joint Bayesian modeling of (possibly missing) covariates and the outcome variable should be used to get exact inference in the presence of missing covariate values. Another good approach is to use multiple imputation, stacking the posterior draws obtained from running the Bayesian model on each completed dataset. When doing posterior inference on the stacked posterior draws, the uncertainty from multiple imputation is fully taken into account, and so is any resulting change in the shape of the posterior distribution. Frequentist inference requires complex adjustments, because multiple imputation alters the sampling distribution of model parameter estimates. For example, regression coefficient estimates that have a normal distribution with complete data may have a \(t\)-like distribution after multiple imputation.

blrm works with the stackMI function to make posterior stacking easier. stackMI works with Hmisc::aregImpute and the mice package, and is the analog of Hmisc::fit.mult.impute but is much simpler, thanks to the Bayesian paradigm.

Here is an example adapted from the aregImpute help file. The aregImpute result is stored so that the random multiple imputation process is not re-run unnecessarily. That in turn allows the Stan code to be skipped until the underlying data or the code in the chunks changes.

set.seed(2)
n <- 1000
x1 <- factor(sample(c('a','b','c'), n, TRUE))
x2 <- (x1=='b') + 3*(x1=='c') + rnorm(n,0,2)
x3 <- rnorm(n)
xbeta <- 0.35 * (x2 + 1 * (x1 == 'c') + 0.2 * x3)
y  <- ifelse(runif(n) <= plogis(xbeta), 1, 0)
x1[1:250]   <- NA
x2[251:350] <- NA
d <- data.frame(x1,x2,x3,y, stringsAsFactors=TRUE)

mithere <- file.exists('mi.rds')
mi <- if(mithere) readRDS('mi.rds') else
  aregImpute(~ y + x1 + x2 + x3, nk=3, data=d, B=10, n.impute=5, pr=FALSE)
if(! mithere) saveRDS(mi, 'mi.rds', compress='xz')
mi

Multiple Imputation using Bootstrap and PMM

aregImpute(formula = ~y + x1 + x2 + x3, data = d, n.impute = 5, 
    nk = 3, pr = FALSE, B = 10)

n: 1000     p: 4    Imputations: 5      nk: 3 

Number of NAs:
  y  x1  x2  x3 
  0 250 100   0 

   type d.f.
y     l    1
x1    c    2
x2    s    1
x3    s    2

Transformation of Target Variables Forced to be Linear

R-squares for Predicting Non-Missing Values for Each Variable
Using Last Imputations of Predictors
   x1    x2 
0.283 0.376 
# Note: The following model will be re-run every time aregImpute runs
# because imputations have randomness
bmi <- stackMI(y ~ x1 + x2 + x3, blrm, mi, data=d, refresh=50, file=rfile(bmi))
stanDx(bmi)
Diagnostics for each of 5 imputations

Iterations: 2000 on each of 4 chains, with 4000 posterior distribution samples saved

For each parameter, n_eff is a crude measure of effective sample size
and Rhat is the potential scale reduction factor on split chains
(at convergence, Rhat=1)
                        n_eff  Rhat
Imputation 1: Intercept  4364 1.000
Imputation 1: x1=b       4063 1.001
Imputation 1: x1=c       4438 1.000
Imputation 1: x2         4734 0.999
Imputation 1: x3         5099 0.999
Imputation 2: Intercept  4785 1.000
Imputation 2: x1=b       4413 1.000
Imputation 2: x1=c       4205 1.000
Imputation 2: x2         4748 1.001
Imputation 2: x3         4562 1.000
Imputation 3: Intercept  3738 1.000
Imputation 3: x1=b       4741 1.000
Imputation 3: x1=c       4677 1.000
Imputation 3: x2         4980 0.999
Imputation 3: x3         4869 1.000
Imputation 4: Intercept  4448 1.000
Imputation 4: x1=b       4223 1.000
Imputation 4: x1=c       4586 1.000
Imputation 4: x2         4620 0.999
Imputation 4: x3         4561 1.001
Imputation 5: Intercept  4822 1.000
Imputation 5: x1=b       4614 1.000
Imputation 5: x1=c       4549 1.000
Imputation 5: x2         4936 0.999
Imputation 5: x3         4884 1.000
stanDxplot(bmi, which='x1=c', rev=TRUE, stripsize=5)

plot(bmi)

One can see that the 5 individual posterior distributions for the frequently missing variables x1 and x2 vary a lot, but not so for the never missing variable x3.

Computations done on the bmi object will automatically use the full stacked posterior distribution.
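For instance, a contrast computed on bmi is based on all of the stacked draws (a sketch; output not shown, and the x2/x3 settings are arbitrary since the model is additive):

# Effect of x1='c' vs. x1='a' using the 5 x 4000 stacked posterior draws
contrast(bmi, list(x1='c', x2=0, x3=0), list(x1='a', x2=0, x3=0))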

6 Scaling and Interpretation of Priors on \(\beta\)s

blrm orthonormalizes the data design matrix using the QR decomposition to greatly improve posterior distribution sampling in the case of collinearities among predictors (especially among spline basis functions). This created complexities for rmsb prior to version 1.0-0. With version 1.0-0 priors are specified only on contrasts, and these contrasts are automatically translated to QR space, so users do not need to account for QR.

One exception is the non-proportional odds parameters (the \(\tau\)s): these must be held out of the QR transformation using the keepsep argument, and their prior standard deviations are specified directly. In the future this will be handled through contrasts also.
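A hypothetical sketch using the nausea model from above (see the blrm help file for the exact keepsep semantics, which this sketch assumes):

# Keep the tx design columns out of the QR orthonormalization and give the
# non-PO (tau) parameter a prior SD of 100; the fit name fsep is arbitrary
fsep <- blrm(y ~ tx, ~ tx, cppo=g, keepsep='tx', priorsdppo=100, data=d)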

7 Speed of blrm For Large Numbers of Y Levels

When there is a large number of intercepts in the model, the speed of blrm will decrease. What about the speed of using blrm just to get (potentially penalized) maximum likelihood estimates? Let’s try fitting a progressively more continuous dependent variable.

set.seed(1)
n <- 1000
x <- rnorm(n)
y <- x + rnorm(n)
for(g in c(2, 4, 8, 16, 32, 64, 128, 256)) {
  cat('\n', g, 'distinct values of y\n')
  yg <- cut2(y, g=g)
  print(system.time(f <- blrm(yg ~ x, method='optimizing')))
}

 2 distinct values of y
   user  system elapsed 
  0.009   0.000   0.009 

 4 distinct values of y
   user  system elapsed 
  0.008   0.000   0.009 

 8 distinct values of y
   user  system elapsed 
  0.009   0.000   0.010 

 16 distinct values of y
   user  system elapsed 
  0.009   0.000   0.009 

 32 distinct values of y
   user  system elapsed 
  0.009   0.000   0.010 

 64 distinct values of y
   user  system elapsed 
  0.009   0.000   0.009 

 128 distinct values of y
   user  system elapsed 
  0.009   0.000   0.010 

 256 distinct values of y
   user  system elapsed 
  0.011   0.000   0.012 

This is impressive. For g=256, compare with the execution time of orm's Newton-Raphson method, which makes optimum use of sparse matrices. Also compare the coefficients. When sampling is done, the default Dirichlet distribution concentration parameter for the intercepts is selected to make the posterior means agree with maximum likelihood estimates, sacrificing some performance of posterior modes. When method='optimizing', a concentration parameter of 1.0 is always used for the Dirichlet prior on the intercepts, which seems to make the optimization results agree more closely with maximum likelihood estimates. This optimization is used to get posterior modes when random effects are not present.

system.time(g <- orm(yg ~ x))
   user  system elapsed 
  0.038   0.001   0.039 
plot(coef(g), coef(f), xlab='Coefficients from orm',
     ylab='Coefficients from blrm')
abline(a=0, b=1, col=gray(0.8))
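A quick numeric check of agreement between the two sets of coefficients (a sketch):

# Maximum absolute difference between orm MLEs and blrm posterior modes
max(abs(coef(g) - coef(f)))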

See how long it takes to do posterior sampling with Stan when there are 16, 64, or 128 levels of y.

for(g in c(16, 64, 128)) {
  cat('\n', g, 'distinct values of y\n')
  yg <- cut2(y, g=g)
  print(system.time(h <- blrm(yg ~ x)))
}

 16 distinct values of y
   user  system elapsed 
 14.735   2.025   8.451 

 64 distinct values of y
   user  system elapsed 
 17.264   2.051   9.429 

 128 distinct values of y
   user  system elapsed 
 24.483   2.108  11.260 

8 Computing Environment

 R version 4.3.2 (2023-10-31)
 Platform: aarch64-apple-darwin20 (64-bit)
 Running under: macOS Sonoma 14.3.1
 
 Matrix products: default
 BLAS:   /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRblas.0.dylib 
 LAPACK: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRlapack.dylib;  LAPACK version 3.11.0
 
 attached base packages:
 [1] splines   stats4    stats     graphics  grDevices utils     datasets 
 [8] methods   base     
 
 other attached packages:
 [1] data.table_1.15.2 ordinal_2023.12-4 VGAM_1.1-10       rmsb_1.1-0       
 [5] rms_6.8-0         Hmisc_5.1-2      
 
To cite R in publications use:

R Core Team (2023). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.

To cite the Hmisc package in publications use:

Harrell Jr F (2024). Hmisc: Harrell Miscellaneous. R package version 5.1-2, https://hbiostat.org/R/Hmisc/.

To cite the rms package in publications use:

Harrell Jr FE (2024). rms: Regression Modeling Strategies. R package version 6.8-0, https://github.com/harrelfe/rms, https://hbiostat.org/R/rms/.

To cite the rmsb package in publications use:

Harrell F (2024). rmsb: Bayesian Regression Modeling Strategies. R package version 1.1-0, https://hbiostat.org/R/rmsb/.

To cite the VGAM package in publications use:

Yee TW (2015). Vector Generalized Linear and Additive Models: With an Implementation in R. Springer, New York, USA.

To cite the data.table package in publications use:

Barrett T, Dowle M, Srinivasan A, Gorecki J, Chirico M, Hocking T (2024). data.table: Extension of 'data.frame'. R package version 1.15.2, https://CRAN.R-project.org/package=data.table.

9 References

Peterson, B., & Harrell, F. E. (1990). Partial proportional odds models for ordinal response variables. Appl Stat, 39, 205–217.