class: center, middle, inverse, title-slide

# Overview of Maximum Likelihood Estimation

### 2022-02-25

---

# Review

* Maximum likelihood estimation is a technique for estimating parameters and drawing statistical inferences

`$$L(B) = \prod_{i = 1}^n f_i(Y_i, B)$$`

* The first derivative of the log-likelihood function is the **score function `\(U(\theta)\)`**

* The negative of the second derivative of the log-likelihood function is the **Fisher information**

---

* The information function is the expected value of the negative curvature in the log-likelihood: if the log-likelihood function has a distinct peak, we can easily discriminate between a good parameter estimate and a bad one.

<img src="MLELecture_files/figure-html/unnamed-chunk-1-1.png" width="864" />

---

# Review

.pull-left[

Likelihood Ratio Test (LR):

`$$LR = -2\log(\text{L at } H_0/\text{L at MLE})$$`

Wald Test (W):

`$$W = \frac{(\hat{B} - B_0)^2}{Var(\hat{B})}$$`

Score Test (S):

`$$S = \frac{U(B_0)^2}{I(B_0)}$$`

]

.pull-right[

<img src="liktest.png" width="150%" style="display: block; margin: auto;" />

]

---

# Review

* The decision on which test to use is based on statistical and computational properties.

* From the **statistical point of view**: LR is the best statistic, followed by S and W. In logistic regression problems, W is sensitive to problems in the estimated variance-covariance matrix of the full model

* From the **computational point of view**: estimation of LR and W requires estimating all the unknown parameters. Additionally, LR requires fitting two models.

---

# Confidence Intervals

**What test should form the basis for the confidence interval?**

* The Wald test is the most frequently used.
  + The interval based on the Wald test is given by `\(b_i \pm z_{1-\alpha/2}s\)`, where `\(s\)` is the estimated standard error
  + The Wald statistic may perform poorly because of the problems W has when the estimated variance-covariance matrix is unreliable
  + Wald-based statistics are convenient for deriving confidence intervals for linear or more complex combinations of the model's parameters

* LR- and score-based confidence intervals also exist. However, they are computationally more intensive than the confidence interval based on the Wald statistic

---

# Bootstrap Confidence Regions

* Confidence intervals for functions of the vector of parameters `\(B\)` can be computed using **bootstrap percentile** confidence limits (a minimal sketch follows on the next slide):
  + draw samples with replacement from the original dataset; from each sample compute the MLE of `\(B\)`, denoted `\(b\)`
  + compute the quantity of interest `\(g(b)\)`
  + sort the `\(g(b)\)` values and compute the desired percentiles

* The method is suitable for obtaining pointwise confidence bands for nonlinear functions

* Other, more complex bootstrap schemes exist
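---

The chunk below is a minimal sketch of the percentile algorithm on the previous slide (not from the original slides): it refits a simple logistic model on resampled rows, stores `\(g(b)\)`, and reads off the percentiles. The simulated data, the function `g`, and all object names are illustrative only.

```r
# Hypothetical data: one predictor and a binary outcome
set.seed(1)
n <- 200
x <- rnorm(n)
y <- rbinom(n, 1, plogis(x / 2))
d <- data.frame(x, y)

g <- function(b) exp(b["x"])           # quantity of interest: odds ratio for x

boots <- replicate(1000, {
  i <- sample(nrow(d), replace = TRUE) # sample rows with replacement
  b <- coef(glm(y ~ x, family = binomial, data = d[i, ]))
  g(b)                                 # g(b) for this bootstrap sample
})

quantile(boots, c(0.025, 0.975))       # 95% bootstrap percentile limits
```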
---

* Carpenter and Bithell wrote an article about bootstrap confidence intervals describing when we should use them, which one we should pick, and how the bootstrap confidence interval should be calculated

* The picture below is taken directly from the paper

<img src="bootstrapEx.png" width="80%" style="display: block; margin: auto;" />

---


```r
library(rms)
set.seed(15)
n <- 200
x1 <- rnorm(n)
logit <- x1/2
y <- ifelse(runif(n) <= plogis(logit), 1, 0)
dd <- datadist(x1); options(datadist = "dd")
f <- lrm(y ~ pol(x1, 2), x = TRUE, y = TRUE)
f
```

```
## Logistic Regression Model
## 
## lrm(formula = y ~ pol(x1, 2), x = TRUE, y = TRUE)
## 
##                       Model Likelihood     Discrimination    Rank Discrim.
##                             Ratio Test            Indexes          Indexes
## Obs           200    LR chi2     16.37     R2       0.105    C       0.642
##  0             97    d.f.            2     g        0.680    Dxy     0.285
##  1            103    Pr(> chi2) 0.0003     gr       1.973    gamma   0.285
## max |deriv| 3e-09                          gp       0.156    tau-a   0.143
##                                            Brier    0.231
## 
##           Coef    S.E.   Wald Z Pr(>|Z|)
## Intercept -0.0842 0.1823 -0.46  0.6441
## x1         0.5902 0.1580  3.74  0.0002
## x1^2       0.1557 0.1136  1.37  0.1708
## 
```

---


```r
X <- cbind(Intercept = 1, predict(f, data.frame(x1 = c(1,5)), type = "x"))
Xdif <- X[2,,drop=FALSE] - X[1,,drop=FALSE]
Xdif
```

```
##   Intercept pol(x1, 2)x1 pol(x1, 2)x1^2
## 2         0            4             24
```

```r
b <- bootcov(f, B = 1000)
boot.log.odds.ratio <- b$boot.Coef %*% t(Xdif)
sd(boot.log.odds.ratio)
```

```
## [1] 2.767793
```

```r
# summary() uses the bootstrap covariance matrix
summary(b, x1 = c(1,5))[1, "S.E."]
```

```
## [1] 2.767793
```

---


```r
print(contrast(b, list(x1 = 5), list(x1 = 1), fun = exp))
```

```
##    Contrast     S.E.    Lower    Upper   Z Pr(>|z|)
## 11  6.09671 2.767793 1.137882 12.13644 2.2   0.0276
## 
## Confidence intervals are 0.95 bootstrap nonparametric percentile intervals
```

```r
hist(boot.log.odds.ratio, nclass = 100, xlab = "log(OR)",
     main = "Distribution of 1000 bootstrap x=1:5 log odds ratio")
```

<img src="MLELecture_files/figure-html/unnamed-chunk-6-1.png" width="864" />

---


```r
x1s <- seq(0, 5, length = 100)
pwald    <- Predict(f, x1 = x1s)
psand    <- Predict(robcov(f), x1 = x1s)
pbootcov <- Predict(b, x1 = x1s, usebootcoef = FALSE)
pbootnp  <- Predict(b, x1 = x1s)
pbootbca <- Predict(b, x1 = x1s, boot.type = "bca")
pbootbas <- Predict(b, x1 = x1s, boot.type = "basic")
psimult  <- Predict(b, x1 = x1s, conf.type = "simultaneous")
```

<img src="bootci.png" width="80%" style="display: block; margin: auto;" />

---

# AIC & BIC

* Suppose we have data from one sample and we develop two models. The -2 log-likelihoods for models 1 and 2 are `\(L_1\)` and `\(L_2\)`

* We observe `\(L_1 < L_2\)`.

* Which of the two models is best?

--

* Model 1 may provide a better fit to the data, but it might require a larger number of parameters

* If model 1 is overfitting, it may perform worse on a new dataset

---

* AIC would choose the model by comparing `\(L_1 + 2p_1\)` with `\(L_2 + 2p_2\)` and selecting the model with the lowest value

* Similar to AIC, BIC would select a model by accounting for both the likelihood and the number of parameters

* BIC would choose the model by comparing `\(L_1 + p_1\log n\)` with `\(L_2 + p_2\log n\)` and selecting the model with the lowest value

* Several authors have studied AIC, BIC and other likelihood penalties. Some highlights:
  + AIC has a _"lower probability of correct model selection"_ in linear regression settings (Zheng and Loh)
  + *"Our experience with large dataset in sociology is that the AIC selects models that are too big even when the sample size is large, including effects that are counterintuitive or not borne out by subsequent research"* (Kass and Raftery)
  + There are cases where AIC yields consistent model selection but BIC does not (Kass and Raftery)

* The corrected AIC improves the performance of AIC in small samples (a small illustration follows on the next slide):

`$$AIC_c = AIC + \frac{2p(p+1)}{n-p-1}$$`
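---

As a rough illustration (not part of the original slides), the chunk below compares two candidate logistic models with AIC, BIC, and the small-sample corrected `\(AIC_c\)` from the formula above. The simulated data and the `aicc()` helper are hypothetical.

```r
set.seed(2)
n  <- 60
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- rbinom(n, 1, plogis(x1))

m1 <- glm(y ~ x1,      family = binomial)   # smaller model
m2 <- glm(y ~ x1 + x2, family = binomial)   # larger model

# Corrected AIC: AIC + 2p(p + 1) / (n - p - 1)
aicc <- function(fit) {
  p <- length(coef(fit))
  AIC(fit) + 2 * p * (p + 1) / (nobs(fit) - p - 1)
}

# Negative differences favour m1, positive differences favour m2
c(AIC  = AIC(m1)  - AIC(m2),
  BIC  = BIC(m1)  - BIC(m2),
  AICc = aicc(m1) - aicc(m2))
```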
---

# Testing if model `\(M_1\)` is better than model `\(M_2\)`

* To test whether model `\(M_1\)` is better than model `\(M_2\)` we could:
  + combine `\(M_1\)` and `\(M_2\)` in a single model `\(M_1\)` + `\(M_2\)`
  + test whether `\(M_1\)` adds predictive information to `\(M_2\)` `\((H_0: M_1+M_2>M_2)\)`
  + test whether `\(M_2\)` adds predictive information to `\(M_1\)` `\((H_0: M_1+M_2>M_1)\)`

<img src="table.png" width="80%" style="display: block; margin: auto;" />

---

# Unitless Index of Adequacy of a Subset of Predictors

`$$A = \frac{LR^s}{LR}$$`

* `\(LR^s\)` is the likelihood ratio `\(\chi^2\)` for testing the importance of the subset of predictors of interest (excluding the other predictors from the model)

* `\(LR\)` is the likelihood ratio `\(\chi^2\)` for the full model (i.e., the model with both sets of predictors)

* `\(A\)` is the proportion of the log-likelihood explained by the subset of predictors, relative to that explained by the full set of predictors

--

* When `\(A = 0\)`, the subset has no predictive information by itself

* When `\(A = 1\)`, the subset contains all the predictive information found in the whole set of predictors

---

# Unitless Index of Predictive Ability

1. Best (lowest) possible `\(-2LL\)`: `\(L^* = -2LL\)` for a hypothetical model that perfectly predicts the outcome
2. Achieved `\(-2LL\)`: `\(L = -2LL\)` for the fitted model
3. Worst `\(-2LL\)`: `\(L^{0} = -2LL\)` for a model that has no predictive information

* The fraction of the explainable `\(-2LL\)` that was actually explained is

`$$\frac{L^{0} - L}{L^{0} - L^{*}} = \frac{LR}{L^{0}-L^{*}}$$`

* We can penalise this measure by accounting for the number of parameters `\(p\)`:

`$$R^{2} = \frac{LR - 2p}{L^{0}-L^{*}}$$`

---

* A partial `\(R^2\)` index can also be defined where we consider the amount of likelihood explained by a single factor instead of the full model

`$$R_{partial}^{2} = \frac{LR_{partial} - 2}{L^{0}-L^{*}}$$`

* Multiple authors have pointed out difficulties with `\(R^2\)` in the logistic model, and several alternative `\(R^2\)` measures have been proposed. One of these measures is:

`$$R^{2}_{LR} = 1 - \exp\{-LR/n\} = 1 - \lambda^{2/n}$$`

where `\(\lambda\)` is the null model likelihood divided by the fitted model likelihood

* Cragg, Uhler and Nagelkerke suggested dividing `\(R^{2}_{LR}\)` by its maximum attainable value to derive a measure ranging from 0 to 1 (a computational sketch follows on the next slide):

`$$R^{2}_{N} = \frac{1 - \exp\{-LR/n\}}{1-\exp\{-L^0/n\}}$$`
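---

The chunk below is a small sketch (not from the original slides) of how the adequacy index `\(A\)` and `\(R^{2}_{N}\)` can be computed from `lrm()` fits. The simulated data and all object names are made up; `"Model L.R."` and `"R2"` are the LR `\(\chi^2\)` and Nagelkerke `\(R^2\)` stored in the fit's `stats` vector.

```r
library(rms)
set.seed(3)
n  <- 300
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- rbinom(n, 1, plogis(x1 + 0.5 * x2))

f.full <- lrm(y ~ x1 + x2)   # full model
f.sub  <- lrm(y ~ x1)        # subset of predictors only

# Adequacy index A = LR^s / LR
A <- f.sub$stats["Model L.R."] / f.full$stats["Model L.R."]

# Nagelkerke R^2 "by hand": L0 is the -2LL of the intercept-only model
LR  <- f.full$stats["Model L.R."]
p0  <- mean(y)
L0  <- -2 * sum(log(ifelse(y == 1, p0, 1 - p0)))
R2N <- (1 - exp(-LR / n)) / (1 - exp(-L0 / n))

# R2N should match the R2 reported by lrm()
c(A = unname(A), R2N = unname(R2N), lrm.R2 = unname(f.full$stats["R2"]))
```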
---

# Penalized Maximum Likelihood

* A general formula for the penalized log-likelihood:

`$$\log L - \frac{1}{2}\lambda\sum_{i = 1}^p(s_i\beta_i)^2$$`

where `\(s_i\)` are scale factors chosen to make `\(s_i\beta_i\)` unitless.

* The usual methods can be used to find the `\(\hat{\beta}^{P}\)` that maximizes the penalized log-likelihood. If we do not wish to shrink a parameter, we can set its scale factor to 0.

* **Choice of scaling `\(s_i\)`.** Most authors standardize the data first so that the scale factors do not appear in their equations. A common choice is the standard deviation of each column of the design matrix. This choice is problematic for nonlinear terms and for dummy variables.

* For a categorical predictor with `\(c\)` levels, the amount of shrinkage and the predicted values depend on which level was chosen as the reference. An alternative penalty function, `\(\sum_{i}(\beta_i-\bar{\beta})^2\)`, that shrinks the coefficients towards their mean has been proposed.

---

**Effective number of parameters.** The effective number of parameters changes with `\(\lambda\)` because of shrinkage. The effective degrees of freedom can be calculated as:

`$$trace\left[I\left(\hat{\beta}^P\right)V\left(\hat{\beta}^P\right)\right]$$`

**Choosing `\(\lambda\)`.** To choose `\(\lambda\)` we can use the modified AIC

`$$LR\ \chi^2 - 2\ \text{effective d.f.}$$`

where `\(LR\ \chi^2\)` is the likelihood ratio `\(\chi^2\)` for the penalized model, computed ignoring the penalty function. The `\(\lambda\)` that maximizes this AIC will often be a good choice.

---


```r
set.seed(191)
x1 <- rnorm(100)
y <- x1 + rnorm(100)
pens <- df <- aic <- c(0, 0.07, 0.5, 2, 6, 15, 60)
all <- nl <- list()
df.tot <- NULL
for(penalize in 1:2){
  for(i in 1:length(pens)){
    f <- ols(y ~ rcs(x1, 5),
             penalty = list(simple = if(penalize == 1) pens[i] else 0,
                            nonlinear = pens[i]))
    df[i]  <- f$stats["d.f."]
    aic[i] <- AIC(f)
    nam <- paste(if(penalize == 1) "all" else "nl", "penalty:", pens[i], sep = "")
    nam <- as.character(pens[i])
    p <- Predict(f, x1 = seq(-2.5, 2.5, length = 100), conf.int = FALSE)
    if(penalize == 1) all[[nam]] <- p else nl[[nam]] <- p
  }
  df.tot <- rbind(df.tot, rbind(df = df, aic = aic))
}
```

---

Effective d.f. and AIC for each penalty `\(\lambda\)`, penalizing all terms (top two rows) or only the nonlinear terms (bottom two rows):

|penalty `\(\lambda\)` |        0|      0.07|        0.5|          2|          6|         15|         60|
|:---------------------|--------:|---------:|----------:|----------:|----------:|----------:|----------:|
|df (all terms)        |   4.0000|  3.213591|   2.706069|   2.302730|   2.029282|   1.822758|   1.513609|
|aic (all terms)       | 270.6653|269.154045| 268.222855| 267.565943| 267.288988| 267.552915| 270.805033|
|df (nonlinear only)   |   4.0000|  3.219149|   2.728126|   2.344807|   2.109741|   1.960863|   1.684421|
|aic (nonlinear only)  | 270.6653|269.167108| 268.287933| 267.718681| 267.441197| 267.347475| 267.892073|

<img src="MLELecture_files/figure-html/unnamed-chunk-11-1.png" width="504" style="display: block; margin: auto;" />

---

# References

* Harrell, FE Jr (2016). Regression Modeling Strategies. Springer Series in Statistics. Cham, Switzerland: Springer International Publishing
* Carpenter, J and Bithell, J (2000). Bootstrap confidence intervals: when, which, what? A practical guide for medical statisticians. Statistics in Medicine (19). pp 1141-1164
* Kass, RE and Raftery, AE (1995). Bayes factors. Journal of the American Statistical Association (90). pp 773-795
* Zheng, X and Loh, W (1995). Consistent variable selection in linear models. Journal of the American Statistical Association (90). pp 151-156