1 Introduction
1.1 Hypothesis Testing, Estimation, and Prediction
Even when only testing \(H_{0}\), a model-based approach has advantages:
- Permutation and rank tests are not as useful for estimation
- Cannot readily be extended to cluster sampling or repeated measurements
- Models generalize tests
  - 2-sample \(t\)-test, ANOVA \(\rightarrow\) multiple linear regression (see the sketch after this list)
  - paired \(t\)-test \(\rightarrow\) linear regression with fixed effects for subjects (block on subjects); linear mixed model with random per-subject intercepts
  - Wilcoxon, Kruskal-Wallis, Spearman \(\rightarrow\) proportional odds (PO) ordinal logistic model
  - Wilcoxon signed-rank test \(\rightarrow\) replace with rank-difference test \(\rightarrow\) PO model blocking on subject; ordinal mixed model
  - log-rank \(\rightarrow\) Cox
- Models not only allow for multiplicity adjustment but for shrinkage of estimates
- Statisticians are comfortable with \(P\)-value adjustment but fail to recognize that the estimated difference between the most different treatments is badly biased
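To make the first arrow above concrete, here is a minimal sketch (simulated data; Python with scipy and statsmodels assumed available) showing that the 2-sample \(t\)-test is the same analysis as a linear regression on a group indicator:

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
y1 = rng.normal(10, 2, size=30)          # group A (simulated)
y2 = rng.normal(12, 2, size=35)          # group B (simulated)

# classical 2-sample t-test (equal-variance version)
t, p = stats.ttest_ind(y1, y2)

# the same analysis as a linear model: y = b0 + b1 * I(group == B)
y = np.concatenate([y1, y2])
g = np.concatenate([np.zeros(len(y1)), np.ones(len(y2))])
fit = sm.OLS(y, sm.add_constant(g)).fit()

print(f"t-test:     t = {t:.3f}, p = {p:.4f}")
print(f"regression: t = {fit.tvalues[1]:.3f}, p = {fit.pvalues[1]:.4f}")
# the group coefficient is the difference in means, and its t statistic and
# p-value are identical to those of the 2-sample t-test
```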
Statistical estimation is usually model-based
- Relative effect of increasing cholesterol from 200 to 250 mg/dl on hazard of death, holding other risk factors constant (illustrated after this list)
- Adjustment depends on how other risk factors relate to hazard
- Usually interested in adjusted (partial) effects, not unadjusted (marginal or crude) effects
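As a worked illustration (assuming, for this example only, that a Cox model is linear in cholesterol), the adjusted hazard ratio for 250 vs. 200 mg/dl is \(\exp\{(250-200)\,\hat{\beta}_{\text{chol}}\} = \exp\{50\,\hat{\beta}_{\text{chol}}\}\), the same no matter what fixed values the other risk factors are held at; the crude effect, by contrast, depends on how cholesterol co-varies with those risk factors.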
1.2 Examples of Uses of Predictive Multivariable Modeling
- Financial performance, consumer purchasing, loan pay-back
- Ecology
- Product life
- Employment discrimination
- Medicine, epidemiology, health services research
- Probability of diagnosis, time course of a disease
- Checking that a previously developed summary index (e.g., BMI) adequately summarizes its component variables
- Developing new summary indexes by how variables predict an outcome
- Comparing non-randomized treatments
- Getting the correct estimate of relative effects in randomized studies requires covariable adjustment if model is nonlinear
- Crude odds ratios biased towards 1.0 if sample heterogeneous
- Estimating absolute treatment effect (e.g., risk difference)
- Use, e.g., the difference in two predicted probabilities (see the sketch after this list)
- Cost-effectiveness ratios
- incremental cost / incremental ABSOLUTE benefit
- most studies use avg. cost difference / avg. benefit, which may apply to no one
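A minimal numerical sketch of the last several points, using hypothetical logistic-model coefficients rather than fitted values: the covariate-adjusted (conditional) odds ratio is the same in every stratum, the crude (marginal) odds ratio is attenuated towards 1.0, and the absolute risk difference depends on the subject's baseline risk:

```python
import numpy as np

def expit(z):
    return 1 / (1 + np.exp(-z))

def odds(p):
    return p / (1 - p)

# hypothetical logistic model: logit P(Y=1) = -2 + 1.5*treat + 3*severe,
# where 'severe' is a strong prognostic covariate present in half the sample
# and balanced across arms (as in a randomized trial)
def prob(treat, severe):
    return expit(-2 + 1.5 * treat + 3.0 * severe)

# conditional (covariate-adjusted) odds ratio: the same in both strata
or_adjusted = odds(prob(1, 0)) / odds(prob(0, 0))          # = exp(1.5)

# crude (marginal) odds ratio computed after mixing the two strata 50/50
p_treated = 0.5 * prob(1, 0) + 0.5 * prob(1, 1)
p_control = 0.5 * prob(0, 0) + 0.5 * prob(0, 1)
or_crude = odds(p_treated) / odds(p_control)

print(f"adjusted OR = {or_adjusted:.2f}, crude OR = {or_crude:.2f}")  # crude is nearer 1
# absolute treatment effect (risk difference) depends on baseline risk:
print(f"risk difference, low-risk subject:  {prob(1, 0) - prob(0, 0):.3f}")
print(f"risk difference, high-risk subject: {prob(1, 1) - prob(0, 1):.3f}")
```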
1.3 Misunderstandings about Prediction vs. Classification
- Many analysts desire to develop “classifiers” instead of predictions
- Outside of, for example, visual or sound pattern recognition, classification represents a premature decision
- See this blog for details
- Suppose that
- response variable is binary
- the two levels represent a sharp dichotomy with no gray zone (e.g., complete success vs. total failure with no possibility of a partial success)
- one is forced to assign (classify) future observations to only these two choices
- the cost of misclassification is the same for every future observation, and the ratio of the cost of a false positive to the cost of a false negative equals the (often hidden) ratio implied by the analyst’s classification rule
- Then classification is still sub-optimal for driving the development of a predictive instrument as well as for hypothesis testing and estimation
- Classification and its associated classification accuracy measure—the proportion classified “correctly”—are very sensitive to the relative frequencies of the outcome variable. If a classifier is applied to another dataset with a different outcome prevalence, the classifier may no longer apply.
- Far better is to use the full information in the data to develop a probability model, then develop classification rules on the basis of estimated probabilities
- \(\uparrow\) power, \(\uparrow\) precision, \(\uparrow\) decision making
- Classification is more problematic if response variable is ordinal or continuous or the groups are not truly distinct (e.g., disease or no disease when severity of disease is on a continuum); dichotomizing it up front for the analysis is not appropriate
- minimum loss of information (when dichotomization is at the median) is large
- may require the sample size to increase many-fold to compensate for the loss of information (Fedorov et al., 2009)
- Two-group classification represents artificial forced choice
- best option may be “no choice, get more data”
- Unlike prediction (e.g., of absolute risk), classification implicitly uses utility (loss; cost of false positive or false negative) functions
- Hidden problems:
- Utility function depends on variables not collected (subjects’ preferences) that are available only at the decision point
- Assumes every subject has the same utility function
- Assumes this function coincides with the analyst’s
- Formal decision analysis uses
- optimum predictions using all available data
- subject-specific utilities, which are often based on variables not predictive of the outcome
- ROC analysis is misleading except for the special case of mass one-time group decision making with unknowable utilities1
1 To make an optimal decision you need to know all relevant data about an individual (used to estimate the probability of an outcome), and the utility (cost, loss function) of making each decision. Sensitivity and specificity do not provide this information. For example, if one estimated that the probability of a disease given age, sex, and symptoms is 0.1 and the “cost” of a false positive equaled the “cost” of a false negative, one would act as if the person does not have the disease. Given other utilities, one would make different decisions. If the utilities are unknown, one gives the best estimate of the probability of the outcome to the decision maker and lets her incorporate her own unspoken utilities in making an optimum decision for her.
Besides the fact that cutoffs do not apply to individuals, only to groups, individual decision making does not utilize sensitivity and specificity. For an individual we can compute \(\textrm{Prob}(Y=1 | X=x)\); we don’t care about \(\textrm{Prob}(Y=1 | X>c)\), and an individual having \(X=x\) would be quite puzzled if she were given \(\textrm{Prob}(X>c | \textrm{future unknown Y})\) when she already knows \(X=x\) so \(X\) is no longer a random variable.
Even when group decision making is needed, sensitivity and specificity can be bypassed. For mass marketing, for example, one can rank order individuals by the estimated probability of buying the product, to create a lift curve. This is then used to target the \(k\) most likely buyers, where \(k\) is chosen to meet total program cost constraints (a small sketch follows the references below).
See Vickers (2008), Briggs & Zaretzki (2008), Gail & Pfeiffer (2005), Bordley (2007), Fan & Levine (2007), Gneiting & Raftery (2007).
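A minimal sketch of that mass-marketing idea (all numbers are hypothetical): rank prospects by estimated purchase probability and contact the top \(k\); no sensitivity, specificity, or probability cutoff is ever chosen:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical predicted purchase probabilities for 10,000 prospects,
# as they would come out of a previously fitted probability model
p_buy = rng.beta(2, 20, size=10_000)

# the budget allows contacting k prospects: take the k highest probabilities
k = 500
targeted = np.argsort(p_buy)[::-1][:k]

print(f"expected buyers among the {k} targeted:  {p_buy[targeted].sum():.0f}")
print(f"expected buyers if {k} chosen at random: {k * p_buy.mean():.0f}")
# the implied 'cutoff' is whatever probability ranks k-th; it comes from the
# cost constraint, not from sensitivity or specificity
```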
Accuracy score used to drive model building should be a continuous score that utilizes all of the information in the data.
In summary:
- Classification is a forced choice — a decision.
- Decisions require knowledge of the cost or utility of making an incorrect decision.
- Predictions are made without knowledge of utilities.
- A prediction can lead to better decisions than classification. For example, suppose that one has an estimate of the risk of an event, \(\hat{P}\). One might make a decision if \(\hat{P} < 0.10\) or \(\hat{P} > 0.90\) in some situations, even without knowledge of utilities. If, on the other hand, \(\hat{P} = 0.6\) or the confidence interval for \(P\) is wide, one might
- make no decision and instead opt to collect more data
- make a tentative decision that is revisited later
- make a decision using other considerations such as the infusion of new resources that allow targeting a larger number of potential customers in a marketing campaign
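A minimal sketch of such a rule (every threshold and cost below is illustrative, not a recommendation):

```python
def decide(p_hat, ci_width, cost_fp=None, cost_fn=None):
    """Illustrative decision from a predicted risk; all thresholds are made up."""
    if ci_width > 0.30:
        return "defer: estimate too imprecise, collect more data"
    if cost_fp is not None and cost_fn is not None:
        # expected-loss threshold implied by this subject's own utilities:
        # act when (1 - p)*cost_fp < p*cost_fn, i.e. p > cost_fp/(cost_fp + cost_fn)
        return "act" if p_hat > cost_fp / (cost_fp + cost_fn) else "do not act"
    # utilities unknown: act only when the risk is extreme
    if p_hat > 0.90:
        return "act"
    if p_hat < 0.10:
        return "do not act"
    return "defer: make a tentative decision, revisit later, or use other considerations"

print(decide(0.95, ci_width=0.05))                        # clear-cut even without utilities
print(decide(0.60, ci_width=0.08))                        # gray zone -> defer
print(decide(0.60, ci_width=0.08, cost_fp=1, cost_fn=9))  # utilities known -> act
print(decide(0.95, ci_width=0.45))                        # high risk but too imprecise -> defer
```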
The Dichotomizing Motorist
- The speed limit is 60.
- I am going faster than the speed limit.
- Will I be caught?
An answer by a dichotomizer:
- Are you going faster than 70?
An answer from a better dichotomizer:
- If you are among other cars, are you going faster than 73?
- If you are exposed, are you going faster than 67?
Better:
- How fast are you going and are you exposed?
Analogy to most medical diagnosis research in which +/- diagnosis is a false dichotomy of an underlying disease severity:
- The speed limit is moderately high.
- I am going fairly fast.
- Will I be caught?
1.4 Planning for Modeling
- Chance that predictive model will be used (Reilly & Evans, 2006)
- Response definition, follow-up
- Variable definitions
- Observer variability
- Missing data
- Preference for continuous variables
- Subjects
- Sites
What can keep a sample of data from being appropriate for modeling:
- Most important predictor or response variables not collected
- Subjects in the dataset are ill-defined or not representative of the population to which inferences are needed
- Data collection sites do not represent the population of sites
- Key variables missing in large numbers of subjects
- Data not missing at random
- No operational definitions for key variables and/or measurement errors severe
- No observer variability studies done
What else can go wrong in modeling?
- The process generating the data is not stable.
- The model is misspecified with regard to nonlinearities or interactions, or there are predictors missing.
- The model is misspecified in terms of the transformation of the response variable or the model’s distributional assumptions.
- The model contains discontinuities (e.g., by categorizing continuous predictors or fitting regression shapes with sudden changes) that can be gamed by users.
- Correlations among subjects are not specified, or the correlation structure is misspecified, resulting in inefficient parameter estimates and overconfident inference.
- The model is overfitted, resulting in predictions that are too extreme or positive associations that are false.
- The user of the model relies on predictions obtained by extrapolating to combinations of predictor values well outside the range of the dataset used to develop the model.
- Accurate and discriminating predictions can lead to behavior changes that make future predictions inaccurate.
Iezzoni (1994) lists these dimensions to capture, for patient outcome studies:
- age
- sex
- acute clinical stability
- principal diagnosis
- severity of principal diagnosis
- extent and severity of comorbidities
- physical functional status
- psychological, cognitive, and psychosocial functioning
- cultural, ethnic, and socioeconomic attributes and behaviors
- health status and quality of life
- patient attitudes and preferences for outcomes
General aspects to capture in the predictors:
- baseline measurement of response variable
- current status
- trajectory as of time zero, or past levels of a key variable
- variables explaining much of the variation in the response
- more subtle predictors whose distributions strongly differ between levels of the key variable of interest in an observational study
1.5 Choice of the Model
- In biostatistics, epidemiology, and most other areas we usually choose the model empirically
- Model must use data efficiently
- Should model overall structure (e.g., acute vs. chronic)
- Robust models are better
- Should have correct mathematical structure (e.g., constraints on probabilities)
1.6 Model uncertainty / Data-driven Model Specification
Standard errors, C.L., \(P\)-values, and \(R^2\) are wrong if computed as if the model had been pre-specified
Stepwise variable selection is widely used and abused
Bootstrap can be used to repeat all analysis steps to properly penalize variances, etc.
Ye (1998): “generalized degrees of freedom” (GDF) for any “data mining” or model selection procedure based on least squares
- Example: 20 candidate predictors, \(n=22\), forward stepwise, best 5-variable model: GDF=14.1
- Example: CART, 10 candidate predictors, \(n=100\), 19 nodes: GDF=76
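A minimal sketch of Ye's Monte Carlo recipe for the first example, with a hand-rolled forward-selection routine; the perturbation SD, number of replicates, and random seed are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def forward_select_fit(X, y, k):
    """Greedy forward selection of k predictors by residual sum of squares,
    followed by an ordinary least-squares fit; returns fitted values."""
    n, p = X.shape
    design = np.ones((n, 1))                    # start with intercept only
    remaining = list(range(p))
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in remaining:
            Xtry = np.column_stack([design, X[:, j]])
            beta = np.linalg.lstsq(Xtry, y, rcond=None)[0]
            rss = np.sum((y - Xtry @ beta) ** 2)
            if rss < best_rss:
                best_rss, best_j = rss, j
        remaining.remove(best_j)
        design = np.column_stack([design, X[:, best_j]])
    beta = np.linalg.lstsq(design, y, rcond=None)[0]
    return design @ beta

# 20 candidate predictors, n = 22, pure noise (no true signal)
n, p, k = 22, 20, 5
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Monte Carlo GDF: perturb y, rerun the ENTIRE selection-plus-fit procedure,
# and measure how strongly each fitted value tracks its own perturbation
tau, T = 0.5, 200
deltas = rng.normal(scale=tau, size=(T, n))
fits = np.array([forward_select_fit(X, y + d, k) for d in deltas])

slopes = [np.cov(deltas[:, i], fits[:, i])[0, 1] / np.var(deltas[:, i], ddof=1)
          for i in range(n)]
print(f"GDF estimate: {sum(slopes):.1f} (a fixed {k}-predictor model would use {k + 1})")
```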
See Luo et al. (2006) for an approach involving adding noise to \(Y\) to improve variable selection
Another example: \(t\)-test to compare two means
Basic test assumes equal variance and normal data distribution
Typically examine the two sample distributions to decide whether to transform \(Y\) or switch to a different test
Examine the two SDs to decide whether to use the standard test or switch to a Welch \(t\)-test
Final confidence interval for mean difference is conditional on the final choices being correct
Ignores model uncertainty
Confidence interval will not have the claimed coverage
Get proper coverage by adding parameters for what you don’t know
- Bayesian \(t\)-test: parameters for variance ratio and for d.f. of a \(t\)-distribution for the raw data (allows heavy tails)
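A small simulation sketch of that coverage problem (the group sizes, SDs, and pre-test level are illustrative assumptions): pre-test the variance ratio, report whichever interval the pre-test points to, and track how often the resulting nominal 95% interval covers the true difference:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# unequal variances and unequal group sizes; all values illustrative
n1, n2, sd1, sd2, true_diff = 6, 40, 2.5, 1.0, 0.0
nominal, nsim, covered = 0.95, 4000, 0

for _ in range(nsim):
    y1 = rng.normal(0.0, sd1, n1)
    y2 = rng.normal(-true_diff, sd2, n2)
    v1, v2 = y1.var(ddof=1), y2.var(ddof=1)

    # stage 1: pre-test equality of variances (two-sided F test)
    F = v1 / v2
    p_var = 2 * min(stats.f.sf(F, n1 - 1, n2 - 1), stats.f.cdf(F, n1 - 1, n2 - 1))

    # stage 2: pooled interval if "equal variances not rejected", else Welch
    if p_var > 0.05:
        sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
        se, df = np.sqrt(sp2 * (1 / n1 + 1 / n2)), n1 + n2 - 2
    else:
        se = np.sqrt(v1 / n1 + v2 / n2)
        df = se**4 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))

    d = y1.mean() - y2.mean()
    half = stats.t.ppf(1 - (1 - nominal) / 2, df) * se
    covered += (d - half <= true_diff <= d + half)

print(f"actual coverage of the adaptive 95% interval: {covered / nsim:.3f}")
# in configurations like this the two-stage procedure typically covers less
# often than the nominal 0.95
```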
1.6.1 Model Uncertainty and Model Checking
As the Bayesian \(t\)-test exemplifies, there are advantages of a continuous approach to modeling instead of engaging in dichotomous goodness-of-fit (GOF) assessments. Some general comments:
- In a frequentist setting, GOF checking can inflate type I assertion probability \(\alpha\) and make confidence intervals falsely narrow. In a Bayesian setting, posterior distributions and resulting uncertainty intervals can be too narrow.
- Rather than accepting or not accepting a proposed model on the basis of a GOF assessment, embed the proposed model inside a more general model that relaxes the assumptions, and use AIC or a formal test to decide between the two (a minimal sketch follows this list). Comparing only two pre-specified models will result in minimal model uncertainty.
- More general model could include nonlinear terms and interactions
- It could also relax distributional assumptions, as done with the non-normality parameter in the Bayesian \(t\)-test
- Often the sample size is not large enough to allow model assumptions to be relaxed without overfitting; AIC assesses whether additional complexities are “good for the money”. If a more complex model results in worse predictions due to overfitting, it is doubtful that such a model should be used for inference.
- Instead of focusing on model assumption checking, focus on the impact of making those assumptions, using for example comparison of adjusted \(R^2\) measures and bootstrap confidence intervals for differences in predicted values from two models.
- In many situations you can use a semiparametric model that makes many fewer assumptions than a parametric model
- See this for more in-depth discussion
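A minimal sketch of the embedding idea above (simulated data; a cubic expansion stands in for whatever relaxation is relevant): fit the proposed model and a more general model containing it, and let AIC judge whether the added complexity is good for the money:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# simulated data in which the predictor effect is genuinely nonlinear
n = 200
x = rng.uniform(0, 10, n)
y = 1 + 0.5 * x - 0.06 * x**2 + rng.normal(0, 1, n)

# proposed model: linear in x
m_linear = sm.OLS(y, sm.add_constant(x)).fit()

# more general model that embeds it: add quadratic and cubic terms
# (a regression spline would be a natural alternative expansion)
X_flex = sm.add_constant(np.column_stack([x, x**2, x**3]))
m_flex = sm.OLS(y, X_flex).fit()

print(f"AIC, linear model:        {m_linear.aic:.1f}")
print(f"AIC, relaxed (cubic) fit: {m_flex.aic:.1f}")
# prefer the lower-AIC model; the relaxation is 'good for the money' only when
# the AIC improvement outweighs the extra parameters
```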
1.7 Study Questions
- Can you estimate the effect of increasing age from 21 to 30 without a statistical model?
- What is an example where machine learning users have used “classification” in the wrong sense?
- When is classification (in the proper sense) an appropriate goal?
- Why are so many decisions non-binary?
- How do we normally choose statistical models—from subject matter theory or empirically?
- What is model uncertainty?
- An investigator feels that there are too many variables to analyze so she uses significance testing to select which variables to analyze further. What is wrong with that?