Load packages
library(rstanarm)
options(mc.cores = 1)
library(loo)
library(ggplot2)
library(bayesplot)
theme_set(bayesplot::theme_default(base_family = "sans"))
library(projpred)
library(fivethirtyeight)
library(dplyr) # needed below for %>%, select(), and mutate_if()
SEED=150702646
This notebook was inspired by Joshua Loftus’ two blog posts “Model selection bias invalidates significance tests” and “A conditional approach to inference after model selection”.
In this notebook we illustrate Bayesian inference for model selection, including PSIS-LOO (Vehtari, Gelman and Gabry, 2017) and the projection predictive approach (Piironen and Vehtari, 2017a; Piironen, Paasiniemi and Vehtari, 2020; McLatchie et al., 2023), which makes decision-theoretically justified inference after model selection.
We use the candy rankings data from the fivethirtyeight package. The dataset was originally used in a fivethirtyeight story.
data(candy_rankings)
df <- candy_rankings %>%
dplyr::select(!competitorname) %>%
mutate_if(is.logical, as.numeric)
head(df)
# A tibble: 6 × 12
chocolate fruity caramel peanutyalmondy nougat crispedricewafer hard bar
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 0 1 0 0 1 0 1
2 1 0 0 0 1 0 0 1
3 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0
5 0 1 0 0 0 0 0 0
6 1 0 0 1 0 0 0 1
# ℹ 4 more variables: pluribus <dbl>, sugarpercent <dbl>, pricepercent <dbl>,
# winpercent <dbl>
We start by analysing a “null” data set, where winpercent has been replaced with random draws from a normal distribution so that the covariates do not have any predictive information.
dfr <- df %>% dplyr::select(!winpercent)
n <- nrow(dfr)
p <- ncol(dfr)
prednames <- colnames(dfr)
set.seed(SEED)
ry <- rnorm(n)
dfr$ry <- ry
(model_formula <- formula(paste("ry ~", paste(prednames, collapse = " + "))))
ry ~ chocolate + fruity + caramel + peanutyalmondy + nougat +
crispedricewafer + hard + bar + pluribus + sugarpercent +
pricepercent
The rstanarm package provides stan_glm, which accepts the same arguments as glm but performs full Bayesian inference using Stan (mc-stan.org). When doing variable selection we are in any case assuming that some of the variables are not relevant, and thus it is sensible to use priors that assume some of the covariate effects are close to zero. We use the regularized horseshoe prior (Piironen and Vehtari, 2017b), which has a lot of prior mass near 0 but also thick tails that allow relevant effects to escape strong shrinkage.
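As a rough sketch of the hierarchy encoded by hs() below (notation from Piironen and Vehtari, 2017b; the half-Cauchy priors correspond to df=1 and global_df=1), each coefficient gets

\[
\beta_j \mid \lambda_j, \tau, c \sim \mathrm{N}\!\left(0, \tau^2 \tilde{\lambda}_j^2\right), \qquad
\tilde{\lambda}_j^2 = \frac{c^2 \lambda_j^2}{c^2 + \tau^2 \lambda_j^2}, \qquad
\lambda_j \sim \mathrm{C}^{+}(0, 1),
\]

where the local scales λ_j let individual effects escape shrinkage, the slab scale c regularizes the largest effects, and τ is the global scale whose prior scale tau0 is set below from the prior guess p0 for the number of relevant variables.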
p0 <- 5 # prior guess for the number of relevant variables
tau0 <- p0/(p-p0) * 1/sqrt(n)
hs_prior <- hs(df=1, global_df=1, global_scale=tau0)
t_prior <- student_t(df = 7, location = 0, scale = 2.5)
fitrhs <- stan_glm(model_formula, data = dfr,
prior = hs_prior, prior_intercept = t_prior,
seed=SEED, refresh=0)
Let’s look at the summary:
summary(fitrhs)
Model Info:
function: stan_glm
family: gaussian [identity]
formula: ry ~ chocolate + fruity + caramel + peanutyalmondy + nougat +
crispedricewafer + hard + bar + pluribus + sugarpercent +
pricepercent
algorithm: sampling
sample: 4000 (posterior sample size)
priors: see help('prior_summary')
observations: 85
predictors: 12
Estimates:
mean sd 10% 50% 90%
(Intercept) 0.0 0.2 -0.3 0.0 0.3
chocolate 0.0 0.1 -0.1 0.0 0.2
fruity 0.0 0.1 0.0 0.0 0.2
caramel -0.1 0.2 -0.3 0.0 0.0
peanutyalmondy 0.0 0.1 -0.1 0.0 0.1
nougat 0.0 0.2 -0.2 0.0 0.1
crispedricewafer 0.0 0.2 -0.2 0.0 0.1
hard 0.1 0.2 0.0 0.0 0.3
bar -0.1 0.2 -0.5 0.0 0.0
pluribus 0.0 0.1 -0.1 0.0 0.1
sugarpercent 0.0 0.1 -0.1 0.0 0.1
pricepercent -0.1 0.3 -0.5 0.0 0.0
sigma 1.1 0.1 1.0 1.1 1.2
Fit Diagnostics:
mean sd 10% 50% 90%
mean_PPD -0.1 0.2 -0.3 -0.1 0.2
The mean_ppd is the sample average posterior predictive distribution of the outcome variable (for details see help('summary.stanreg')).
MCMC diagnostics
mcse Rhat n_eff
(Intercept) 0.0 1.0 3346
chocolate 0.0 1.0 3185
fruity 0.0 1.0 3104
caramel 0.0 1.0 2792
peanutyalmondy 0.0 1.0 3415
nougat 0.0 1.0 3456
crispedricewafer 0.0 1.0 2931
hard 0.0 1.0 2762
bar 0.0 1.0 1976
pluribus 0.0 1.0 3775
sugarpercent 0.0 1.0 3983
pricepercent 0.0 1.0 2239
sigma 0.0 1.0 3960
mean_PPD 0.0 1.0 4146
log-posterior 0.1 1.0 1265
For each parameter, mcse is Monte Carlo standard error, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence Rhat=1).
We didn’t get divergences, Rhat’s are less than 1.1 and n_eff’s are useful (see, e.g., RStan workflow).
mcmc_areas(as.matrix(fitrhs), prob_outer = .95)
All 95% posterior intervals overlap 0; the regularized horseshoe prior makes the posteriors concentrate near 0, but there is some uncertainty.
We can easily test whether any of the covariates are useful by using cross-validation to compare to a null model,
fit0 <- stan_glm(ry ~ 1, data = dfr, seed=SEED, refresh=0)
(loorhs <- loo(fitrhs))
Computed from 4000 by 85 log-likelihood matrix
Estimate SE
elpd_loo -130.1 6.3
p_loo 4.3 0.8
looic 260.2 12.7
------
Monte Carlo SE of elpd_loo is 0.1.
Pareto k diagnostic values:
Count Pct. Min. n_eff
(-Inf, 0.5] (good) 83 97.6% 2062
(0.5, 0.7] (ok) 2 2.4% 532
(0.7, 1] (bad) 0 0.0% <NA>
(1, Inf) (very bad) 0 0.0% <NA>
All Pareto k estimates are ok (k < 0.7).
See help('pareto-k-diagnostic') for details.
(loo0 <- loo(fit0))
Computed from 4000 by 85 log-likelihood matrix
Estimate SE
elpd_loo -130.1 6.2
p_loo 1.8 0.4
looic 260.2 12.4
------
Monte Carlo SE of elpd_loo is 0.0.
All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.
loo_compare(loo0, loorhs)
elpd_diff se_diff
fit0 0.0 0.0
fitrhs 0.0 1.1
Based on cross-validation, the covariates together do not contain any useful information, and there is no need to continue with variable selection. This step of checking whether the full model has any predictive power is often skipped, especially when non-Bayesian methods are used. If loo (or AIC, as Joshua Loftus demonstrated) were used for stepwise variable selection, the selection process over a large number of models could overfit to the data; the sketch below shows the kind of procedure that is vulnerable to this.
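As a contrast, here is a minimal sketch (not part of the original analysis) of the procedure warned about above: forward stepwise selection by AIC on the same “null” data. Any covariates it picks, and any small p-values in the resulting summary, are artifacts of searching over many models.

# forward stepwise AIC starting from the intercept-only model,
# with all candidate covariates in model_formula as the upper scope
lm0 <- lm(ry ~ 1, data = dfr)
lmstep <- step(lm0, scope = model_formula, direction = "forward", trace = 0)
summary(lmstep)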
To illustrate the robustness of projpred, we perform the projection predictive variable selection using the previous model for the “null” data. A fast leave-one-out cross-validation approach (Vehtari, Gelman and Gabry, 2017) is used to choose the model size. As the number of observations is large compared to the number of covariates, we estimate the performance using LOO-CV only along the search path (validate_search=FALSE), as we may assume that the overfitting in the search is negligible (see more about this in McLatchie et al. (2023)).
fitrhs_cv <- cv_varsel(fitrhs, method='forward', cv_method='loo', validate_search=FALSE)
We can now look at the estimated predictive performance of smaller models compared to the full model.
plot(fitrhs_cv, stats = c('elpd', 'rmse'), text_angle = 45)
As the estimated predictive performance does not rise much above the reference model performance, we know that the use of the option validate_search=FALSE was safe (see more in McLatchie et al. (2023)).
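If there were any doubt, the search itself could also be cross-validated (a sketch; this is the same call with the flag flipped, and is considerably slower):

# cross-validate the search as well to protect against selection-induced overfitting
fitrhs_cv_full <- cv_varsel(fitrhs, method = 'forward', cv_method = 'loo',
                            validate_search = TRUE)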
And we get a LOO-based recommendation for the model size to choose:
(nv <- suggest_size(fitrhs_cv, alpha=0.1))
[1] 0
We see that projpred agrees that no variables have useful information.
Next we form the projected posterior for the chosen model.
projrhs <- project(fitrhs_cv, nv = nv, ns = 4000)
round(colMeans(as.matrix(projrhs)),1)
(Intercept) sigma
-0.1 1.1
round(posterior_interval(as.matrix(projrhs)),1)
5% 95%
(Intercept) -0.3 0.1
sigma 1.0 1.3
This looks good as the true values for “null” data are intercept=0, sigma=1.
Next we repeat the above analysis with original target variable winpercent.
model_formula <- formula(paste("winpercent ~", paste(prednames, collapse = " + ")))
p0 <- 5 # prior guess for the number of relevant variables
tau0 <- p0/(p-p0) * 1/sqrt(n)
hs_prior <- hs(df=1, global_df=1, global_scale=tau0)
t_prior <- student_t(df = 7, location = 0, scale = 2.5)
fitrhs <- stan_glm(model_formula, data = df,
prior = hs_prior, prior_intercept = t_prior,
seed=SEED, refresh=0)
Let’s look at the summary:
summary(fitrhs)
Model Info:
function: stan_glm
family: gaussian [identity]
formula: winpercent ~ chocolate + fruity + caramel + peanutyalmondy +
nougat + crispedricewafer + hard + bar + pluribus + sugarpercent +
pricepercent
algorithm: sampling
sample: 4000 (posterior sample size)
priors: see help('prior_summary')
observations: 85
predictors: 12
Estimates:
mean sd 10% 50% 90%
(Intercept) 40.7 3.6 35.8 41.0 45.2
chocolate 13.6 3.7 8.9 13.5 18.3
fruity 1.9 3.0 -1.0 1.2 6.1
caramel 0.9 2.4 -1.6 0.4 4.1
peanutyalmondy 5.2 3.6 0.2 5.2 9.9
nougat 0.4 2.8 -2.8 0.1 3.9
crispedricewafer 3.3 3.7 -0.5 2.6 8.5
hard -2.7 2.9 -6.7 -2.3 0.4
bar 1.4 2.7 -1.4 0.8 5.2
pluribus -0.7 2.0 -3.3 -0.3 1.5
sugarpercent 3.8 3.7 -0.3 3.3 8.9
pricepercent -0.1 3.0 -3.8 0.0 3.4
sigma 11.2 1.0 10.0 11.2 12.4
Fit Diagnostics:
mean sd 10% 50% 90%
mean_PPD 50.1 1.7 47.9 50.1 52.2
The mean_ppd is the sample average posterior predictive distribution of the outcome variable (for details see help('summary.stanreg')).
MCMC diagnostics
mcse Rhat n_eff
(Intercept) 0.1 1.0 3149
chocolate 0.1 1.0 3571
fruity 0.1 1.0 2733
caramel 0.0 1.0 4901
peanutyalmondy 0.1 1.0 2708
nougat 0.0 1.0 4548
crispedricewafer 0.1 1.0 3342
hard 0.1 1.0 3096
bar 0.0 1.0 3443
pluribus 0.0 1.0 4948
sugarpercent 0.1 1.0 2684
pricepercent 0.0 1.0 5178
sigma 0.0 1.0 3928
mean_PPD 0.0 1.0 4308
log-posterior 0.2 1.0 1076
For each parameter, mcse is Monte Carlo standard error, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence Rhat=1).
We didn’t get divergences, Rhat’s are less than 1.1 and n_eff’s are useful.
mcmc_areas(as.matrix(fitrhs), prob_outer = .95)
The 95% posterior interval for chocolate does not overlap 0, so maybe there is something useful here.
In the case of collinear variables it is possible that the marginal posteriors overlap 0 while the covariates are still useful for prediction. With many variables it would be difficult to analyse the joint posterior to see which variables are jointly relevant. We can again easily test whether the covariates are useful by using cross-validation to compare to a null model:
fit0 <- stan_glm(winpercent ~ 1, data = df, seed=SEED, refresh=0)
(loorhs <- loo(fitrhs))
Computed from 4000 by 85 log-likelihood matrix
Estimate SE
elpd_loo -329.6 5.8
p_loo 7.9 1.1
looic 659.2 11.6
------
Monte Carlo SE of elpd_loo is 0.1.
All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.
(loo0 <- loo(fit0))
Computed from 4000 by 85 log-likelihood matrix
Estimate SE
elpd_loo -350.6 5.5
p_loo 1.7 0.3
looic 701.2 11.1
------
Monte Carlo SE of elpd_loo is 0.0.
All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.
loo_compare(loo0, loorhs)
elpd_diff se_diff
fitrhs 0.0 0.0
fit0 -21.0 4.5
Based on cross-validation, the covariates together do contain useful information. If we need just the predictions we can stop here, but if we want to learn more about the relevance of the covariates, we can continue with variable selection.
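If predictions were all we needed, here is a minimal sketch of obtaining them from the full reference model (posterior_predict() draws from the posterior predictive distribution):

# posterior predictive draws: one row per posterior draw, one column per candy
preds <- posterior_predict(fitrhs)
dim(preds)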
We perform the projection predictive variable selection using the model fitted to the original data. A fast leave-one-out cross-validation approach is again used to choose the model size (with validate_search=FALSE, as before).
fitrhs_cv <- cv_varsel(fitrhs, method='forward', cv_method='loo', validate_search = FALSE)
We can now look at the estimated predictive performance of smaller models compared to the full model.
plot(fitrhs_cv, stats = c('elpd', 'rmse'), text_angle = 45)
Only one variable seems to be needed to get the same performance as the full model.
And we get a LOO-based recommendation for the model size to choose:
(nsel <- suggest_size(fitrhs_cv, alpha=0.1))
[1] 1
(vsel <- solution_terms(fitrhs_cv)[1:nsel])
[1] "chocolate"
projpred recommends using just one variable.
Next we form the projected posterior for the chosen model.
projrhs <- project(fitrhs_cv, nv = nsel, ns = 4000)
projdraws <- as.matrix(projrhs)
colnames(projdraws) <- c("Intercept",vsel,"sigma")
round(colMeans(projdraws),1)
Intercept chocolate sigma
43.0 16.2 11.9
round(posterior_interval(projdraws),1)
5% 95%
Intercept 40.3 45.5
chocolate 12.1 20.7
sigma 10.4 13.7
mcmc_areas(projdraws)
In our loo and projpred analyses, we find chocolate to have predictive information. Other variables may have predictive power too, but conditional on chocolate they do not provide additional information.
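As a quick sanity check (a sketch, not in the original notebook), the projected intercept and chocolate coefficient can be compared with the raw group means of winpercent for non-chocolate and chocolate candies:

# raw mean winpercent by chocolate (0 = no chocolate, 1 = chocolate)
round(tapply(df$winpercent, df$chocolate, mean), 1)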
McLatchie, Y., Rögnvaldsson, S., Weber, F. and Vehtari, A. (2023) ‘Robust and efficient projection predictive inference’, arXiv preprint arXiv:2306.15581.
Piironen, J., Paasiniemi, M. and Vehtari, A. (2020) ‘Projective inference in high-dimensional problems: Prediction and feature selection’, Electronic Journal of Statistics, 14(1), pp. 2155–2197.
Piironen, J. and Vehtari, A. (2017a) ‘Comparison of Bayesian predictive methods for model selection’, Statistics and Computing, 27(3), pp. 711–735. doi: 10.1007/s11222-016-9649-y.
Piironen, J. and Vehtari, A. (2017b) ‘Sparsity information and regularization in the horseshoe and other shrinkage priors’, Electronic journal of Statistics, 11(2), pp. 5018–5051. doi: 10.1214/17-EJS1337SI.
Vehtari, A., Gelman, A. and Gabry, J. (2017) ‘Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC’, Statistics and Computing, 27(5), pp. 1413–1432. doi: 10.1007/s11222-016-9696-4.
sessionInfo()
R version 4.2.2 Patched (2022-11-10 r83330)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 22.04.3 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=fi_FI.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=fi_FI.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=fi_FI.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=fi_FI.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] splines stats graphics grDevices utils datasets methods
[8] base
other attached packages:
[1] cmdstanr_0.6.1.9000 brms_2.20.1 bridgesampling_1.1-2
[4] ggridges_0.5.4 arm_1.13-1 lme4_1.1-34
[7] Matrix_1.5-1 reliabilitydiag_0.2.1 MASS_7.3-58.2
[10] corrplot_0.92 caret_6.0-94 lattice_0.20-45
[13] GGally_2.1.2 fivethirtyeight_0.6.2 projpred_2.7.0
[16] bayesplot_1.10.0 lubridate_1.9.2 forcats_1.0.0
[19] stringr_1.5.0 dplyr_1.1.3 purrr_1.0.2
[22] readr_2.1.4 tidyr_1.3.0 tibble_3.2.1
[25] ggplot2_3.4.3 tidyverse_2.0.0 loo_2.6.0
[28] rstanarm_2.26.1 Rcpp_1.0.11
loaded via a namespace (and not attached):
[1] backports_1.4.1 plyr_1.8.8 igraph_1.5.1
[4] listenv_0.9.0 crosstalk_1.2.0 rstantools_2.3.1.1
[7] inline_0.3.19 digest_0.6.33 foreach_1.5.2
[10] htmltools_0.5.6 fansi_1.0.4 magrittr_2.0.3
[13] checkmate_2.2.0 tzdb_0.4.0 globals_0.16.2
[16] recipes_1.0.8 gower_1.0.1 RcppParallel_5.1.7
[19] matrixStats_1.0.0 xts_0.13.1 hardhat_1.3.0
[22] timechange_0.2.0 prettyunits_1.1.1 colorspace_2.1-0
[25] xfun_0.40 callr_3.7.3 crayon_1.5.2
[28] jsonlite_1.8.7 survival_3.4-0 zoo_1.8-12
[31] iterators_1.0.14 glue_1.6.2 gtable_0.3.4
[34] ipred_0.9-14 distributional_0.3.2 pkgbuild_1.4.2
[37] rstan_2.26.23 future.apply_1.11.0 abind_1.4-5
[40] scales_1.2.1 mvtnorm_1.2-3 miniUI_0.1.1.1
[43] xtable_1.8-4 progress_1.2.2 lava_1.7.2.1
[46] stats4_4.2.2 StanHeaders_2.26.28 prodlim_2023.08.28
[49] DT_0.29 htmlwidgets_1.6.2 threejs_0.3.3
[52] RColorBrewer_1.1-3 posterior_1.4.1 ellipsis_0.3.2
[55] pkgconfig_2.0.3 reshape_0.8.9 farver_2.1.1
[58] nnet_7.3-18 sass_0.4.7 utf8_1.2.3
[61] tidyselect_1.2.0 labeling_0.4.3 rlang_1.1.1
[64] reshape2_1.4.4 later_1.3.1 munsell_0.5.0
[67] tools_4.2.2 cachem_1.0.8 cli_3.6.1
[70] generics_0.1.3 evaluate_0.21 fastmap_1.1.1
[73] yaml_2.3.7 ModelMetrics_1.2.2.2 processx_3.8.2
[76] knitr_1.43 future_1.33.0 nlme_3.1-162
[79] mime_0.12 compiler_4.2.2 shinythemes_1.2.0
[82] gamm4_0.2-6 bslib_0.5.1 stringi_1.7.12
[85] highr_0.10 ps_1.7.5 Brobdingnag_1.2-9
[88] nloptr_2.0.3 markdown_1.8 shinyjs_2.1.0
[91] tensorA_0.36.2 vctrs_0.6.3 pillar_1.9.0
[94] lifecycle_1.0.3 jquerylib_0.1.4 data.table_1.14.8
[97] httpuv_1.6.11 QuickJSR_1.0.5 R6_2.5.1
[100] promises_1.2.1 gridExtra_2.3 parallelly_1.36.0
[103] codetools_0.2-19 boot_1.3-28 colourpicker_1.3.0
[106] gtools_3.9.4 withr_2.5.0 shinystan_2.6.0
[109] mgcv_1.9-0 parallel_4.2.2 hms_1.1.3
[112] grid_4.2.2 rpart_4.1.19 timeDate_4022.108
[115] coda_0.19-4 class_7.3-21 minqa_1.2.5
[118] rmarkdown_2.24 pROC_1.18.4 shiny_1.7.5
[121] base64enc_0.1-3 dygraphs_1.1.1.6