We work with an example of predicting mathematics and Portuguese exam grades for a sample of high school students in Portugal. The same data were used in Chapter 12 of the book Regression and Other Stories to illustrate different models for regression coefficients.
We predict the students’ final-year median exam grade in mathematics (n=407) and Portuguese (n=657) given a large number of potentially relevant predictors: student’s school, student’s sex, student’s age, student’s home address type, family size, parents’ cohabitation status, mother’s education, father’s education, home-to-school travel time, weekly study time, number of past class failures, extra educational support, extra paid classes within the course subject, extra-curricular activities, whether the student attended nursery school, whether the student wants to take higher education, internet access at home, whether the student has a romantic relationship, quality of family relationships, free time after school, going out with friends, weekday alcohol consumption, weekend alcohol consumption, current health status, and number of school absences.
2 Variable selection
If we cared only about predictive performance, we would not need to do variable selection; we would simply use all the variables with a sensible joint prior. Here we are interested in finding the smallest set of variables that provides predictive performance similar to using all the variables (with a sensible prior). This helps to improve explainability and to design further studies that could also include interventions. We are not considering causal structure, and the selected variables are unlikely to have direct causal effects, but variables with high predictive relevance are such that their role in the causal graph should eventually be considered.
The data include three grades for both mathematics and Portuguese. To reduce the variability in the outcome we use the median grade over those three exams for each subject. We select only students with non-zero grades.
Before variable selection, we want to build a good model with all covariates. We first illustrate that default priors may be bad when we have many predictors.
By default brms uses uniform (“flat”) priors on regression coefficients.
# brms default: flat priors on the regression coefficients
fitm_u <- brm(Gmat ~ ., data = studentstd_Gmat)
If we compare the posterior R^2 (bayes_R2()) and the LOO-R^2 (loo_R2()) (Gelman et al. 2019), we see that the posterior R^2 is much higher, which means that the posterior estimate of the residual variance is strongly underestimated and the model has overfitted the data.
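This comparison can be done as follows (a minimal sketch; the exact values depend on the data and the random seed):

# posterior R^2 vs LOO-R^2 for the flat-prior model
bayes_R2(fitm_u)
loo_R2(fitm_u)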
The common proper prior choice for the coefficients is an independent wide normal prior. The piranha theorem states that it is not possible for all predictors to be independent of each other and to have large coefficients at the same time (Tosh et al. 2024). Thus, when we have many predictors we should include the prior information that not all coefficients can be large. One option is simply to divide the normal prior variance by the number of predictors, which keeps the total variance constant (assuming the predictors are normalized). Another option is to use a more elaborate joint prior on the coefficients.
6 Implied priors on R^2 and R2D2 prior
In regression analysis we often assume that the data are noisy and that an almost perfect fit is unlikely. We can measure the proportion of variance explained by the model with R^2. To understand the implied prior on R^2 given different priors on the coefficients, we can simply sample from the prior and compute the Bayesian R^2 using the prior draws. The Bayesian R^2 depends only on the model parameters, and thus can be used without computing residuals that depend on the data.
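As a minimal sketch (assuming fit is any brmsfit with proper priors; the object names are illustrative):

# sample from the prior only, then compute the Bayesian R^2 from the prior draws
fit_prior <- update(fit, sample_prior = "only", refresh = 0)
prior_R2 <- bayes_R2(fit_prior, summary = FALSE)
hist(prior_R2, breaks = 50)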
If we have prior information about R^2 we can use the R2D2 prior (Zhang et al. 2022) to first define a prior directly on R^2; this prior is then propagated to the coefficients so that the prior on R^2 stays constant regardless of the number of predictors. As R^2 also depends on the residual scale, the R2D2 prior is a joint prior on the coefficients and the residual sigma.
Although we can fit models with a uniform prior on the coefficients and get a proper posterior, we cannot sample from an improper unbounded uniform prior. Thus, in the following we consider only models with proper priors. For all the following models we use a normal+(0, 3) prior (where normal+ denotes a normal distribution constrained to be positive) for the residual scale sigma, which is a very weak prior as the standard deviation of the whole data is 3.3. We use four different priors for the coefficients:
Independent normal(0, 2.5) prior, which is the proper prior used by default in rstanarm and is considered weakly informative for a single coefficient.
Independent scaled normal prior. If we assume that many predictors may each have a small relevance, we can scale the independent priors so that the sum of the prior variances stays reasonable. In this case we have 26 predictors and could have a prior guess that the proportion of explained variance is near 0.3. A simple approach is then to assign independent priors to the regression coefficients with mean 0 and standard deviation \sqrt{0.3/26}\operatorname{sd}(y).
Regularized horseshoe prior (Piironen and Vehtari 2017), which is a joint prior for the coefficients that also depends on the residual scale, and can be used to encode a sparsity assumption via a prior guess of the expected number of relevant coefficients. Here we guess that maybe 6 coefficients are relevant and set the global scale accordingly. The result is not sensitive to the exact value of this prior guess, as it states just the prior mean. The regularized horseshoe has good prior predictive behavior when more variables are added.
R2D2 prior, which has the benefit that it first defines the prior directly on R^2, and the prior is then propagated to the coefficients. The R2D2 prior is predictively consistent, so that the prior on R^2 stays constant as the number of predictors increases. We assign the R2D2 prior with mean 1/3 and precision 3, which corresponds to a \mathrm{Beta}(1, 2) distribution on R^2, implying that higher R^2 values are less likely. We set the concentration parameter to 1/2, which implies that we assume some of the coefficients can be big and some small. The R2D2 prior implementation in brms assumes the predictors have been standardized to have unit variance.
# we sample from both posterior and prior
# normal(0, 2.5)
fitm_n1 <- brm(Gmat ~ ., data = studentstd_Gmat,
               prior = c(prior(normal(0, 2.5), class = b)),
               warmup = 1000, iter = 5000, refresh = 0)
fitm_n1p <- update(fitm_n1, sample_prior = "only")
# we also sample from a truncated prior to get more draws in the
# interesting region of R^2<0.5 needed for zoomed plot
fitm_n1pt <- update(fitm_n1, sample_prior = "only",
                    prior = c(prior(normal(0, 2.5), class = b),
                              prior(student_t(3, 0, 3), lb = 5, class = sigma)),
                    refresh = 0)
# normal(0, sqrt(0.3/26)*sd(y))
scale_b <- sqrt(0.3/26) * sd(studentstd_Gmat$Gmat)
fitm_n2 <- brm(Gmat ~ ., data = studentstd_Gmat,
               prior = c(prior(normal(0, scale_b), class = b)),
               warmup = 1000, iter = 5000,
               stanvars = stanvar(scale_b, name = 'scale_b'),
               refresh = 0)
fitm_n2p <- update(fitm_n2, sample_prior = "only")
# horseshoe
p <- length(predictors)
p0 <- 6
scale_slab <- sd(studentstd_Gmat$Gmat) / sqrt(p0) * sqrt(0.3)
scale_global <- p0 / (p - p0) / sqrt(nrow(studentstd_Gmat))
fitm_hs <- brm(Gmat ~ ., data = studentstd_Gmat,
               prior = c(prior(horseshoe(scale_global = scale_global,
                                         scale_slab = scale_slab),
                               class = b)),
               warmup = 1000, iter = 5000, refresh = 0)
fitm_hsp <- update(fitm_hs, sample_prior = "only")
# R2D2
fitm <- brm(Gmat ~ ., data = studentstd_Gmat,
            prior = c(prior(R2D2(mean_R2 = 1/3, prec_R2 = 3, cons_D2 = 1/2),
                            class = b)),
            warmup = 1000, iter = 5000, refresh = 0)
fitmp <- update(fitm, sample_prior = "only")
Subplot a) shows the implied priors on R^2. We see that the independent wide normal prior on the coefficients leads to a prior strongly favoring values near 1. The independent scaled normal and R2D2 priors imply a relatively flat prior on R^2. The regularized horseshoe prior implies a prior favoring values near 0. Looking at the implied priors on the whole range can be misleading; what matters more is the behavior of the priors in the region where the likelihood is not very close to 0. Subplot b) shows the implied priors on R^2 in a shorter range, and subplots c) and d) show the posterior R^2 and LOO-R^2, respectively, in the same range. The wide normal prior favors larger R^2 values, which pushes the posterior towards higher R^2 values and makes the model overfit, causing LOO-R^2 to be lower than with priors that do not favor higher R^2 values. The scaled normal, regularized horseshoe, and R2D2 priors all slightly favor smaller R^2 values, which is a sensible prior assumption, and the posteriors have not been pushed towards higher R^2 values. The LOO-R^2 results are quite similar with the scaled normal, regularized horseshoe, and R2D2 priors, although with the regularized horseshoe and R2D2 priors there is slightly less uncertainty. The regularized horseshoe favors smaller R^2 values more than the scaled normal and R2D2 priors, but as the likelihood has a thinner tail towards smaller R^2 values (is more informative in that direction) there is not much difference in the posteriors.
Although in this case the scaled normal, regularized horseshoe, and R2D2 priors all produce quite similar R^2 results, in general we favor the R2D2 prior as it makes it easiest to define the prior on R^2. The regularized horseshoe can be easier to define when our prior knowledge is about the sparsity of the coefficients. We now continue with the R2D2 prior.
In addition to the LOO-R^2 shown above, we can compare LOO estimated expected log predictive densities. With the R2D2 prior the predictive performance is better than with the wide normal prior with probability 0.94. The difference to the scaled normal and horseshoe priors is smaller.
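This comparison can be computed, for example, with loo_compare() applied to the fits above (a sketch):

# compare expected log predictive densities estimated with PSIS-LOO
loo_compare(loo(fitm_n1), loo(fitm_n2), loo(fitm_hs), loo(fitm))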
We check the bivariate marginal of the Fedu and Medu coefficients, and see that while the univariate marginals overlap with 0, jointly there is not much posterior mass near the origin. This is due to Fedu and Medu being collinear. Collinearity of predictors makes it difficult to infer predictor relevance from the marginal posteriors.
mcmc_scatter(drawsm, pars = c("Fedu", "Medu"), size = 1, alpha = 0.1) +
  vline_0(linetype = 'dashed') +
  hline_0(linetype = 'dashed')
Figure 8
8 Model checking
We’re using a normal observation model, although we know that the exam scores lie in a bounded range. Posterior predictive checking shows that we sometimes predict exam scores higher than the maximum of 20, but the discrepancy is minor.
pp_check(fitm, type = "hist", ndraws = 5)
Figure 9
The LOO-PIT ECDF plot shows that otherwise the normal model is quite well calibrated.
pp_check(fitm, type = "loo_pit_ecdf")
Figure 10
We could use a truncated normal as a more accurate model, but, for example, a beta-binomial model cannot be used for the median exam scores as some of the medians are not integers. A fancier model could model the three exams hierarchically, but as the normal model is not that bad, we now continue with it.
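For reference, the truncated normal could be specified in brms with the trunc() addition term (a sketch; we do not pursue this variant here):

# truncated normal observation model; scores are assumed to lie in [0, 20]
fitm_t <- brm(Gmat | trunc(lb = 0, ub = 20) ~ ., data = studentstd_Gmat,
              prior = prior(R2D2(mean_R2 = 1/3, prec_R2 = 3, cons_D2 = 1/2),
                            class = b))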
9 Projection predictive variable selection
We use the projection predictive variable selection (Piironen, Paasiniemi, and Vehtari 2020; McLatchie et al. 2025) implemented in the projpred R package to find the minimal set of predictors that can provide predictive performance similar to all predictors jointly. By default projpred starts from the intercept-only model and uses forward search to find the order in which to add predictors so as to minimize the divergence from the full model's predictive distribution.
9.1 Math exam scores
We start by running the fast PSIS-LOO-CV only for the search path based on the full data.
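With projpred this can be done as follows (a sketch; the object name vselm_fast is illustrative):

library(projpred)
# forward search using the full data; the performance along the search path
# is estimated with fast PSIS-LOO, but the search itself is not cross-validated
vselm_fast <- cv_varsel(fitm, method = "forward", validate_search = FALSE)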
The following plot shows the relevance order of the predictors and the estimated predictive performance given those predictors. As the search can overfit and we did not cross-validate the search, the performance estimates can go above the reference model performance. However, this plot helps us to see that 10 or fewer predictors would be sufficient.
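The plot can be produced, for example, with the following call (deltas = TRUE shows differences to the reference model):

plot(vselm_fast, stats = "elpd", deltas = TRUE, text_angle = 45)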
Next we repeat the search, but now cross-validate the search, too. We run the search with the PSIS-LOO-CV criterion only for nloo=50 folds, and combine the result with the fast PSIS-LOO result using a difference estimator (Magnusson et al. 2020). The use of subsampling LOO affects only the model size selection, and given that the selection is stable, the projection for the selected model is as good as with the computationally more expensive search validation. Based on the previous quick result, we search only up to models of size 10. On my laptop with 8 parallel workers, this takes less than 5 minutes.
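A sketch of the corresponding call; the parallel setup shown is one possible option using the future framework:

# optional parallelization of the costly parts of the cross-validation
library(doFuture)
registerDoFuture()
plan(multisession, workers = 8)
vselm <- cv_varsel(fitm, method = "forward", validate_search = TRUE,
                   nloo = 50, nterms_max = 10, parallel = TRUE)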
The following plot shows the relevance order of the predictors and the estimated predictive performance given those predictors. The order is the same as in the previous plot, but now the predictive performance estimates take the search into account and have smaller bias. It seems that using just four predictors can provide predictive performance similar to using all the predictors.
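The plot and a heuristic size suggestion can be obtained with (a sketch):

plot(vselm, stats = "elpd", deltas = TRUE, text_angle = 45)
# heuristic suggestion for a sufficient model size
suggest_size(vselm)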
The following plot shows the stability of the search over the different LOO-CV folds. The numbers indicate the proportion of folds in which the given predictor was included at the latest at the given model size.
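The ranking object used below is extracted from the cross-validated search result (assuming the result object vselm from above):

rankm <- ranking(vselm)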
plot(cv_proportions(rankm, cumulate = TRUE))
Figure 14
9.2 Portuguese exam scores
We repeat the same steps, but now predicting the grade for Portuguese instead of mathematics.
We fit a model with the R2D2 prior with mean 1/3 and precision 3.
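A sketch of the fit (the outcome and data object names, Gpor and studentstd_Gpor, are illustrative):

fitp <- brm(Gpor ~ ., data = studentstd_Gpor,
            prior = prior(R2D2(mean_R2 = 1/3, prec_R2 = 3, cons_D2 = 1/2),
                          class = b),
            warmup = 1000, iter = 5000, refresh = 0)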
We compare the posterior R^2 and LOO-R^2, and see that the Portuguese grade is easier to predict given the predictors (although there is still a lot of unexplained variance).
The following plot shows the relevance order of the predictors and the estimated predictive performance given those predictors. As there is some overfitting in the search and we did not cross-validate the search, the performance estimates can go above the reference model performance. However, this plot helps us to see that 10 or fewer predictors would be sufficient.
Next we repeat the search, but now cross-validate the search, too. We run the search with the PSIS-LOO-CV criterion only for nloo=50 folds, and combine the result with the fast PSIS-LOO result using a difference estimator (Magnusson et al. 2020). The use of subsampling LOO affects only the model size selection, and given that the selection is stable, the projection for the selected model is as good as with the computationally more expensive search validation. Based on the previous quick result, we search only up to models of size 10. On my laptop with 8 parallel workers, this takes less than 5 minutes.
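The call mirrors the one for the mathematics grades (object names illustrative):

vselp <- cv_varsel(fitp, method = "forward", validate_search = TRUE,
                   nloo = 50, nterms_max = 10, parallel = TRUE)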
The following plot shows the relevance order of the predictors and the estimated predictive performance given those predictors. The order is the same as in the previous plot, but now the predictive performance estimates take the search into account and have smaller bias. It seems that using just seven predictors can provide predictive performance similar to using all the predictors.
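As above, a sketch of the plotting call:

plot(vselp, stats = "elpd", deltas = TRUE, text_angle = 45)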
The following plot shows the stability of the search over the different LOO-CV folds. The numbers indicate the proportion of folds in which the given predictor was included at the latest at the given model size.
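The ranking object used below is again extracted from the cross-validated search result:

rankp <- ranking(vselp)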
plot(cv_proportions(rankp, cumulate = TRUE))
Figure 19
9.3 Using the selected model
For further predictions we can use the projected draws. Due to how different packages work, it can sometimes be easier to rerun MCMC conditioning only on the selected variables. This gives a slightly different result, but when the reference model is good the difference tends to be small, and the main benefit from using projpred remains: the selection process itself has not caused overfitting and selection of spurious covariates.
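A sketch of working with the projected draws in projpred (object names illustrative):

# project the reference model posterior onto the selected predictors
projm <- project(vselm, nterms = suggest_size(vselm))
# compute the linear predictor based on the projected draws
predm <- proj_linpred(projm, newdata = studentstd_Gmat)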
References
Gelman, Andrew, Ben Goodrich, Jonah Gabry, and Aki Vehtari. 2019. “R-Squared for Bayesian Regression Models.” The American Statistician 73 (3): 307–9.
Magnusson, Måns, Michael Riis Andersen, Johan Jonasson, and Aki Vehtari. 2020. “Leave-One-Out Cross-Validation for Bayesian Model Comparison in Large Data.” In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108:341–51. PMLR.
McLatchie, Yann, Sölvi Rögnvaldsson, Frank Weber, and Aki Vehtari. 2025. “Advances in Projection Predictive Inference.” Statistical Science 40: 128–47.
Piironen, Juho, Markus Paasiniemi, and Aki Vehtari. 2020. “Projective Inference in High-Dimensional Problems: Prediction and Feature Selection.” Electronic Journal of Statistics 14 (1): 2155–97.
Piironen, Juho, and Aki Vehtari. 2017. “Sparsity Information and Regularization in the Horseshoe and Other Shrinkage Priors.” Electronic Journal of Statistics 11: 5018–51.
Tosh, Christopher, Philip Greengard, Ben Goodrich, Andrew Gelman, Aki Vehtari, and Daniel Hsu. 2024. “The Piranha Problem: Large Effects Swimming in a Small Pond.” Notices of the American Mathematical Society 72: 15–25.
Zhang, Yan Dora, Brian P. Naughton, Howard D. Bondell, and Brian J. Reich. 2022. “Bayesian Regression Using a Prior on the Model Fit: The R2-D2 Shrinkage Prior.” Journal of the American Statistical Association 117: 862–74.