Student grades and variable selection with projpred

Author

Aki Vehtari

Published

2023-12-14

Modified

2024-09-08

1 Portugal students data

We work with an example of predicting mathematics and Portuguese exam grades for a sample of high school students in Portugal. The same data were used in Chapter 12 of the Regression and Other Stories book to illustrate different models for regression coefficients.

We predict the students’ final-year median exam grade in mathematics (n=382) and Portuguese (n=657) given a large number of potentially relevant predictors: student’s school, student’s sex, student’s age, student’s home address type, family size, parents’ cohabitation status, mother’s education, father’s education, home-to-school travel time, weekly study time, number of past class failures, extra educational support, extra paid classes within the course subject, extra-curricular activities, whether the student attended nursery school, whether the student wants to take higher education, internet access at home, whether the student has a romantic relationship, quality of family relationships, free time after school, going out with friends, weekday alcohol consumption, weekend alcohol consumption, current health status, and number of school absences.

2 Variable selection

If we cared only about predictive performance, we would not need to do variable selection; we would use all the variables with a sensible joint prior. Here we are interested in finding the smallest set of variables that provides predictive performance similar to using all the variables (with a sensible prior). This helps to improve explainability and to design further studies that could also include interventions. We are not considering causal structure, and the selected variables are unlikely to have direct causal effects, but variables with high predictive relevance are ones whose role in the causal graph should eventually be considered.

We first build models with all predictors, and then use projection predictive variable selection (Piironen, Paasiniemi, and Vehtari 2020; McLatchie et al. 2023) implemented in the R package projpred. We also demonstrate the use of subsampling LOO with the difference estimator (Magnusson et al. 2020) to speed up model size selection.

Currently, sub-sampling LOO requires projpred to be installed from a branch:

remotes::install_github("stan-dev/projpred", ref="fix-subsampling", build_vignettes = TRUE)

Load packages

Code
library("brms")
options(brms.backend="cmdstanr")
library(cmdstanr)
options(mc.cores = parallel::detectCores()-2)
library("posterior")
options(digits=2, posterior.digits=2,
        pillar.neg = FALSE, pillar.subtle=FALSE, pillar.sigfig=2)
library("loo")
# need to use projpred fix-subsampling branch
#remotes::install_github("stan-dev/projpred", ref="fix-subsampling", build_vignettes = TRUE)
library("projpred")
#use the sub-sampling-loo branch
#devtools::load_all('~/proj/projpred')
if (interactive()) {
  library("progressr")
  options(projpred.use_progressr = TRUE)
  handlers(global = TRUE)
}
library("ggplot2")
library("bayesplot")
theme_set(bayesplot::theme_default(base_family = "sans", base_size=16))
set1 <- RColorBrewer::brewer.pal(7, "Set1")
library("tinytable")
options(tinytable_format_num_fmt = "significant_cell", tinytable_format_digits = 2, tinytable_tt_digits=2)
library("dplyr")
library("matrixStats")
library("patchwork")
library("ggdist")
library("doFuture")
library("tictoc")

Set random seed for reproducibility

SEED <- 2132

3 Data preparation and visualization

Get the data from the Regression and Other Stories examples (ROS-Examples) repository.

student <- read.csv(url('https://raw.githubusercontent.com/avehtari/ROS-Examples/master/Student/data/student-merged.csv'))

List the predictors to be used.

predictors <- c("school","sex","age","address","famsize","Pstatus","Medu","Fedu","traveltime","studytime","failures","schoolsup","famsup","paid","activities", "nursery", "higher", "internet", "romantic","famrel","freetime","goout","Dalc","Walc","health","absences")
p <- length(predictors)

The data include three exam grades for both mathematics and Portuguese. To reduce the variability in the outcome, we use the median of the three exam grades for each subject. We select only students with non-zero grades.

grades <- c("G1mat","G2mat","G3mat","G1por","G2por","G3por")
student <- student %>%
  mutate(across(matches("G[1-3]..."), ~na_if(.,0))) %>%
  mutate(Gmat = rowMedians(as.matrix(select(.,matches("G.mat"))), na.rm=TRUE),
         Gpor = rowMedians(as.matrix(select(.,matches("G.por"))), na.rm=TRUE))
student_Gmat <- subset(student, is.finite(Gmat), select=c("Gmat",predictors))
student_Gmat <- student_Gmat[is.finite(rowMeans(student_Gmat)),]
student_Gpor <- subset(student, is.finite(Gpor), select=c("Gpor",predictors))
(nmat <- nrow(student_Gmat))
[1] 382
head(student_Gmat) |> tt()
Gmat school sex age address famsize Pstatus Medu Fedu traveltime studytime failures schoolsup famsup paid activities nursery higher internet romantic famrel freetime goout Dalc Walc health absences
10 0 0 15 0 0 1 1 1 2 4 1 1 1 1 1 1 1 1 0 3 1 2 1 1 1 2
6 0 0 15 0 0 1 1 1 1 2 2 1 1 0 0 0 1 1 1 3 3 4 2 4 5 2
13 0 0 15 0 0 1 2 2 1 1 0 1 1 1 1 1 1 0 0 4 3 1 1 1 2 8
9 0 0 15 0 0 1 2 4 1 3 0 1 1 1 1 1 1 1 0 4 3 2 1 1 5 2
10 0 0 15 0 0 1 3 3 2 3 2 0 1 1 1 1 1 1 1 4 2 1 2 3 3 8
12 0 0 15 0 0 1 3 4 1 3 0 1 1 1 1 1 1 1 0 4 3 2 1 1 5 2
(npor <- nrow(student_Gpor))
[1] 657
head(student_Gpor) |> tt()
Gpor school sex age address famsize Pstatus Medu Fedu traveltime studytime failures schoolsup famsup paid activities nursery higher internet romantic famrel freetime goout Dalc Walc health absences
13 0 0 15 0 0 1 1 1 2 4 1 1 1 1 1 1 1 1 0 3 1 2 1 1 1 2
11 0 0 15 0 0 1 1 1 1 2 2 1 1 0 0 0 1 1 1 3 3 4 2 4 5 2
13 0 0 15 0 0 1 2 2 1 1 0 1 1 1 1 1 1 0 0 4 3 1 1 1 2 8
10 0 0 15 0 0 1 2 4 1 3 0 1 1 1 1 1 1 1 0 4 3 2 1 1 5 2
13 0 0 15 0 0 1 3 3 2 3 2 0 1 1 1 1 1 1 1 4 2 1 2 3 3 8
12 0 0 15 0 0 1 3 4 1 3 0 1 1 1 1 1 1 1 0 4 3 2 1 1 5 2

The following plot shows the distributions of median math and Portuguese exam scores for each student.

p1 <- ggplot(student_Gmat, aes(x=Gmat)) + geom_dots() + labs(x='Median math exam score')
p2 <- ggplot(student_Gpor, aes(x=Gpor)) + geom_dots() + labs(x='Median Portuguese exam score')
(p1 + p2) * scale_x_continuous(lim=c(0,20)) *
  theme(axis.line.y = element_blank(),
        axis.text.y = element_blank(),
        axis.ticks.y = element_blank())
Figure 1

We standardize all non-binary predictors to have standard deviation 1, to make the comparison of relevances easier as discussed in Regression and Other Stories Section 12.1.

studentstd_Gmat <- student_Gmat
Gmatbin <- apply(student_Gmat[,predictors], 2, function(x) length(unique(x)) == 2)
studentstd_Gmat[,predictors[!Gmatbin]] <- scale(student_Gmat[,predictors[!Gmatbin]])
studentstd_Gpor <- student_Gpor
Gporbin <- apply(student_Gpor[,predictors], 2, function(x) length(unique(x)) == 2)
studentstd_Gpor[,predictors[!Gporbin]] <- scale(student_Gpor[,predictors[!Gporbin]])

4 Default uniform prior on coefficients

Before variable selection, we want to build a good model with all covariates. We first illustrate that default priors may be bad when we have many predictors.

By default, brms uses uniform (“flat”) priors on regression coefficients. We use the option normalize=FALSE, which drops the normalization constants from the log density for slightly faster sampling.

fitmu <- brm(Gmat ~ ., data = studentstd_Gmat,
             normalize=FALSE)

If we compare posterior-R^2 (bayes_R2()) and LOO-R^2 (loo_R2()) (Gelman et al. 2019), we see that the posterior-R^2 is much higher, which indicates that the posterior estimate of the residual variance is strongly underestimated and the model has overfit the data.

bayes_R2(fitmu) |> as.data.frame() |> tt()
Estimate Est.Error Q2.5 Q97.5
0.32 0.031 0.26 0.38
loo_R2(fitmu) |> as.data.frame() |> tt()
Estimate Est.Error Q2.5 Q97.5
0.2 0.04 0.11 0.27

We plot the marginal posteriors for coefficients. For many coefficients the posterior is quite wide.

drawsmu <- as_draws_df(fitmu, variable=paste0('b_',predictors)) |>
  set_variables(predictors)
p <- mcmc_areas(drawsmu,
                 prob_outer=0.98, area_method = "scaled height") +
  xlim(c(-3,3))
p <- p + scale_y_discrete(limits = rev(levels(p$data$parameter)))
p
Figure 2

5 Piranha theorem

A common proper prior choice for coefficients is an independent wide normal prior. The piranha theorem states that it is not possible for all predictors to be independent and have large coefficients at the same time (Tosh et al. 2021). Thus, when we have many predictors, we should include the prior information that not all coefficients can be large. One option is simply to divide the normal prior variance by the number of predictors, which keeps the total variance constant (assuming the predictors are standardized); see the sketch below. Another option is to use more elaborate, so-called sparsity priors that assume some of the coefficients are close to zero while some can be large.
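
As a minimal sketch of the scaled option in brms (the total scale 2.5 is an assumed illustrative value, and this model is not used later in the case study):

# Scaled normal prior: dividing the per-coefficient prior variance by the
# number of predictors keeps the total prior variance of the linear
# predictor constant (assuming standardized predictors)
sd_coef <- 2.5 / sqrt(length(predictors))  # scale divided by sqrt(p)
fitm_scaled <- brm(Gmat ~ ., data = studentstd_Gmat,
                   prior = prior_string(paste0("normal(0, ", sd_coef, ")"),
                                        class = "b"),
                   normalize = FALSE)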

6 Implied prior on R^2

The Regression and Other Stories Chapter 12 example shows prior predictive simulations with 1) the usual independent wide normal prior, 2) a scaled normal prior (the wide normal prior variance divided by the number of predictors), and 3) a regularized horseshoe prior encoding a sparsity assumption, illustrating the implied prior on R^2. For convenience, these figures are shown below, too. Flat priors are improper, so we can’t do prior predictive simulation with them, and there is no corresponding figure.

Figure 3: Implied prior on R^2 for the wide normal prior, the scaled normal prior, and the regularized horseshoe prior.
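
For intuition, the implied prior on R^2 can also be simulated directly. Below is a minimal sketch, assuming a half-normal prior on sigma and an illustrative prior scale of 2.5 for the wide prior (these values are assumptions for illustration, not taken from the book):

# Prior predictive simulation of the implied prior on R^2 for
# independent normal priors on the coefficients
X <- as.matrix(studentstd_Gmat[, predictors])
implied_R2 <- function(sd_coef, ndraws = 4000) {
  replicate(ndraws, {
    beta <- rnorm(ncol(X), 0, sd_coef)   # coefficients drawn from the prior
    sigma <- abs(rnorm(1, 0, 1))         # half-normal prior on sigma
    mu <- drop(X %*% beta)               # prior draw of the linear predictor
    var(mu) / (var(mu) + sigma^2)        # implied R^2
  })
}
r2_wide <- implied_R2(2.5)                              # wide normal prior
r2_scaled <- implied_R2(2.5 / sqrt(length(predictors))) # scaled normal prior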

7 R2D2 prior

In this case study we use the R2D2 prior (Zhang et al. 2022), which can be used to define the prior directly on R^2; the prior is then propagated to the coefficients so that, regardless of the number of predictors, the prior on R^2 stays the same. We assign an R2D2 prior with mean 1/3 and precision 3, which corresponds to a Beta(1,2) distribution on R^2. The concentration parameter controls the prior assumption on sparsity, in other words, how much variation is assumed between the coefficients.
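
The Beta(1,2) correspondence can be verified from the mean/precision parameterization of the Beta distribution used by brms: shape1 = mean_R2 * prec_R2 and shape2 = (1 - mean_R2) * prec_R2.

mean_R2 <- 1/3
prec_R2 <- 3
c(shape1 = mean_R2 * prec_R2, shape2 = (1 - mean_R2) * prec_R2)
shape1 shape2 
     1      2 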

fitm <- brm(Gmat ~ ., data = studentstd_Gmat,
            prior=c(prior(R2D2(mean_R2 = 1/3, prec_R2 = 3, cons_D2 = .3,
                               autoscale = FALSE),class=b),
                    prior(normal(0,1), class=sigma)),
            normalize=FALSE)
fitm <- add_criterion(fitm, criterion='loo')

Posterior-R^2 (bayes_R2()) and LOO-R^2 (loo_R2()) are now more similar, indicating that the prior is not pushing the posterior towards higher R^2 values.

bayes_R2(fitm) |> as.data.frame() |> tt()
Estimate Est.Error Q2.5 Q97.5
0.24 0.036 0.17 0.31
loo_R2(fitm) |> as.data.frame() |> tt()
Estimate Est.Error Q2.5 Q97.5
0.21 0.031 0.15 0.27

The following plot shows the Beta(1,2) prior on R^2 and the posterior of R^2.

mcmc_hist(data.frame(Prior=rbeta(4000, 1, 2),
                     Posterior=bayes_R2(fitm, summary=FALSE)),
          breaks=seq(0,1,length.out=100),
          facet_args = list(nrow = 2)) +
  facet_text(size = 13) +
  scale_x_continuous(limits = c(0,1), expand = c(0, 0),
                     labels = c("0","0.25","0.5","0.75","1")) +
  theme(axis.line.y = element_blank()) +
  xlab("Bayesian R^2")
Figure 4

We plot the marginal posteriors for coefficients. For many coefficients the posterior has been shrunk close to 0. Some marginal posteriors are wide.

drawsm <- as_draws_df(fitm, variable=paste0('b_',predictors)) |>
  set_variables(predictors)
p <- mcmc_areas(drawsm,
                 prob_outer=0.98, area_method = "scaled height") +
  xlim(c(-3,3))
p <- p + scale_y_discrete(limits = rev(levels(p$data$parameter)))
p
Figure 5

We check the bivariate marginal for the Fedu and Medu coefficients, and see that while the univariate marginals overlap with 0, jointly there is not much posterior mass near 0. This is due to Fedu and Medu being collinear. Collinearity of predictors makes it difficult to infer predictor relevance from the marginal posteriors.

mcmc_scatter(drawsm, pars = c("Fedu","Medu")) +
  vline_0(linetype='dashed') +
  hline_0(linetype='dashed')
Figure 6
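
A quick way to see the joint behavior numerically (a sketch using the posterior package) is to summarize the sum of the two coefficients, which is better identified than either coefficient alone:

# Under collinearity the sum Fedu + Medu is better identified than
# the individual coefficients
drawsm |>
  mutate_variables(FeduMedu = Fedu + Medu) |>
  subset_draws(variable = "FeduMedu") |>
  summarise_draws()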

8 Model checking

We’re using a normal observation model, although we know that the exam scores are in a bounded range. Posterior predictive checking shows that we sometimes predict exam scores higher than 20, but the discrepancy is minor.

pp_check(fitm, type="hist", ndraws=5)
Figure 7

The PIT-ECDF plot shows that otherwise the normal model is quite well calibrated.

pp_check(fitm, type="pit_ecdf")
Figure 8

We could use a truncated normal as a more accurate model (see the sketch below), but, for example, a beta-binomial model cannot be used for the median exam scores, as some of the medians are not integers. A fancier model could treat the three exams hierarchically, but as the normal model is not that bad, we continue with it.
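
For reference, a sketch of how the truncated normal model could be specified in brms (not fitted or used further here):

# Truncated normal observation model; exam scores are bounded to [0, 20]
fitmt <- brm(Gmat | trunc(lb = 0, ub = 20) ~ ., data = studentstd_Gmat,
             prior = c(prior(R2D2(mean_R2 = 1/3, prec_R2 = 3, cons_D2 = .3,
                                  autoscale = FALSE), class = b),
                       prior(normal(0, 1), class = sigma)),
             normalize = FALSE)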

9 Projection predictive variable selection

We use the projection predictive variable selection implemented in the projpred R package to find the minimal set of predictors that can provide predictive performance similar to all predictors jointly. By default, projpred starts from the intercept-only model and uses forward search to find the order in which to add predictors so as to minimize the divergence from the full model's predictive distribution.

9.1 Math exam scores

We start by doing fast PSIS-LOO-CV only for the full-data search path.

vselm_fast <- cv_varsel(fitm, nterms_max = 27, validate_search = FALSE)

The following plot shows the relevance order of the predictors and the estimated predictive performance given those variables. As the search can overfit and we didn’t cross-validate the search, the performance estimates can go above the reference model performance. However, this plot helps us to see that 10 or fewer predictors would be sufficient.

plot(vselm_fast, stats=c("elpd", "R2"), deltas=TRUE,
     text_angle = 45, alpha = 0.1, 
     size_position = "primary_x_top", show_cv_proportions=FALSE) +
  geom_vline(xintercept = seq(0, 25, by = 5), colour = "black", alpha = 0.1)
Figure 9

Next we repeat the search, but this time cross-validate the search, too. We run the search with the PSIS-LOO-CV criterion only for nloo=50 folds, and combine the result with the fast PSIS-LOO result using the difference estimator (Magnusson et al. 2020). The use of sub-sampling LOO affects only the model size selection, and provided the selection is stable, the projection for the selected model is as good as with the computationally more expensive full search validation. Based on the previous quick result, we search only up to models of size 10. On my laptop with 8 parallel workers, this takes less than 5 minutes.

registerDoFuture()
plan(multisession, workers = 8)
tic()
vselm <- cv_varsel(fitm, nterms_max = 10, validate_search = TRUE,
                   refit_prj = TRUE, nloo = 50,
                   parallel = TRUE, verbose = TRUE)
toc()
plan(sequential)

The following plot shows the relevance order of the predictors and the estimated predictive performance given those variables. The order is the same as in the previous plot, but now the predictive performance estimates take the search into account and have smaller bias. It seems that just four predictors can provide predictive performance similar to using all the predictors.

plot(vselm, stats=c("elpd","R2"), deltas=TRUE,
     text_angle=45, alpha=0.1, 
     size_position = 'primary_x_top', show_cv_proportions=FALSE) +
  geom_vline(xintercept=seq(0,10,by=5), colour='black', alpha=0.1)
Figure 10

projpred can also provide a suggestion for a sufficient model size.

(nselm <- suggest_size(vselm))
[1] 4

Form the projected posterior for the selected model.

rankm <- ranking(vselm, nterms=nselm)
projm <- project(vselm, nterms=nselm)
drawsm_proj <- as_draws_df(projm) |>
  subset_draws(variable=paste0('b_',rankm$fulldata[1:nselm])) |>
  set_variables(variable=rankm$fulldata[1:nselm])

The marginals of the projected posterior are all clearly separated from 0.

mcmc_areas(drawsm_proj, prob_outer=0.98, area_method = "scaled height")
Figure 11

The following plot shows the stability of the search over the different LOO-CV folds. The numbers indicate the proportion of folds in which the given predictor was included in a model of that size or smaller.

plot(cv_proportions(rankm, cumulate=TRUE))
Figure 12

9.2 Portuguese exam scores

We repeat the same analysis, but now predicting the Portuguese grade instead of mathematics.

Fit a model with R2D2 prior with mean 1/3 and precision 3.

fitp <- brm(Gpor ~ ., data = studentstd_Gpor,
              prior=c(prior(R2D2(mean_R2 = 1/3, prec_R2 = 3, cons_D2 = .2,
                                autoscale = FALSE),class=b)))

Compare posterior-R^2 and LOO-R^2. We see that the Portuguese grade is easier to predict given the predictors (though there is still a lot of unexplained variance).

fitp <- add_criterion(fitp, criterion='loo')
bayes_R2(fitp) |> round(2)
   Estimate Est.Error Q2.5 Q97.5
R2     0.31      0.04 0.23  0.38
loo_R2(fitp) |> round(2)
   Estimate Est.Error Q2.5 Q97.5
R2     0.27      0.04  0.2  0.34

Plot marginal posteriors of coefficients

drawsp <- as_draws_df(fitp, variable=paste0('b_',predictors)) |>
  set_variables(predictors)
p <- mcmc_areas(drawsp, prob_outer=0.98, area_method = "scaled height") +
  xlim(c(-3,3))
p <- p + scale_y_discrete(limits = rev(levels(p$data$parameter)))
p
Figure 13

We use projection predictive variable selection with fast PSIS-LOO-CV only for the full-data search path.

vselp_fast <- cv_varsel(fitp, nterms_max=27, validate_search=FALSE)

The following plot shows the relevance order of the predictors and the estimated predictive performance given those variables. As there is some overfitting in the search and we didn’t cross-validate the search, the performance estimates can go above the reference model performance. However, this plot helps us to see that 10 or fewer predictors would be sufficient.

plot(vselp_fast, stats = c("elpd","R2"), deltas=TRUE,
     text_angle=45, alpha=0.1, 
     size_position = 'primary_x_top', show_cv_proportions=FALSE) +
  geom_vline(xintercept=seq(0,25,by=5), colour='black', alpha=0.1)
Figure 14

Next we repeat the search, but this time cross-validate the search, too. We run the search with the PSIS-LOO-CV criterion only for nloo=50 folds, and combine the result with the fast PSIS-LOO result using the difference estimator (Magnusson et al. 2020). The use of sub-sampling LOO affects only the model size selection, and provided the selection is stable, the projection for the selected model is as good as with the computationally more expensive full search validation. Based on the previous quick result, we search only up to models of size 10. On my laptop with 8 parallel workers, this takes less than 5 minutes.

registerDoFuture()
plan(multisession, workers = 8)
tic()
vselp <- cv_varsel(fitp, nterms_max=10, validate_search=TRUE,
                   refit_prj=TRUE, nloo=50,
                   parallel=TRUE)
toc()
plan(sequential)

The following plot shows the relevance order of the predictors and the estimated predictive performance given those variables. The order is the same as in the previous plot, but now the predictive performance estimates take the search into account and have smaller bias. It seems that just seven predictors can provide predictive performance similar to using all the predictors.

plot(vselp, stats=c("elpd", "R2"), deltas=TRUE,
     text_angle=45, alpha=0.1,
     size_position = 'primary_x_top', show_cv_proportions=FALSE) +
   geom_vline(xintercept=seq(0,10,by=5), colour='black', alpha=0.1)
Figure 15

projpred can also provide suggestion for the sufficient model size.

(nselp <- suggest_size(vselp))
[1] 7

Form the projected posterior for the selected model.

rankp <- ranking(vselp, nterms=nselp)
projp <- project(vselp, nterms=nselp)
drawsp_proj <- as_draws_df(projp) |>
  subset_draws(variable=paste0('b_',rankp$fulldata[1:nselp])) |>
  set_variables(variable=rankp$fulldata[1:nselp])

The marginals of the projected posterior are all clearly separated from 0.

mcmc_areas(drawsp_proj, prob_outer=0.98, area_method = "scaled height")
Figure 16

The following plot shows the stability of the search over the different LOO-CV folds. The numbers indicate the proportion of folds in which the given predictor was included in a model of that size or smaller.

plot(cv_proportions(rankp, cumulate=TRUE))
Figure 17

9.3 Using the selected model

For further predictions we can use the projected draws. Due to how different packages work, it can sometimes be easier to rerun MCMC conditionally on the selected variables. This gives a slightly different result, but when the reference model is good the difference tends to be small, and the main benefit from using projpred remains: the selection process itself has not caused overfitting or selection of spurious covariates.
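
For illustration, both options could look like the following sketch (using the math-model objects from Section 9.1; newstudents is a hypothetical data frame of new observations):

# Option 1: predict with the projected draws (no refitting needed)
# predm <- proj_predict(projm, newdata = newstudents)
# Option 2: rerun MCMC conditionally on the selected predictors
formula_sel <- reformulate(rankm$fulldata[1:nselm], response = "Gmat")
fitm_sel <- brm(formula_sel, data = studentstd_Gmat,
                prior = prior(R2D2(mean_R2 = 1/3, prec_R2 = 3, cons_D2 = .3,
                                   autoscale = FALSE), class = b),
                normalize = FALSE)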

References

Gelman, Andrew, Ben Goodrich, Jonah Gabry, and Aki Vehtari. 2019. “R-Squared for Bayesian Regression Models.” The American Statistician 73 (3): 307–9.
Magnusson, Måns, Michael Riis Andersen, Johan Jonasson, and Aki Vehtari. 2020. “Leave-One-Out Cross-Validation for Bayesian Model Comparison in Large Data.” In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108:341–51. PMLR.
McLatchie, Yann, Sölvi Rögnvaldsson, Frank Weber, and Aki Vehtari. 2023. “Robust and Efficient Projection Predictive Inference.” arXiv Preprint arXiv:2306.15581.
Piironen, Juho, Markus Paasiniemi, and Aki Vehtari. 2020. “Projective Inference in High-Dimensional Problems: Prediction and Feature Selection.” Electronic Journal of Statistics 14 (1): 2155–97.
Tosh, Christopher, Philip Greengard, Ben Goodrich, Andrew Gelman, Aki Vehtari, and Daniel Hsu. 2021. “The Piranha Problem: Large Effects Swimming in a Small Pond.” arXiv:2105.13445.
Zhang, Yan Dora, Brian P. Naughton, Howard D. Bondell, and Brian J. Reich. 2022. “Bayesian Regression Using a Prior on the Model Fit: The R2-D2 Shrinkage Prior.” Journal of the American Statistical Association 117: 862–74.

Licenses

  • Code © 2023-2024, Aki Vehtari, licensed under BSD-3.
  • Text © 2023-2024, Aki Vehtari, licensed under CC-BY-NC 4.0.

Original Computing Environment

sessionInfo()
R version 4.4.1 (2024-06-14)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 22.04.4 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.10.0 
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.10.0

locale:
 [1] LC_CTYPE=en_GB.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_DK.UTF-8        LC_COLLATE=en_GB.UTF-8    
 [5] LC_MONETARY=en_GB.UTF-8    LC_MESSAGES=en_GB.UTF-8   
 [7] LC_PAPER=fi_FI.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C       

time zone: Europe/Helsinki
tzcode source: system (glibc)

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] rstan_2.35.0.9000       StanHeaders_2.35.0.9000 tictoc_1.2.1           
 [4] doFuture_1.0.1          future_1.34.0           foreach_1.5.2          
 [7] ggdist_3.3.2            patchwork_1.2.0         matrixStats_1.4.1      
[10] dplyr_1.1.4             tinytable_0.3.0.5       bayesplot_1.11.1.9000  
[13] ggplot2_3.5.1           projpred_2.8.0.9000     loo_2.8.0              
[16] posterior_1.6.0         cmdstanr_0.8.1.9000     brms_2.21.6            
[19] Rcpp_1.0.13            

loaded via a namespace (and not attached):
 [1] tidyselect_1.2.1      farver_2.1.2          fastmap_1.2.0        
 [4] tensorA_0.36.2.1      digest_0.6.36         estimability_1.5.1   
 [7] lifecycle_1.0.4       processx_3.8.4        magrittr_2.0.3       
[10] compiler_4.4.1        rlang_1.1.4           tools_4.4.1          
[13] utf8_1.2.4            yaml_2.3.10           knitr_1.48           
[16] labeling_0.4.3        bridgesampling_1.1-2  htmlwidgets_1.6.4    
[19] pkgbuild_1.4.4        curl_5.2.1            plyr_1.8.9           
[22] RColorBrewer_1.1-3    abind_1.4-5           withr_3.0.1          
[25] grid_4.4.1            stats4_4.4.1          fansi_1.0.6          
[28] xtable_1.8-4          colorspace_2.1-1      inline_0.3.19        
[31] ggridges_0.5.6        globals_0.16.3        emmeans_1.10.3       
[34] scales_1.3.0          iterators_1.0.14      cli_3.6.3            
[37] mvtnorm_1.3-1         rmarkdown_2.27        generics_0.1.3       
[40] RcppParallel_5.1.9    future.apply_1.11.2   reshape2_1.4.4       
[43] stringr_1.5.1         parallel_4.4.1        vctrs_0.6.5          
[46] V8_4.4.2              Matrix_1.7-0          jsonlite_1.8.8       
[49] listenv_0.9.1         glue_1.7.0            parallelly_1.38.0    
[52] codetools_0.2-20      ps_1.7.7              distributional_0.4.0 
[55] stringi_1.8.4         gtable_0.3.5          QuickJSR_1.3.1       
[58] quadprog_1.5-8        munsell_0.5.1         tibble_3.2.1         
[61] pillar_1.9.0          htmltools_0.5.8.1     Brobdingnag_1.2-9    
[64] R6_2.5.1              evaluate_0.24.0       lattice_0.22-6       
[67] backports_1.5.0       rstantools_2.4.0.9000 coda_0.19-4.1        
[70] gridExtra_2.3         nlme_3.1-165          checkmate_2.3.2      
[73] xfun_0.46             pkgconfig_2.0.3