## Talks and tutorials

• On Bayesian Workflow (45min)
Talk given at the Finnish Center for Artificial Intelligence (FCAI) machine learning coffee seminar.
• The pre-print Bayesian Workflow.
• Abstract: I discuss some parts of Bayesian workflow with a focus on the need and justification for iterative workflow. The talk is partly based on a review paper by Gelman, Vehtari, Simpson, Margossian, Carpenter, Yao, Kennedy, Gabry, Bürkner, and Modrák with the following abstract: "The Bayesian approach to data analysis provides a powerful way to handle uncertainty in all observations, model parameters, and model structure using probability theory. Probabilistic programming languages make it easier to specify and fit Bayesian models, but this still leaves us with many options regarding constructing, evaluating, and using these models, along with many remaining challenges in computation. Using Bayesian inference to solve real-world problems requires not only statistical skills, subject matter knowledge, and programming, but also awareness of the decisions made in the process of data analysis. All of these aspects can be understood as part of a tangled workflow of applied Bayesian statistics. Beyond inference, the workflow also includes iterative model building, model checking, validation and troubleshooting of computational problems, model understanding, and model comparison. We review all these aspects of workflow in the context of several examples, keeping in mind that applied research can involve fitting many models for any given problem, even if only a subset of them are relevant once the analysis is over."
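One early step in the iterative workflow discussed in the talk is prior predictive checking: simulating data from the priors alone before touching the real data. A minimal sketch in plain NumPy; the linear-regression model and the specific priors here are hypothetical illustrations, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior predictive check for a toy linear regression y = a + b*x + noise
# (hypothetical priors): draw parameters from the priors, simulate a
# dataset per draw, and inspect whether the implied data look plausible.
x = np.linspace(0.0, 10.0, 50)
a = rng.normal(0.0, 1.0, size=500)          # prior on intercept
b = rng.normal(0.0, 1.0, size=500)          # prior on slope
sigma = np.abs(rng.normal(0.0, 1.0, 500))   # half-normal prior on noise sd
y_sim = a[:, None] + b[:, None] * x + rng.normal(0.0, sigma[:, None], (500, 50))
print(y_sim.shape)  # one simulated dataset of 50 points per prior draw
```

In a full workflow one would plot a handful of these simulated datasets and, if they are wildly implausible, revise the priors and iterate.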

• On Bayesian Workflow (65min)
Talk given at the Generable seminar. A longer version of the Bayesian Workflow talk.

• Practical pre-asymptotic diagnostic of Monte Carlo estimates in Bayesian inference and machine learning (50min)
Oxford computational statistics and machine learning seminar talk.
• The main reference Pareto smoothed importance sampling.
• Abstract: I discuss the use of the Pareto-$k$ diagnostic as a simple and practical approach for estimating both the required minimum sample size and the empirical pre-asymptotic convergence rate of Monte Carlo estimates. Even when a Monte Carlo estimate has finite variance by construction, the pre-asymptotic behavior and convergence rate can be very different from the asymptotic behavior following the central limit theorem. I demonstrate with practical examples in importance sampling, stochastic optimization, and variational inference, which are commonly used in Bayesian inference and machine learning.
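The core idea of the diagnostic can be sketched in a few lines: fit a generalized Pareto distribution to the upper tail of the importance ratios and read off its shape parameter, k-hat. The proposal/target pair and the tail-size rule of thumb below are illustrative assumptions, not the exact recipe from the paper:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

# Illustrative importance sampling: proposal N(0, 1), target N(0, 1.5^2),
# so the importance ratios are heavy-tailed (hypothetical example).
draws = rng.normal(size=4000)
log_ratios = (-0.5 * (draws / 1.5) ** 2 - np.log(1.5)) - (-0.5 * draws**2)
ratios = np.exp(log_ratios - log_ratios.max())

# Fit a generalized Pareto distribution to the largest ratios; the fitted
# shape parameter is the Pareto k-hat diagnostic. Roughly, k-hat > 0.7
# signals that the Monte Carlo estimate is unreliable.
m = int(3 * np.sqrt(len(ratios)))   # tail size, a common rule of thumb
sorted_r = np.sort(ratios)
cutoff = sorted_r[-m - 1]           # threshold just below the retained tail
k_hat, _, _ = genpareto.fit(sorted_r[-m:] - cutoff, floc=0.0)
print(f"Pareto k-hat: {k_hat:.2f}")
```

Pareto smoothed importance sampling additionally replaces the raw tail weights with order statistics of the fitted distribution; the sketch above only covers the diagnostic part.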

• Uncertainty in Bayesian leave-one-out cross-validation based model comparison (30min)
CMStatistics conference talk.
• The pre-print Uncertainty in Bayesian leave-one-out cross-validation based model comparison.
• Abstract: Leave-one-out cross-validation (LOO-CV) is a popular method for comparing Bayesian models based on their estimated predictive performance on new, unseen data. Estimating the uncertainty of the resulting LOO-CV estimate is a complex task, and it is known that the commonly used standard error estimate is often too small. We analyse the frequency properties of the LOO-CV estimator and study the uncertainty related to it. We provide new results on the properties of this uncertainty, both theoretically and empirically, and discuss the challenges of estimating it. We show that problematic cases include comparing models with similar predictions, misspecified models, and small data. In these cases, there is a weak connection between the skewness of the sampling distribution and the distribution of the error of the LOO-CV estimator. We show that, in certain situations, the problematic skewness of the error distribution, which occurs when the models make similar predictions, does not fade away even as the data size grows to infinity.
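The standard error the abstract warns about is the usual normal-approximation SE of the summed pointwise elpd differences. A toy sketch with simulated pointwise values; the numbers are made up for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical pointwise LOO log predictive densities for two models A and B;
# model B is slightly better on average.
loo_a = rng.normal(-1.0, 0.5, size=n)
loo_b = loo_a + rng.normal(0.05, 0.3, size=n)

# elpd difference and its commonly used normal-approximation standard error:
# SE(sum of diffs) = sqrt(n * var(pointwise diffs)).
diff = loo_b - loo_a
elpd_diff = diff.sum()
se_diff = np.sqrt(n * diff.var(ddof=1))
print(f"elpd_diff = {elpd_diff:.1f} +/- {se_diff:.1f}")
```

The paper's point is precisely that this SE can be too small, and that the error distribution can stay skewed, when the models make similar predictions, are misspecified, or the data are small.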

• These are a few of my favorite inference diagnostics (59min)
PyMCon 2020 talk.
• Abstract: I discuss some old and some more recent inference diagnostics methods for Markov chain Monte Carlo, importance sampling, and variational inference. When the convergence fails, I simply remember my favorite inference diagnostics, and then I don’t feel so bad.
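One of the old favorites covered in the talk is R-hat for MCMC. A minimal NumPy sketch of the classic split-R-hat (not the newer rank-normalized version), written here as an illustration rather than a reference implementation:

```python
import numpy as np

def split_rhat(chains):
    """Classic split-R-hat: split each chain in half and compare the
    between-half variance to the within-half variance; values well
    above 1 indicate the chains have not mixed."""
    n = chains.shape[1] // 2
    halves = np.vstack([chains[:, :n], chains[:, n:2 * n]])
    chain_means = halves.mean(axis=1)
    between = n * chain_means.var(ddof=1)
    within = halves.var(axis=1, ddof=1).mean()
    var_plus = (n - 1) / n * within + between / n
    return np.sqrt(var_plus / within)

rng = np.random.default_rng(0)
good = rng.normal(size=(4, 1000))                 # four well-mixed chains
bad = good + np.array([[0.], [0.], [0.], [3.]])   # one chain stuck elsewhere
print(split_rhat(good), split_rhat(bad))          # ~1.0 vs clearly > 1.1
```

Splitting the chains is what lets the diagnostic also catch a single chain that drifts between its first and second half, not just disagreement between chains.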
• Use of reference models in variable selection (80 min)
Talk given at the Laplace's Demon seminar.
• Model assessment, selection and averaging (125min)
Tutorial given at Centre International de Rencontres Mathématiques.
• Stan and probabilistic programming introduction (39min)
Quick intro given at the Machine Learning for Signal Processing (MLSP) conference.

• posteriordb: a posterior database for Stan (20min)
StanCon 2020 Developer Talk.
• Regularized Horseshoe (25min)
StanCon 2018 Asilomar talk.
• Tutorial: Model assessment, selection and inference after model selection (78min)
StanCon 2018 Asilomar tutorial.

• Model assessment and selection (70min)
StanCon 2018 Helsinki tutorial.

• Integration over hyperparameters and estimation of predictive performance (71min)
Tutorial given at the Gaussian process summer school 2017.

• GPSS2017 workshop: On Bayesian model selection and model averaging (67min)
Gaussian process summer school 2017 workshop talk.

• Gaussian processes for survival analysis (44min)
MLPM: Machine Learning for Personalized Medicine Summer School 2015.