Conference Agenda
Session

TT 9c - Uncertainty Quantification for Neural Networks: Make your model predictions trustworthy

Session Abstract
Brief Description and Outline: In machine learning, the ability to make reliable predictions is paramount. Yet standard ML models and pipelines provide only point predictions, without accounting for model confidence (or the lack thereof). Quantifying uncertainty in model outputs, especially when faced with out-of-distribution (OOD) data, is essential when deploying models in production. This tutorial serves as an introduction to the concepts and techniques for quantifying uncertainty in machine learning models. We will explore the different sources of uncertainty and cover various methods for estimating these uncertainties effectively.

Goals: Provide participants with basic knowledge and go-to tools and methods for applying UQ in their work.

Presenter's Experience: Steve Schmerler, AI consultant @ HZDR: I have hosted a UQ workshop and a world café table at previous HAICONs. Further, I have taught neural network intro courses such as https://github.com/elcorto/2024-thrill-school-machine-learning and developed educational material for Gaussian processes (https://elcorto.github.io/gp_playground).

Target Audience: Familiarity with neural networks and basic probability (ML textbook level) is expected; background in Bayesian statistics would be ideal but is not required.

Keywords: uncertainty, approximate Bayesian, calibration
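To give a flavor of the kind of method the tutorial covers, here is a minimal NumPy sketch of one common UQ technique, deep ensembles: predictive uncertainty is read off from the disagreement between independently trained models. The "ensemble members" below are toy linear models standing in for trained neural networks; all names and parameter values are illustrative, not taken from the tutorial material.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an ensemble: each member is a linear model whose
# parameters are slightly perturbed, mimicking independent training runs.
n_members = 10
true_w, true_b = 2.0, -1.0
members = [
    (true_w + rng.normal(0.0, 0.1), true_b + rng.normal(0.0, 0.1))
    for _ in range(n_members)
]

def ensemble_predict(x):
    """Predictive mean and std. dev. across ensemble members."""
    preds = np.array([w * x + b for w, b in members])
    return preds.mean(axis=0), preds.std(axis=0)

# In-distribution-like input: members agree, spread is small.
mean_in, std_in = ensemble_predict(np.array([1.0]))
# Far-away ("OOD-like") input: member disagreement grows with |x|,
# so the reported uncertainty grows as well.
mean_ood, std_ood = ensemble_predict(np.array([100.0]))
assert std_ood[0] > std_in[0]
```

The key property illustrated here is that ensemble spread increases away from the training regime, which is exactly the behavior one wants from a UQ method when facing OOD data.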