appliedAI Block Seminar: Uncertainty Quantification Block Seminar #4
We would like to invite you to the “Uncertainty Quantification in ML Block Seminar”, where we discuss six pivotal publications in the series
Time and Location
Date and Time
Venue
Online
About this event
The appliedAI Institute for Europe gGmbH would like to invite you to take part in the “Uncertainty Quantification in ML Block Seminar”, where six pivotal publications in the series will be discussed.
An ML model generates a solution based on the training data. However, if the uncertainty in the data and the model parameters is not accounted for, there is a high risk of failure in real-world deployment. Uncertainty quantification techniques play an important role in mitigating the effects of such uncertainties. That said, quantifying errors and uncertainties in ML algorithms is more complicated than in traditional methods, because ML involves additional sources of uncertainty on top of the aleatoric uncertainty associated with noisy data.
In this block seminar, we will introduce the topic of Uncertainty Quantification in ML and shed light on the different techniques used to achieve it, outlining their advantages and disadvantages.
The papers to be covered:
1. Single Shot MC Dropout Approximation on 20.10.2022
2. Aleatoric and Epistemic Uncertainty in Machine Learning: An Introduction to Concepts and Methods on 02.11.2022
3. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning on 17.11.2022
4. Robustly representing uncertainty through sampling in deep neural networks on 30.11.2022
5. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles on 24.01.2023
6. Sampling-free Epistemic Uncertainty Estimation Using Approximated Variance Propagation on 07.02.2023
In this talk, Dr. Fabio Peruzzo will be presenting the paper “Robustly representing uncertainty through sampling in deep neural networks”.
In the previous instalments of our seminar series, we explored how Bayesian methods can be used to estimate the epistemic uncertainty of a model, i.e. the uncertainty arising from the choice of the network’s parameters. In particular, two weeks ago we saw how variational inference applied to Bernoulli and Gaussian dropout can make Bayesian NNs tractable under certain assumptions. In this week’s seminar we will focus on a few numerical experiments that highlight the benefits and limitations of this approach. We will start with a brief summary of the main mathematical results seen so far, then discuss the difference between dropout and dropconnect, and end with an analysis of their robustness to noise when applied to CNNs trained on the MNIST and CIFAR-10 datasets. Throughout the talk we will use the paper “Robustly representing uncertainty through sampling in deep neural networks” as our main reference.
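To give a flavour of the sampling idea discussed in the talk, here is a minimal sketch of Monte Carlo dropout for estimating epistemic uncertainty. The toy two-layer network, its weights, and the dropout rate are illustrative assumptions, not taken from the paper; the key point is that dropout stays active at inference time, so repeated stochastic forward passes yield a sample of predictions whose spread serves as an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with random weights (illustrative only).
W1 = rng.normal(size=(4, 16))   # input -> hidden
W2 = rng.normal(size=(16, 1))   # hidden -> output
P_DROP = 0.5                    # Bernoulli dropout rate

def forward(x, rng):
    """One stochastic forward pass with dropout kept on at inference."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > P_DROP  # Bernoulli dropout mask
    h = h * mask / (1.0 - P_DROP)        # inverted-dropout scaling
    return h @ W2

x = rng.normal(size=(1, 4))

# T stochastic passes approximate the predictive distribution.
T = 200
samples = np.array([forward(x, rng) for _ in range(T)])

mean = samples.mean()  # predictive mean
std = samples.std()    # spread = epistemic uncertainty estimate
print(f"prediction: {mean:.3f} +/- {std:.3f}")
```

In the Gaussian-dropout and dropconnect variants covered in the seminar, only the noise model changes (multiplicative Gaussian noise on activations, or Bernoulli masks on the weights themselves); the sample-mean-and-spread recipe stays the same.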
We are looking forward to meeting you there!
Please join via the following link:
https://teams.microsoft.com/l/meetup-join/19%3ameeting_Yzc1YWJhMzAtYTNjMC00YjEzLTk5ZWUtNjA4YWIwNTg2OTYw%40thread.v2/0?context=%7b%22Tid%22%3a%224c60ce49-90f8-4744-bfa3-a22400d9629a%22%2c%22Oid%22%3a%227ec3c098-c1c8-494b-9490-e05ef946edc5%22%7d
Disclaimer:
1. The appliedAI Institute for Europe gGmbH is supported by the KI-Stiftung Heilbronn gGmbH.