Theoretical characterization of uncertainty in high-dimensional machine learning
Event details
| Date | 31.08.2022 |
| Hour | 09:00 – 11:00 |
| Speaker | Lucas Clarte |
| Category | Conferences - Seminars |
EDIC candidacy exam
Exam president: Prof. Nicolas Flammarion
Thesis advisor: Prof. Lenka Zdeborová
Co-examiner: Prof. Matthieu Wyart
Abstract
In modern machine learning, quantifying the uncertainty
of a model’s output is required to obtain reliable predictions,
especially in sensitive applications like medical diagnosis.
Of the three papers presented here, one studies
the asymptotic behaviour of the uncertainty of logistic
regression, while the other two introduce approximate Bayesian
methods to improve uncertainty quantification. The goal of our
ongoing and future research is to provide statistical guarantees
for the uncertainty of various algorithms, in the high-dimensional
regime and in solvable models.
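As a concrete illustration of the kind of uncertainty quantification discussed in the abstract, the sketch below computes a binned expected calibration error (ECE), a standard metric for the gap between a classifier's confidence and its accuracy. The function name and binning scheme are illustrative assumptions, not taken from the talk or the papers.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average of |accuracy - confidence| over equal-width
    confidence bins; 0 means perfectly calibrated."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by fraction of points in bin
    return ece

# An overconfident classifier: predicts with confidence 0.9
# but is right only 60% of the time, giving ECE = 0.3.
conf = np.full(10, 0.9)
hits = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
print(expected_calibration_error(conf, hits))  # 0.3
```

A well-calibrated model would place roughly 90% correct predictions in the 0.9-confidence bin, driving this gap toward zero; overconfidence, the theme of papers 1 and 3, shows up as a positive ECE.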
Background papers
1) Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification (link: http://proceedings.mlr.press/v139/bai21c/bai21c-supp.pdf )
2) A Simple Baseline for Bayesian Uncertainty in Deep Learning (link: https://proceedings.neurips.cc/paper/2019/hash/118921efba23fc329e6560b27861f0c2-Abstract.html )
3) Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks (link: https://proceedings.mlr.press/v119/kristiadi20a.html )
Practical information
- General public
- Free