On the well-posedness of Bayesian inverse problems
Event details
Date | 11.02.2020
Hour | 14:15 - 15:15
Speaker | Mr. Jonas Latz
Location |
Category | Conferences - Seminars
Computational Mathematics Seminar
The subject of this talk is the introduction of a new concept of well-posedness of Bayesian inverse problems. The conventional concept of (Lipschitz, Hellinger) well-posedness in [Stuart 2010, Acta Numerica 19, pp. 451-559] is difficult to verify in practice and may be inappropriate in some contexts. Our concept simply replaces the Lipschitz continuity of the posterior measure in the Hellinger distance by continuity in an appropriate distance between probability measures. Aside from the Hellinger distance, we investigate well-posedness with respect to weak convergence, the total variation distance, the Wasserstein distance, and also the Kullback-Leibler divergence. We demonstrate that the weakening to continuity is tolerable and that the generalisation to other distances is important. The main results of this work are proofs of well-posedness with respect to some of the aforementioned distances for large classes of Bayesian inverse problems. Here, little or no information about the underlying model is necessary, making these results particularly interesting for practitioners using black-box models. We illustrate our findings with numerical examples motivated by machine learning and image processing.
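For orientation, the following is a minimal sketch of the two notions contrasted in the abstract, in notation assumed here for illustration only (y denotes the data, \mu^{y} the corresponding posterior measure, d_{\mathrm{H}} the Hellinger distance, and d a generic distance or divergence on probability measures):

\[
\text{(Lipschitz, Hellinger) well-posedness:}\qquad
d_{\mathrm{H}}\bigl(\mu^{y},\mu^{y'}\bigr) \;\le\; C\,\|y - y'\| \quad \text{for all data } y, y',
\]
\[
\text{weakened notion:}\qquad
y' \to y \;\Longrightarrow\; d\bigl(\mu^{y'},\mu^{y}\bigr) \to 0,
\]

i.e. the posterior is only required to depend continuously on the data, with respect to a chosen distance d such as the Hellinger distance, total variation, the Wasserstein distance, weak convergence, or the Kullback-Leibler divergence.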
Practical information
- General public
- Free
Organizer
- Prof. Daniel Kressner
Contact
- Prof. Daniel Kressner