Towards Better Understanding of Likelihood-based Generative Models
Event details
Date: 03.07.2019
Hour: 10:15 - 12:15
Speaker: Mladen Dimovski
Location:
Category: Conferences - Seminars
EDIC candidacy exam
Exam president: Prof. Olivier Lévêque
Thesis advisor: Prof. Patrick Thiran
Co-examiner: Prof. François Fleuret
Abstract
Likelihood-based models are an important class of generative models that explicitly define a model density and optimize its parameters by maximizing the likelihood of the observed data. Although the maximum likelihood approach is the method of choice for traditional density estimation, its use as a training objective for approximating arbitrary probability distributions in high dimensions raises some critical questions.
In this talk, we present and analyze two widely influential classes of likelihood-based generative models: flow-based models and variational auto-encoders. We examine their potential, their intrinsic limitations, and the extent to which these limitations can be overcome. Finally, we discuss work that proposes a novel method for evaluating generative models, an important aspect of research in this domain.
Background papers
Diagnosing and Enhancing VAE Models, by Bin Dai and David Wipf. International Conference on Learning Representations, 2019.
Masked Autoregressive Flow for Density Estimation, by George Papamakarios et al. Advances in Neural Information Processing Systems, 2017.
Assessing Generative Models via Precision and Recall, by Mehdi S. M. Sajjadi et al. Advances in Neural Information Processing Systems, 2018.
Practical information
- General public
- Free