Accelerating convergence and reducing variance for Langevin samplers
Event details
- Date: 17.06.2015
- Hour: 14:00–16:00
- Speaker: Grigoris Pavliotis
- Category: Conferences - Seminars
Markov Chain Monte Carlo (MCMC) is a standard methodology for sampling from high-dimensional probability distributions that are known only up to a normalization constant.
There are infinitely many Markov chains and diffusion processes that can be used to sample from a given distribution. To reduce the computational cost, it is desirable to use Markov chains that converge as quickly as possible to the target distribution and that have a small asymptotic variance. In this talk, I will present some recent results on accelerating convergence to equilibrium and on reducing the asymptotic variance for a class of Langevin-based MCMC algorithms.
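As background for the abstract above, a minimal sketch of one Langevin-based MCMC sampler is the unadjusted Langevin algorithm (ULA): an Euler–Maruyama discretization of the overdamped Langevin diffusion dX_t = ∇ log π(X_t) dt + √2 dW_t, whose invariant measure is the target π. The target (a standard 1D Gaussian), step size, and sample count below are illustrative assumptions, not taken from the talk; the talk's actual results on nonreversible perturbations and variance reduction are not implemented here.

```python
import numpy as np

def grad_log_pi(x):
    # Gradient of log pi for the standard Gaussian target pi(x) ∝ exp(-x**2 / 2).
    return -x

def ula(n_steps, step=0.01, x0=0.0, seed=0):
    """Unadjusted Langevin Algorithm: Euler-Maruyama steps of the
    overdamped Langevin SDE dX_t = grad log pi(X_t) dt + sqrt(2) dW_t."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        x = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * rng.standard_normal()
        samples[i] = x
    return samples

samples = ula(100_000)
print(samples.mean(), samples.var())  # close to 0 and 1 for this target
```

Because the discretization is not exact, ULA samples from a slightly biased approximation of π; a Metropolis–Hastings correction (MALA) removes this bias at the cost of a rejection step.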
Practical information
- General public
- Free
Organizer
- CIB
Contact
- Valérie Krier