From distributional ambiguity to gradient flows: Wasserstein, Fisher-Rao, and kernel approximation
Event details
Date | 28.11.2024
Hour | 14:00–16:00
Speaker | Dr. Jia-Jie Zhu, Weierstrass Institute, Berlin
Location |
Category | Conferences - Seminars
Event Language | English
Abstract: Recent advances in distributionally robust optimization are distinct in that their ambiguity sets are motivated by the theory of optimal transport and information divergences. The theoretical foundation of those fields has received a major push from the theory of PDEs and gradient flows over the last couple of decades. Motivated by several applications in inference and generative modeling, I will present a few new results on the kernel approximation of Wasserstein and Fisher-Rao gradient flows, such as a hidden link between the flows of the kernel maximum-mean discrepancy and relative entropies. These findings not only advance our theoretical understanding but also provide practical tools for enhancing machine learning algorithms. Finally, I will showcase inference and sampling algorithms using a new kernel approximation of Wasserstein-Fisher-Rao (a.k.a. Hellinger-Kantorovich) gradient flows, which enjoy sharper convergence characterizations and improved computational performance.
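As background for the kernel maximum-mean discrepancy (MMD) flows mentioned in the abstract, the following is a minimal illustrative sketch of a particle discretization of MMD gradient descent: particles are moved by explicit Euler steps along the negative gradient of the squared MMD to a fixed target sample. This is generic textbook-style background, not the speaker's algorithm; the Gaussian kernel, bandwidth, step size, and sample sizes are all assumptions made for the example.

```python
# Illustrative sketch: explicit Euler particle descent on the squared kernel
# maximum-mean discrepancy MMD^2(mu, nu) between empirical measures.
# Background for the abstract only, NOT the speaker's method; the Gaussian
# kernel, bandwidth sigma, step size eta, and sample sizes are assumptions.
import numpy as np

def gaussian_kernel_grad(x, y, sigma):
    """Gradient of k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) w.r.t. x.
    x: (n, d), y: (m, d); returns an (n, m, d) array of gradients."""
    diff = x[:, None, :] - y[None, :, :]                      # (n, m, d)
    k = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma**2))  # (n, m)
    return -diff * k[..., None] / sigma**2

def mmd_flow_step(x, y, sigma=1.0, eta=0.5):
    """One Euler step of the MMD^2 particle flow toward the target sample y.
    Uses grad_{x_i} MMD^2 = (2/n^2) sum_l grad_1 k(x_i, x_l)
                          - (2/(n m)) sum_j grad_1 k(x_i, y_j)."""
    n, m = len(x), len(y)
    grad = (2 / n**2) * gaussian_kernel_grad(x, x, sigma).sum(axis=1) \
         - (2 / (n * m)) * gaussian_kernel_grad(x, y, sigma).sum(axis=1)
    return x - eta * grad

rng = np.random.default_rng(0)
x = rng.normal(loc=-3.0, size=(200, 2))   # source particles
y = rng.normal(loc=+3.0, size=(200, 2))   # target samples
for _ in range(500):
    x = mmd_flow_step(x, y)
print("particle mean after flow:", x.mean(axis=0))  # drifts toward (~3, 3)
```

Such pure MMD flows are known to stall in practice, which is part of the motivation, sketched in the talk, for Fisher-Rao and Wasserstein-Fisher-Rao geometries that also reweight mass rather than only transporting it.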
Bio sketch: Jia-Jie Zhu is a machine learner, applied mathematician, and research group leader at the Weierstrass Institute, Berlin. Previously, he worked as a postdoctoral researcher in machine learning at the Max Planck Institute for Intelligent Systems, Tübingen, and received his Ph.D. training in optimization at the University of Florida, USA. He is interested in the intersection of machine learning, analysis, and optimization, on topics such as gradient flows of probability measures, optimal transport, and the robustness of learning and optimization algorithms.
Practical information
- Informed public
- Registration required
Organizer
- Prof. Daniel Kuhn