Kernel Approximation of Wasserstein and Fisher-Rao Gradient Flows


Event details

Date 27.11.2024
Hour 16:00–17:00
Speaker Prof. JJ Zhu (WIAS Berlin)
Category Conferences - Seminars
Event Language English

Abstract:
Gradient flows have emerged as a powerful framework for analyzing machine learning and statistical inference algorithms. Motivated by applications in statistical inference, generative modeling, and the generalization and robustness of learning algorithms, I will present several new results on the kernel approximation of gradient flows, including a hidden link between the gradient flows of the kernel maximum mean discrepancy and relative entropies. These findings not only advance our theoretical understanding but also provide practical tools for improving machine learning algorithms. I will showcase inference and sampling algorithms that use a new kernel approximation of the Wasserstein-Fisher-Rao (a.k.a. Hellinger-Kantorovich) gradient flows, which enjoy better convergence characterizations and improved computational performance.

This talk is based on joint work with Alexander Mielke.
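To make the objects in the abstract concrete, the sketch below shows a standard weighted-particle discretization of a Wasserstein-Fisher-Rao-type gradient flow of the squared MMD with a Gaussian kernel: the Wasserstein (transport) part moves particle positions along the negative gradient of the flow's first variation, while the Fisher-Rao (reaction) part reweights particles by the centered first variation. This is a generic, textbook-style illustration, not the speaker's specific kernel approximation; the kernel choice, step sizes, and all function names are assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all pairs; shape (n, m)."""
    sq = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma**2))

def first_variation(X, w, Y, sigma):
    """First variation of F(mu) = 0.5 * MMD^2(mu, nu) at the weighted empirical
    measure mu = sum_i w_i delta_{x_i}, evaluated at each x_i:
    f(x) = int k(x, .) dmu - int k(x, .) dnu (nu = uniform measure on Y)."""
    return gaussian_kernel(X, X, sigma) @ w - gaussian_kernel(X, Y, sigma).mean(axis=1)

def grad_first_variation(X, w, Y, sigma):
    """Gradient of the first variation f at each particle x_i; shape (n, d)."""
    def pair_grad(A, B, weights):
        diff = A[:, None, :] - B[None, :, :]                 # (n, m, d)
        k = np.exp(-np.sum(diff**2, -1) / (2 * sigma**2))    # (n, m)
        # grad_x k(x, y) = -(x - y) / sigma^2 * k(x, y), summed with weights
        return np.einsum("nmd,nm,m->nd", -diff / sigma**2, k, weights)
    return pair_grad(X, X, w) - pair_grad(X, Y, np.full(len(Y), 1.0 / len(Y)))

def wfr_mmd_step(X, w, Y, step=0.5, sigma=1.5):
    """One explicit-Euler step of a Wasserstein-Fisher-Rao-type flow of MMD^2:
    transport moves particles, the reaction reweights them (birth-death)."""
    f = first_variation(X, w, Y, sigma)
    X = X - step * grad_first_variation(X, w, Y, sigma)   # Wasserstein part
    w = w * np.exp(-step * (f - w @ f))                   # Fisher-Rao part
    return X, w / w.sum()                                 # renormalize weights

# Usage: flow standard-normal particles toward a shifted target sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))                      # source particles
w = np.full(300, 1.0 / 300)                        # uniform initial weights
Y = 0.5 * rng.normal(size=(300, 2)) + [2.0, 0.0]   # target samples
for _ in range(400):
    X, w = wfr_mmd_step(X, w, Y)
print(np.average(X, axis=0, weights=w))            # should approach [2, 0]
```

Dropping the reaction line recovers the plain Wasserstein (MMD) gradient flow; dropping the transport line leaves a pure Fisher-Rao reweighting dynamic. The exponential weight update is one common choice that keeps weights positive.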

Speaker Bio:
Jia-Jie Zhu (https://jj-zhu.github.io/) is a machine learner, applied mathematician, and research group leader at the Weierstrass Institute, Berlin. Previously, he worked as a postdoctoral researcher in machine learning at the Max-Planck-Institute for Intelligent Systems, Tübingen, and received his Ph.D. training in optimization at the University of Florida, USA. He is interested in the intersection of machine learning, analysis, and optimization, on topics such as gradient flows of probability measures, optimal transport, and the robustness of learning and optimization algorithms.

Practical information

  • Expert
  • Free


Tags

Probability and stochastic analysis Seminar
