Kernel Approximation of Wasserstein and Fisher-Rao Gradient Flows

Event details

Date 27.11.2025
Hour 16:00 – 17:00
Speaker Dr. JJ Zhu (WIAS Berlin)
Category Conferences - Seminars
Event Language English

Abstract:
Gradient flows have emerged as a powerful framework for analyzing machine learning and statistical inference algorithms. Motivated by applications in statistical inference, generative modeling, and the generalization and robustness of learning algorithms, I will present several new results on the kernel approximation of gradient flows, including a hidden link between the gradient flows of the kernel maximum mean discrepancy and of relative entropies. These findings not only advance our theoretical understanding but also provide practical tools for improving machine learning algorithms. I will showcase inference and sampling algorithms based on a new kernel approximation of Wasserstein-Fisher-Rao (a.k.a. Hellinger-Kantorovich) gradient flows, which admit a sharper convergence characterization and improved computational performance.

The talk is based on joint work with Alexander Mielke.
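For readers unfamiliar with the first topic in the abstract, a gradient flow of the kernel maximum mean discrepancy (MMD) moves a set of particles along the negative gradient of the MMD witness function toward a target sample. The sketch below is not the speaker's construction; it is a minimal NumPy illustration assuming a Gaussian kernel, an explicit Euler time discretization, and hypothetical helper names (gaussian_kernel_grad, mmd_flow_step).

```python
import numpy as np

def gaussian_kernel_grad(x, Z, sigma):
    # Gradient w.r.t. x of (1/m) * sum_j k(x, z_j) for a Gaussian kernel
    # k(x, z) = exp(-||x - z||^2 / (2 sigma^2)).
    diff = x - Z                                             # (m, d)
    w = np.exp(-np.sum(diff**2, axis=1) / (2 * sigma**2))    # (m,)
    return -(w[:, None] * diff).sum(axis=0) / (sigma**2 * len(Z))

def mmd_flow_step(X, Y, sigma=1.0, step=0.1):
    # One explicit Euler step of a sample-based MMD gradient flow:
    # each particle descends the witness function of MMD(empirical(X), empirical(Y)).
    X_new = np.empty_like(X)
    for i, x in enumerate(X):
        grad_witness = gaussian_kernel_grad(x, X, sigma) - gaussian_kernel_grad(x, Y, sigma)
        X_new[i] = x - step * grad_witness
    return X_new

# Toy usage: push particles X toward target samples Y.
rng = np.random.default_rng(0)
X = rng.normal(loc=-2.0, size=(200, 2))
Y = rng.normal(loc=+2.0, size=(200, 2))
for _ in range(500):
    X = mmd_flow_step(X, Y, sigma=1.0, step=0.5)
```

This toy flow only transports particle positions; the Wasserstein-Fisher-Rao flows discussed in the talk additionally allow mass creation and destruction (the Fisher-Rao/Hellinger part), which this sketch does not model.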

Practical information

  • Expert
  • Free

Tags

Probability and stochastic analysis Seminar
