CDM-Seminars - Gromov-Wasserstein distances between finite spaces: Duality, computation, and entropic approximation
Abstract:
In recent years, the study of computational optimal transport (OT) has advanced significantly, driven in part by its broad applicability across data science, statistics, economics, and physics. While OT distances, such as the Wasserstein metric, are well suited for comparing distributions on a common space and endow the set of probability measures on that space with a rich geometry, comparing datasets of different types -- such as text and images -- requires specifying an ad hoc cost function, which may fail to capture a meaningful correspondence between the datasets.
To address this limitation, Gromov-Wasserstein (GW) distances have been proposed as a natural extension of the OT framework for comparing metric measure (mm) spaces based only on their intrinsic structure. Notably, GW distances define a metric on the space of all mm spaces and provide a means of aligning them. Despite their broad applicability to comparing heterogeneous datasets, the computational study of GW distances remains limited. Indeed, even the entropically regularized GW problem, which practitioners most commonly use as an efficient proxy for the true GW problem, was only recently shown to admit algorithms with non-asymptotic convergence rates, and then only for Euclidean mm spaces.
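For reference, the quadratic GW distance between two mm spaces (X, d_X, mu) and (Y, d_Y, nu) is standardly defined (as in Memoli's formulation; notation here is a sketch, not taken from the talk itself) by

```latex
\mathrm{GW}_2^2\big((X,d_X,\mu),(Y,d_Y,\nu)\big)
  = \min_{\pi \in \Pi(\mu,\nu)}
    \iint_{(X\times Y)^2} \big| d_X(x,x') - d_Y(y,y') \big|^2
    \, d\pi(x,y)\, d\pi(x',y'),
```

where Pi(mu, nu) is the set of couplings of mu and nu. Because the coupling pi appears twice, this is a (generally non-convex) quadratic program rather than the linear program of classical OT, which is the source of the computational difficulty discussed above.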
This talk will outline recent progress in computational GW. Notably, we furnish a new variational formulation of the GW problem between finite mm spaces which naturally leads to new algorithms for solving this problem with and without entropic regularization. As a consequence of this analysis, we show that, under certain conditions, the iterates of our proposed method coincide with those of a variant of mirror descent, the most popular regularized GW solver, allowing us to prove the first non-asymptotic convergence rates for that approach. Furthermore, we establish that solutions of the entropic GW problem converge to solutions of the standard GW problem at an exponential rate when the underlying quadratic program is concave, but that the rate can be as slow as quadratic otherwise.
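To make the entropic GW problem concrete, the following is a minimal NumPy sketch of the standard alternating scheme for finite mm spaces with the square loss: repeatedly linearize the quadratic objective at the current coupling and solve the resulting entropic OT problem with Sinkhorn iterations. This is the generic mirror-descent-style solver referred to in the abstract, not the specific algorithm or variant presented in the talk; the function names and parameters are illustrative.

```python
import numpy as np

def sinkhorn(C, p, q, eps, n_iter=300):
    # Entropic OT: Sinkhorn iterations on the Gibbs kernel K = exp(-C/eps).
    K = np.exp(-C / eps)
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(n_iter):
        v = q / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]

def entropic_gw(C1, C2, p, q, eps=0.5, n_outer=50):
    # Entropic GW with the square loss between finite mm spaces given by
    # symmetric distance matrices C1 (n x n) and C2 (m x m) with weights p, q.
    # Each outer step linearizes the quadratic objective at the current
    # coupling G and solves the resulting entropic OT problem.
    G = np.outer(p, q)  # independent coupling as a standard initialization
    # Constant part of the linearized cost for the square loss.
    constC = (C1**2 @ p)[:, None] + (C2**2 @ q)[None, :]
    for _ in range(n_outer):
        grad = constC - 2.0 * C1 @ G @ C2  # linearized cost at G
        G = sinkhorn(grad, p, q, eps)
    # GW objective value at the final coupling.
    cost = float(np.sum((constC - 2.0 * C1 @ G @ C2) * G))
    return G, cost
```

The returned coupling plays the alignment role mentioned above: G[i, j] indicates how much mass of point i in the first space is matched to point j in the second. Production code would typically use the Python Optimal Transport (POT) library's `ot.gromov` module rather than this sketch.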
This is joint work with Ziv Goldfeld, Riccardo Passeggeri, and their students, Venkatkrishna Karumanchi and Joanna Marks.
Short bio: Gabriel is a Chapman fellow in statistics at Imperial College London's Department of Mathematics. He obtained his Ph.D. in Applied Mathematics from Cornell University under the supervision of Prof. Ziv Goldfeld. His main research interests lie in optimization, mathematical statistics, and probability; he is particularly interested in the interplay between these topics within the context of optimal transport theory. His work is supported in part by a postgraduate fellowship from the Natural Sciences and Engineering Research Council of Canada (NSERC).
Practical information
- Informed public
- Registration required
Organizer
- Prof. Andrés Cristi