Non-Euclidean Learning via Optimization Solvers and Stability

Event details

Date 22.11.2024
Hour 15:15 – 16:15
Speaker Patrick Rebeschini, University of Oxford
Location
Category Conferences - Seminars
Event Language English

Ridge regression and gradient descent are two foundational approaches in Euclidean learning, embodying statistical and algorithmic regularization, respectively. While the extension of this framework to non-Euclidean geometries has been extensively explored in statistics through M-estimation, with the Lasso as a notable example, the development of optimization solvers that achieve optimal statistical rates in non-Euclidean settings remains less advanced.

In this talk, we present recent progress in designing generalized gradient descent methods that attain optimal statistical rates in non-Euclidean settings. This includes linear regression where the ground-truth regressor lies within an ℓ_p ball, as well as general convex loss functions that are smooth with respect to ℓ_p norms. In the latter case, we resolve an open question posed by Attia and Koren in 2022 regarding the development of a black-box approach to transform algorithms into uniformly stable ones.
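To give a concrete flavour of what a generalized gradient descent method in ℓ_p geometry can look like, the sketch below implements textbook mirror descent with the ℓ_p potential ψ(w) = ||w||_p² / (2(p−1)) for p in (1, 2]. This is an illustrative assumption for readers unfamiliar with the area, not the algorithm presented in the talk; the step size, iteration count, and least-squares example are likewise illustrative choices.

```python
# Minimal sketch (not the speaker's algorithm): mirror descent with the
# l_p potential psi(w) = ||w||_p^2 / (2(p-1)), p in (1, 2], applied to a
# least-squares loss. All parameters below are illustrative assumptions.
import numpy as np

def lp_mirror_descent(grad, w0, p=1.5, eta=0.1, n_steps=500):
    q = p / (p - 1.0)                        # dual exponent, 1/p + 1/q = 1

    def to_dual(w):                          # nabla psi: primal -> dual space
        norm = np.linalg.norm(w, ord=p)
        if norm == 0.0:
            return np.zeros_like(w)
        return norm ** (2 - p) * np.sign(w) * np.abs(w) ** (p - 1) / (p - 1)

    def to_primal(theta):                    # nabla psi*: dual -> primal space
        norm = np.linalg.norm(theta, ord=q)
        if norm == 0.0:
            return np.zeros_like(theta)
        return (p - 1) * norm ** (2 - q) * np.sign(theta) * np.abs(theta) ** (q - 1)

    w = np.asarray(w0, dtype=float).copy()
    for _ in range(n_steps):
        theta = to_dual(w) - eta * grad(w)   # gradient step taken in the dual space
        w = to_primal(theta)                 # map the iterate back to the primal space
    return w

# Illustrative use: least squares with a sparse ground-truth regressor,
# i.e. a regressor lying in a small l_1 ball.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_star = np.zeros(50)
w_star[:3] = 1.0
y = X @ w_star + 0.1 * rng.standard_normal(200)
grad = lambda w: X.T @ (X @ w - y) / len(y)
w_hat = lp_mirror_descent(grad, np.zeros(50), p=1.1)
```

For p close to 1 this potential mimics ℓ_1 geometry while remaining strongly convex with respect to the ℓ_p norm, which is what makes gradient-descent-style analyses possible in this setting.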

(Based on joint work with Tobias Wegel and Gil Kur, as well as with Simon Vary and David Martínez-Rubio)

Practical information

  • Informed public
  • Free

Organizer

  • Rajita Chandak    
