BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Optimal convex M-estimation via score matching
DTSTART:20241115T153000
DTEND:20241115T163000
DTSTAMP:20260315T144704Z
UID:709a4d3a91b528f6a02da54b91dcc31a67f4d14f308d0636d68bfaa6
CATEGORIES:Conferences - Seminars
DESCRIPTION:Oliver Feng\, University of Bath\nIn the context of linear reg
 ression\, we construct a data-driven convex loss function with respect to 
 which empirical risk minimisation yields optimal asymptotic variance in th
 e downstream estimation of the regression coefficients.\nOur semiparametri
 c approach targets the best decreasing approximation of the derivative of 
 the log-density of the noise distribution. At the population level\, this 
 fitting process is a nonparametric extension of score matching\, correspon
 ding to a log-concave projection of the noise distribution with respect to
  the Fisher divergence. The procedure is computationally efficient\, and w
 e prove that it attains the minimal asymptotic covariance among all conve
 x M-estimators.\nAs an example of a non-log-concave setting\, fo
 r Cauchy errors\, the optimal convex loss function is Huber-like\, and our
  procedure yields an asymptotic efficiency greater than 0.87 relative to t
 he oracle maximum likelihood estimator of the regression coefficients that
  uses knowledge of this error distribution\; in this sense\, we obtain rob
 ustness without sacrificing much efficiency. Numerical experiments confirm
  the practical merits of our proposal.\nThis is joint work with Yu-Chun Ka
 o\, Min Xu and Richard Samworth.
LOCATION:CM 1 517 https://plan.epfl.ch/?room==CM%201%20517
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
