BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:External FLAIR seminar: Spencer Frei
DTSTART:20220930T131500
DTEND:20220930T141500
DTSTAMP:20260501T144422Z
UID:e29b82e24ac9527394ecfa92dc4d677d27891454a5197bf27dcdcc0b
CATEGORIES:Conferences - Seminars
DESCRIPTION:Spencer Frei\nTitle: Implicit bias and benign overfitting for
  neural networks in high dimensions\n\nSpeaker: Spencer Frei (UC Berkeley)
 \n\nAbstract: Benign overfitting\, the phenomenon where interpolating mod
 els generalize well in the presence of noisy data\, was first observed in 
 neural networks trained by gradient descent.  In this talk we go over som
 e recent work towards understanding this surprising phenomenon.   We firs
 t describe an implicit regularization effect of gradient descent in two-la
 yer neural networks when trained on high-dimensional datasets.  We show t
 hat in this setting\, gradient descent finds solutions which have small ra
 nk\, despite the lack of explicit regularization to encourage such structu
 re.  We then consider the generalization error of trained two-layer netwo
 rks when the data comes from a high-dimensional mixture model where a cons
 tant fraction of the training labels are uniformly random labels.  In thi
 s setting\, we show that neural networks indeed exhibit benign overfitting
 : they can be driven to zero training error\, perfectly fitting the noisy 
 training labels\, and simultaneously achieve minimax-optimal test error. 
  In contrast to previous work on benign overfitting that requires linear o
 r kernel-based predictors\, our analysis holds in a setting where both the 
 model and learning dynamics are fundamentally nonlinear.  Based on previo
 us and upcoming work with Peter Bartlett\, Niladri Chatterji\, Wei Hu\, Na
 ti Srebro\, and Gal Vardi.
LOCATION:GA 3 21 https://plan.epfl.ch/?room==GA%203%2021
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
