BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Implicit Bias of Gradient Methods
DTSTART:20220721T150000
DTEND:20220721T170000
DTSTAMP:20260411T013221Z
UID:6e4772882d9e1b3c406fd33e9dd1f1fd026fa6c22af0f77ff483972a
CATEGORIES:Conferences - Seminars
DESCRIPTION:Aditya Varre\nEDIC candidacy exam\nExam president: Prof. Marti
 n Jaggi\nThesis advisor: Prof. Nicolas Flammarion\nCo-examiner: Prof. Lena
 ic Chizat\n\nAbstract\nIt is becoming increasingly clear that implicit bia
 ses introduced by the optimization algorithm play a crucial role in deep l
 earning and in the generalization ability of the learned models. In this r
 eport\, we examine the implicit bias of gradient algorithms on unregulariz
 ed regression or classification problems. In the case of logistic regress
 ion on linearly separable data\, we show how gradient descent converges i
 n the direction of the max-margin (hard margin SVM) solution. Finally\, w
 e discuss how this methodology
  can also aid in understanding implicit regularization in more complex mod
 els and with other optimization methods.\n\nBackground papers\na) Underst
 anding deep learning requires rethinking generalization\, https://arxiv.o
 rg/pdf/1611.03530.pdf. Zhang\, Chiyuan\, Samy Bengio\, Moritz Hardt\, B
 enjamin Recht\, and Oriol Vinyals. ICLR 2017.\nb) Kernel and rich regime
 s in overparametrized models\, http://proceedings.mlr.press/v125/woodwor
 th20a/woodworth20a.pdf. Woodworth\, B.\, Gunasekar\, S.\, Lee\, J.D.\, M
 oroshko\, E.\, Savarese\, P.\, Golan\, I.\, Soudry\, D. and Srebro\, N.
  COLT 2020.\nc) Large learning rate tames homogeneity: Convergence and b
 alancing ef
 fect. https://openreview.net/pdf?id=3tbDrs77LJ5. Wang\, Y.\, Chen\, M.\, 
 Zhao\, T. and Tao\, M. ICLR 2022.
LOCATION:INJ 326 https://plan.epfl.ch/?room==INJ%20326
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
