BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Accelerating Stochastic Gradient Descent
DTSTART:20170615T103000
DTEND:20170615T111500
DTSTAMP:20260407T043842Z
UID:12e471f3676aac0cd74e9c6733a1621233880ddbd3554bc2d8d2c16b
CATEGORIES:Conferences - Seminars
DESCRIPTION:Sham Kakade\, University of Washington\nThere is widespread se
 ntiment that it is not possible to effectively utilize fast gradient metho
 ds (e.g. Nesterov's acceleration\, conjugate gradient\, heavy ball) for th
 e purposes of stochastic optimization due to their instability and error a
 ccumulation\, a notion made precise in d'Aspremont 2008 and Devolder\, Glin
 eur\, and Nesterov 2014. This work considers the use of 'fast gradient' me
 thods for the special case of stochastic approximation for the least squar
 es regression problem. Our main result refutes the conventional wisdom by 
 showing that acceleration can be made robust to statistical errors. In par
 ticular\, this work introduces an accelerated stochastic gradient method t
 hat provably achieves the minimax optimal statistical risk faster than sto
 chastic gradient descent. Critical to the analysis is a sharp characteriza
 tion of accelerated stochastic gradient descent as a stochastic process.\n
 We hope this characterization gives insights towards the broader question 
 of designing simple and effective accelerated stochastic methods for more 
 general convex and non-convex problems. We will also discuss some prelimin
 ary experimental results in the non-convex setting.
LOCATION:BC 420 https://plan.epfl.ch/?room=BC%20420
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
