BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Deep Learning with Gaussian Inputs and Weights
DTSTART:20170616T103000
DTEND:20170616T111500
DTSTAMP:20260410T114242Z
UID:17cfef7093f85c1db4a7a8ef3c5ce29fba5ae0cf2fb3565b05574c55
CATEGORIES:Conferences - Seminars
DESCRIPTION:Amir Globerson\, Tel Aviv University\nDeep learning models are
  often successfully trained using gradient descent\, despite the worst-ca
 se hardness of the underlying non-convex optimization problem. The key que
 stion is then under what conditions one can prove that optimization will s
 ucceed. Here we provide\, for the first time\, a result of this kind for a
  one-hidden-layer ConvNet with no overlap and ReLU activation. For this a
 rchitecture we show that learning is hard in the general case\, but that w
 hen the input distribution is Gaussian\, gradient descent converges to the
  global optimum in polynomial time. I will additionally discuss an altern
 ative approach to sidestepping the complexity of deep learning optimizatio
 n using improper learning with Gaussian priors on weights.
LOCATION:BC 420 https://plan.epfl.ch/?room==BC%20420
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
