BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:On the geometry of the landscape underlying deep learning
DTSTART:20181218T120000
DTSTAMP:20260506T015800Z
UID:39d33a6936d52f94d86310df48f2bb8cf381b43b1f551f4611c84091
CATEGORIES:Conferences - Seminars
DESCRIPTION:Prof. Matthieu Wyart\nDeep learning has been immensely success
 ful at a variety of tasks\, ranging from classification to artificial inte
 lligence. Yet why it works is unclear. Learning corresponds to fitting tra
 ining data\, which is implemented by descending a very high-dimensional lo
 ss function. Two central questions are: (i) since the loss is a priori no
 t convex\, why doesn't this descent get stuck in poor minima\, leading t
 o bad performance? (ii) Deep learning works in a regime where the number o
 f parameters can be larger\, even much larger\, than the number of data p
 oints to fit. Why does it then lead to very predictive models instead of o
 verfitting?\nHere I will discuss an unexpected analogy between the loss l
 andscape in deep learning and the energy landscape of repulsive ellipses
 \, which supports an explanation for (i). If time permits\, I will discus
 s (ii)\, more specifically the surprising finding that predictive power c
 ontinuously improves as more parameters are added.
LOCATION:BSP 234 https://plan.epfl.ch/?room=BSP%20234
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
