BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Learning beyond Classical Generalization with Neural Networks
DTSTART:20220728T150000
DTEND:20220728T170000
DTSTAMP:20260406T183330Z
UID:4cedefda8eb61505d24eceb5e8043eb728d5415d3464e15153224b62
CATEGORIES:Conferences - Seminars
DESCRIPTION:Aryo Lotfi Jandaghi\nEDIC candidacy exam\nExam president: Prof
 . Martin Jaggi\nThesis advisor: Prof. Emmanuel Abbé\nCo-examiner: Prof. L
 énaïc Chizat\n\nAbstract\nIn this report\, we focus on the generalization
  ability of neural networks. We describe the recent trend of analyzing
  neural networks’ reasoning capability in contrast to their memorization
  ability. We reintroduce the framework of Boolean pointer value retrieva
 l [1] and explain how it can help us understand the mechanisms behind th
 e generalization of neural networks. We present three recent papers on:
  (i) the spectral bias of neural networks\, (ii) learning sparse Boolean
  functions in the mean-field regime\, and (iii) convergence of deep line
 ar neural networks. Finally\, we explain how these papers can guide our
  research and describe our short- and long-term research plans.\n\nBackg
 round papers\n\n	On the Spectral Bias of Neural Networks (link: https:/
 /proceedings.mlr.press/v97/rahaman19a/rahaman19a.pdf)\n	The merged-stai
 rcase property: a necessary and nearly sufficient condition for SGD lear
 ning of sparse functions on two-layer neural networks (link: https://pr
 oceedings.mlr.press/v178/abbe22a/abbe22a.pdf)\n	A Convergence Analysis
  of Gradient Descent for Deep Linear Neural Networks (link: https://ope
 nreview.net/pdf?id=SkMQg3C5K7)\n
LOCATION:MA B2 485 https://plan.epfl.ch/?room==MA%20B2%20485
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
