BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:The interplay between data structure and neural networks: going be
 yond Gaussian models
DTSTART:20230127T131500
DTEND:20230127T141500
DTSTAMP:20260406T202217Z
UID:ed6dd92f3733918ad630f3cd20084a0dfaf025a9166750eadf5b3ce9
CATEGORIES:Conferences - Seminars
DESCRIPTION:Sebastian Goldt\nNeural networks are powerful feature extracto
 rs\, but what do they actually learn from their data? We discuss two recen
 t works on this question\, with a focus on the importance of non-Gaussian 
 statistics for neural networks. We first develop a simple model for image
 s and show that a neural network trained on these images can learn a convo
 lution from scratch. This pattern-formation process is driven by a combina
 tion of translation-invariance of the "images" and the non-Gaussian\, high
 er-order statistics of the inputs. Second\, we conjecture a "distributiona
 l simplicity bias" whereby neural networks learn increasingly complex dist
 ributions of their inputs during training. We present analytical and exper
 imental evidence for this conjecture\, going from a simple perceptron up t
 o deep ResNets and vision transformers.
LOCATION:GA 3 21 https://plan.epfl.ch/?room==GA%203%2021
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
