BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Addressing Computational and Statistical Gaps with Deep Neural Net
 works
DTSTART:20161007T111500
DTEND:20161007T123000
DTSTAMP:20260413T044442Z
UID:db31504552146a046627af8a3322d80fdcf7ee33828e54cb2d49aa4e
CATEGORIES:Conferences - Seminars
DESCRIPTION:Joan BRUNA\n\nMany modern statistical questions are plagued
  with asymptotic regimes that separate our current theoretical
  understanding from what is possible given finite computational and
  sample resources. Important examples of such gaps appear in sparse
  inference\, high-dimensional density estimation and non-convex
  optimization.\n\nIn the former\, proximal splitting algorithms
  efficiently solve the l1-relaxed sparse coding problem\, but their
  performance is typically evaluated in terms of asymptotic convergence
  rates. In unsupervised high-dimensional learning\, a major challenge
  is how to appropriately combine prior knowledge in order to beat the
  curse of dimensionality. Finally\, the prevailing dichotomy between
  convex and non-convex optimization is not adapted to describe the
  diversity of optimization scenarios faced as soon as convexity fails.
  In this talk we will illustrate how deep architectures can be used to
  attack such gaps. We will first see how a neural network sparse coding
  model (LISTA\, Gregor & LeCun '10) can be analyzed in terms of a
  particular matrix factorization of the dictionary\, which leverages
  diagonalisation with invariance of the l1 ball\, revealing a phase
  transition that is consistent with numerical experiments. We will then
  discuss image and texture generative modeling and super-resolution\, a
  prime example of a high-dimensional inverse problem. In that setting\,
  we will explain how multi-scale convolutional neural networks are
  equipped to beat the curse of dimensionality and provide stable
  estimation of high-frequency information. Finally\, we will discuss
  recent research in which we explore to what extent the non-convexity
  of the loss surface arising in deep learning problems hurts gradient
  descent algorithms\, by efficiently estimating the number of basins of
  attraction.\n\nBio: Joan graduated cum laude from Universitat
  Politècnica de Catalunya in both Mathematics and Telecommunications
  Engineering\, before graduating in Applied Mathematics from ENS Cachan
  (France). He then became a Senior Research Engineer at an image
  processing startup\, developing real-time video processing algorithms.
  He obtained his PhD in Applied Mathematics at École Polytechnique
  (France). After a postdoctoral stay at the Courant Institute\, NYU\,
  he became a Postdoctoral Fellow at Facebook AI Research. He is an
  Assistant Professor in the Statistics Department at UC Berkeley (on
  leave)\, and since Fall 2016 he has been an Assistant Professor at the
  Courant Institute\, NYU (Computer Science\, Center for Data Science
  and Mathematics (courtesy)).\n\nHis research interests include
  invariant signal representations\, deep learning\, high-dimensional
  statistics\, and their applications to computer vision\, statistical
  physics and AI.
LOCATION:CM 1 105 http://plan.epfl.ch/?request_locale=fr&room=CM+1+105&dom
 ain=places
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
