BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:CNP Seminar // Matthias Bethge - Brain Intelligence: Continual lea
 rning of representational models after deployment
DTSTART:20220221T160000
DTEND:20220221T170000
DTSTAMP:20260509T115713Z
UID:c1654068366aa7bbfa7edb317fd868350af71a1347482ec4d73eb61f
CATEGORIES:Conferences - Seminars
DESCRIPTION:Matthias Bethge\nNeuroscience and artificial intelligence are 
 linked by the goal of modeling perception\, cognition and behavior. Mach
 ine learning has become a key driver towards this goal. One of the big s
 uccess
 es of the last decade has been the rise of convolutional neural networks 
 trained on ImageNet and similar computer vision benchmarks. Remarkably\, t
 hese artificial neural networks share several properties with real neuro
 ns in the ventral pathway of the mammalian brain\, and may serve as a good
  model of fast reflexive scene analysis such as gist recognition\, object
  detection or semantic segmentation. The feature representations of convol
 utional networks can serve a large variety of tasks as demonstrated with 
 transfer and multi-task learning\, and outperform humans on more and more
  pattern recognition benchmarks. Brain intelligence\, however\, is not rea
 lly captured by the performance on pre-defined benchmarks after large eng
 ineering efforts but manifests itself in the amazing capability to rapidly
  cope with highly variable situations and unstructured environments witho
 ut any expert supervision. In this talk I will propose a research agenda
  at the interface between computational neuroscience and machine learning 
 that seeks to make significant progress on continual learning of represen
 tational models after deployment. Focusing on object recognition\, I will f
 irst review a few key findings from my lab on the generalization behavior 
 of convolutional neural networks under task transfer\, adversarial attack
 s and common corruptions. These observations indicate that machines learn 
 task-specific shortcuts rather than task-independent representational obj
 ect models. They lack much of the ability to disentangle physical properti
 es of objects (such as shape and texture\, figure-ground\, etc.) and to mo
 del the range of possible configural and appearance changes that can be e
 xpected for different objects. I will conclude the talk by presenting on
 going work on continual object learning from unsupervised motion segment
 ation (i.e. the Gestalt principle of “common fate”)\, and stimulate disc
 uss
 ion on the interdisciplinary cross-fertilization between neuroscience and 
 AI\, and how continual learning after deployment can become a key paradig
 m for understanding brain intelligence.
LOCATION:https://epfl.zoom.us/j/62815829927?pwd=TDE3OUFxU2NaMkdsNktUTE1YUG
 FLZz09
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
