BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:LCN Seminar: Matthias Bethge
DTSTART:20220203T103000
DTSTAMP:20260415T024313Z
UID:80dc21a307c4d8ae668cc3246474358b2b5be11f0af3bbb4390410cc
CATEGORIES:Conferences - Seminars
DESCRIPTION:Brain Intelligence: Continual learning of representational mod
 els after deployment\n\nNeuroscience and artificial intelligence are linke
 d by the goal of modeling perception\, cognition and behavior. Machine learni
 ng has become a key driver towards this goal. One of the big successes of
  the last decade has been the rise of convolutional neural networks train
 ed on ImageNet and similar computer vision benchmarks. Remarkably\, these 
 artificial neural networks share several properties with real neurons in
  the ventral pathway of the mammalian brain\, and may serve as a good mode
 l of fast reflexive scene analysis such as gist recognition\, object dete
 ction or semantic segmentation. The feature representations of convolution
 al networks can serve a large variety of tasks as demonstrated with trans
 fer and multi-task learning\, and outperform humans on more and more patt
 ern recognition benchmarks. Brain intelligence\, however\, is not really c
 aptured by the performance on pre-defined benchmarks after large engineer
 ing efforts but manifests itself in the amazing capability to rapidly cop
 e with highly variable situations and unstructured environments without an
 y expert supervision. In this talk I will propose a research agenda at t
 he interface between computational neuroscience and machine learning that 
 seeks to make significant progress on continual learning of representatio
 nal models after deployment. Focusing on object recognition I will first 
 review a few key findings from my lab on the generalization behavior of co
 nvolutional neural networks under task transfer\, adversarial attacks and
  common corruptions. These observations indicate that machines learn task-
 specific shortcuts rather than task-independent representational object m
 odels. They lack much of the ability to disentangle physical properties of
  objects (such as shape and texture\, figure-ground\, etc.) and to model t
 he range of possible configural and appearance changes that can be expect
 ed for different objects. I will conclude the talk presenting ongoing work
  on continual object learning from unsupervised motion segmentation (i.e.
  the Gestalt principle of “common fate”)\, and stimulate discussion o
 n the interdisciplinary cross-fertilization between neuroscience and AI\,
  and how continual learning after deployment can become a key paradigm fo
 r understanding brain intelligence.
LOCATION:https://epfl.zoom.us/j/68208144523
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
