LCN Seminar: Matthias Bethge
Event details
- Date: 03.02.2022
- Hour: 10:30
- Location: Online
- Category: Conferences - Seminars
- Event Language: English
Brain Intelligence: Continual learning of representational models after deployment
Neuroscience and artificial intelligence are linked by the goal of modeling perception, cognition, and behavior, and machine learning has become a key driver toward this goal. One of the big successes of the last decade has been the rise of convolutional neural networks trained on ImageNet and similar computer vision benchmarks. Remarkably, these artificial networks exhibit several properties resembling those of real neurons in the ventral pathway of the mammalian brain, and they may serve as a good model of fast, reflexive scene analysis such as gist recognition, object detection, or semantic segmentation. The feature representations of convolutional networks can serve a large variety of tasks, as demonstrated by transfer and multi-task learning, and they outperform humans on a growing number of pattern recognition benchmarks. Brain intelligence, however, is not really captured by performance on pre-defined benchmarks achieved after large engineering efforts; it manifests itself in the remarkable capability to rapidly cope with highly variable situations and unstructured environments without any expert supervision.

In this talk, I will propose a research agenda at the interface between computational neuroscience and machine learning that seeks to make significant progress on continual learning of representational models after deployment. Focusing on object recognition, I will first review a few key findings from my lab on the generalization behavior of convolutional neural networks under task transfer, adversarial attacks, and common corruptions. These observations indicate that machines learn task-specific shortcuts rather than task-independent representational object models. They largely lack the ability to disentangle physical properties of objects (such as shape and texture, or figure and ground) and to model the range of configural and appearance changes that can be expected for different objects.
I will conclude the talk by presenting ongoing work on continual object learning from unsupervised motion segmentation (i.e., the Gestalt principle of "common fate"), and I hope to stimulate discussion of the interdisciplinary cross-fertilization between neuroscience and AI, and of how continual learning after deployment can become a key paradigm for understanding brain intelligence.
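To make the "common fate" principle concrete: the idea is that pixels moving together are perceptually grouped into one object. The following is a minimal toy sketch (not the speaker's actual method; the flow field, function name, and threshold are illustrative assumptions) that segments a synthetic optical-flow field by thresholding shared motion:

```python
import numpy as np

def segment_by_common_fate(flow, threshold=0.5):
    """Toy 'common fate' grouping: label pixels as foreground (1)
    wherever the per-pixel motion magnitude exceeds the threshold.
    `flow` has shape (H, W, 2) holding (dx, dy) per pixel."""
    magnitude = np.linalg.norm(flow, axis=-1)  # per-pixel motion magnitude
    return (magnitude > threshold).astype(int)

# Synthetic 6x6 flow field: a 2x2 patch shares a rightward motion
# (its "common fate"), while the background is static.
flow = np.zeros((6, 6, 2))
flow[2:4, 2:4, 0] = 1.0  # shared horizontal motion

mask = segment_by_common_fate(flow)
print(mask.sum())  # 4 moving pixels grouped as one object
```

In a real continual-learning setting, the motion field would come from an optical-flow estimator on video, and the resulting masks would supply unsupervised object labels for training a representational model.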
Practical information
- Informed public
- Free