What to Do When Data Is Shifting? - Invariance Objectives, Environment Discovery and Expert Augmentations

Event details

Date 31.05.2022
Hour 11:00 – 12:00
Speaker Dr. Jörn Jacobsen, Senior Research Scientist on the Apple Health AI team in Zürich. Previously, he was a postdoc at the Vector Institute and the University of Toronto, supervised by Rich Zemel and closely collaborating with faculty members David Duvenaud, Nicolas Papernot, and Roger Grosse. Before that, he was a postdoc in the lab of Matthias Bethge in Tübingen and completed his Ph.D. at the University of Amsterdam. His work spans a variety of topics: combining domain knowledge, physical models, and machine learning; improving the robustness of learned models under distribution shift; building neural networks with mathematical constraints; deriving new generative models; and analyzing learned representations.
Category Conferences - Seminars
Event Language English

There are many methods claiming to help complex machine learning models generalize when data distributions vary greatly between training and testing conditions. However, it is also common knowledge that there is no "one-size-fits-all" solution to this problem. One needs to carefully consider what kinds of changes in the data can be expected and design methods accordingly. I will give an overview of some invariant learning methods I had the pleasure of co-developing in this space, contextualize them, and elaborate on when I believe they are reasonable to use, while also highlighting their shortcomings. Finally, I will present some recent work on using domain knowledge in the form of mechanistic models to overcome some of these shortcomings and achieve generalization far beyond the training data.
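For readers unfamiliar with the "invariance objectives" named in the title, a canonical example of this family is Invariant Risk Minimization (Arjovsky et al., 2019). The sketch below is a minimal PyTorch illustration of the IRMv1 penalty, included here only to make the idea concrete; it is not the speaker's own method, and the function names and penalty weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    """IRMv1 gradient penalty for one environment (Arjovsky et al., 2019).

    Measures how suboptimal the fixed dummy classifier w = 1.0 is for this
    environment; a representation that elicits an invariant predictor
    across environments drives this penalty to zero.
    """
    scale = torch.tensor(1.0, requires_grad=True, device=logits.device)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def irm_objective(model, environments, penalty_weight=1e4):
    """Average empirical risk plus the invariance penalty over environments.

    `environments` is a list of (inputs, labels) batches, one per training
    environment; `penalty_weight` is an illustrative hyperparameter.
    """
    risk, penalty = 0.0, 0.0
    for x, y in environments:
        logits = model(x)
        risk = risk + F.cross_entropy(logits, y)
        penalty = penalty + irm_penalty(logits, y)
    n = len(environments)
    return risk / n + penalty_weight * (penalty / n)
```

The key design choice is that the penalty is computed per environment and summed: it penalizes representations under which the optimal classifier differs across environments, which is precisely the failure mode that purely pooled empirical risk minimization cannot detect.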

Practical information

  • Informed public
  • Free

Organizer

  • LTS 4 - Prof. Pascal Frossard

Contact

  • Anne De Witte
