On the Distributional Robustness of Machine Learning Models

Event details

Date 30.08.2024
Hour 13:00 - 15:00
Speaker Liangze Jiang
Category Conferences - Seminars
EDIC candidacy exam
Exam president: Prof. Nicolas Flammarion
Thesis advisor: Prof. Caglar Gulcehre
Thesis co-advisor: Dr. Damien Teney
Co-examiner: Prof. Devis Tuia

Abstract
A central goal of machine learning is to build models that predict
robustly under a multitude of distribution shifts, i.e., that
generalize out-of-distribution (OOD). In the first part of the
proposal, we begin with an overview of the background and the
mainstream view of OOD generalization. We then present a
representative OOD method and discuss its trade-offs against other
OOD methods when facing different distribution shifts. The plethora
of methods and their trade-offs indicates that OOD generalization
can be better approached if we know which method to use. This
motivates our ongoing investigation of automated algorithm selection
for OOD generalization. In the second part of the proposal, we give
an example of why distributional robustness (and its methods) still
matters in building modern large language models (LLMs). Finally, we
envision our future research at the intersection of distributional
robustness and LLM alignment.
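
For concreteness, one OOD method covered in the background papers,
risk extrapolation (V-REx, paper 2 below), trades average performance
for consistency across training environments by penalizing the
variance of per-environment risks:

  \min_\theta \; \sum_{e=1}^{m} R_e(\theta) + \beta \, \mathrm{Var}\big(\{R_1(\theta), \dots, R_m(\theta)\}\big)

where R_e is the empirical risk on training environment e and the
hyperparameter \beta controls how strongly the robustness penalty is
weighted.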

Background papers
1. In Search of Lost Domain Generalization (https://arxiv.org/abs/2007.01434)
2. Out-of-Distribution Generalization via Risk Extrapolation (https://arxiv.org/abs/2003.00688)
3. Improving Generalization of Alignment with Human Preferences through Group Invariant Learning (https://arxiv.org/abs/2310.11971)

Practical information

  • General public
  • Free

