Adversarially Robust Learning: Identification, Estimation, and Uncertainty Quantification

Event details

Date 24.05.2024
Hour 14:00-15:30
Speaker Prof. Zijian Guo, Rutgers University
Location Online
Category Conferences - Seminars
Event Language English
Abstract: Empirical risk minimization may lead to poor prediction performance when the target distribution differs from the distributions of the source populations. This talk discusses leveraging data from multiple sources to construct more generalizable and transportable prediction models. We introduce an adversarially robust prediction model that optimizes a worst-case reward over a class of target distributions and show that this model is a weighted average of the source populations' conditional outcome models. We leverage this identification result to robustify arbitrary machine learning algorithms, including high-dimensional regression, random forests, and neural networks.
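As a rough illustration of the identification result described in the abstract, the sketch below aggregates per-source models with weights chosen to minimize a worst-case loss across the source populations. This is a minimal sketch under assumed choices (squared-error loss, random forests as the base learners, and the worst case taken over the sources themselves); the data, names, and settings are illustrative and do not come from the talk.

```python
# Hypothetical sketch: robust aggregation of per-source models via a minimax
# weighting over the probability simplex. Squared-error loss and the choice of
# base learner are illustrative assumptions, not the speaker's method.
import numpy as np
from scipy.optimize import minimize
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulate K source populations with different conditional outcome models.
K, n, p = 3, 500, 5
sources = []
for k in range(K):
    X = rng.normal(size=(n, p))
    beta = rng.normal(size=p)            # source-specific coefficients
    y = X @ beta + rng.normal(size=n)
    sources.append((X, y))

# Step 1: fit an arbitrary ML model on each source (here, random forests).
models = [RandomForestRegressor(n_estimators=100, random_state=k).fit(X, y)
          for k, (X, y) in enumerate(sources)]

# Step 2: choose simplex weights w that minimize the worst-case (maximum)
# prediction loss across the source populations.
def worst_case_loss(w):
    losses = []
    for X, y in sources:
        preds = sum(w_k * m.predict(X) for w_k, m in zip(w, models))
        losses.append(np.mean((y - preds) ** 2))
    return max(losses)

w0 = np.full(K, 1.0 / K)
res = minimize(worst_case_loss, w0, method="SLSQP",
               bounds=[(0.0, 1.0)] * K,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
w_star = res.x
print("robust aggregation weights:", np.round(w_star, 3))

# The robust predictor is the weighted average of the source models.
def robust_predict(X_new):
    return sum(w_k * m.predict(X_new) for w_k, m in zip(w_star, models))
```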
 
In our adversarial learning framework, we propose a novel sampling method to quantify the uncertainty of the adversarially robust prediction model. Moreover, we introduce guided adversarially robust transfer learning (GART), which uses a small amount of target domain data to guide the adversarial learning. We show that GART achieves a faster convergence rate than a model fitted on the target data alone. Our comprehensive simulation studies suggest that GART can substantially outperform existing transfer learning methods, attaining higher robustness and accuracy.
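One possible reading of the "guided" step is sketched below: a small target sample steers the aggregation weights, while a penalty shrinks them toward the adversarially robust weights when target data are scarce. The penalty form, the placeholder w_adv, and the ridge base learners are assumptions made for illustration, not the GART algorithm as presented in the talk.

```python
# Hypothetical sketch of guiding adversarial aggregation with a small target
# sample. The penalty and all names are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
K, n_src, n_tgt, p = 3, 500, 40, 5

# Fit one model per source population.
betas = [rng.normal(size=p) for _ in range(K)]
models = []
for beta in betas:
    X = rng.normal(size=(n_src, p))
    y = X @ beta + rng.normal(size=n_src)
    models.append(Ridge(alpha=1.0).fit(X, y))

# A small target sample whose conditional model mixes the sources.
X_t = rng.normal(size=(n_tgt, p))
y_t = X_t @ (0.7 * betas[0] + 0.3 * betas[2]) + rng.normal(size=n_tgt)

w_adv = np.full(K, 1.0 / K)   # placeholder for the adversarially robust weights

def guided_objective(w, lam=1.0):
    # Target-sample loss plus a penalty shrinking toward the robust weights.
    preds = sum(w_k * m.predict(X_t) for w_k, m in zip(w, models))
    target_loss = np.mean((y_t - preds) ** 2)
    return target_loss + lam * np.sum((w - w_adv) ** 2)

res = minimize(guided_objective, w_adv, method="SLSQP",
               bounds=[(0.0, 1.0)] * K,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("guided weights:", np.round(res.x, 3))
```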

Short Bio: Zijian Guo is an associate professor in the Department of Statistics at Rutgers University. He obtained a Ph.D. in Statistics in 2017 from the Wharton School of the University of Pennsylvania. His research interests include causal inference, multi-source and transfer learning, high-dimensional statistics, and nonstandard statistical inference.

Practical information

  • Informed public
  • Registration required

Organizer

  • Prof. Daniel Kuhn
