Obtaining Robust Classifiers via Regularization and Averaging Schemes


Event details

Date 30.08.2018
Hour 11:00–13:00
Speaker Fabian Latorre Gomez
Category Conferences - Seminars
EDIC candidacy exam
Exam president: Prof. Emre Telatar
Thesis advisor: Prof. Volkan Cevher
Co-examiner: Prof. Daniel Kuhn

Abstract
Neural networks have shown great promise for task automation. However, the
discovery of adversarial examples has undermined this promise: small, carefully
crafted perturbations of the input data can dramatically increase the
misclassification rate. Recent approaches have been shown empirically to
provide some level of robustness to perturbed data, but they fail to provide
theoretical guarantees across perturbation sizes. We address this shortcoming
by introducing Wasserstein-Lipschitz Regularization (WLR), an optimization
objective whose solution comes with theoretical bounds on robustness, even
against large perturbations. We also study the effect of model averaging on
robustness and establish sufficient conditions under which it improves
performance. Our results suggest that similar regularization schemes can be
derived for unsupervised learning methods such as GANs, as well as for
regression problems. Further research will focus on developing the theory and
algorithms needed to control the Lipschitz constant of neural networks, and on
understanding the trade-offs between robustness to adversarial examples and
the choice of network architecture.
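
To make the threat model concrete: an adversarial example adds a small,
loss-increasing perturbation to a clean input. The sketch below uses the
classic one-step FGSM construction purely as the simplest illustration; the
attack choice, the PyTorch classifier named model, and the budget eps are
illustrative assumptions, not details from the talk.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, eps=0.03):
        """Return x plus an eps-bounded perturbation that increases the loss."""
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step in the per-coordinate direction that increases the loss most,
        # then clamp back to [0, 1] (assumes image-like inputs in that range).
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()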
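
The abstract does not spell out the WLR objective itself, so the following is
only a hedged sketch in its spirit: the Gao et al. background paper links
Wasserstein distributional robustness to penalizing the input-gradient norm of
the loss. The penalty form and the weight lam are assumptions, not the actual
WLR formulation.

    import torch
    import torch.nn.functional as F

    def gradient_penalty_loss(model, x, y, lam=0.1):
        """Cross-entropy plus an input-gradient-norm penalty, a surrogate
        for Wasserstein/Lipschitz regularization (NOT the exact WLR objective)."""
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        # create_graph=True keeps the penalty differentiable for training.
        (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
        penalty = grad_x.flatten(1).norm(dim=1).mean()
        return loss + lam * penalty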
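
Model averaging, in its simplest form, combines the predictive distributions
of several independently trained classifiers; whether and when this improves
robustness is exactly what the talk's sufficient conditions address. A minimal
sketch, assuming a list of trained PyTorch models:

    import torch
    import torch.nn.functional as F

    def averaged_predict(models, x):
        """Predict by averaging class probabilities across the ensemble."""
        with torch.no_grad():
            # Stack per-model softmax outputs: shape (n_models, batch, classes).
            probs = torch.stack([F.softmax(m(x), dim=1) for m in models])
        return probs.mean(dim=0).argmax(dim=1)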

Background papers
  • Intriguing Properties of Neural Networks: https://arxiv.org/pdf/1312.6199.pdf
  • Towards Deep Learning Models Resistant to Adversarial Attacks: https://arxiv.org/abs/1706.06083
  • Wasserstein Distributional Robustness and Regularization in Statistical Learning: https://arxiv.org/abs/1712.06050

Practical information

  • General public
  • Free
