Talk of Professor Fanghui Liu (University of Warwick, UK)
Event details
- Date: 10.04.2024
- Hour: 11:00–12:00
- Speaker: Professor Fanghui Liu
- Category: Conferences – Seminars
- Event language: English
Title: Fundamental limits on robust overfitting of DNNs - an approximation view
Abstract: In this talk, I will discuss, from an approximation viewpoint, whether overfitted DNNs trained adversarially can still generalize well. We prove by construction that there exist infinitely many adversarial training classifiers on over-parameterized DNNs that attain arbitrarily small adversarial training error (overfitting) while achieving good robust generalization error, under certain conditions on data quality, class separation, and the perturbation level. This construction is optimal and thus reveals the fundamental limits of DNNs under adversarial training with statistical guarantees. Part of this talk is based on our recent work [https://arxiv.org/abs/2401.13624].
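To make the setting concrete, the adversarial training procedure the abstract refers to alternates an inner maximization (finding a worst-case perturbation within a budget eps) with an outer minimization (a gradient step on the perturbed inputs). The sketch below is purely illustrative and is not the talk's construction: it uses a toy logistic-regression model with an FGSM inner step on well-separated synthetic data, whereas the results concern over-parameterized DNNs. All names and data here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, well-separated 2-class data (the talk's conditions involve data
# quality, class separation, and a bounded perturbation level eps).
X = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)
b = 0.0
eps = 0.1   # perturbation budget for the inner maximization
lr = 0.1    # learning rate for the outer minimization


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


for _ in range(200):
    # Inner maximization (FGSM step): x_adv = x + eps * sign(d loss / d x).
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)
    # Outer minimization: gradient step on the adversarial examples.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

# Robust training accuracy: accuracy on the FGSM-perturbed inputs.
p = sigmoid(X @ w + b)
grad_x = (p - y)[:, None] * w[None, :]
X_adv = X + eps * np.sign(grad_x)
robust_acc = np.mean((sigmoid(X_adv @ w + b) > 0.5) == y)
print(f"robust training accuracy: {robust_acc:.2f}")
```

On well-separated data with a small perturbation budget, this loop drives the robust training error to (near) zero, which is the "overfitting" regime the abstract asks about; the talk's result characterizes when such interpolation is compatible with good robust generalization.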
Bio: Fanghui Liu is an assistant professor at the University of Warwick, UK. His research focuses on machine learning theory and theory-oriented applications. For his work on learning theory, he was selected for the AAAI New Faculty Highlights 2024, named a Rising Star in AI (KAUST 2023), and presented tutorials at ICASSP 2023 and CVPR 2023. Prior to his current position, Fanghui worked as a postdoctoral researcher at KU Leuven, Belgium, and then at EPFL, Switzerland. He received his PhD from Shanghai Jiao Tong University, China, in 2019 and his bachelor's degree from Harbin Institute of Technology in 2014.
Practical information
- Expert
- Free
Organizer
- Professor Volkan Cevher
Contact
- Gosia Baltaian