BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Prof. Fanny YANG: "Surprising failures of standard practices in ML
  when the sample size is small"
DTSTART:20231030T103000
DTEND:20231030T113000
DTSTAMP:20260427T203142Z
UID:c0dc9717d0f401b28279c47c1bd2b7d173a2d55a03b664a0b319a034
CATEGORIES:Conferences - Seminars
DESCRIPTION:Prof. Fanny Yang - ETHZ\nAbstract:\nIn this talk\, we disc
 uss two failure cases of common practices that are typically believed t
 o improve on vanilla methods: (i) adversarial training can lead to wors
 e robust accuracy than standard training\, and (ii) active learning can
  lead to a worse classifier than a model trained using uniform samples.
  In particular\, we can prove\, both mathematically and empirically\, t
 hat such failures can happen in the small-sample regime. We discuss hig
 h-level explanations derived from the theory that shed light on the cau
 ses of these phenomena in practice.\n\nBio:\nFanny Yang is an Assistant
  Professor of Computer Science at ETH Zurich. She received her Ph.D. in
  EECS from the University of California\, Berkeley in 2018 and was a po
 stdoctoral fellow at Stanford University and ETH-ITS in 2019. Her curre
 nt research interests include methodological and theoretical advances f
 or problems that arise from distribution shifts and hidden confounding
 \, and studying the (robust) generalization ability of overparameterize
 d models for high-dimensional data.\n
LOCATION:ELA 1 https://plan.epfl.ch/?room==ELA%201
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
