BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:C4DT Distinguished Lecture: Hidden Backdoors in Deep Learning Sys
 tems
DTSTART:20190924T141500
DTSTAMP:20260510T122415Z
UID:ae5b5db31f4e6fb0de5315672b1d04777b861e1131badefeda83e18f
CATEGORIES:Conferences - Seminars
DESCRIPTION:Ben Zhao is the Neubauer Professor of Computer Science at the
 University of Chicago. He received his PhD from Berkeley (2004) and his 
 BS from Yale (1997). He is an ACM Distinguished Scientist and a recipien
 t of the NSF CAREER Award\, MIT Technology Review's TR-35 Award (Young I
 nnovators Under 35)\, ComputerWorld Magazine's Top 40 Tech Innovators aw
 ard\, a Google Faculty Award\, and the IEEE ITC Early Career Award. His 
 work has been covered by media outlets such as Scientific American\, the
  New York Times\, the Boston Globe\, the LA Times\, MIT Technology Revie
 w\, and Slashdot. He has published more than 160 papers in security and 
 privacy\, networked systems\, wireless networks\, data mining\, and HCI 
 (H-index > 60). He recently served as PC chair for the World Wide Web Co
 nference (WWW 2016) and the Internet Measurement Conference (IMC 2018)\,
  and is a general co-chair for HotNets 2020.\nBy Ben Zhao\, UChicago\n\n
 The lack of transparency in today’s deep learning systems has paved the 
 way for a new type of threat\, commonly referred to as backdoor or Troja
 n attacks. In a backdoor attack\, a malicious party corrupts a deep lear
 ning model (either at initial training time or later) to embed hidden cl
 assification rules that do not interfere with normal classification unle
 ss an unusual “trigger” is applied to the input\, which then produces un
 usual (and likely incorrect) results. For example\, a facial recognition
  model with a backdoor might recognize anyone wearing a pink earring as 
 Elon Musk. Backdoor attacks have been validated in a number of image cla
 ssification applications\, and they are difficult to detect given the bl
 ack-box nature of most DNN models.\n\nIn this talk\, I will describe two
  recent results on detecting and understanding backdoor attacks on deep 
 learning systems. First\, I will present Neural Cleanse (S&P 2019)\, the
  first robust tool to detect a wide range of backdoors in deep learning 
 models. We use inter-label perturbation distances to detect when a backd
 oor trigger has created a shortcut to misclassification into a particula
 r label. Second\, I will describe our new work on latent backdoors (CCS 
 2019)\, a stronger type of backdoor attack that is more difficult to det
 ect and survives retraining in commonly used transfer learning systems. 
 We show experimentally that latent backdoors can be quite robust and ste
 althy\, even against the latest detection tools (including Neural Cleans
 e). There are no known techniques to detect latent backdoors\, but we pr
 esent alternative techniques to defend against them via disruption.
LOCATION:BC 420 https://plan.epfl.ch/?room==BC%20420
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
