BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:IC Colloquium: Measuring and Enhancing the Security of Machine Lea
 rning
DTSTART:20210201T160000
DTEND:20210201T170000
DTSTAMP:20260414T175513Z
UID:af358d065eaccf0cef702fdbf0d126b11543aafce87460e8fb02ba71
CATEGORIES:Conferences - Seminars
DESCRIPTION:By: Florian Tramèr - Stanford University\nIC Faculty candidat
 e\n\nAbstract\nFailures of machine learning systems can threaten both the 
 security and privacy of their users. My research studies these failures fr
 om an adversarial perspective\, by building new attacks that highlight cri
 tical vulnerabilities in the machine learning pipeline\, and designing new
  defenses that protect users against identified threats.\n\nIn the first p
 art of this talk\, I'll explain why machine learning models are so vulnera
 ble to adversarially chosen inputs. I'll show that many proposed defenses 
 are ineffective and cannot protect models deployed in overtly adversarial 
 settings\, such as for content moderation on the Web.\n\nIn the second par
 t of the talk\, I'll focus on the issue of data privacy in machine learnin
 g systems\, and I'll demonstrate how to enhance privacy by combining techn
 iques from cryptography\, statistics\, and computer security.\n\nBio\nFlor
 ian Tramèr is a PhD student at Stanford University advised by Dan Boneh. 
 His research interests lie in Computer Security\, Cryptography and Machine
  Learning security. In his current work\, he studies the worst-case beha
 vior of Deep Learning systems from an adversarial perspective\, to underst
 and and mitigate long-term threats to the safety and privacy of users. Flo
 rian is supported by a fellowship from the Swiss National Science Foundati
 on and a gift from the Open Philanthropy Project.
LOCATION:https://epfl.zoom.us/j/84468396448?pwd=V3lrU29pTW5mM2FHU1RQVm83bX
 hIUT09
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
