Security and Privacy against ML-Equipped Adversaries

Event details
Date | 31.01.2019
Hour | 14:30 - 16:30
Speaker | Bogdan Kulynych
Category | Conferences - Seminars
EDIC candidacy exam
Exam president: Prof. Rachid Guerraoui
Thesis advisor: Prof. Carmela Troncoso
Co-examiner: Prof. Martin Jaggi
Abstract
Machine learning (ML) is now widely used in the technology industry and beyond, owing to increasingly efficient ML methods, data collection, and processing infrastructure. This rise brings benefits to society, but it also enables powerful tools for antisocial goals, such as invading people's privacy or manipulating their behavior. In the standard setting studied in adversarial ML, an adversary attempts to disrupt operation, insert backdoors, or learn sensitive information across the ML training and inference pipeline. This setting mostly concerns the security of an ML operator, and it does not fully reflect the challenges of counteracting or preventing antisocial uses of ML such as those mentioned above. This calls for an in-depth study of ML-equipped adversaries.
Background papers
- Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
- The Limitations of Deep Learning in Adversarial Settings
- AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning
Practical information
- General public
- Free
Contact
- EDIC - edic@epfl.ch