Privacy-preserving Machine Learning

Event details

Date 21.08.2019
Hour 14:00-16:00
Speaker Valentin Hartmann
Category Conferences - Seminars
EDIC candidacy exam
Exam president: Prof. Carmela Troncoso
Thesis advisor: Prof. Robert West
Co-examiner: Prof. Martin Jaggi

Abstract
In a world where machine learning (ML), with its need for huge training datasets, has become ubiquitous, massive data collection has become the norm rather than the exception. With new data breaches reported almost daily, and details about companies' use of data continually surfacing, the problem of ensuring privacy is no longer of interest only to researchers, but has also captured the attention of the general public.

It is time for a paradigm shift: today's ML methods need to be replaced with privacy-preserving ones. In this proposal, we present the first steps that have been made in this direction: (1) the definition of differential privacy (DP), a quantification of privacy that is both strong and practical; (2) a method for training neural networks with DP; and (3) a method for creating, from sensitive data, synthetic datasets with DP that are almost as well suited for ML tasks as the original data. We conclude by outlining concrete directions for extending these methods to make them applicable to a broader class of ML tasks.
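The Laplace mechanism from the Dwork et al. background paper below is the classic concrete instance of DP: to release a numeric query with epsilon-DP, add Laplace noise scaled to the query's sensitivity (how much one individual's record can change the answer). A minimal sketch, assuming a simple counting query; the records and function names here are illustrative, not from the talk:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with epsilon-DP by adding Laplace noise
    of scale sensitivity/epsilon (Dwork et al., 2006)."""
    scale = sensitivity / epsilon
    # A Laplace(0, scale) sample is the difference of two
    # independent Exponential(1/scale) samples.
    e1 = -scale * math.log(1 - random.random())
    e2 = -scale * math.log(1 - random.random())
    return true_value + (e1 - e2)

# Counting query: how many records satisfy a predicate.
# Adding or removing one record changes the count by at most 1,
# so the sensitivity of this query is 1.
records = [25, 31, 47, 52, 38]          # hypothetical ages
true_count = sum(1 for age in records if age > 40)
private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
```

A smaller epsilon means stronger privacy but more noise; the DP-SGD method of Abadi et al. applies the same add-calibrated-noise idea to clipped per-example gradients during neural-network training.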

Background papers
Calibrating noise to sensitivity in private data analysis, by Dwork, Cynthia, et al. Theory of cryptography conference. Springer, Berlin, Heidelberg, 2006.
Deep learning with differential privacy, by Abadi, Martín, et al. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016.
Plausible deniability for privacy-preserving data synthesis, by Bindschaedler, Vincent, Reza Shokri, and Carl A. Gunter. Proceedings of the VLDB Endowment 10.5 (2017): 481-492.


Practical information

  • General public
  • Free

Tags

EDIC candidacy exam