Practical Byzantine-Resilient Distributed Machine Learning

Event details

Date 25.06.2018
Hour 10:30 – 12:30
Speaker Sébastien Rouault
Category Conferences - Seminars
EDIC candidacy exam
Exam president: Prof. Martin Jaggi
Thesis advisor: Prof. Rachid Guerraoui
Co-examiner: Prof. Patrick Thiran

Abstract
The resurgence of machine learning over the last two decades has opened new technological doors, empowering both industry and the people who use it.
To make the most of ever-growing datasets and tackle more complex problems, machine learning researchers and practitioners explore larger models.
Such endeavors demand equally substantial computational power and, as of today, the only realistic approach to training such models is to distribute the computational burden over numerous machines.

But what if an adversarial entity controls some of these machines?
In the standard parameter server framework, a single compromised machine can trivially stymie the learning process.
Worse: with an estimate of the gradients computed by the "honest" machines, this adversary could just as easily control the whole learning process.
This fact is concerning given the impact machine learning already has on our society.
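
To see why the standard setting is so fragile, consider a parameter server that aggregates the workers' gradients by averaging them: a single Byzantine worker that can estimate the honest gradients may force the aggregate to be any vector of its choosing. The following Python sketch illustrates this under stated assumptions (plain mean aggregation, hypothetical gradient values); it is an illustration, not the announced work itself:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 10, 5  # 10 workers submitting 5-dimensional gradients

    # Gradients from the n - 1 honest workers (hypothetical values).
    honest = rng.normal(size=(n - 1, d))

    # The vector the Byzantine worker wants the server to apply instead.
    target = np.full(d, 100.0)

    # Knowing (or estimating) the honest gradients, the attacker submits
    # n * target - sum(honest), so that the mean collapses to `target`.
    byzantine = n * target - honest.sum(axis=0)

    aggregate = np.vstack([honest, byzantine[None, :]]).mean(axis=0)
    print(np.allclose(aggregate, target))  # True: one machine steers the update

Byzantine-resilient aggregation rules, such as the one proposed by Blanchard et al. in the first background paper below, replace the mean precisely to withstand this kind of attack.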

I will present my research on theoretically sound and practical solutions to this latent issue.

Background papers
Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, by Blanchard, P., et al.
TensorFlow: A System for Large-Scale Machine Learning, by Abadi, M., et al.
Federated Learning: Strategies for Improving Communication Efficiency, by Konečný, J., et al.

Practical information

  • General public
  • Free

Tags

EDIC candidacy exam
