BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Redesigning decentralized ML algorithms: A systems view
DTSTART:20220120T120000
DTEND:20220120T140000
DTSTAMP:20260406T065709Z
UID:bf5f2cbe1376120cc1e3c107c821272def989e106e84b5fbf77dcf57
CATEGORIES:Conferences - Seminars
DESCRIPTION:Akash Dhasade\nEDIC candidacy exam\nExam president: Prof. Jame
 s Larus\nThesis advisor: Prof. Anne-Marie Kermarrec\nCo-examiner: Prof. Ni
 colas Flammarion\n\nAbstract\nDecentralized Learning and Federated Learnin
 g evolved to address two crucial needs: scalability\, by leveraging compu
 te power at the edge\, and user data privacy\, by sharing only locally tr
 ained models instead of raw data. However\, they are not without issues a
 rising from the training setting: (a) with the ever-growing size of dee
 p models\, communication remains expensive for the low-end edge devices t
 hat participate in training\; (b) heterogeneous data on client devices si
 gnificantly slows down convergence\; and (c) systems heterogeneity resul
 ts in stragglers\, dropouts\, and intermittently available clients\, to c
 ite a few. As a first research direction\, we consider systems that funda
 mentally offer stronger privacy guarantees\, such as Trusted Execution En
 vironments (TEEs)\, and rethink the design of learning algorithms to shar
 e raw data instead of models. By letting clients exchange raw data in dec
 entralized settings\, we aim to solve several challenges at once\, from e
 xpensive communication to data heterogeneity\, while achieving privacy an
 d scalability. Second\, we explore a design space of algorithms that ar
 e hard to analyze theoretically but could compensate in practice for stri
 ngent systems constraints on the learning process. This includes algorith
 ms that guess client model updates to compensate for their unavailability
  or lack of computation\, algorithms that share only partial models to re
 duce communication\, etc.\n\nBackground papers\nCommunication-efficient l
 earning of deep networks from decentralized data\, by McMahan\, H. B.\, e
 t al.\nTackling the objective inconsistency problem in heterogeneous fede
 rated optimization\, by Wang\, J.\, et al.\nTowards mitigating device het
 erogeneity in federated learning via adaptive model quantization\, by Ahm
 ed M. Abdelmoniem and Marco Canini
LOCATION:
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
