BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Decentralized Stochastic Optimization
DTSTART:20190826T140000
DTEND:20190826T160000
DTSTAMP:20260408T092628Z
UID:9e931b0af7174aaddf19c85371e71651380e7aec6d7842d093c8b903
CATEGORIES:Conferences - Seminars
DESCRIPTION:Anastasiia Koloskova\nEDIC candidacy exam\nExam president: Pro
 f. Volkan Cevher\nThesis advisor: Prof. Martin Jaggi\nCo-examiner: Prof. A
 li Sayed\n\nAbstract\nDecentralized optimization is a promising direction
  for optimizing machine learning models. It allows training to be distr
 ibuted over a large number of computing devices (e.g. mobile phones) wit
 hout moving users' data to central servers. Moreover\, it can give signi
 ficant speedups for training in datacenters over all-reduce SGD\, the cu
 rrent state-of-the-art parallel SGD implementation. In this writeup we d
 iscuss some of the recent advances in decentralized optimization and its
  current weaknesses. We first consider communication compression techniq
 ues for speeding up centralized training. The second paper we discuss sh
 ows that the communication topology does not influence the leading term
  in the convergence rate of stochastic decentralized optimization\, maki
 ng it competitive with centralized approaches. Finally\, we consider ano
 ther technique for making communication more efficient in decentralized
  training: time-varying directed network graphs and asynchronous communi
 cation.\n\nBackground papers\nQSGD: Communication-Efficient SGD via Grad
 ient Quantization and Encoding\, by Alistarh\, D.\, et al. NIPS 2017.\nC
 an Decentralized Algorithms Outperform Centralized Algorithms? A Case St
 udy for Decentralized Parallel Stochastic Gradient Descent\, by Lian\, X
 .\, et al. NIPS 2017.\nStochastic Gradient Push for Distributed Deep Lea
 rning\, by Assran\, M.\, et al. ICML 2019.
LOCATION:BC 010 https://plan.epfl.ch/?room==BC%20010
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
