BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Towards Communication-Efficient Distributed Machine Learning Techn
 iques
DTSTART:20180625T090000
DTEND:20180625T110000
DTSTAMP:20260407T035001Z
UID:f805397ca3b5faf6d320d9d5d8f73a717ff3e1005666060ef5bdac8b
CATEGORIES:Conferences - Seminars
DESCRIPTION:Arsany Guirguis\nEDIC candidacy exam\nExam president: Prof. Pa
 trick Thiran\nThesis advisor: Prof. Rachid Guerraoui\nCo-examiner: Prof. M
 artin Jaggi\n\nAbstract\nMachine Learning (ML) has proven powerful in der
 iving useful information from the ever-increasing amount of data availab
 le on the Internet. To make the most of this massive amount of data\, ML
 models are becoming larger and more complex. Yet\, training such complex
 models with large datasets is beyond the capabilities of a single machi
 ne. Hence\, the training of ML models is becoming distributed. Although
 distributing the learning task improves scalability\, it comes with com
 munication challenges. Existing work has already attempted to address t
 hese challenges in some specific cases\, but there is still room to adv
 ance the state of the art.\nIn this proposal\, I will present TensorFlo
 w\, a popular system for large-scale distributed machine learning\, and
 a couple of ideas that have been proposed to enhance the performance of
 the communication layer. In my research\, I am interested in the commun
 ication challenges that arise in different distributed ML environments.
 \n\nBackground papers\nTensorFlow: A System for Large-Scale Machine Lear
 ning\, by Abadi\, M. et al.\nPoseidon: An Efficient Communication Archit
 ecture for Distributed Deep Learning on GPU Clusters\, by Zhang\, H.\,
 et al.\nGaia: Geo-Distributed Machine Learning Approaching LAN Speeds\,
 by Hsieh\, K. et al.
LOCATION:BC 329 https://plan.epfl.ch/?room=BC%20329
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
