BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:IC Colloquium : Less Talking\, More Learning: Avoiding Coordinatio
 n In Parallel Machine Learning Algorithms
DTSTART:20160225T101500
DTEND:20160225T113000
DTSTAMP:20260407T061250Z
UID:275fe3ec18247590472b23b85cb8af77b1ca28f07f0d2284d1ac1164
CATEGORIES:Conferences - Seminars
DESCRIPTION:By : Dimitris Papailiopoulos - UC Berkeley\nIC Faculty candida
 te\nAbstract :\nThe recent success of machine learning (ML) in both scien
 ce and industry has generated an increasing demand to support ML algorit
 hms at scale. In this talk\, I will discuss strategies to gracefully sca
 le machine learning on different parallel computational platforms. A com
 mon approach to such scaling is coordination-free parallel algorithms\, w
 here individual processors run independently without communication\, thu
 s maximizing the time they spend computing. However\, analyzing the perf
 ormance of these algorithms can be challenging\, as they often introduce r
 ace conditions and synchronization problems.\nIn this talk\, I will intr
 oduce a general methodology for analyzing asynchronous parallel algorith
 ms. The key idea is to model the effects of core asynchrony as noise in t
 he algorithmic input. This allows us to understand the performance of se
 veral popular asynchronous machine learning approaches\, and to determin
 e when asynchrony effects might overwhelm them. To overcome these effect
 s\, I will propose a new framework for parallelizing ML algorithms\, whe
 re all memory conflicts and race conditions can be completely avoided. I w
 ill discuss the implementation of these ideas in practice\, and demonstr
 ate that they outperform the state-of-the-art across a large number of M
 L tasks on gigabyte-scale data sets.\nBio :\nDimitris Papailiopoulos is a
  postdoctoral researcher in the Department of Electrical Engineering and
  Computer Sciences at UC Berkeley and a member of the AMPLab. His resear
 ch interests span machine learning\, coding theory\, and parallel and di
 stributed algorithms\, with a current focus on coordination-free paralle
 l machine learning\, large-scale data and graph analytics\, and the use o
 f codes to speed up distributed computation. Dimitris completed his Ph.D
 . in electrical and computer engineering at UT Austin in 2014. At Austin h
 e worked under the supervision of Alex Dimakis. In 2015\, he received th
 e IEEE Signal Processing Society Young Author Best Paper Award.
LOCATION:BC 420 https://plan.epfl.ch/?room=BC%20420
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
