BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Faster Machine Learning on Big Data Sets
DTSTART:20140617T090000
DTSTAMP:20260406T185416Z
UID:4f377c31c8a7e2b8d3d735ceecc4f1207bef9e018f212cd2df877c9d
CATEGORIES:Conferences - Seminars
DESCRIPTION:James KWOK\, The Hong Kong University of Science and Techn
 ology\, Hong Kong.\nOn big data sets\, it is often challenging to lea
 rn the parameters of a machine learning model. A popular technique i
 s stochastic gradient\, which computes the gradient at a single samp
 le instead of over the whole data set. An alternative is distribute
 d processing\, which is particularly natural when a single computer c
 annot store or process the whole data set. In this talk\, some recen
 t extensions of both approaches will be presented. For stochastic gr
 adient\, instead of using information from only one sample\, we incre
 mentally approximate the full gradient by also reusing old gradient v
 alues from the other samples. The resulting algorithm enjoys the sam
 e computational simplicity as existing stochastic algorithms\, but c
 onverges faster. Existing distributed machine learning algorithms\, i
 n contrast\, are often synchronized\, so the system can move forwar
 d only at the pace of the slowest worker. I will present an asynchro
 nous algorithm that requires only partial synchronization\, allowin
 g the master to incorporate updates from the faster workers more oft
 en.
LOCATION:BC 420 https://plan.epfl.ch/?room==BC%20420
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
