BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Accelerating Training of Sparse DNNs
DTSTART:20200323T151500
DTEND:20200323T161500
DTSTAMP:20260407T051236Z
UID:797a92530dc4507adb5d097b74c4cc621689b8494dbb47c1c0d1facd
CATEGORIES:Conferences - Seminars
DESCRIPTION:Mieszko Lis\, Electrical and Computer Engineering faculty mem
 ber at the University of British Columbia\nIt is a truth universally ac
 knowledged that deep neural networks are heavily overparametrized\, and p
 runing techn
 iques can reduce weight counts by an order of magnitude. Specialized accel
 erator architectures have been proposed to convert this sparsity to energy
  and latency improvements at inference time by not fetching the pruned wei
 ghts and not carrying out the corresponding multiplications.\n\nComparativ
 ely little attention\, however\, has been paid to the problem of efficient
 ly training sparse networks. Typically\, a network is first trained withou
 t pruning\, then gradually pruned and retrained to recover accuracy — a 
 process which requires even more time and energy than training an unpruned
  network.\n\nIn this talk\, we will present a training algorithm that obta
 ins a pruned DNN directly by dynamically following the most productive gra
 dient during optimization\; this results in state-of-the-art pruning ratio
 s without compromising the accuracy of the trained classifier. We will als
 o discuss architectural challenges to accelerating such sparse training al
 gorithms\, and outline an architecture that can train sparse networks in m
 uch less time and energy than an equivalent unpruned DNN.
LOCATION:BC 420 https://plan.epfl.ch/?room==BC%20420
STATUS:CANCELLED
END:VEVENT
END:VCALENDAR
