BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Combining Tile-Wise Sparsity with Quantization for DNN Hardware Ac
 celerators
DTSTART:20220727T100000
DTEND:20220727T120000
DTSTAMP:20260407T091156Z
UID:eaa8e2277d1d342c7a2da035698b29a3b319f3ddcf8348917fab745b
CATEGORIES:Conferences - Seminars
DESCRIPTION:Fatih Yazici\nEDIC candidacy exam\nExam president: Prof. Paolo
  Ienne\nThesis advisor: Prof. Babak Falsafi\nCo-examiner: Prof. Martin Jag
 gi\n\nAbstract\nDeep Neural Networks (DNNs) have demonstrated superior ac
 curacy over conventional methods in numerous problems\, including but not
  limited to image recognition\, speech recognition\, text translation\, a
 nd self-driving cars. However\, DNNs comprise billions of parameters\, wh
 ich makes training them a challenge on traditional computing platforms su
 ch as CPUs. In this proposal\, we compare various approaches to DNN hardw
 are accelerators. Dense hardware accelerators offer high throughput but f
 ail to exploit the redundant nature of DNNs. Sparse hardware accelerators
  exploit this redundancy but are difficult to implement due to increased 
 architectural complexity. Software solutions promise a good trade-off\, e
 xploiting sparsity while executing on efficient dense hardware.\n\nBackgr
 ound papers\n[1] N. P. Jouppi et al.\, "Ten Lessons From Three Generation
 s Shaped Google's TPUv4i: Industrial Product\," 2021 ACM/IEEE 48th Annual
  International Symposium on Computer Architecture (ISCA)\, 2021\, pp. 1-1
 4\, doi: 10.1109/ISCA52012.2021.00010.\nAvailable at: https://ieeexplore.
 ieee.org/document/9499913\n\n[2] E. Qin et al.\, "SIGMA: A Sparse and Irr
 egular GEMM Accelerator with Flexible Interconnects for DNN Training\," 2
 020 IEEE International Symposium on High Performance Computer Architectur
 e (HPCA)\, 2020\, pp. 58-70\, doi: 10.1109/HPCA47549.2020.00015.\nAvailab
 le at: https://ieeexplore.ieee.org/document/9065523\n\n[3] C. Guo et al.\
 , "Accelerating Sparse DNN Models without Hardware-Support via Tile-Wise 
 Sparsity\," SC20: International Conference for High Performance Computing
 \, Networking\, Storage and Analysis\, 2020\, pp. 1-15\, doi: 10.1109/SC4
 1405.2020.00020.\nAvailable at: https://ieeexplore.ieee.org/document/9355
 304
LOCATION:BC 010 https://plan.epfl.ch/?room=BC%20010
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
