BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Co-design of Deep Neural Networks and Hardware Accelerators
DTSTART:20190826T120000
DTEND:20190826T140000
DTSTAMP:20260427T230508Z
UID:035c5e23a9e85ce1296f4ecdd2b70e2b68b41e1a23f298f03539c38e
CATEGORIES:Conferences - Seminars
DESCRIPTION:Pradeep Thambiah Kathirgamaraja\nEDIC candidacy exam\nExam pre
 sident: Prof. Paolo Ienne\nThesis advisor: Prof. Babak Falsafi\nCo-examine
 r: Prof. Martin Jaggi\n\nAbstract\nDeep neural networks (DNNs) have achi
 eved dominant accuracy in computer vision and natural language processi
 ng applications. However\, DNNs are compute- and memory-intensive workl
 oads\, and their performance on general-purpose hardware is insufficien
 t for many use cases. Consequently\, hardware accelerators for DNN work
 loads have started to emerge. Various hardware techniques are used to b
 uild these accelerators. However\, significant energy-efficiency and pe
 rformance improvements can be achieved when DNNs and accelerator platfo
 rms are co-designed. In this proposal\, we investigate promising co-des
 ign techniques that improve the performance of DNN execution on hardwar
 e accelerators and propose an accelerator design framework for fast exp
 loration of the accelerator design space.\n\nBackground papers\n1. Dee
 p Compression: Compressing Deep Neural Networks with Pruning\, Trained
  Quantization and Huffman Coding\, by Han\, S.\, et al.\n2. EIE: Effic
 ient Inference Engine on Compressed Deep Neural Network (ISCA 2016)\, b
 y Han\, S.\, et al.\n3. Minerva: Enabling Low-Power\, Highly-Accurate D
 eep Neural Network Accelerators (ISCA 2016)\, by Reagen\, B.\, et al.
LOCATION:BC 229 https://plan.epfl.ch/?room=BC%20229
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
