Co-design of Deep Neural Networks and Hardware Accelerators


Event details

Date 26.08.2019
Hour 12:00 - 14:00
Speaker Pradeep Thambiah Kathirgamaraja
Category Conferences - Seminars
EDIC candidacy exam
Exam president: Prof. Paolo Ienne
Thesis advisor: Prof. Babak Falsafi
Co-examiner: Prof. Martin Jaggi

Abstract
Deep neural networks (DNNs) have achieved dominant accuracy in computer vision and natural language processing applications. However, DNNs are compute- and memory-intensive workloads, and their performance on general-purpose hardware is insufficient for many use cases. As a result, hardware accelerators for DNN workloads have started to emerge, built using a variety of hardware techniques. However, significant energy-efficiency and performance improvements can be achieved when DNNs and accelerator platforms are co-designed. In this proposal, we will investigate promising co-design techniques that can improve the performance of DNN execution on hardware accelerators, and propose an accelerator design framework for fast exploration of the accelerator design space.
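One family of co-design techniques is model compression, e.g. the pruning and quantization covered by the background papers below. The following is a minimal NumPy sketch of magnitude pruning followed by uniform quantization; it is an illustration only, not the pipeline of any of the listed papers (which use trained quantization with codebooks and retraining), and the layer shape, sparsity target, and bit width are assumed values.

```python
# Illustrative sketch: magnitude pruning + uniform quantization of one layer.
# Shape, sparsity, and bit width are assumptions chosen for the example.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 512)).astype(np.float32)  # a dense layer

# 1) Magnitude pruning: zero out the smallest-magnitude weights.
sparsity = 0.9                                    # assumed: keep 10% of weights
threshold = np.quantile(np.abs(weights), sparsity)
mask = np.abs(weights) >= threshold
pruned = weights * mask

# 2) Uniform quantization of the surviving weights to a small integer codebook.
bits = 4                                          # assumed 4-bit weights
levels = 2 ** bits
w_min, w_max = pruned[mask].min(), pruned[mask].max()
scale = (w_max - w_min) / (levels - 1)
codes = np.round((pruned - w_min) / scale).astype(np.int32)   # integer codes
dequantized = (codes * scale + w_min) * mask                  # reconstruction

print(f"kept weights: {mask.mean():.1%}, "
      f"max quantization error: {np.abs(dequantized - pruned).max():.4f}")
```

Compressing weights this way shrinks the memory footprint and enables sparse, low-precision arithmetic, which is precisely what accelerator designs such as EIE exploit in hardware.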


Background papers
1. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, by Han, S., et al.
2. EIE: Efficient Inference Engine on Compressed Deep Neural Network (ISCA 2016), by Han, S., et al.
3. Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators (ISCA 2016), by Reagen B., et al.

Practical information

  • General public
  • Free

Tags

EDIC candidacy exam
