Model-specific DNN Accelerators

Event details

Date 28.05.2019
Hour 15:00 – 17:00
Speaker Ahmet Caner Yüzügüler
Category Conferences - Seminars
EDIC candidacy exam
Exam president: Prof. Martin Jaggi
Thesis advisor: Prof. Babak Falsafi
Thesis co-advisor: Prof. Pascal Frossard
Co-examiner: Prof. Paolo Ienne

Abstract
The inefficiencies in DNN accelerators stem from their generic dataflows, which are intended to support a large variety of DNN models. However, datasets from the same domain usually do not require different DNN model structures. For example, although the datasets for object classification (ImageNet), face recognition (FaceNet), and cancer diagnostics are inherently very different from one another, models for all of these datasets have been successfully trained with the same DNN structure, namely GoogLeNet. In this research, we propose model-specific DNN accelerators, which are optimized for a single DNN structure. The goal of this research is to study the properties of DNN structures that make them applicable to a wide range of problems, and then to show how much computational efficiency can be gained when a hardware accelerator is optimized for one specific DNN structure.

Background papers
Efficient Processing of Deep Neural Networks: A Tutorial and Survey (Sections V, VI, and VII), by Sze, V., et al.
Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks, by Chen, Y.-H., et al.
SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks, by Parashar, A., et al.

Practical information

  • General public
  • Free
