BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Customizing Hardware Parameters to Optimize for Specific DNN Workl
 oads
DTSTART:20210624T140000
DTEND:20210624T160000
DTSTAMP:20260407T084700Z
UID:9de626904a5e6a9a178fbaeec829a465039be4fb8111ae4c2233b053
CATEGORIES:Conferences - Seminars
DESCRIPTION:Canbert Sönmez\nEDIC candidacy exam\nexam president: Prof. Ma
 rtin Jaggi\nthesis advisor: Prof. Babak Falsafi\nco-examiner: Prof. Paolo 
 Ienne\n\nAbstract\nDeep Neural Networks (DNNs) have demonstrated superior acc
 uracy over traditional methods in a wide range of problems such as image r
 ecognition\, speech recognition\, translation\, self-driving cars\, and pl
 aying computer games. However\, DNNs usually have many parameters\, making
  them computationally expensive. As a result\, mapping the DNN computation
  to conventional computing platforms\, such as CPUs and GPUs\, while remai
 ning within specific latency and power consumption bounds\, becomes challe
 nging. Consequently\, we observe an increasing interest in research on DNN
  hardware acceleration. However\, as each DNN model exhibits a different d
 ataflow pattern\, it is impossible to design an accelerator that suits eve
 ry possible DNN workload\, leading to customized hardware designs targetin
 g specific workloads. In this proposal\, we examine 3 different DNN accele
 rators\, describe how they handle different dataflow patterns\, and compar
 e them. Our analysis allows us to identify how each of these accelerators 
 can be improved. Based on the analysis\, we present our research proposal\
 , which aims to develop a method to automatically design hardware that can
  process a given workload of DNN models optimally. The designed hardware i
 nherits features from these 3 accelerators and optimizes computation fo
 r the given DNN workload by customizing its interconnect type and computat
 ion unit size.\n\nBackground papers\n\n	N. P. Jouppi et al.\, "In-datacen
 ter performance analysis of a tensor processing unit\," 2017 ACM/IEEE 44t
 h Annual International Symposium on Computer Architecture (ISCA)\, 2017\, 
 pp. 1-12\, doi: 10.1145/3079856.3080246.\n	Hyoukjun Kwon\, Ananda Samajdar
 \, and Tushar Krishna. 2018. MAERI: Enabling Flexible Dataflow Mapping ov
 er DNN Accelerators via Reconfigurable Interconnects. SIGPLAN Not. 53\, 
 2 (February 2018)\, 461–475\, doi: 10.1145/3296957.3173176
 \n	H. T. Kung\, B. McDanel\, S. Q. Zhang\, X. Dong and C. C. Chen\, "Maest
 ro: A Memory-on-Logic Architecture for Coordinated Parallel Use of Many Sy
 stolic Arrays\," 2019 IEEE 30th International Conference on Application-s
 pecific Systems\, Architectures and Processors (ASAP)\, 2019\, pp. 42-50\,
  doi: 10.1109/ASAP.2019.00-31.\n
LOCATION:
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
