BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Exploiting Value Properties to Accelerate Deep Learning in Hardwar
 e
DTSTART:20160415T160000
DTEND:20160415T170000
DTSTAMP:20260503T154251Z
UID:baf963e9a730f8bdcf7d27e671de4ece1eb745268b4050c8f2acf999
CATEGORIES:Conferences - Seminars
DESCRIPTION:Prof. Andreas Moshovos\, Faculty of Applied Science & Engineer
 ing\, University of Toronto\nBio: Andreas Moshovos\, along with his stude
 nts\, has been answering the question “what is the best possible digita
 l computation structure to solve problem X or to run application Y?”\, w
 here “best” is a characteristic (or a combination thereof) such as po
 wer\, cost\, complexity\, etc. Much of his work has been on high-performa
 nce processor and memory system design\, and it has influenced commercia
 l designs. Andreas Moshovos received the Ptychio and a Master’s in Comp
 uter Science from the University of Crete in 1990 and 1992\, respectivel
 y\, and the Ph.D. degree in Computer Sciences from the University of Wis
 consin-Madison in 1998. He has taught Computer Design at Northwestern Un
 iversity\, USA (Assistant Professor\, 1998-2000) and at the Ecole Polyte
 chnique Fédérale de Lausanne\, Switzerland (Invited Professor\, 2011)
 \, and since 2000 he has been at the Electrical and Computer Engineerin
 g Department of the University of Toronto\, where he is now a professor.
 \nAndreas Moshovos has served as the Program Chair for the ACM/IEEE Inte
 rnational Symposium on Microarchitecture in 2011 and on numerous technic
 al program committees in the area of Computer Architecture. He is an Ass
 ociate Editor for the IEEE Computer Architecture Letters and the Elsevie
 r Journal of Parallel and Distributed Computing.\nDeep Neural Networks (
 DNNs) are becoming ubiquitous thanks to their exceptional capacity to ex
 tract meaningful features from complex pieces of information such as tex
 t\, images\, or voice. DNN concepts are not new\, but DNNs are currentl
 y enjoying a renaissance\, in part as a result of the increase in comput
 ing capability available in commodity computing platforms such as genera
 l-purpose graphics processors. Yet DNN progress is severely restricted b
 y the processing capability of these commodity platforms\, and this ha
 s motivated numerous special-purpose hardware DNN architectures.\nWe wil
 l present our work on a value-based approach to accelerating Deep Learni
 ng in hardware\, where we observe that the values produced in DNNs exhib
 it properties that we can exploit to further improve performance while r
 emaining energy efficient. Specifically\, our accelerators exploit two p
 roperties:\n1) Many DNN computations prove ineffectual.\n2) The numerica
 l precision required by DNNs varies across and within DNNs.\nWe will dis
 cuss how these properties can be exploited at the hardware level\, resul
 ting in accelerators that eliminate ineffectual computations on the fl
 y and whose execution time scales almost linearly with the length of th
 e numerical representation used. Both techniques will be presented as m
 odifications of a state-of-the-art accelerator\, boosting its performanc
 e on average by about 1.4x and 1.9x\, respectively.
LOCATION:BC 02 https://plan.epfl.ch/?room=BC%2002
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
