Exploiting Value Properties to Accelerate Deep Learning in Hardware


Event details

Date 15.04.2016
Hour 16:00 - 17:00
Speaker Prof. Andreas Moshovos, Faculty of Applied Science & Engineering, University of Toronto
Bio: Andreas Moshovos, along with his students, has been answering the question “what is the best possible digital computation structure to solve problem X or to run application Y?”, where “best” refers to a characteristic, or a combination of characteristics, such as power, cost, or complexity. Much of his work has been on high-performance processor and memory system design and has influenced commercial designs. Andreas Moshovos received the Ptychio and a Master’s degree in Computer Science from the University of Crete in 1990 and 1992, respectively, and the Ph.D. degree in Computer Sciences from the University of Wisconsin-Madison in 1998. He has taught Computer Design at Northwestern University, USA (Assistant Professor, 1998-2000), and at the Ecole Polytechnique Fédérale de Lausanne, Switzerland (Invited Professor, 2011), and since 2000 has been at the Electrical and Computer Engineering Department of the University of Toronto, where he is now a Professor.

Andreas Moshovos served as the Program Chair for the ACM/IEEE International Symposium on Microarchitecture in 2011 and has served on numerous technical program committees in the area of Computer Architecture. He is an Associate Editor for the IEEE Computer Architecture Letters and the Elsevier Journal of Parallel and Distributed Computing.
Category Conferences - Seminars
Deep Neural Networks (DNNs) are becoming ubiquitous thanks to their exceptional capacity to extract meaningful features from complex pieces of information such as text, images, or voice. DNN concepts are not new, but DNNs are currently enjoying a renaissance, in part as a result of the increase in computing capability available in commodity computing platforms such as general-purpose graphics processors. Yet, DNN progress is severely restricted by the processing capability of these commodity platforms, which has motivated numerous special-purpose hardware DNN architectures.

We will present our work on a value-based approach to accelerating Deep Learning in hardware, in which we observe that the values produced in DNNs exhibit properties that can be exploited to further improve performance while remaining energy efficient. Specifically, our accelerators exploit two properties:

1) Many DNN computations prove ineffectual.

2) The numerical precision required by DNNs varies across and within networks.

We will discuss how these properties can be exploited at the hardware level, resulting in accelerators that eliminate ineffectual computations on the fly and whose execution time scales almost linearly with the length of the numerical representation used. Both techniques will be presented as modifications of a state-of-the-art accelerator, boosting its performance on average by about 1.4x and 1.9x, respectively.
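The abstract does not spell out the microarchitecture, but the two ideas can be illustrated in software. The Python sketch below is a conceptual illustration only; the function names and the simple cycle model are assumptions, not the speaker's design. It skips zero-valued activations, mirroring the elimination of ineffectual computations, and counts one cycle per activation bit, mirroring execution time that scales with the numerical precision actually needed.

# Hypothetical sketch of the two value-based ideas (not the actual hardware):
# skip ineffectual (zero-valued) products, and perform each multiplication
# bit-serially so the cycle count scales with the activation's effective precision.
# Assumes non-negative activations, as produced by ReLU layers.

def effective_bits(x: int) -> int:
    """Number of bits actually needed to represent a positive value."""
    return max(1, x.bit_length())

def bit_serial_dot(activations, weights):
    """Return (dot_product, cycles) for one neuron, counting one cycle per
    activation bit and skipping zero activations entirely."""
    total, cycles = 0, 0
    for a, w in zip(activations, weights):
        if a == 0:                              # ineffectual computation: skipped
            continue
        for bit in range(effective_bits(a)):    # one cycle per bit of precision
            if (a >> bit) & 1:
                total += w << bit               # shift-and-add partial product
            cycles += 1
    return total, cycles

# Example: many zero activations and low precision keep the cycle count small.
acts = [0, 3, 0, 0, 12, 0, 1, 0]   # 8-bit activations, mostly zero
wts  = [5, -2, 7, 1, 4, -3, 9, 2]
result, cycles = bit_serial_dot(acts, wts)
print(result, cycles)              # 51 7: vs. 8 activations * 8 bits = 64 cycles
                                   # for a dense, fixed-precision unit

Under these (assumed) counting rules, performance improves both when values are zero and when the values that remain need fewer bits, which is the intuition behind the roughly linear scaling with representation length mentioned above.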

Practical information

  • Informed public
  • Free

Organizer

  • Babak Falsafi

Contact

  • Stéphanie Baillargues
