EE Distinguished Speakers Seminar: In-memory Computing for AI Applications

Event details
Date | 29.11.2019 |
Hour | 13:15 – 14:15 |
Speaker | Evangelos Eleftheriou, received his Ph.D. degree in Electrical Engineering from Carleton University, Ottawa, Canada. He is currently responsible for the neuromorphic computing activities of IBM Research – Zurich. His research interests include signal processing, coding, non-volatile memory technologies and emerging computing paradigms such as neuromorphic and in-memory computing. In 2002, he became a Fellow of the IEEE. He was co-recipient of the 2003 IEEE Communications Society Leonard G. Abraham Prize Paper Award. He was also co-recipient of the 2005 Technology Award of the Eduard Rhein Foundation. In 2005, he was appointed an IBM Fellow. The same year he was also inducted into the IBM Academy of Technology. In 2009, he was co-recipient of the IEEE CSS Control Systems Technology Award and of the IEEE Transactions on Control Systems Technology Outstanding Paper Award. In 2016, he received an honoris causa professorship from the University of Patras, Greece. In 2018, he was inducted into the US National Academy of Engineering as Foreign Member. |
Category | Conferences - Seminars |
Abstract: Performing computations on conventional von Neumann computing systems results in a substantial amount of data being moved back and forth between the physically separated memory and processing units, which creates a performance bottleneck. It is becoming increasingly clear that for application areas such as AI (Artificial Intelligence), we need to transition to computing architectures in which memory and logic coexist in some form. In-memory computing is a novel non-von Neumann approach in which certain computational tasks are performed in the memory itself. This is enabled by the physical attributes and state dynamics of memory devices, in particular resistance-based non-volatile memory technology. Several computational tasks, such as logical operations, arithmetic operations and even certain machine learning tasks, can be implemented in such a computational memory unit. I will present how computational memories accelerate AI applications and will show small- and large-scale experimental demonstrations that perform high-level computational primitives, such as ultra-low-power inference engines, optimization solvers including compressed sensing and sparse coding, linear solvers and temporal correlation detection. Moreover, I will discuss the efficacy of this approach in addressing not only the inference but also the training of deep neural networks. The results show that this co-existence of computation and storage at the nanometer scale could be the enabler for new, ultra-dense, low-power, and massively parallel computing systems. Thus, by augmenting conventional computing systems, in-memory computing could help achieve orders-of-magnitude improvements in performance and efficiency.
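To make the core idea concrete: in a resistive crossbar, a matrix-vector product is computed in place, since Ohm's law gives each device's current (conductance × voltage) and Kirchhoff's current law sums those currents along each column. The sketch below simulates this in plain Python; the Gaussian programming-noise model and its magnitude are illustrative assumptions, not a description of the actual PCM/ReRAM devices discussed in the talk.

```python
import random

def program_crossbar(weights, noise_std=0.02):
    # Map each weight to a device conductance. Real resistive devices
    # exhibit programming variability; here it is modeled (as an
    # illustrative assumption) as additive Gaussian noise.
    return [[w + random.gauss(0.0, noise_std) for w in row] for row in weights]

def crossbar_matvec(conductances, voltages):
    # Ohm's law: per-device current = conductance * applied voltage.
    # Kirchhoff's current law: currents along a column sum, so each
    # output is a dot product obtained in a single physical step.
    return [sum(g * v for g, v in zip(row, voltages)) for row in conductances]

weights = [[0.5, -0.2], [0.1, 0.8]]
inputs = [1.0, 0.5]

# Exact digital result for comparison.
exact = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
# Analog in-memory result: correct up to device noise.
approx = crossbar_matvec(program_crossbar(weights), inputs)
```

Because the multiply-accumulate happens where the weights are stored, no weight data moves between memory and processor, which is the source of the efficiency gains the abstract describes; the price is that results are approximate, which is why inference (noise-tolerant) is the most natural first application.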
Practical information
- General public
- Free
Organizer
- Prof. Elison Matioli
Contact
- [email protected] / +41 21 69 33721