BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:EE Distinguished Speakers Seminar: In-memory Computing for AI Appl
 ications
DTSTART:20191129T131500
DTEND:20191129T141500
DTSTAMP:20260508T053418Z
UID:1ff6dd8b282407459a712c4d2b0ad1ac5339fc9a90e5fda6e0593f9e
CATEGORIES:Conferences - Seminars
DESCRIPTION:Evangelos Eleftheriou received his Ph.D. degree in Electrica
 l Engineering from Carleton University\, Ottawa\, Canada. He is currentl
 y responsible for the neuromorphic computing activities of IBM Resear
 ch – Zurich. His research interests include signal processing\, coding\, n
 on-vo
 latile memory technologies and emerging computing paradigms such as neurom
 orphic and in-memory computing. In 2002\, he became a Fellow of the IEEE. 
 He was co-recipient of the 2003 IEEE Communications Society Leonard G. Abr
 aham Prize Paper Award. He was also co-recipient of the 2005 Technology Aw
 ard of the Eduard Rhein Foundation. In 2005\, he was appointed an IBM Fell
 ow. The same year he was also inducted into the IBM Academy of Technology.
  In 2009\, he was co-recipient of the IEEE CSS Control Systems Technology 
 Award and of the IEEE Transactions on Control Systems Technology Outstandi
 ng Paper Award. In 2016\, he received an honoris causa professorship fro
 m the University of Patras\, Greece. In 2018\, he was inducted into the U
 S National Academy of Engineering as a Foreign Member.\nAbstract: Perform
 ing computations on conventional von Neumann computing systems results i
 n a substantial amount of data being moved back and forth between the ph
 ysically separated memory and processing units\, which creates a perform
 ance bottleneck. It is becoming increasingly clear that for application a
 reas such as AI (Art
 ificial Intelligence)\, we need to transition to computing architectures i
 n which memory and logic coexist in some form. In-memory computing is a no
 vel non-von Neumann approach where certain computational tasks are perform
 ed in the memory itself. This is enabled by the physical attributes and st
 ate dynamics of memory devices\, in particular resistance-based non-volati
 le memory technology. Several computational tasks such as logical operatio
 ns\, arithmetic operations and even certain machine learning tasks can be 
 implemented in such a computational memory unit. I will present how comput
 ational memories accelerate AI applications and will show small- and large
 -scale experimental demonstrations that perform high-level computational p
 rimitives\, such as ultra-low-power inference engines\, optimization solve
 rs including compressed sensing and sparse coding\, linear solvers and tem
 poral correlation detection. Moreover\, I will discuss the efficacy of t
 his approach in addressing not only inference but also the training of d
 eep neural networks. The results show that this co-existence of computa
 tion and storage at the nanometer scale could be the enabler for new\, ult
 ra-dense\, low-power\, and massively parallel computing systems. Thus\, by
  augmenting conventional computing systems\, in-memory computing could hel
 p achieve orders of magnitude improvement in performance and efficiency.\n
LOCATION:MXF 1 https://plan.epfl.ch/?room==MXF%201
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
