BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Enabling Continuous Learning through Synaptic Plasticity in Hardwa
 re
DTSTART:20191101T141500
DTEND:20191101T151500
DTSTAMP:20260406T024638Z
UID:aaedbda848ef6d7aee673757aa4a6e7ef44a2ef1015952143a71aedc
CATEGORIES:Conferences - Seminars
DESCRIPTION:Tushar Krishna\, Assistant Professor in the School of Electric
 al and Computer Engineering at Georgia Tech.\nEver since modern computers 
 were invented\, the dream of creating artificial intelligence (AI) has cap
 tivated humanity. We are fortunate to live in an era when\, thanks to deep
  learning (DL)\, computer programs have paralleled\, and in many cases 
 even surpassed\, human-level accuracy in tasks like visual perception and 
 speech synthesis. However\, we are still far from realizing general-purpos
 e AI. The problem lies in the fact that the development of 
 supervised-learning-based DL solutions today is mostly open loop. A 
 typical DL model is
  created by hand-tuning the deep neural network (DNN) topology by a team o
 f experts over multiple iterations\, followed by training over petabytes o
 f labeled data. Once trained\, the DNN provides high accuracy for the task
  at hand\; if the task changes\, however\, the DNN model needs to be re-de
 signed and re-trained before it can be deployed. A general-purpose AI syst
 em\, in contrast\, needs to have the ability to constantly interact with t
 he environment and learn by adding and removing connections within the DNN
  autonomously\, just like our brain does. This is known as synaptic plasti
 city.\n\nIn this talk\, we will present our research efforts towards enabl
 ing general-purpose AI leveraging plasticity in both the algorithm and har
 dware. First\, we will present GeneSys (MICRO 2018)\, a HW-SW prototype of
  a closed-loop learning system that continuously evolves the structure and
  weights of a DNN for the task at hand using genetic algorithms\, providin
 g 100-10000x higher performance and energy-efficiency over state-of-the-ar
 t embedded and desktop CPU and GPU systems. Next\, we will present a DNN a
 ccelerator substrate called MAERI (ASPLOS 2018)\, built using lightweight
 \, non-blocking\, reconfigurable interconnects\, which supports efficient m
 apping of regular and irregular DNNs with arbitrary dataflows\, providing 
 ~100% utilization of all compute units and a 3X improvement in speed and 
 energy-efficiency over our prior work Eyeriss (ISSCC 2016). Finally\, 
 time perm
 itting\, we will describe our research in enabling rapid design-space expl
 oration and prototyping of hardware accelerators using our dataflow DSL + 
 cost-model called MAESTRO (MICRO 2019).\n\nBio: Tushar Krishna is an Ass
 istant Professor in the School of Electrical and Computer Engineering at G
 eorgia Tech. He also holds the ON Semiconductor Junior Professorship. He 
 has a Ph.D. in Electrical Engineering and Computer Science from MIT (2014)
 \, an M.S.E. in Electrical Engineering from Princeton University (2009)\, an
 d a B.Tech in Electrical Engineering from the Indian Institute of Technolo
 gy (IIT) Delhi (2007). Before joining Georgia Tech in 2015\, Dr. Krishna 
 spent a year as a post-doctoral researcher at Intel in Massachusetts.\n\nDr
 . Krishna’s research spans computer architecture\, interconnection ne
 tworks\, networks-on-chip (NoC)\, and deep learning accelerators\, with a f
 ocus on optimizing data movement in modern computing systems. Three of hi
 s papers have been selected for IEEE Micro’s Top Picks from Computer Arc
 hitecture\, one more received an honorable mention\, and two have won best
  paper awards. He received the National Science Foundation (NSF) CRII awar
 d in 2018\, and both a Google Faculty Award and a Facebook Faculty Award i
 n 2019. He also received the “Class of 1940 Course Survey Teaching Effec
 tiveness” Award from Georgia Tech in 2018.
LOCATION:BC 420 https://plan.epfl.ch/?room==BC%20420
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
