BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Memory Processing Units
DTSTART:20150626T103000
DTEND:20150626T113000
DTSTAMP:20260408T201733Z
UID:55ad048d19b023be63f276bfcf6da4921d1623771e0093381aecb165
CATEGORIES:Conferences - Seminars
DESCRIPTION:Karu Sankaralingam\, University of Wisconsin-Madison\n3D die
 -stacking of logic and DRAM provides a unique opportunity to revisit th
 e ideas of in-memory processing and eliminate decades of "inefficient g
 lue"\, such as multi-level cache hierarchies\, OOO processing\, deep pi
 pelining\, and speculative execution\, that we have built to bridge mem
 ory and processing. Compared to conventional DRAMs\, 3D die-stacked DRA
 M (embodied by standards like HMC and HBM) has nearly an order of magni
 tude improvement in bandwidth and latency between logic and memory\, as
  well as significant power reductions. In this talk I will cover our wo
 rk on a new architecture called Memory Processing Units (MPU)\, which i
 s built on two key ideas. On the programming and execution model side\,
  we propose memory remote-procedure calls to offload entire pieces of c
 omputation to a memory+processing unit. On the hardware side\, we argue
  that energy-efficient small caches and non-speculative\, low-frequency
 \, ultra-short-pipeline processing cores integrated closely with memory
  provide efficient processing. Across a wide domain of workloads spanni
 ng SQL database processing\, networking\, and internet search\, we show
  the MPU model handily outperforms conventional processors and emerging
  low-power ARM servers. Performance improvements range from 1.9X to 2.7
 X\, with energy savings ranging from 6.5X to 18X.
LOCATION:BC 420 https://plan.epfl.ch/?room=BC%20420
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
