The Energy Challenges of Caching and Moving Data On Your Chip

Event details

Date 19.03.2015
Hour 16:15 – 17:30
Speaker Arrvindh Shriraman, Assistant Professor in the School of Computing Science at Simon Fraser University, Canada
Location
Category Conferences - Seminars
Today, power constraints determine our ability to keep compute units active and busy. Interestingly, storing and moving the data used and produced by a computation consumes more energy than the computation itself. Whether for multicores, GPUs, or fixed-function accelerators, how we move data to feed the compute units has a critical impact on both the programming model and compute efficiency. We observe that, unlike the latency overhead of data movement, which can potentially be hidden, the energy overhead dictates that we fundamentally reduce waste in the memory hierarchy.

Our research focuses on cache designs and coherence protocols that improve the energy efficiency of the memory hierarchy by adapting data storage and movement to the application's characteristics. I will focus in particular on the design of a new coherence substrate, Temporal Coherence, that helps build energy-efficient cache hierarchies for both GPUs and fixed-function accelerators. I will demonstrate how to realize release consistency on a GPU system at low overhead and discuss the resulting improvements to the GPU programming model. I will also demonstrate how Temporal Coherence can help offload fine-grained program regions to fixed-function hardware accelerators and move data efficiently between them. The overall lessons from our work highlight the importance of optimizing the memory hierarchy with a focus on energy efficiency.
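For readers unfamiliar with release consistency on a GPU, the sketch below is a conventional CUDA illustration, not the speaker's Temporal Coherence substrate: one thread block publishes a value to another, using __threadfence() to order the data write before the flag write (the "release" side) and the flag read before the data read (the "acquire" side). The kernel and variable names are illustrative assumptions only.

    // Minimal producer/consumer handshake between two thread blocks.
    // Hypothetical names; a sketch of release/acquire-style ordering,
    // not the protocol presented in the talk.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void producer_consumer(volatile int *data, volatile int *flag)
    {
        if (blockIdx.x == 0 && threadIdx.x == 0) {
            *data = 42;            // produce the payload
            __threadfence();       // "release": payload becomes visible before the flag
            *flag = 1;             // publish
        } else if (blockIdx.x == 1 && threadIdx.x == 0) {
            while (*flag == 0) {}  // spin until the flag is published
            __threadfence();       // "acquire": order the payload read after the flag read
            printf("consumer read %d\n", *data);
        }
    }

    int main()
    {
        int *data, *flag;
        cudaMalloc(&data, sizeof(int));
        cudaMalloc(&flag, sizeof(int));
        cudaMemset(data, 0, sizeof(int));
        cudaMemset(flag, 0, sizeof(int));
        producer_consumer<<<2, 1>>>(data, flag);  // two blocks: producer and consumer
        cudaDeviceSynchronize();
        cudaFree(data);
        cudaFree(flag);
        return 0;
    }

The point of the sketch is that every such synchronization today implies fences and data movement through the memory hierarchy; the talk addresses the energy cost of exactly this kind of traffic.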

Practical information

  • Informed public
  • Free

Organizer

  • Babak Falsafi

Contact

  • Stéphanie Baillargues
