Dynamic Scheduling at Large Scale

Event details
Date     | 25.07.2012
Hour     | 15:00
Speaker  | Christos Kozyrakis, Stanford University
Location |
Category | Conferences - Seminars
Multi-core chips will soon include hundreds of cores, support thousands of hardware threads, and feature deep memory hierarchies with non-uniform latency characteristics. In such systems, the latency and energy overheads of remote memory accesses will dwarf those of computation. To maximize efficiency, we must schedule parallel computations in a way that optimizes for locality while maintaining load balance and minimizing scheduling overheads.
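As a rough illustration of what locality-aware scheduling can look like, the sketch below tags each task with the memory node that holds its data, keeps one run queue per node, and lets a worker take remote work only when its local queue is empty. This is not code from the talk; all names are hypothetical, and synchronization is omitted for brevity.

    // Hypothetical sketch of locality-aware task scheduling (single-threaded,
    // no synchronization): prefer tasks whose data is on the local memory node,
    // and fall back to "stealing" remote tasks only to preserve load balance.
    #include <deque>
    #include <functional>
    #include <iostream>
    #include <vector>

    struct Task {
        int home_node;              // memory node that holds the task's data
        std::function<void()> run;  // the work itself
    };

    struct Scheduler {
        std::vector<std::deque<Task>> queues;  // one run queue per memory node

        explicit Scheduler(int nodes) : queues(nodes) {}

        void submit(Task t) { queues[t.home_node].push_back(std::move(t)); }

        // Run the next task for a worker on `node`: local first, then steal.
        bool run_next(int node) {
            if (!queues[node].empty()) {      // local hit: cheap memory access
                Task t = std::move(queues[node].front());
                queues[node].pop_front();
                t.run();
                return true;
            }
            for (auto& q : queues) {          // steal remotely to stay busy
                if (!q.empty()) {
                    Task t = std::move(q.back());
                    q.pop_back();
                    t.run();
                    return true;
                }
            }
            return false;                     // nothing left to run
        }
    };

    int main() {
        Scheduler sched(2);  // pretend the machine has two memory nodes
        for (int i = 0; i < 4; ++i)
            sched.submit({i % 2, [i] { std::cout << "task " << i << "\n"; }});
        while (sched.run_next(0)) {}  // one worker on node 0 drains both queues
    }

The point of the sketch is the policy order rather than the data structures: local work is always preferred, and remote accesses are paid only when the alternative is an idle core.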
This talk will present work on dynamic scheduling for large-scale multi-core systems at the Pervasive Parallelism Lab (PPL) at Stanford University. It will first discuss the broad potential of locality-aware scheduling and the importance of using high-level information from the developer or the programming model to achieve efficient execution. Second, it will present GRAMPS, a scheduling and runtime system for pipeline-parallel programs that optimizes memory behavior while performing fine-grain dynamic load balancing with low overhead; even on today's multi-core chips, GRAMPS outperforms commonly used scheduling approaches such as task-stealing, GPGPU, and static streaming schedulers. Third, it will present simple hardware support that enables low-overhead, software-mostly runtime systems for fine-grain parallelism that scale efficiently to hundreds of hardware threads. Finally, it will discuss directions for future work on dynamic resource management in large-scale parallel systems.
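To make the pipeline-parallel model concrete, here is a minimal sketch, under assumed details, of the general scheduling style: stages connected by small bounded queues, with a worker that at each step runs whichever stage has input and downstream room, so intermediate packets stay cache-resident while the stages stay balanced. It is only an illustration of the idea and does not reflect the actual GRAMPS API.

    // Hypothetical three-stage pipeline (produce -> square -> reduce) connected
    // by small bounded queues. A single worker dynamically picks a runnable
    // stage each step, draining downstream stages first so queues stay short.
    #include <cstddef>
    #include <deque>
    #include <iostream>

    constexpr std::size_t kQueueCap = 4;  // small queues keep packets cache-hot

    int main() {
        std::deque<int> raw, squared;     // queues between the stages
        int next_input = 0;
        long long sum = 0;

        auto produce = [&] {              // stage 1: generate packets
            if (next_input < 16 && raw.size() < kQueueCap) {
                raw.push_back(next_input++);
                return true;
            }
            return false;
        };
        auto square = [&] {               // stage 2: transform packets
            if (!raw.empty() && squared.size() < kQueueCap) {
                int x = raw.front(); raw.pop_front();
                squared.push_back(x * x);
                return true;
            }
            return false;
        };
        auto reduce = [&] {               // stage 3: consume packets
            if (!squared.empty()) {
                sum += squared.front(); squared.pop_front();
                return true;
            }
            return false;
        };

        // Run until no stage can make progress (downstream stages first).
        while (reduce() || square() || produce()) {}
        std::cout << "sum of squares 0..15 = " << sum << "\n";  // prints 1240
    }

In a real runtime the queues would be shared by many workers and the choice of which stage to run next would be the load-balancing decision; the bounded queue capacity is what keeps the working set small.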
Practical information
- General public
- Free
Organizer
- EcoCloud Center
Contact
- Anne Wiggins, Deputy Director, EcoCloud