IC Colloquium: Correct-by-Construction Multiprocessor Programming

Event details
Date | 29.09.2014 |
Hour | 16:15 – 17:30 |
Location | |
Category | Conferences - Seminars |
By: Albert Cohen - INRIA
Video of his talk
Abstract:
This talk is for people who care about program correctness and performance, and about achieving both without resorting to impossible verification problems or target-specific optimizations. We will explore two complementary attempts to regain control of your favorite multi- or many-core system:
- streaming languages (with task-parallel runtimes);
- polyhedral compilation (for DSLs, library generation, portability).
Stream computing is often associated with regular, data-intensive applications, and more specifically with the family of cyclo-static data-flow models. The term also refers to bulk-synchronous data parallelism on SIMD and GPU architectures. Both interpretations are valid but incomplete: streams underlie the formal definition of Kahn process networks, a foundation for deterministic concurrent languages and systems with a solid heritage. Stream computing is a semantic framework for parallel languages and a model for pipelined, task-parallel execution. To support research on parallel languages with dynamic, nested task creation and first-class streams, we propose a new lock-free algorithm for stalling and waking up tasks in a user-space scheduler according to changes in the state of the corresponding queues. The algorithm is portable and proven correct against the C11 memory model. We show through experiments that it can serve as a keystone of efficient parallel runtime systems.
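To give a flavor of the C11 memory-model reasoning this kind of work involves, here is a minimal, self-contained sketch, not the algorithm presented in the talk: a single-producer/single-consumer queue whose consumer announces that it is about to stall, and whose producer re-checks that announcement after publishing, so that no wake-up is lost. All names (spsc_queue, q_push, q_pop) are illustrative, and the blocking wait of a real runtime is replaced here by a yielding spin.

    /* Illustrative sketch only: SPSC queue with a "sleep flag" protocol in C11. */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <threads.h>

    #define CAP 1024
    #define N_ITEMS 100000

    typedef struct {
        int buf[CAP];
        _Atomic size_t head;     /* next slot to read  (advanced by the consumer) */
        _Atomic size_t tail;     /* next slot to write (advanced by the producer) */
        _Atomic int sleeping;    /* consumer announces that it is about to stall  */
    } spsc_queue;

    static spsc_queue q;         /* zero-initialized: empty queue, consumer awake */

    /* Producer side: publish one item, then wake the consumer if it stalled. */
    static void q_push(spsc_queue *s, int v) {
        size_t t = atomic_load_explicit(&s->tail, memory_order_relaxed);
        while (t - atomic_load_explicit(&s->head, memory_order_acquire) == CAP)
            thrd_yield();                           /* queue full: simplistic back-off */
        s->buf[t % CAP] = v;
        atomic_store_explicit(&s->tail, t + 1, memory_order_release);
        atomic_thread_fence(memory_order_seq_cst);  /* pairs with the consumer's fence */
        if (atomic_load_explicit(&s->sleeping, memory_order_relaxed))
            atomic_store_explicit(&s->sleeping, 0, memory_order_release); /* "wake-up" */
    }

    /* Consumer side: take one item, stalling while the queue is empty. */
    static int q_pop(spsc_queue *s) {
        size_t h = atomic_load_explicit(&s->head, memory_order_relaxed);
        while (atomic_load_explicit(&s->tail, memory_order_acquire) == h) {
            atomic_store_explicit(&s->sleeping, 1, memory_order_relaxed);
            atomic_thread_fence(memory_order_seq_cst);
            /* Re-check after announcing the stall: either this load sees the new
               tail, or the producer's load above sees 'sleeping' -- no lost wake-up. */
            if (atomic_load_explicit(&s->tail, memory_order_acquire) != h) {
                atomic_store_explicit(&s->sleeping, 0, memory_order_relaxed);
                break;
            }
            while (atomic_load_explicit(&s->sleeping, memory_order_acquire))
                thrd_yield();                       /* a real runtime would block here */
        }
        int v = s->buf[h % CAP];
        atomic_store_explicit(&s->head, h + 1, memory_order_release);
        return v;
    }

    static int producer(void *arg) {
        (void)arg;
        for (int i = 0; i < N_ITEMS; i++)
            q_push(&q, i);
        return 0;
    }

    int main(void) {
        thrd_t t;
        thrd_create(&t, producer, NULL);
        long long sum = 0;
        for (int i = 0; i < N_ITEMS; i++)
            sum += q_pop(&q);
        thrd_join(t, NULL);
        printf("sum = %lld (expected %lld)\n",
               sum, (long long)N_ITEMS * (N_ITEMS - 1) / 2);
        return 0;
    }

The two seq_cst fences provide the ordering that makes the protocol safe: either the producer observes the sleep flag and resets it, or the consumer's re-check observes the newly published tail, so the consumer can never stall on a non-empty queue.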
Compilers face a never-ending race to provide performance portability over a moving computer-architecture target. The compilation problem itself is also changing: programming is becoming increasingly synonymous with concurrent or parallel programming, and beyond portability, the need to generate highly optimized, resource-efficient code for common computational tasks is rejuvenating compiler construction. The polyhedral model of compilation is a powerful framework that addresses both challenges. Programs are represented as systems of affine (linear) inequalities, which makes it possible to construct and search for advanced loop optimizations. We will study ongoing work to extend the reach of the framework, from dynamic, data-dependent control flow to the support of domain-specific languages and active libraries, and we will survey a few challenges to the adoption of polyhedral techniques in production compilers.
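For readers unfamiliar with the representation, the toy example below (not taken from the talk; function names are illustrative) shows a triangular loop nest, the affine inequalities describing its iteration domain, and one legal reordering of that domain that a polyhedral compiler might derive.

    /* Iteration domain of the nest below, as integer points of a polyhedron
       defined by affine inequalities over the counters and the parameter N:
           D = { (i, j) :  0 <= i,  i <= N - 1,  0 <= j,  j <= i }           */
    #include <stdio.h>

    void scale_original(int N, double *A) {          /* A is an N x N matrix */
        for (int i = 0; i < N; i++)
            for (int j = 0; j <= i; j++)
                A[i * N + j] *= 2.0;                 /* touches each point once */
    }

    /* One legal rescheduling of the same domain (a loop interchange): the same
       integer points are scanned column by column, with i ranging over [j, N-1]. */
    void scale_interchanged(int N, double *A) {
        for (int j = 0; j < N; j++)
            for (int i = j; i < N; i++)
                A[i * N + j] *= 2.0;
    }

    int main(void) {
        enum { N = 4 };
        double A[N * N], B[N * N];
        for (int k = 0; k < N * N; k++) A[k] = B[k] = (double)k;
        scale_original(N, A);
        scale_interchanged(N, B);
        int same = 1;
        for (int k = 0; k < N * N; k++) same &= (A[k] == B[k]);
        printf("same result: %s\n", same ? "yes" : "no");
        return 0;
    }

Because every iteration writes a distinct array element, any order that scans the same set of integer points is legal; real polyhedral optimizers search the space of affine schedules over such domains for tiling, fusion, or parallelization rather than a simple interchange.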
Bio:
Albert Cohen is a senior research scientist at INRIA and a part-time associate professor at École Polytechnique, Paris, France. He graduated from École Normale Supérieure de Lyon, and received his PhD from the University of Versailles in 1999 (awarded two national prizes). He has been a visiting scholar at the University of Illinois in 2000 and 2001, and an invited professor at Philips Research (then NXP Semiconductors), Eindhoven in 2006 and 2007.
Albert works on optimizing compilers for high-performance and embedded systems, automatic parallelization, and data-flow and synchronous programming. He has co-authored more than 100 peer-reviewed papers, has advised or is advising 21 PhD theses, and has served on the program committees of the major conferences in the field. Several research projects initiated or led by Albert Cohen have resulted in the transfer of advanced compilation techniques to production compilers.
More information
Practical information
- General public
- Free
- This event is internal
Contact
- Host: Simon Bliudze