IC Colloquium: The Role of Neurification in Building Machines that Think

Event details
Date: 07.11.2016
Time: 16:15 – 17:30
Category: Conferences - Seminars
By: Daan Wierstra, Google DeepMind
Abstract:
Building machines that think requires us to think 'out of the box', as most state-of-the-art machine learning algorithm families, such as deep learning or those of the probabilistic inference persuasion, suffer from high computational cost, brittle assumptions, or insurmountably large data requirements. This precludes tractable one-shot learning, rapid adaptation of agents to changing environments, learned planning, and the development of scalable exploration strategies and intrinsic motivation. In this talk I will highlight recent research at DeepMind aimed at bridging the gap between fast, data-hungry algorithms and slow, data-efficient algorithms with more explicit priors. I'll first concentrate on amortised inference methods that fuse ideas from deep learning and variational inference, and then demonstrate the viability of our 'neurification' program, that is, the development of deep-learning-style trainable alternatives to many canonical machine learning algorithms.
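To make the amortised-inference idea concrete: instead of running a separate optimisation to fit an approximate posterior q(z|x) for every datapoint, a single trained inference network maps any x directly to the parameters of q(z|x), in the spirit of variational autoencoders. Below is a minimal NumPy sketch of a single-sample evidence-lower-bound (ELBO) estimate under this scheme; the dimensions, weight initialisations, and function names are illustrative assumptions, not code from the talk.

    import numpy as np

    rng = np.random.default_rng(0)

    x_dim, z_dim, h_dim = 5, 2, 16  # toy sizes (illustrative)

    # Untrained, randomly initialised weights; in practice all of these
    # would be learned jointly by stochastic gradient ascent on the ELBO.
    W_h   = rng.normal(0.0, 0.1, (h_dim, x_dim))   # encoder hidden layer
    W_mu  = rng.normal(0.0, 0.1, (z_dim, h_dim))   # -> mean of q(z|x)
    W_lv  = rng.normal(0.0, 0.1, (z_dim, h_dim))   # -> log-variance of q(z|x)
    W_g   = rng.normal(0.0, 0.1, (h_dim, z_dim))   # decoder hidden layer
    W_out = rng.normal(0.0, 0.1, (x_dim, h_dim))   # -> mean of p(x|z)

    def encode(x):
        """Amortised inference: one network maps any x to q(z|x) parameters."""
        h = np.tanh(W_h @ x)
        return W_mu @ h, W_lv @ h                  # mean, log-variance

    def decode(z):
        """Generative model: mean of a unit-variance Gaussian p(x|z)."""
        return W_out @ np.tanh(W_g @ z)

    def elbo(x):
        """Single-sample Monte Carlo estimate of the ELBO for one datapoint."""
        mu, logvar = encode(x)
        eps = rng.normal(size=z_dim)
        z = mu + np.exp(0.5 * logvar) * eps        # reparameterisation trick
        log_px_z = -0.5 * np.sum((x - decode(z)) ** 2 + np.log(2.0 * np.pi))
        kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)  # KL(q || N(0,I))
        return log_px_z - kl

    x = rng.normal(size=x_dim)
    print("ELBO estimate for one datapoint:", elbo(x))

The amortisation lives in encode(): inference for any new datapoint costs one fixed forward pass rather than a fresh optimisation, which is what makes this style of inference fast enough to pair with the data-efficient, prior-rich models the abstract contrasts it with.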
Bio:
Daan Wierstra leads the 'Frontiers' research team at Google DeepMind, focusing efforts on deep generative models, one-shot learning, tractable measures of uncertainty, and deep memory architectures. He did his PhD with Juergen Schmidhuber at IDSIA, the Swiss AI lab in Lugano, and his postdoc with Wulfram Gerstner at EPFL, Lausanne.
Practical information
- General public
- Free
Contact
- Host: O. Svensson