Learning stable Recurrent Neural Networks for model predictive control
Event details
Date | 31.05.2024
Hour | 11:00 – 12:00
Speaker | Dr Fabio Bonassi, Uppsala University, Sweden
Location |
Category | Conferences - Seminars
Event Language | English
Abstract
This talk aims to explore how stable Recurrent Neural Networks (RNNs) can be used for indirect data-driven control.
In particular, we develop the idea of learning RNN models of nonlinear dynamical systems with Incremental Input-to-State Stability (δISS) certificates, based on which nonlinear Model Predictive Control (MPC) laws with nominal closed-loop properties can be designed. This allows one to take advantage of RNNs’ modeling power while, at the same time, preserving the theoretical properties of MPC schemes.
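To make the receding-horizon idea concrete, here is a minimal, illustrative sketch (not taken from the talk): a learned GRU serves as the prediction model of an MPC controller, and the finite-horizon input sequence is optimized by gradient descent. The model, horizon, cost weights, and the `mpc_step` helper are all hypothetical placeholders, and gradient descent stands in for the nonlinear MPC solvers used in practice.

```python
# Minimal sketch (not from the talk): receding-horizon control with a learned
# GRU predictor. Model, horizon, and cost weights are illustrative choices.
import torch

torch.manual_seed(0)

# Pretend this single-layer GRU (1 input, 4 hidden states) plus a linear
# readout has already been identified from plant data.
gru = torch.nn.GRU(input_size=1, hidden_size=4, batch_first=True)
readout = torch.nn.Linear(4, 1)

def mpc_step(h0, y_ref, horizon=15, iters=200, u_max=1.0):
    """Optimize the input sequence over `horizon` steps by gradient descent
    and return only the first input (receding-horizon principle)."""
    u = torch.zeros(1, horizon, 1, requires_grad=True)
    opt = torch.optim.Adam([u], lr=0.05)
    for _ in range(iters):
        opt.zero_grad()
        # Saturate the inputs with tanh to respect the bound |u| <= u_max.
        y_pred = readout(gru(torch.tanh(u) * u_max, h0)[0])
        cost = ((y_pred - y_ref) ** 2).sum() + 1e-2 * (u ** 2).sum()
        cost.backward()
        opt.step()
    return float(torch.tanh(u[0, 0, 0]) * u_max)

h = torch.zeros(1, 1, 4)                        # current (estimated) model state
u0 = mpc_step(h, y_ref=torch.ones(1, 15, 1))    # track a constant reference
print(f"first control move: {u0:.3f}")
```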
To this end, it is essential to train provably δISS RNNs: we will therefore discuss how to enforce the stability of the GRU and LSTM architectures through an appropriate regularization, and provide an overview of a recent, structurally stable architecture known as the Structured State-Space Model.
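As a hedged illustration of the stability-by-regularization idea (and not the exact GRU/LSTM certificates from the speaker's work), the toy example below trains a vanilla tanh RNN and penalizes violations of the spectral-norm condition ||W_hh||_2 < 1, a standard sufficient condition for the recurrence to be a contraction and hence incrementally input-to-state stable. The data and hyperparameters are synthetic placeholders.

```python
# Toy sketch: fit a tanh RNN while a hinge penalty pushes the recurrent
# weight matrix toward ||W_hh||_2 < 1, so the trained model is contractive.
# The GRU/LSTM conditions discussed in the talk involve analogous (but more
# elaborate) norm bounds on the gate weights.
import torch

torch.manual_seed(0)
rnn = torch.nn.RNN(input_size=1, hidden_size=8, nonlinearity="tanh", batch_first=True)
readout = torch.nn.Linear(8, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-2)

# Placeholder identification data: in practice, input/output sequences
# collected from the plant to be modeled.
u_data = torch.randn(32, 50, 1)
y_data = torch.randn(32, 50, 1)

eps = 0.05                                              # stability margin
for epoch in range(200):
    opt.zero_grad()
    h_seq, _ = rnn(u_data)
    fit_loss = torch.mean((readout(h_seq) - y_data) ** 2)
    spec_norm = torch.linalg.matrix_norm(rnn.weight_hh_l0, ord=2)
    stab_penalty = torch.relu(spec_norm - (1.0 - eps))  # hinge on ||W_hh||_2 < 1
    loss = fit_loss + 10.0 * stab_penalty
    loss.backward()
    opt.step()

print("final spectral norm of W_hh:",
      float(torch.linalg.matrix_norm(rnn.weight_hh_l0, ord=2)))
```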
Bio
Fabio Bonassi is a postdoctoral researcher on “Machine Learning for Control” at Uppsala University, Sweden. He received the M.Sc. and Ph.D. degrees from Politecnico di Milano, Italy, in 2018 and 2023, respectively. He is a recipient of the Dimitris N. Chorafas Ph.D. Award and of the “Claudio Maffezzoni” master thesis award. At the 19th Symposium on System Identification, he received the IFAC Best Young Author Award.
His research interests are neural network architectures for the identification and control of dynamic systems, with a focus on their stability and robustness properties.
Practical information
- General public
- Free
Organizer
- Professor Giancarlo Ferrari Trecate