Learning to Control Unknown Strongly Monotone Games
Event details
| Date | 03.03.2026 |
| Hour | 11:15 – 12:00 |
| Speaker | Siddharth Chandak, Ph.D. Candidate, Stanford University, USA |
| Location | |
| Category | Conferences - Seminars |
| Event Language | English |
Abstract: Large-scale multi-agent systems are often modeled as games, where each player’s reward depends on the joint actions of all agents. In strongly monotone games, players converge to a Nash equilibrium (NE) by optimizing their local objectives, but such equilibria may not align with the global objective. We study a scenario where a game manager, with access only to the global objective and limited control over utility parameters, seeks to steer the system toward better equilibria. Here, the manager adjusts linear coefficients in the players’ utilities to impose linear constraints on the equilibrium. We design a simple two-time-scale stochastic approximation algorithm and show almost sure convergence, along with a near-O(t^{-1/4}) rate of convergence for the mean square error.
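To illustrate the flavor of the setup, here is a minimal sketch of a two-time-scale scheme on a hypothetical toy game (the specific utilities, step sizes, and constraint below are illustrative assumptions, not the algorithm or analysis from the talk): two players with strongly monotone quadratic utilities run noisy gradient play on a fast timescale, while a manager slowly adjusts a linear coefficient in each utility to steer the equilibrium toward a linear constraint on the joint action.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy game (illustrative): player i has utility
#   u_i(x) = -a_i * x_i**2 + (b_i + c_i) * x_i,
# which is strongly monotone with unique NE x_i* = (b_i + c_i) / (2 * a_i).
a = np.array([1.0, 2.0])
b = np.array([1.0, 1.0])
c = np.zeros(2)      # manager-controlled linear coefficients
x = np.zeros(2)      # players' actions
target = 3.0         # manager wants the equilibrium to satisfy sum(x) = target

for t in range(1, 200001):
    alpha = 0.2 / t**0.6   # fast step size: players' gradient play
    beta = 0.5 / t**0.9    # slow step size: manager's coefficient update
    # Players ascend noisy gradients of their own utilities (fast timescale).
    grad = -2.0 * a * x + b + c + 0.1 * rng.standard_normal(2)
    x = x + alpha * grad
    # Manager nudges the coefficients toward the constraint (slow timescale).
    c = c + beta * (target - x.sum())

print(x.sum())  # should approach the target of 3.0
```

Because beta/alpha → 0, the players effectively equilibrate to the NE induced by the current coefficients, while the manager's slower loop drives that equilibrium onto the constraint; in this toy instance the joint action settles near x ≈ (2, 1).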
Bio: Siddharth Chandak is currently a Ph.D. candidate in Electrical Engineering at Stanford University, USA. He received his B.Tech. from IIT Bombay, India, in 2021, where he was awarded the President of India Gold Medal, and his M.S. from Stanford University in 2023. His research interests include game theory, multi-agent learning, stochastic approximation, and its applications in reinforcement learning.
Practical information
- General public
- Free
Organizer
- Prof. Maryam Kamgarpour
Contact
- barbara.schenkel@epfl.ch