BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Learning to Control Unknown Strongly Monotone Games
DTSTART:20260303T111500
DTEND:20260303T120000
DTSTAMP:20260407T093315Z
UID:2522dcfde0c41278c8cf69b3e513eec346799c89a843ade767ce28c7
CATEGORIES:Conferences - Seminars
DESCRIPTION:Siddharth Chandak\, Ph.D. Candidate\, Stanford University\, US
 A\nAbstract - Large-scale multi-agent systems are often modeled as games\
 , where each player’s reward depends on the joint actions of all agents.
  In strongly monotone games\, players converge to a Nash equilibrium (NE) 
 by optimizing their local objectives\, but such equilibria may not align w
 ith the global objective. We study a scenario where a game manager\, with 
 access only to the global objective and limited control over utility param
 eters\, seeks to steer the system toward better equilibria. In this scenar
 io\, the controller adjusts linear coefficients in the players’ utilitie
 s to impose linear constraints on the equilibrium. We design a simple two-
 time-scale stochastic approximation algorithm and show almost sure converg
 ence and a mean square error rate of near-O(t^{-1/4}) for the algorithm.\n
 \nBio: Siddharth Chandak is currently a Ph.D. candidate in Electrical Engi
 neering at Stanford University\, USA. He received his B.Tech. from IIT Bom
 bay\, India\, in 2021\, where he was awarded the President of India Gold M
 edal\, and his M.S. from Stanford University in 2023. His research interes
 ts include game theory\, multi-agent learning\, stochastic approximation\,
  and its applications in reinforcement learning.
LOCATION:ME C2 405 https://plan.epfl.ch/?room=ME%20C2%20405
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
