Ad-hoc Coordination in Anonymous Games
By Panayiotis Danassis
EDIC Candidacy Exam
Exam president: Prof. Alcherio Martinoli
Thesis advisor: Prof. Boi Faltings
Co-examiner: Prof. Patrick Thiran
Ad-hoc multi-agent coordination is a relatively new research area, introduced in the seminal work of Stone et al. [Stone 2010]. Recent advances in domains such as Autonomous Agents, Intelligent Robotics, the Internet of Things, and Cyber-Physical Systems have resulted in an immense growth in the number of autonomous software and robotic agents. As autonomous agents continue to proliferate, so does the need for them to interact and coordinate efficiently. However, because the agents differ in origin, reasoning, knowledge, and perceptual and actuation capabilities, such teamwork must take place without an a priori defined coordination protocol, and perhaps without any form of explicit communication. This is in contrast to most prior research on multi-agent teamwork, which requires explicit coordination protocols and/or shared assumptions. It emphasizes the need to develop novel, robust intelligent agents that can engage in ad-hoc teamwork and coordination and cooperate efficiently with previously unencountered agents.
This thesis aims to investigate the problem of ad-hoc coordination in anonymous games. In algorithmic game theory, anonymous games are multi-agent games in which an agent does not distinguish between the other agents: its utility depends on its own strategy and on the number of agents choosing each of the other strategies, not on the identities of those agents. We are interested in such games because of their ability to capture a wide range of real-world phenomena, with applications in resource sharing, foraging, markets, etc. Ad-hoc coordination problems, however, suffer from high dimensionality [Barrett 2012]. Hence, contrary to conventional approaches in ad-hoc coordination, which usually involve solving large Partially Observable Markov Decision Processes (POMDPs) or arduous Bayesian learning, our goal is to develop robust and simple dynamics. One such example is the decentralized algorithm of Cigler et al. [Cigler 2011], in which agents reach an efficient and fair correlated equilibrium through repeated interactions and multi-agent learning.
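To make the anonymity property concrete, the following minimal sketch uses a hypothetical resource-sharing payoff (the resource names and the equal-split payoff rule are illustrative assumptions, not taken from the cited works): an agent's utility depends only on how many agents picked each resource, never on which agents did.

```python
from collections import Counter

def utility(own_choice, all_choices):
    """Toy anonymous-game payoff (hypothetical resource-sharing example).

    The agent's utility depends only on its own choice and on the *count*
    of agents using the same resource, not on the other agents' identities.
    """
    congestion = Counter(all_choices)[own_choice]
    return 1.0 / congestion  # a resource's value is split among its users

# Anonymity: permuting which agents made which choice leaves the counts,
# and hence every agent's utility, unchanged.
profile_a = ["r1", "r1", "r2", "r2"]  # two agents on each resource
profile_b = ["r1", "r2", "r1", "r2"]  # same counts, identities permuted
assert utility("r1", profile_a) == utility("r1", profile_b) == 0.5
```

This also illustrates why anonymous games are lower-dimensional than general games: a payoff function needs only the vector of strategy counts as input, not the full joint strategy profile.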
[Stone 2010] Ad hoc autonomous agent teams: Collaboration without pre-coordination by P. Stone, G. A. Kaminka, S. Kraus, and J. S. Rosenschein.
[Barrett 2012] An analysis framework for ad hoc teamwork tasks by S. Barrett and P. Stone.
[Cigler 2011] Reaching correlated equilibria through multi-agent learning by L. Cigler and B. Faltings.
Accessibility: General public