External FLAIR Seminar: Prof. David Gamarnik

Event details
Date | 21.04.2022
Hour | 13:15 - 14:15
Speaker | David Gamarnik
Category | Conferences - Seminars
Title: A curious case of symmetric binary perceptron model. Algorithms and algorithmic barriers.
Speaker: David Gamarnik (MIT)
Abstract: The symmetric binary perceptron model exhibits an intriguing solution-landscape property: the existence of polynomial-time algorithms coincides with the presence of an extreme form of the clustering property. The latter means that at every strictly positive constraints-to-variables ratio, most of the satisfying solutions are singletons separated by large distances.
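For orientation, here is the standard formulation of the symmetric binary perceptron (a sketch added for context; the margin parameter κ, the notation M for the number of constraints, and α for the ratio are illustrative choices, not taken from the abstract):

% Standard formulation of the symmetric binary perceptron (illustrative notation,
% added for context; not part of the speaker's abstract).
% Given i.i.d. standard Gaussian vectors g_1, ..., g_M in R^n and a margin kappa > 0,
% the solution set is
\[
S_n(\kappa) \;=\; \Bigl\{\, \sigma \in \{-1,+1\}^n \;:\; \bigl|\langle g_i, \sigma \rangle\bigr| \le \kappa \sqrt{n} \ \text{ for all } i = 1, \dots, M \,\Bigr\},
\]
% where the constraints-to-variables ratio is alpha = M / n; "every strictly positive
% ratio" in the abstract refers to any fixed alpha > 0 as n grows.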
To understand this conundrum and gain insight into the algorithmic tractability of this problem, we conduct a different landscape analysis. We establish that the model exhibits a multi-overlap gap property (m-OGP), and we show that the onset of this property asymptotically matches the performance of the best known algorithms. Next we establish that the m-OGP is a barrier to stable algorithms, appropriately defined, and conjecture that the onset of the m-OGP in fact marks the onset of the true algorithmic hardness. The proof of this barrier uses Ramsey theory from extremal combinatorics. We furthermore show that one of the algorithms for the perceptron model, the Kim-Roche algorithm, is indeed stable.
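As a rough schematic of the obstruction (our paraphrase, not the speaker's precise statement, which involves a suitably correlated ensemble of instances), the m-OGP forbids m-tuples of solutions whose pairwise overlaps all fall in an intermediate window:

% Schematic m-OGP statement (illustrative paraphrase; the exact parameters and the
% correlated instances over which the tuples range are as in the speaker's work,
% not spelled out here).
\[
\nexists\, \sigma^{(1)}, \dots, \sigma^{(m)} \in S_n(\kappa) \ \text{ such that } \ \frac{1}{n} \bigl\langle \sigma^{(k)}, \sigma^{(l)} \bigr\rangle \in [\nu_1, \nu_2] \ \text{ for all } k \neq l,
\]
% for suitable 0 <= nu_1 < nu_2 <= 1: pairwise overlaps must avoid the gap interval,
% which is the structural feature shown to obstruct stable algorithms.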
Joint work with Eren Kizildag, Will Perkins and Changji Xu.
Mailing list: The external FLAIR seminars on the theory of machine learning are also announced by email. To receive them, please subscribe to the mailing list here: https://forms.gle/3z6WwwKmN1XZzAxp8
Practical information
- Informed public
- Free
Contact
- Lénaïc Chizat: [email protected], François Ged: [email protected]