Matrix completion with adversarial noise
Low-rank matrix recovery from structured measurements has been a topic of intense study over the past decades. An instance of this problem that is of particular interest, owing to its relevance for applications such as recommender systems, is the matrix completion problem, in which the observations consist of randomly selected matrix entries. An important benchmark method for this problem is to minimize the nuclear norm, a convex proxy for the rank. A common approach to establishing recovery guarantees for this convex program relies on the construction of a so-called approximate dual certificate. However, this approach provides only limited insight in various respects. In particular, the best-known bounds for the reconstruction error under adversarial noise involve seemingly spurious dimensional factors.
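For concreteness, a standard formulation of the nuclear norm approach reads as follows (the notation here is illustrative and not taken from the talk): writing \(M \in \mathbb{R}^{n_1 \times n_2}\) for the unknown low-rank matrix, \(\Omega\) for the set of observed entries, \(Y_{ij} = M_{ij} + Z_{ij}\) for the noisy observations, and \(\delta\) for the noise level, one solves the convex program
\[
\hat{X} \in \operatorname*{arg\,min}_{X \in \mathbb{R}^{n_1 \times n_2}} \; \|X\|_{*}
\quad \text{subject to} \quad
\Big( \sum_{(i,j) \in \Omega} (X_{ij} - Y_{ij})^{2} \Big)^{1/2} \le \delta,
\]
where \(\|X\|_{*}\) denotes the nuclear norm of \(X\), i.e., the sum of its singular values.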
In this talk, we analyze the problem from a geometric perspective and show that these dimensional factors in the noise bounds are not an artefact of the proof: they cannot be avoided within the framework commonly used for the analysis, which aims to establish a linear scaling of the error in the noise level. At the same time, we establish that these factors arise only for very small noise levels and that, if one accepts a square-root scaling, the constants can be chosen to be independent of the dimension and to depend only mildly on the rank.
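To fix the meaning of the two scalings mentioned above (purely schematic; the precise normalizations and constants are those of the talk and are not reproduced here), a linear versus square-root error bound would take the respective forms
\[
\|\hat{X} - M\|_{F} \;\lesssim\; C(n, r)\,\delta
\qquad \text{versus} \qquad
\|\hat{X} - M\|_{F} \;\lesssim\; c(r)\,\sqrt{\delta},
\]
where \(\delta\) is the noise level as above, the constant in the linear bound necessarily carries dimensional factors, and the constant in the square-root bound can be taken independent of the dimension and only mildly dependent on the rank \(r\).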
This is joint work with Yulia Kostina (TUM) and Dominik Stöger (KU Eichstätt/Ingolstadt).
Practical information
- Informed public
- Free
Organizer
- Tomas Masak
Contact
- Maroussia Schaffner