Optimal Learning

Event details

Date 12.07.2022
Hour 16:15 – 17:15
Speaker Andrea Bonito
Location
Category Conferences - Seminars
Event Language English

This paper studies the problem of learning an unknown function f from given data about f. The learning task is to produce an approximation f̂ to f that predicts the values of f away from the data. There are numerous settings for this problem, depending on (i) what additional information is available about f (known as a model class assumption), (ii) how the accuracy with which f̂ predicts f is measured, (iii) what is known about the data and the data sites, and (iv) whether the data observations are polluted by noise. A mathematical description of the best possible performance (the smallest possible recovery error) is known in the presence of a model class assumption.

Under standard model class assumptions, this paper shows that a near optimal f̂ can be found by solving a certain discrete over-parameterized optimization problem with a penalty term. Here, near optimal means that the error is bounded by a fixed constant times the optimal error. This explains the advantage of over-parameterization, which is commonly used in modern machine learning. The main results prove that over-parameterized learning with an appropriate loss function yields a near optimal approximation f̂ of the function f from which the data are collected. Quantitative bounds are given for how much over-parameterization must be employed and how the penalization must be scaled to guarantee a near optimal recovery of f. An extension of these results to the case where the data are polluted by additive deterministic noise is also given.
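To make the penalized over-parameterized fit concrete, here is a minimal Python sketch. It is an illustration under stated assumptions, not the paper's algorithm: the random-feature model, the feature count N, and the penalty weight lam are placeholders chosen for the demo, whereas the paper gives quantitative bounds on how the over-parameterization and the penalty must actually be scaled.

import numpy as np

# Illustrative sketch (not the paper's algorithm): recover f from m
# noiseless samples by fitting an over-parameterized model -- a
# random-feature expansion with N >> m terms -- via a penalized
# least-squares loss
#     min_c ||A c - y||^2 + lam ||c||^2.
# "Near optimal" in the abstract means the resulting error is within a
# fixed constant of the smallest possible recovery error; N and lam
# below are purely illustrative choices.

rng = np.random.default_rng(0)

def f(x):                                # "unknown" target, used here only to generate data
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

m, N = 20, 400                           # m data sites, N >> m parameters
x_data = np.sort(rng.uniform(-1.0, 1.0, m))
y_data = f(x_data)

w = rng.normal(0.0, 5.0, N)              # random Fourier features:
b = rng.uniform(0.0, 2.0 * np.pi, N)     # phi_j(x) = cos(w_j * x + b_j)

def features(x):
    return np.cos(np.outer(x, w) + b)    # len(x) x N design matrix

A = features(x_data)                     # m x N, heavily under-determined
lam = 1e-3                               # penalty weight: illustrative only

# Ridge solution via the equivalent m x m (dual) system, which is
# better conditioned than forming the N x N normal equations.
c = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), y_data)

x_test = np.linspace(-1.0, 1.0, 200)
f_hat = features(x_test) @ c             # the approximation f-hat
print("max |f - f_hat| on test grid:", np.max(np.abs(f_hat - f(x_test))))

In this sketch the model class and penalty are stand-ins; the point of the abstract is that, with the right loss function and penalty scaling, such over-parameterized fits provably come within a fixed constant of the optimal recovery error.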

Practical information

  • General public
  • Free

Organizer

  • Marco Picasso, Nicolas Boumal

Contact

  • Marco Picasso
