BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Optimal Learning
DTSTART;VALUE=DATE-TIME:20220712T161500
DTEND;VALUE=DATE-TIME:20220712T171500
UID:915fe30318da51b0fba83cc0dc8a434d3e1fcbf36b5874ee22348791
CATEGORIES:Conferences - Seminars
DESCRIPTION:Andrea Bonito\nThis paper studies the problem of learning an u
nknown function f from given data about f. The learning problem is to give
an approximation f^ to f that predicts the values of f away from the data
. There are numerous settings for this learning problem depending on (i) w
hat additional information we have about f (known as a model class assumpt
ion)\, (ii) how we measure the accuracy of how well f^ predicts f\, (iii)
what is known about the data and data sites\, (iv) whether the data observ
ations are polluted by noise. A mathematical description of the optimal pe
rformance possible (the smallest possible error of recovery) is known in t
he presence of a model class assumption. Under standard model class assump
tions\, it is shown in this paper that a near optimal f^ can be found by s
olving a certain discrete over-parameterized optimization problem with a p
enalty term. Here\, near optimal means that the error is bounded by a fixe
d constant times the optimal error. This explains the advantage of over-pa
rameterization\, which is commonly used in modern machine learning. The main
results of this paper prove that over-parameterized learning with an appr
opriate loss function gives a near optimal approximation f^ of the functio
n f from which the data is collected. Quantitative bounds are given for ho
w much over-parameterization needs to be employed and how the penalization
needs to be scaled in order to guarantee a near optimal recovery of f. An
extension of these results to the case where the data is polluted by addi
tive deterministic noise is also given.
LOCATION:MA B1 11 https://plan.epfl.ch/?room=MA%20B1%2011
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR