BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Minimum Description Length Interpolation Learning
DTSTART:20230609T111500
DTEND:20230609T121500
DTSTAMP:20260407T215553Z
UID:ff9a4b20fecedc100c555347a3a92a154ce444284f06a579313839c4
CATEGORIES:Conferences - Seminars
DESCRIPTION:Nati Srebro (TTI Chicago)\nWe consider learning by using the s
 hortest program that perfectly fits the training data\, even in situations
  where labels are noisy and there is no way of exactly predicting the labe
 ls on the population. Classical theory tells us that in such situations we
  should balance program length with training error\, in which case we can 
 compete with any (unknown) program with sample complexity proportional to 
 the length of the program.
 noisy situations. We study the generalization property of the shortest pro
 gram interpolator\, and ask how it performs compared to the balanced appro
 ach\, and how much we suffer\, if at all\, due to such overfitting.
LOCATION:CM 1 4 https://plan.epfl.ch/?room==CM%201%204
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
