Limits to (Machine) Learning

Cancelled

Event details

Date 18.11.2025
Hour 12:15–13:15
Speaker Onur Demiray - PhD student, SFI@EPFL
Location
UNIL, Extranef, room 126
Category Conferences - Seminars
Event Language English

Machine learning (ML) models are highly flexible but face fundamental limits when learning from finite data. We characterize a universal lower bound—the Limits-to-Learning Gap (LLG)—that quantifies the unavoidable difference between the performance of the true (population) best model and ridge-penalized ML models. Empirical estimates in financial applications reveal sizable gaps, implying that standard machine learning methods may substantially understate predictability. We also derive LLG corrections to the classic Hansen and Jagannathan (1991) bounds and explore their implications for parameter learning in general equilibrium models.
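The LLG itself is defined in the paper; as a purely illustrative sketch of the phenomenon the abstract describes, the toy simulation below (all parameter choices are hypothetical, not taken from the talk) compares the out-of-sample performance of the true population coefficients against a ridge-penalized estimate fit on a small finite sample. The shortfall of the ridge model is an empirical analogue of the gap between the population-best model and a feasible ML model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): high-dimensional linear model y = x'beta + eps.
p, n_train, n_test = 100, 200, 100_000
beta = rng.normal(scale=0.1, size=p)  # true (population) coefficients

def simulate(n):
    X = rng.normal(size=(n, p))
    y = X @ beta + rng.normal(size=n)
    return X, y

X_tr, y_tr = simulate(n_train)
X_te, y_te = simulate(n_test)

# Ridge estimator from finite data: (X'X + lam*I)^{-1} X'y
lam = 10.0
beta_hat = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(p), X_tr.T @ y_tr)

def oos_r2(b):
    # Out-of-sample R^2 of the linear predictor X_te @ b
    resid = y_te - X_te @ b
    return 1.0 - resid @ resid / (y_te @ y_te)

r2_true = oos_r2(beta)       # population-best predictor
r2_ridge = oos_r2(beta_hat)  # feasible ridge predictor
gap = r2_true - r2_ridge     # empirical analogue of a limits-to-learning gap
print(f"R2(true)={r2_true:.3f}  R2(ridge)={r2_ridge:.3f}  gap={gap:.3f}")
```

Because the ridge model learns from only 200 observations of a 100-dimensional signal, its out-of-sample R² falls well short of the population benchmark, consistent with the abstract's point that finite-sample ML performance can substantially understate true predictability.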