Pick-to-Learn: state-of-the-art safety guarantees for machine learning and control
Abstract: AI models are increasingly embedded in scientific research and industrial production, where they inform predictions and guide decision-making. Yet, in safety-critical settings such as autonomous driving and medical diagnostics, deploying these models requires rigorous safety and performance guarantees — a need that has spurred significant recent work at the intersection of statistical learning, optimization, and control.
However, existing approaches, including conformal prediction, test-set methods, and PAC-Bayes bounds, face two major limitations: they either require setting aside part of the dataset to generate guarantees — possibly degrading the quality of the learned model — or they yield bounds that are often loose, i.e., that do not reflect the true model performance.
In this talk, I will present a recent line of work that overcomes these limitations by enabling the use of all available data to jointly train models and equip them with tight safety or performance guarantees. The core technical idea is to embed any black-box learner into a suitably constructed meta-algorithm, Pick-to-Learn, which transforms the original algorithm into a sample compression scheme from which sharp guarantees can be derived. I will then illustrate how, across a range of applications in machine learning and data-driven control, Pick-to-Learn delivers both better-performing models and tighter certificates than the state of the art, underscoring its potential for broad practical impact.
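The meta-algorithm described above can be sketched in pseudocode-like Python. This is a minimal illustration, not the authors' implementation: it assumes a black-box `train` routine, a per-sample `loss`, and a tolerance `threshold`, all hypothetical names. The sketch captures the sample-compression idea — iteratively pick the worst-handled data point, add it to a compression set, and retrain, until all remaining points are handled within tolerance; guarantees can then be stated in terms of the size of the compression set.

```python
def pick_to_learn(train, data, loss, threshold):
    """Hedged sketch of a Pick-to-Learn-style meta-algorithm.

    train:     black-box learner mapping a list of samples to a model
    data:      the full dataset (no samples are set aside)
    loss:      per-sample loss of a model on a data point
    threshold: tolerance below which a sample counts as "handled"
    """
    compression = []        # samples the model is explicitly trained on
    remaining = list(data)  # samples used only to check the current model
    model = train(compression)
    while remaining:
        losses = [loss(model, z) for z in remaining]
        worst = max(range(len(remaining)), key=lambda i: losses[i])
        if losses[worst] <= threshold:
            break  # every remaining sample is handled within tolerance
        # pick the worst sample, move it into the compression set, retrain
        compression.append(remaining.pop(worst))
        model = train(compression)
    return model, compression
```

A toy usage, again purely illustrative: take the "model" to be the maximum of the compression set and the loss to be how far a point exceeds it; the loop then keeps picking points until the learned maximum covers the whole dataset.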
Practical information
- Informed public
- Registration required
Organizer
- Daniel Kuhn