Enhancing the Safety of Neural Networks via Verification of Global Properties

Event details

Date 17.07.2025
Hour 14:00–15:00
Location Online
Category Conferences - Seminars
Event Language English
By Dana Drachsler-Cohen

Abstract
Deep learning has achieved remarkable success across a wide range of applications. However, neural networks remain vulnerable to various threats, including adversarial perturbations and privacy breaches. In response, several mitigation strategies have been proposed, such as adversarial training, randomization, and post-training repair. Yet, achieving both high clean accuracy and strong global safety guarantees remains an open challenge.
In this talk, I will present how the verification of global properties can guide principled post-training repairs that enhance network safety with minimal changes to the model’s predictions. I will focus on three classes of properties: robustness to adversarial perturbations, privacy, and quantization consistency. For each, I will introduce verification techniques that make global guarantees tractable. I will conclude with open questions and directions for future research.
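For orientation, a standard way to contrast the two kinds of guarantees (a sketch in common notation, not necessarily the exact formulation used in the talk): local robustness fixes a single input $x_0$ and requires

  $\forall x' .\ \|x' - x_0\|_\infty \le \epsilon \implies F(x') = F(x_0)$,

whereas a global robustness property quantifies over all inputs, for example

  $\forall x, x' .\ \|x - x'\|_\infty \le \epsilon \ \wedge\ \mathrm{conf}_F(x) \ge \delta \implies F(x) = F(x')$,

where $\mathrm{conf}_F$ is a confidence score and $\delta$ a threshold. Some such side condition is needed: requiring unconditional stability for every pair of nearby inputs would force $F$ to be constant, which is why global guarantees are harder to state and verify than local ones.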

Bio
Dana Drachsler-Cohen is an Assistant Professor at the Technion, where she leads the SAFE Lab. Her research focuses on formal methods for securing and verifying deep learning models.

Practical information

  • General public
  • Free

Contact

  • Host: Laboratory for Automated Reasoning and Analysis, http://lara.epfl.ch
