Scalable constrained optimization

Event details

Date 26.08.2019
Hour 16:00–18:00
Speaker Maria-Luiza Vladarean
Location
Category Conferences - Seminars
EDIC candidacy exam
Exam president: Prof. Ali H. Sayed
Thesis advisor: Prof. Volkan Cevher
Co-examiner: Prof. Martin Jaggi

Abstract
The recent surge of data availability poses a significant burden on optimization algorithms. Methods are required to be robust in the presence of noise and scalable, i.e., provably fast and space-efficient. In the particular case of constrained optimization, the latter property comes in the form of projection-free methods, stochastic constraints, or distributable schemes.

This proposal discusses three conceptually different approaches to constrained minimization problems. We first present a unified convergence analysis for classical Augmented Lagrangian schemes. We then study a method that leverages a probabilistic framework to satisfy linear inclusion constraints (possibly infinitely many) almost surely. Finally, we discuss the zeroth-order optimization perspective on constrained problems and the scalability challenges faced in the absence of gradient information. We conclude this proposal by outlining some preliminary results obtained during our study of almost-sure constraints under the conditional gradient framework.
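
For readers less familiar with the methods named above, a brief reminder of the textbook formulations (the exact variants studied in the proposal may differ): for a linearly constrained problem $\min_x f(x)$ subject to $Ax = b$, the classical augmented Lagrangian with multiplier $y$ and penalty parameter $\rho > 0$ is

$$L_\rho(x, y) = f(x) + \langle y,\, Ax - b \rangle + \tfrac{\rho}{2}\,\|Ax - b\|^2,$$

and a conditional gradient (Frank–Wolfe) step over a compact constraint set $\mathcal{X}$ is projection-free because it only requires a linear minimization oracle:

$$s_k = \arg\min_{s \in \mathcal{X}} \langle \nabla f(x_k),\, s \rangle, \qquad x_{k+1} = x_k + \gamma_k (s_k - x_k), \quad \gamma_k \in [0, 1].$$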

Background papers
Almost surely constrained convex optimization, by Olivier Fercoq, Ahmet Alacaoglu, Ion Necoara, and Volkan Cevher.
Lagrangian methods for composite optimization, by Shoham Sabach and Marc Teboulle.
Zeroth-order nonconvex stochastic optimization: handling constraints, high-dimensionality and saddle-points, by Krishnakumar Balasubramanian and Saeed Ghadimi.

Practical information

  • General public
  • Free

Tags

EDIC candidacy exam
