Seminar by Prof. Anthony Man-Cho So, Chinese University of Hong Kong
Event details
Date | 22.06.2017 |
Hour | 12:00 – 13:30 |
Speaker | Prof. Anthony Man-Cho So, Chinese University of Hong Kong |
Location | |
Category | Conferences - Seminars |
"Error Bounds for Structured Optimization Problems and Convergence Analysis of First-Order Methods in the Absence of Strong Convexity"
Abstract
In recent years, we have witnessed the widespread use of first-order methods (FOMs) to solve large-scale structured convex optimization problems in machine learning and signal processing. One fundamental issue in this line of study is the convergence properties of the FOMs. Under strong convexity assumptions on the objective, the linear convergence of various FOMs can be established in a fairly standard and straightforward manner. However, such assumptions are not satisfied in most applications of interest. In this talk, we will present a framework for analyzing the convergence rates of FOMs. A key component of this framework is a so-called error bound condition, which stipulates that the distance from any candidate solution to the optimal solution set of the problem at hand can be estimated in terms of a certain easily computable optimality measure. We will show that many structured convex optimization problems in machine learning satisfy the error bound condition, even though they do not have strongly convex objectives. Consequently, we are able to show that many FOMs have a linear rate of convergence when applied to those problems. If time permits, we shall also discuss the application of our framework to certain structured non-convex optimization problems.
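To make the abstract's setting concrete, here is a minimal numerical sketch (an illustration of ours, not material from the talk): the LASSO objective with a wide design matrix is convex but not strongly convex, yet proximal gradient descent (ISTA) is a canonical FOM for it and, in line with error-bound-based analyses, typically still converges quickly. All names and parameter choices below are hypothetical.

```python
# Illustration only: LASSO, f(x) = 0.5*||Ax - b||^2 + lam*||x||_1, with a
# wide A (more columns than rows) is convex but NOT strongly convex.
# We run proximal gradient (ISTA), a standard first-order method for it.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, steps=400):
    """Proximal gradient on the LASSO; returns iterate and objective values."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    vals = []
    for _ in range(steps):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
        vals.append(0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
    return x, np.array(vals)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))          # wide: objective is not strongly convex
b = rng.standard_normal(20)
x, vals = ista(A, b, lam=0.5)
# vals is a monotonically nonincreasing sequence of objective values; under an
# error bound condition, the gap f(x_k) - f* should shrink roughly geometrically.
```

The point of the sketch is the contrast the talk addresses: the standard strong-convexity route to linear rates does not apply here, so a linear rate for such iterates has to come from a different mechanism, such as the error bound condition.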
Practical information
- General public
- Free
Organizer
- College of Management of Technology