Title:
Robust algorithms for large sparse semidefinite programming (SDP)

Abstract:

Semidefinite programming (SDP) refers to optimization problems in which the variable is a symmetric matrix required to be positive semidefinite. Though SDPs (under various names) have been studied as far back as the 1940s, interest has grown tremendously during the last fifteen years. This is due in part to the many diverse applications in, e.g., engineering, combinatorial optimization, and statistics, and in part to great advances in efficient solution methods for these problems.
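
For concreteness, one standard primal-dual SDP pair is shown below (a common formulation; the symbols C, A_i, b are assumed here and are not fixed by the abstract):

```latex
\begin{aligned}
\text{(P)}\quad & \min_{X}\ \langle C, X\rangle
   && \text{s.t.}\ \langle A_i, X\rangle = b_i,\ i=1,\dots,m,\quad X \succeq 0,\\
\text{(D)}\quad & \max_{y,\,Z}\ b^{\top} y
   && \text{s.t.}\ \sum_{i=1}^{m} y_i A_i + Z = C,\quad Z \succeq 0.
\end{aligned}
```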

The current paradigm for computing search directions in primal-dual interior-point methods for SDP is: (i) symmetrize the linearization of the optimality conditions at the current estimate; (ii) form and solve the Schur complement equation for dy, the step in the dual variable y; (iii) back solve to complete the search direction. These steps result in loss of sparsity and in ill-conditioning/instability, in particular when one takes long steps and gets close to the boundary of the positive semidefinite cone. This has resulted in the exclusive use of direct, rather than iterative, methods for the linear system.
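
As an illustration of step (ii), the following dense sketch (a hypothetical helper using HKM-style entries M_ij = trace(A_i X A_j Z^{-1}); not the implementation described in this work) shows why sparsity is lost and where the ill-conditioning enters:

```python
import numpy as np

def hkm_schur_step(A_list, X, Z, rhs):
    """Sketch of step (ii): form and solve the Schur complement equation.

    A_list : list of m symmetric constraint matrices A_i
    X, Z   : current (positive definite) primal and dual iterates
    rhs    : right-hand side assembled from the residuals in step (i)

    The Schur complement matrix M, with entries
        M[i, j] = trace(A_i X A_j Z^{-1}),
    is generally dense even when every A_i is sparse, and it becomes
    increasingly ill-conditioned as X and Z approach the boundary of
    the PSD cone -- the difficulties noted in the abstract.
    """
    m = len(A_list)
    Zinv = np.linalg.inv(Z)          # explicit inverse for illustration only
    M = np.empty((m, m))
    for i in range(m):
        AiX = A_list[i] @ X
        for j in range(m):
            M[i, j] = np.trace(AiX @ Zinv @ A_list[j])
    dy = np.linalg.solve(M, rhs)     # direct solve, as in current practice
    return dy
```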

We look at alternative paradigms based on least squares, an inexact Gauss-Newton approach, and a matrix-free preconditioned conjugate gradient method; these avoid the ill-conditioning in the nondegenerate case. We emphasize exploiting structure in large sparse problems. In particular, we look at LP and SDP relaxations of the Max-Cut, Quadratic Assignment, Theta function, Nearest Correlation Matrix, and Nearest Euclidean Distance Matrix problems.
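
A minimal sketch of the matrix-free preconditioned conjugate gradient idea, using SciPy's LinearOperator so that the Schur-type system is applied rather than formed (function and argument names here are illustrative assumptions, not taken from the work described):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def matrix_free_cg_solve(apply_schur, rhs, apply_prec=None):
    """Sketch of a matrix-free preconditioned CG solve for M dy = rhs.

    apply_schur : callable v -> M v, e.g. built from sparse applications
                  of the constraint operator and Cholesky back-solves,
                  so M is never formed explicitly
    apply_prec  : optional callable applying a preconditioner
    Hypothetical names; a minimal sketch of the matrix-free idea only.
    """
    m = rhs.shape[0]
    M_op = LinearOperator((m, m), matvec=apply_schur)
    P_op = LinearOperator((m, m), matvec=apply_prec) if apply_prec else None
    dy, info = cg(M_op, rhs, M=P_op)
    if info != 0:
        raise RuntimeError(f"CG did not converge (info = {info})")
    return dy
```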