Title of presentation:
Abstract:
This talk presents a simple approach for solving a semidefinite program
(SDP). As is common with many other
approaches, we apply a primal-dual
method that uses the perturbed optimality conditions
for SDP, F_mu(X,y,Z) = 0,
where X, Z are n x n symmetric matrices
and y is in R^n.
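For context, the perturbed optimality conditions mentioned above typically take the following standard primal-dual form (the linear operator A, its adjoint A^*, and the data b, C are the usual conventions for an SDP in standard form, not notation taken from the talk itself):

```latex
F_\mu(X,y,Z) \;=\;
\begin{pmatrix}
\mathcal{A}X - b\\[2pt]
\mathcal{A}^{*}y + Z - C\\[2pt]
ZX - \mu I
\end{pmatrix}
= 0, \qquad \mu \downarrow 0,
```

where the last block ZX - mu I is the bilinear (and, without symmetrization, nonsymmetric) part that makes the system overdetermined.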
However, as in
Kruk, Muramatsu, Rendl, Vanderbei, and Wolkowicz,
"The Gauss-Newton direction in linear and semidefinite programming,"
Optimization Methods and Software, 15(1):1--27, 2001,
we view this as an overdetermined
system of nonlinear (bilinear) equations in the variables X, y, Z, which has
a zero residual at optimality. We do not apply any symmetrization to
this system;
the fact that X and Z are symmetric matrices is ignored.
What is new in this work is a
preprocessing step that results in a single,
well-conditioned, overdetermined bilinear equation.
We do not form the normal equations for the linearized
system.
In addition, asymptotic q-quadratic convergence is obtained with a
crossover approach that switches to affine scaling without maintaining
the positive (semi)definiteness
of X and Z. This results in a marked reduction in the number and
the complexity of the iterations.
We use the long-accepted method for nonlinear least squares problems,
the Gauss-Newton method. For large sparse data, we use an
inexact Gauss-Newton approach with a
preconditioned conjugate gradient method applied directly to the
linearized equations in matrix space.
This does not require forming a Schur
complement system or matrix representations of linear operators.
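The inexact Gauss-Newton idea above can be illustrated on a toy scale. The sketch below is not the talk's algorithm: the bilinear test system (u*v = 2, u + v = 3) is invented for illustration, and `np.linalg.lstsq` stands in for the talk's preconditioned CG solver applied to the linearized equations; both solve the linearized least squares problem without forming the normal equations.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-12, max_iter=20):
    """Basic Gauss-Newton for a zero-residual nonlinear least squares
    problem: at each step, solve the linearized system J d = -r in the
    least squares sense (here via lstsq; the talk uses preconditioned
    CG on the linearized equations instead) and update x <- x + d."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break  # zero residual reached (up to tolerance)
        d, *_ = np.linalg.lstsq(jacobian(x), -r, rcond=None)
        x = x + d
    return x

# Invented toy bilinear system with zero residual at the solution:
#   u*v = 2,  u + v = 3   (solved by (u, v) = (1, 2) or (2, 1)).
res = lambda x: np.array([x[0] * x[1] - 2.0, x[0] + x[1] - 3.0])
jac = lambda x: np.array([[x[1], x[0]], [1.0, 1.0]])

sol = gauss_newton(res, jac, [0.5, 2.5])
```

Because the residual is zero at the solution, Gauss-Newton behaves asymptotically like Newton's method here, which is the mechanism behind the q-quadratic convergence claimed in the abstract.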
The cost of an iteration consists almost entirely in
the solution of a (generally well-conditioned) least squares problem of size
n^2 x (n+1 choose 2). This system
consists of one large (sparse) block with (n choose 2) columns
(each CG iteration costs one sparse matrix multiplication)
and one small (dense) block with n columns
(each CG iteration costs one matrix row scaling).
Exact primal and dual feasibility are guaranteed throughout the
iterations.
To illustrate the approach, we apply it to the well-known
SDP relaxation of the Max-Cut problem. This includes the derivation of
the optimal diagonal preconditioner. Numerical tests show that
the algorithm exploits sparsity and attains q-quadratic convergence.
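For reference, the SDP relaxation of Max-Cut referred to above is usually stated as follows (L denotes the graph Laplacian and e the all-ones vector; this is the standard formulation, not a detail supplied by the talk):

```latex
\max_{X}\ \tfrac{1}{4}\,\langle L, X\rangle
\quad\text{s.t.}\quad \operatorname{diag}(X) = e,\qquad X \succeq 0.
```

The single diagonal constraint diag(X) = e is what makes this relaxation a natural test case: the dual variable y has only n components, matching the small dense block with n columns described above.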